Communication

Nationwide Point Cloud—The Future Topographic Core Data

1 School of Engineering, Aalto University, P.O. Box 14100, FI-00076 Aalto, Finland
2 Finnish Geospatial Research Institute FGI, Geodeetinrinne 2, FI-02430 Masala, Finland
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(8), 243; https://doi.org/10.3390/ijgi6080243
Submission received: 7 June 2017 / Revised: 7 July 2017 / Accepted: 3 August 2017 / Published: 8 August 2017

Abstract:
Topographic databases maintained by national mapping agencies are currently the most common nationwide data sets in geo-information. The application of laser scanning as source data for surveying is increasing. Along with this development, several analysis methods that utilize dense point clouds have been introduced. We present the concept of a dense nationwide point cloud, produced from multiple sensors and containing multispectral information, as the national core data for geo-information. Geo-information products, such as digital terrain and elevation models and 3D building models, are produced automatically from these data. We outline the data acquisition, processing, and application of the point cloud. As a national data set, a dense multispectral point cloud could produce significant cost savings via improved automation in mapping and a reduction of overlapping surveying efforts.

1. Introduction

To date, the most extensive and accurate nationwide data sets have been maintained by national mapping agencies. These data sets include aerial photos, raster maps, and vector data. Commonly, the vector data are stored in topographic databases (see, e.g., [1]). They include the details of the terrain and built-up objects, divided into several object classes, in vector format. The positional accuracy of the objects typically corresponds to map scales of 1:5000–1:10,000. The topographic database is applied as source material in map production. Part of the information is updated continuously (e.g., roads) or yearly (buildings), with more extensive updating carried out at intervals of 5–10 years [1,2]. Creation and updating of topographic databases involves a large amount of manual work.
The most common input data for production and updating of a topographic database is aerial imagery, but airborne laser scanning (ALS) is increasingly used. Common laser scanning (LS) methods include terrestrial laser scanning (with a stationary instrument) (TLS), mobile laser scanning (MLS) [3], airborne laser scanning (ALS), and scanning from an unmanned aerial vehicle (UAV-LS) [4]. The position and orientation of mobile, airborne, and UAV systems are solved using a combination of Global Navigation Satellite Systems (GNSS) and an inertial measurement unit (IMU). Recently, multispectral airborne laser data have also been used to provide point clouds with radiometrically calibrated intensities, allowing single-sensor mapping solutions [5,6,7,8]. In addition to LS, point clouds can be obtained with stereophotogrammetry, depth camera based techniques [9], or synthetic aperture radar (SAR) [10]. Other imaging sensors, such as thermal cameras, can add information to point clouds [11].
Dense point clouds and fully automated processing can be applied to forestry [12], road infrastructure maintenance and monitoring [13,14], city modeling [15], construction [16], fluvial studies [17,18], and autonomous driving [19]. Several countries are applying ALS for statewide elevation modeling; the benefits include 5–10 times better elevation model accuracy (when compared to photogrammetry), highly increased automation in processing, and significantly decreased costs. Methods for automated object reconstruction have been reported for city environments [20], roads [21], and forests [22]. ALS-based forest inventory is operational in at least Scandinavia, the Baltic countries, Spain, Switzerland, the USA, Canada, Australia, and New Zealand. In addition to reconstruction and analysis methods, direct visualization of point clouds [23], annotation of point clouds with semantic data [24], and online storage and retrieval of massive point clouds [25] have been developed. Point clouds have even been suggested to replace existing 3D city models [26].
With the development of measuring technology, applications, and utilization methods, point clouds become source data for an increasing number of applications and processes. Their wider application and acquisition has been suggested by several authors, especially using airborne sensors [27,28]. In this article, we present the concept of producing a dense nationwide point cloud, originating from multiple sensors and containing multispectral information, operating as the core data for geo-information. The surveying activities focus on producing a dense point cloud, which is then applied in further processes. Geo-information products, such as digital terrain and elevation models and 3D building models, are produced automatically as needed.

2. Data Acquisition

As measuring techniques have varying properties in terms of viewpoint (terrestrial, airborne), efficiency, and accuracy, it makes sense to combine them when measuring complex environments [29]. Since roadside data has high importance to each country, the future nationwide point cloud should consist of car-based MLS data integrated with a national coverage of ALS. In the following, we review some relevant developments in measuring systems.

2.1. Mobile Laser Scanning

MLS has proven to be very efficient in measuring road and city environments [30]. After the introduction of multi-platform MLS, the use cases have expanded to natural environments, industrial installations, and urban environments that cannot be easily accessed by car [3]. With the development of algorithms that allow simultaneous localization and mapping (SLAM), MLS has also advanced to indoor environments, where GNSS positioning is not available [31]. UAV-LS permits very fine-scale mapping of both urban and natural environments from a less occluded viewpoint than that of TLS or MLS [4]. In addition to laser scanners, UAVs can carry cameras to obtain data for dense point cloud calculation [32].
The advantage of working with point clouds is that point density can vary depending on the importance of the target, allowing a different level of detail to be extracted from the point cloud data [25]. In urban areas, a higher level of detail can be obtained with UAV-LS or backpack laser scanning [3,33]. For examples of MLS and UAV-LS data the reader is referred to Section 4.2.1 and Section 4.2.2.

Autonomous Vehicles for Crowdsourced Mapping

Autonomous-driving technologies have attracted considerable academic and industrial interest in recent years. Following the success of the autonomous car technology competition, the DARPA Grand Challenge [19], many companies have announced plans to provide automatic or automated driving based on advanced sensor technology. Therefore, most cars in the future will likely carry a mapping system similar to the ones currently installed in high-end MLS cars collecting roadside data. In practice, all future autonomous cars will be equipped with mapping sensors producing point clouds (e.g., lidars, cameras, radars, sonars). See Figure 1. As a result, huge amounts of point cloud data will be acquired from urban and road environments on a continuous basis. Current procedures in mapping are based on defined measuring campaigns, with map updating often left unsolved due to high costs. Continuous flows of car-based data could revolutionize these procedures by providing very interesting alternative solutions for keeping map databases up to date. In addition to vehicles, pedestrians carrying mapping sensors in their smart phones can contribute to mapping [34]. Combined, this technology disruption means that centralized mapping will turn into decentralized, distributed mapping, and maps can be updated frequently.

2.2. Airborne Multispectral Laser Scanning

LS technology is also developing in terms of the spectral information recorded. The first example of this was the recording of laser backscatter intensity and the use of intensity values in the visualization of point clouds and in some classification tasks. The recording of the return waveform has been a significant enabler in forestry applications [35]. Until the advent of multispectral LS (e.g., [36,37]), the role of laser intensity in automated classification was relatively small, the geometric information of LS having been of higher importance.
The emerging multispectral LS (e.g., Optech Titan for ALS) increases the amount of spectral information obtainable with LS. Earlier intensity studies (for example, [38,39]) showing the concepts for radiometric calibration of ALS intensity have been precursors for multispectral ALS. Briese et al. realized one of the first multispectral point clouds by using three separate ALS systems [40]. Wang et al. used two separate ALS systems to acquire dual-wavelength data and found that the use of dual-wavelength data can substantially improve classification accuracy compared to one-wavelength data [41]. More extensive discussions and reference lists on recent multispectral airborne laser scanning point clouds are available from [5,6,7,8,42].
Results with the first multispectral ALS systems have been promising. For example, the overall accuracy of the land cover classification results with six classes (building, tree, asphalt, gravel, rocky, low vegetation) can be achieved at the 96% level compared with validation points [8]. An example of multispectral data from the built environment is given in Figure 2.
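The vegetation-index-style features behind such land cover classifications can be illustrated with a short sketch. The intensity values, channel wavelengths, and threshold below are hypothetical and not taken from the cited studies:

```python
import numpy as np

# Hypothetical calibrated backscatter intensities for the same points in two
# channels (e.g., near-infrared at 1064 nm and green at 532 nm).
nir = np.array([0.62, 0.55, 0.08, 0.10])
green = np.array([0.20, 0.18, 0.07, 0.09])

# A pseudo-NDVI computed from active backscatter separates vegetation
# (high NIR, lower green reflectance) from sealed surfaces.
pseudo_ndvi = (nir - green) / (nir + green)

# Simple threshold-based labeling; the 0.3 cutoff is illustrative only.
labels = np.where(pseudo_ndvi > 0.3, "vegetation", "non-vegetation")
print(labels)  # first two points classify as vegetation
```

In practice, such channel ratios are combined with geometric features and a trained classifier rather than a fixed threshold.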
Spectral information can also be produced by means of sensor integration, combining passive imaging sensors with LS [43], but the quality of active multispectral point clouds is significantly better, since active backscatter data overcome the measuring-geometry errors caused by the bidirectional reflectance distribution of passive sensors. Thus, the classification accuracy of active multispectral LS data is always higher than that of passive techniques using the same wavelengths.

2.3. Airborne Single Photon Systems

A technology breakthrough is currently emerging in the collection of airborne laser data: single-photon (and Geiger-mode) technology. Single-photon systems require only one detected photon, compared to the hundreds or even thousands of photons needed in conventional time-of-flight (TOF) or waveform lidars [44]. As a result, the pulse density in a single-photon system can be 10–100 times higher than in conventional LS. Because only a single photon is needed, the systems are easy to make eye-safe, and the maximum range is higher than in conventional LS. However, the number of noise points is also increased.
In Degnan (2016) [44], high range resolution was achieved through sub-nanosecond laser pulse widths, detectors, and timing receivers, and efficient noise filters, suitable for near real-time imaging, effectively eliminated the solar background during daytime operations; elevation errors were at the sub-decimeter level. In Tang et al., canopy heights derived using single-photon lidar showed strong agreement with field-measured heights, and automated spatial filtering algorithms can support large-scale canopy-structure mapping from airborne single-photon laser data [45]. In Stoker et al., Harris Corporation’s Geiger-mode IntelliEarth sensor and Sigma Space Corporation’s Single Photon HRQLS sensor were evaluated for large-area elevation modeling and, although not directly meeting the existing specifications, were found to possess much potential for increasing the efficiency of data acquisition [46].

3. Data Integration and Processing

3.1. Co-Registration

As not all measuring methods produce a georeferenced point cloud (e.g., indoor SLAM without GNSS), other methods are needed for co-registration to prevent bias between data sets. Rönnholm identified thirteen types of orientation methods developed by the participants for solving general registration problems [47]. Major strategy types included the extraction of corresponding 3D features from both data sources, the extraction of 3D features from ALS data and the corresponding 2D features from images, or the creation of a synthetic image from LS data and the extraction of corresponding 2D features from both synthetic laser-derived images and aerial images. The types of tie features included points, lines, surfaces, unfiltered laser point clouds, and a combination of lines and surfaces. One of the most promising solutions is the matching of 3D surfaces to each other. Measuring methods that can cover large areas in high detail and produce georeferenced data are a good starting point for producing multi-source data of large areas. The non-georeferenced data sets can then be co-registered with this large, georeferenced ‘block’, bringing the entire multi-source point cloud into a single, known coordinate system [48]. In addition, ALS data can be utilized to improve the registration of MLS systems operating in varying GNSS visibility [49].
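The feature- and surface-matching strategies above ultimately reduce to estimating a rigid transform between point sets. As an illustrative sketch (not the method of any cited study), the following recovers the rotation and translation from known tie points with the standard SVD-based (Kabsch) solution, using synthetic data:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src points onto dst points, given known correspondences (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic tie points: a non-georeferenced scan (src) that is rotated and
# shifted relative to the georeferenced 'block' (dst).
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 10.0, size=(20, 3))
theta = np.deg2rad(15.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([5.0, 2.0, 0.5])

R, t = rigid_align(src, dst)
residual = np.abs(src @ R.T + t - dst).max()
print(residual)  # effectively zero for noise-free tie points
```

Iterative methods such as ICP repeat this step while re-estimating the correspondences, which is how surface matching works without known tie points.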

3.2. Downsampling

In LS, the density of the resulting point cloud is not homogenous, but depends on the operating geometry of the system [50]. Typically, the TLS instruments operate in a fixed angular step, producing a point density that follows the inverse square law, dropping dramatically when the distance to the scanner grows. In MLS, the same effect is visible in the density being reduced when going further away from the system’s path [50]. This can be countered with suitable downsampling methods that aim to produce an even point density in the data. This serves two purposes: firstly, to reduce the amount of redundant data that is typically located right at the foot of the scanner; and secondly, to produce data sets where the variation of density does not interfere with applications, such as modeling [50,51].
In addition, sampling is required for producing a single cloud from a set of overlapping data. In practice, the data has to be analyzed to identify overlapping regions, which then have to be quantified in terms of density and downsampled [51].
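A minimal voxel-grid downsampling sketch illustrates the idea of evening out density; the voxel size and synthetic data below are illustrative only:

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Keep one representative point (the centroid) per voxel, producing a
    roughly even density regardless of the distance to the scanner."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inv.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.bincount(inv, minlength=n_voxels)
    np.add.at(sums, inv, points)
    return sums / counts[:, None]

# Dense cluster near the scanner plus sparse far-field points.
near = np.random.default_rng(1).normal(0.0, 0.2, size=(1000, 3))
far = np.array([[10.0, 10.0, 0.0], [20.0, 5.0, 1.0]])
cloud = np.vstack([near, far])

thinned = voxel_downsample(cloud, voxel=0.5)
# The redundant near-field points collapse into a handful of voxels,
# while both far-field points survive.
print(len(cloud), "->", len(thinned))
```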

3.3. Data Integration

3.3.1. Temporal Information

Point clouds depict the environment as it was at the time of acquisition. Multi-temporal point clouds have been applied for change detection in forestry [52] and fluvial studies [17]. Temporal information can be integrated with the point data [53], allowing temporal filtering of the data (e.g., to obtain points from the latest scans only) or to allow extraction of point cloud pairs for change detection. For example, in some forestry applications, the data acquired in the leaf-on state is significantly more useful than one in the leaf-off state [54].
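Carrying acquisition time per point makes such temporal filtering a simple selection. The per-point schema below is hypothetical, though real exchange formats such as LAS likewise store a GPS time stamp per point:

```python
import numpy as np

# A structured array carrying per-point acquisition epochs (hypothetical
# schema for illustration).
pts = np.zeros(6, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"),
                         ("epoch", "i4")])
pts["x"] = [1, 2, 3, 1, 2, 3]
pts["epoch"] = [2015, 2015, 2015, 2020, 2020, 2020]

# Temporal filtering: keep points from the latest scan only.
latest = pts[pts["epoch"] == pts["epoch"].max()]

# Extraction of a point cloud pair for change detection.
pair = (pts[pts["epoch"] == 2015], pts[pts["epoch"] == 2020])
print(len(latest), len(pair[0]))  # 3 3
```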

3.3.2. Accuracy Information

The sensors that are used have an impact on the quality of the point cloud. LS instruments have a limited ranging and angular measurement accuracy, and other error sources also affect the final accuracy [55]. In MLS systems, the accuracy of the platform localization depends on the GNSS visibility and quality of IMU used. GNSS occlusions cause a momentary deterioration in data [3,48]. For some applications, high dimensional accuracy is required. Therefore, the accuracy information should also be included in the point cloud, ideally stated as the global positional accuracy. This would allow the setting of application-specific accuracy demands, the estimation of data reliability, and more intelligent updating of data by omitting less accurate points if better ones are available.

3.3.3. Spectral Information

Point RGB color is commonly used in visualizing large point cloud data sets. However, spectral information present in point clouds, obtained by either sensor integration [37], or by laser backscatter intensity [56,57] and waveform [35] can be applied in analysis and segmentation [57]. In particular, multispectral sensors have attracted interest for improving classification results [41,42].

3.3.4. Semantic Point Clouds

Classification and segmentation of the point cloud is a prerequisite for successfully extracting object parameters or reconstructing a model of an object. The term “semantic point cloud” refers to a cloud where every point is assigned a label describing the object the point represents (e.g., “building”, “tree”) [58]. Different classification methods can be applied to identify different objects from the point cloud, gradually producing more and more semantic information. Object-specific semantic information can also be included in the point cloud by, for example, adding a national building ID to all points belonging to the corresponding building after it has been segmented. This allows linking the point cloud to other data sets containing object-specific information, e.g., building characteristics.
By producing a semantic point cloud and segmenting accordingly, object type specific analysis and reconstruction algorithms can be applied. In addition, the semantic information allows filtering points in visualization and analysis applications, for example by omitting vegetation in urban environments.
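Attaching class labels and object IDs to points makes such filtering and data linking straightforward. The class codes, building ID, and attribute table below are hypothetical:

```python
import numpy as np

# Hypothetical class codes and per-point object IDs produced by an earlier
# segmentation step (0 = no object assigned).
cls = np.array([2, 2, 5, 5, 6, 6, 6])          # 2=ground, 5=vegetation, 6=building
obj_id = np.array([0, 0, 0, 0, 1042, 1042, 1042])  # hypothetical building ID

# Filter vegetation out of an urban visualization.
keep = cls != 5

# Join object-specific attributes to building points through the object ID.
building_db = {1042: {"address": "Example St 1", "floors": 4}}
attrs = [building_db.get(i) for i in obj_id[cls == 6]]
print(keep.sum(), attrs[0]["floors"])  # 5 4
```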

4. Application of Dense Point Cloud Data

Numerous applications for dense point clouds, across several fields, are reported in the research literature. In many cases, point clouds have been used in the analysis of the environment: in construction, they facilitate validation of as-built models [59] and construction progress monitoring [60] through integration with planning data. In forestry, point clouds have been used for estimating foliage structures [56] and gaps [61], or for estimating forest inventory parameters [59,62].
In robotics, point clouds can be applied as a 3D map of the environment. They have been applied for path planning for autonomous vehicles [63], or for estimating accessibility in urban environments [64], using the point cloud to identify passable areas.
In a similar manner, point clouds allow localization of imaging sensors [65] if they contain matching key points obtained through a structure-from-motion algorithm. This is useful for creating markerless augmented reality applications in indoor environments [66], where the localization of the device has to be solved from its onboard camera view.

4.1. Visualization of Dense Point Clouds

In addition to analyses and segmentation methods, visualization of point clouds (rather than more processed models) has been developed, with the advanced systems being capable of visualizing data sets of billions of points [23]. Commercial software dedicated to visualizing large point clouds has emerged (e.g., Bentley Pointools [67], Euclideon Geoverse MDM [68]), and open source projects for visualization also exist (e.g., Potree [69]). In addition, tools that allow the user to study the point cloud with CAD software have been developed. De Haan argues that dense point cloud visualization may contain fewer errors than the results of automated modeling in some cases [70]. Visualization methods that utilize immersive display devices, such as CAVEs, have also been developed [71].
For maintaining and distributing large point clouds, tools that allow the storage of point clouds in a database and spatial queries have to be applied. Oosterom et al. have carried out a benchmark of several hosting systems, including Oracle Spatial and PostgreSQL, using a dense aerial data set of the Netherlands as test material [25].
In addition to dedicated visualization tools, existing game engines may be leveraged. Figure 3 shows a point cloud in the Unity 5 engine. An octree data structure is used to achieve fast point retrieval. Visualization is enhanced by calculating estimated normal information for points. The point cloud shown is produced with photogrammetry, using a combination of UAV and terrestrial images.
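Per-point normals of the kind used to enhance the visualization above are commonly estimated from local neighborhoods by principal component analysis. The following is a brute-force illustrative sketch, not the implementation behind Figure 3:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal per point as the eigenvector of the smallest
    eigenvalue of the local neighborhood's covariance matrix (plain PCA;
    brute-force neighbor search for brevity)."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]     # k nearest neighbors
        cov = np.cov(nbrs.T)
        w, v = np.linalg.eigh(cov)           # eigenvalues in ascending order
        normals[i] = v[:, 0]                 # smallest-variance direction
    return normals

# Points sampled from the z = 0 plane: normals should be close to +/-z.
rng = np.random.default_rng(2)
plane = np.column_stack([rng.uniform(0, 1, 50),
                         rng.uniform(0, 1, 50),
                         np.zeros(50)])
n = estimate_normals(plane)
print(np.abs(n[:, 2]).min())  # ~1.0
```

Production implementations use spatial index structures (e.g., the octree mentioned above) to find neighbors instead of the brute-force search.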

4.2. Spatial Information Products

Several geo-information products are currently produced from point cloud data sets. These include building models [72,73], digital terrain and surface models (DTMs and DSMs) [74], and individual tree characterization [30]. Several computational methods have been developed for extracting object parameters from dense point clouds and automatically reconstructing models of objects. These methods are typically specific to object types (e.g., trees [75] or buildings [76]) and are therefore spread across a number of research disciplines. Because these computational methods can be applied automatically, the data sets produced with them can be updated frequently. In practice, the analysis can be performed synchronously with measuring: every time new data are obtained, the resulting geo-information data sets are regenerated.

4.2.1. Built and Road Environments

In the built environment, the automated generation of simple building models (e.g., [72,77]) has become mainstream in production of 3D city models [78]. In further research, several algorithms for more detailed building model generation have been introduced, e.g., [20,79]. For a review on urban reconstruction, see [76].
Most cities and main roads will be documented in the future with high-quality mobile point clouds (Figure 4). They allow detailed analysis of the road environment, including road surface analysis. The combination of such data with ALS data sets allows automated 3D modeling and keeping maps updated. Algorithms have been developed for extracting road markings [57], curbs [80], road edges [81], and pole-like objects [82] from MLS data. For dense MLS data sets, methods for automatic identification and segmentation of various urban furniture have been developed [83,84]. Raw point cloud data, however, are valuable for all kinds of engineering applications, since direct measurements are possible from raw data. Multispectral ALS data are also feasible for automatic road detection and offer a significant improvement over the use of optical aerial imagery. In a test using Optech Titan multispectral ALS data, 80.5% of the points representing roads were classified correctly. When aerial images were used, the corresponding percentage decreased to 71.6% [7].

4.2.2. Forests

Approaches for obtaining forest and forestry data from ALS point cloud data are divided into two groups: area-based approaches (ABA) and individual/single-tree detection (ITD) approaches [30]. ABA prediction of forest variables relies on the statistical dependency between the variables measured in the field and ALS point height metrics, which results in plot- or stand-level information. In ITD, individual trees are detected and tree-level physical variables (such as height, crown size, and tree species) are directly measured, while other variables (such as stem volume, biomass, and diameter at breast height (DBH)) can be predicted. For inventory purposes, stand-level forest inventory results are aggregated by summing up the ITD data. For a comparison of various ITD techniques, the reader is referred to [75,85,86,87]. An accuracy of about 1 m in tree height determination can be achieved, and the majority of the trees in the dominant story can be detected.
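The ALS point height metrics used as ABA predictors are simple statistics of normalized point heights. A sketch with hypothetical values follows; the chosen percentile and the 2 m canopy cover threshold are illustrative:

```python
import numpy as np

# Hypothetical normalized heights (m above ground) of ALS returns in one plot.
h = np.array([0.1, 0.3, 2.5, 11.2, 14.8, 15.5, 16.1, 17.0, 18.2, 19.4])

# Typical ABA predictor variables: mean height, height percentiles,
# and canopy cover (share of returns above a height threshold).
metrics = {
    "h_mean": h.mean(),
    "h_p80": np.percentile(h, 80),     # 80th height percentile
    "cover": (h > 2.0).mean(),         # share of returns above 2 m
}
print(metrics)
```

In operational ABA, such metrics are regressed against field-measured plot variables to predict stand-level attributes.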
Multispectral ALS allows discrimination of tree species. In a test by Yu et al., point cloud features described tree species with 76.0% accuracy, where the use of intensity features of all Optech Titan channels resulted in 85.6% accuracy [5]. Isolated and dominant trees can be detected with a detection rate of 91.9% and classified with high overall accuracy of 90.5%. The corresponding detection rate and accuracy are 81.5% and 89.8% for a group of trees, 26.4% and 79.1% for trees next to a bigger tree, and 7.2% and 53.9% for trees under a bigger tree, respectively.
Since diameter cannot be directly measured from ALS, TLS and MLS have been studied to provide diameter information at the tree and plot levels. In Liang et al., the DBH estimation from TLS data resulted in an RMSE of 1.29 cm [88]. In Liang et al., an MLS system was tested, and the RMSE of the DBH estimates was 2.4 cm [89]. The tree stem curve determines the tapering of the stem as a function of the height. The RMSE of the stem curve measurements was 4.7 cm when utilizing single-scan TLS data [62]. In [90], the RMSE of the stem curve estimation of the pine tree was 1.3 cm and 1.8 cm with the multi- and single-scan TLS data, respectively; and the RMSE of the curve measurement of the spruce tree was 0.6 cm for both the multi- and single-scan data.
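DBH estimation from TLS or MLS data typically involves fitting a circle (or cylinder) to the stem points at breast height. The following is an illustrative sketch using an algebraic least-squares (Kåsa) circle fit on synthetic, noise-free points; it is not the method of the cited studies:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 = 2ax + 2by + c, then r = sqrt(c + a^2 + b^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return (a, b), r

# Synthetic TLS hits on one side of a 30 cm diameter stem at breast height
# (a partial arc, as seen from a single scan position).
ang = np.linspace(0.2, 2.0, 40)
xy = np.column_stack([5.0 + 0.15 * np.cos(ang),
                      3.0 + 0.15 * np.sin(ang)])
center, r = fit_circle(xy)
print(round(2 * r, 3))  # DBH in metres; 0.3 for noise-free points
```

Repeating such fits at several heights along the stem yields the stem curve discussed above.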
Utilization of UAV-LS data for forest inventory has not been extensively studied yet. With a mini-UAV laser scanning system utilizing the Velodyne VLP-16, the estimation of diameter at breast height from point cloud metrics showed an accuracy of 2.6 cm, which is comparable to accuracies obtained with terrestrial surveys using MLS, TLS, or photogrammetric point clouds. An example is shown in Figure 5.

4.3. Change Detection from Multi-Temporal Point Clouds

National laser scanning is currently in progress in many countries. In the Netherlands, the whole country has already been surveyed twice. On a smaller scale, multi-temporal lidar data sets have been applied to change detection [53]. We assume that in the coming years, national laser scanning can be performed every five years, possibly with a multispectral system producing dense point clouds. This would allow spectral and geometric change detection at a previously unseen level of detail nationwide. Figure 6 shows data collected in 1998 and 2000 from a Finnish forest. A power line was constructed between these years; due to the construction, two trees and several branches were removed. The growth of tree crowns is also visible. Dense point clouds with suitable additional data allow documenting of the environment and automated change detection at an unprecedented level of detail.
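Geometric change detection between two epochs can be sketched as flagging new-epoch points that have no nearby counterpart in the old epoch. The threshold and toy clouds below are illustrative:

```python
import numpy as np

def changed_points(cloud_new, cloud_old, threshold=0.5):
    """Flag points of the newer epoch with no old-epoch point within
    `threshold` metres (brute force; a k-d tree scales this up)."""
    d = np.linalg.norm(cloud_new[:, None, :] - cloud_old[None, :, :], axis=2)
    return d.min(axis=1) > threshold

# Old epoch: two tree tops. New epoch: one tree regrown slightly,
# plus a newly erected pole.
old = np.array([[0.0, 0.0, 12.0], [5.0, 5.0, 14.0]])
new = np.array([[0.0, 0.0, 12.1], [8.0, 2.0, 10.0]])

flags = changed_points(new, old)
print(flags)  # [False  True]
```

Running the comparison in both directions distinguishes appeared objects from removed ones (such as the felled trees in Figure 6).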

5. Point Clouds as National Core Data Sets

Out of the available LS methods, ALS is best suited for collecting nationwide point clouds in a repeated manner. Understandably, national laser scanning programs are emerging. Single-photon ALS systems provide the potential for producing dense point clouds at low cost. Even annual country-level data collections with such a system are feasible after the technology has matured.
For obtaining a comprehensive data set from road and urban areas, MLS can be combined with ALS (Figure 7). UAV-LS and backpack laser scanning can raise the level of detail where needed. Finally, the emerging autonomous vehicles that already employ LS are a potential future data source for decentralized mapping.
For integrating mapping data from different systems, co-registration is required. By labeling the points with temporal and sensor information, filtering by time or accuracy requirements is possible. Once produced, dense point clouds with multi-temporal information facilitate change detection. Downsampling algorithms have to be applied to remove overlapping points and produce a more homogenous point density. With classification and segmentation methods, the point cloud is segmented into individual objects. After this, information concerning these objects can be attached to the point cloud. This data can be drawn from the topographic database, which in turn can be updated with new, automatically produced vector objects. With automated analysis methods, data can be extracted for various applications, such as forestry or surveying of road environments. For example, automated reconstruction methods produce LoD2 building models if needed.
The produced point cloud can either be visualized directly or processed further. With the developed visualization methods, data sets of billions of points can be stored and visualized interactively. Game engines are also applicable for application development. This way, dense-colored point clouds are directly usable for planning and illustration, reducing the need to produce geometric models for visualization.

6. Discussion

6.1. Automation

Applying largely automated processing to point clouds reduces labor costs. For example, the current Topographic Database of Finland is maintained by 150–200 full-time workers at the National Land Survey of Finland (NLS). A dense multispectral point cloud seems to offer possibilities for automated data processing in mapping products. Furthermore, cost savings also come from the reduction in overlapping measuring campaigns. For example, city environments are currently scanned by both city- and state-level actors. If high-quality 3D model products could be developed from the same data sets, parallel mapping by the NLS and cities would end, providing significant savings. In that concept, the big cities that currently map their territories at about 20 points/m2 would also benefit from the national core data set. The collection of dense point clouds allowing higher automation is significantly easier using single-photon (and Geiger-mode) technologies. ALS data acquisition has also been found more affordable per surface area unit in larger campaigns [28]; it should be noted that laser scanning data costs are remarkably affected by the size of the scanning area. If national dense point clouds were available as open data, this would also dramatically stimulate the development of value-adding services by companies. While some tasks, such as automated building modeling at a low level of detail, have been automated for a considerable amount of time (e.g., [72]), unsolved issues remain in automatically producing geo-information assets from LS data, especially if the level of detail is to be raised [91]. In some cases, the objects of interest may also be hidden from airborne and terrestrial surveying methods [92].
It is worth noting that not all topographic objects are derivable from geometric or radiometric data: for example, land and property ownership and rights are abstract, potentially 3D entities that cannot be reconstructed from geometric measurements [93]. Even if the generation and updating of other geo-information assets were automated, changes to these would still have to be made manually.

6.2. Data Amount and Point Density

Dense point clouds easily become large in terms of data amount. The ALS data set used by Oosterom et al. contained 640 billion points [25]. With a point density of 6–10 points/m2, this was sufficient to cover the entire Netherlands, requiring slightly over 11 TB of storage space. With single-photon systems, the point density is typically at least 10 times higher. Data amounts are increased not only by higher point densities, but also by the amount of data carried with the points. In addition to conventional classification information [94], points may also carry more detailed semantic labels [58], object IDs [26], time stamps [53], or sensor information, leading to a further increase in data amount. Having a time series of multiple overlapping point clouds also multiplies the data amount. However, for point densities remaining in the tens of points per square meter, the data amounts remain feasible with current technology.
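The cited figures imply a rough per-point storage cost, which can be used for back-of-envelope scaling. The calculation below assumes decimal terabytes; the tenfold single-photon density is the estimate given above:

```python
# Back-of-envelope storage estimate consistent with the cited figures:
# 640 billion points in roughly 11 TB implies ~17 bytes per point.
points = 640e9
bytes_per_point = 11e12 / points
print(round(bytes_per_point, 1))  # 17.2

# Scaling to a hypothetical single-photon campaign at 10x the density:
dense_tb = points * 10 * bytes_per_point / 1e12
print(round(dense_tb))  # 110 TB for the same area
```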
In MLS data sets, the density is of a different magnitude (over 1000 points/m2). If the point density is increased to MLS grade, storage space requirements in excess of hundreds of terabytes can be expected. Several authors have discussed the challenges of utilizing databases for the storage of large point clouds [25,53]. For coping with large data amounts, pre-generated levels of detail (LoDs) and geometric segmentation have been suggested [25]. Having a point cloud with more point-specific information would allow more sophisticated optimization methods, for example, retrieving terrain points at a low LoD from a large area, and building points from nearby regions only. Further, it would even be possible to distribute the cloud across different storage servers per object type, maintaining the MLS-based detailed clouds as smaller regional “blocks”.
For dense point clouds, downsampling is beneficial for reducing data amounts and for producing a homogeneous point density [25,50]. At the same time, lowering the point density limits potential applications. Even removing overlapping data may hinder some processes if the overlaps are used in, for example, co-registration [49]. Generally speaking, the higher the density, the easier it is to automatically reconstruct objects in high detail. For classification tasks, a density below 5 points/m2 has been found to reduce performance [94]. In single tree detection, an increase above 10 points/m2 did not significantly improve results. In building modeling, densities of 4–20 points/m2 were used in [95]. For automatically identifying building components from MLS data sets [20,82], significantly higher point densities have been used. In [20], the data set consisted of 340 M points from an area of 180 by 280 m, corresponding to a roughly approximated point density of over 6000 points/m2. These results are, however, also dependent on the characteristics of the environment [86].
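A common way to achieve a homogeneous density is voxel-grid downsampling: the cloud is binned into cubic cells and each cell is replaced by the centroid of its points, so the output density is bounded by the cell size regardless of how uneven the input was. A minimal sketch (plain Python, 3D tuples as points):

```python
def voxel_downsample(points, voxel):
    """Replace all points falling in each voxel cell with their centroid."""
    cells = {}
    for p in points:
        # Integer cell index along each axis identifies the voxel.
        key = tuple(int(c // voxel) for c in p)
        cells.setdefault(key, []).append(p)
    # One centroid per occupied cell -> at most one point per voxel.
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in cells.values()]
```

Libraries used in practice implement the same idea with spatial indexing for speed; the choice of voxel size directly trades data amount against the reconstruction detail discussed above.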
Based on existing research, a point density of 20 points/m2 has sufficed in ALS cases, whereas higher point densities are required for more detailed reconstruction from MLS. Compared with [25], it seems that current technology is capable of maintaining nationwide ALS clouds suited for automatically producing geo-information assets for a topographic DB. For utilizing MLS data, a higher point density and, therefore, a revised storage solution would be required.

6.3. Multi-Sensor Integration

Combining data from different measuring devices and acquisition times is not completely straightforward if a high-quality result is to be obtained. For example, MLS systems typically experience a reduction in positioning accuracy under poor GNSS visibility, such as in dense urban environments and forests [3]. Co-registration based on the data itself becomes increasingly difficult as the overlap between viewpoints decreases [96]. Not only can data from several platforms be co-registered [97], but ALS can be used to remedy some positioning accuracy issues of overlapping MLS [49]. In a research setting, it is possible to ensure that the environment remains sufficiently unchanged to facilitate co-registration. For larger campaigns, change detection methods are needed prior to co-registration.
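At the heart of most co-registration pipelines (including ICP variants) is a least-squares rigid alignment between corresponding points. The sketch below shows that core step in 2D with known correspondences; real pipelines iterate it with nearest-neighbour matching and outlier rejection, and work in 3D. The function name and interface are illustrative.

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rotation angle and translation mapping src onto dst,
    given one-to-one point correspondences (2D Procrustes solution)."""
    n = len(src)
    scx = sum(x for x, _ in src) / n; scy = sum(y for _, y in src) / n
    dcx = sum(x for x, _ in dst) / n; dcy = sum(y for _, y in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - scx, sy - scy      # src point, centered
        bx, by = dx - dcx, dy - dcy      # dst point, centered
        num += ax * by - ay * bx         # cross terms -> sin(theta)
        den += ax * bx + ay * by         # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated src centroid onto the dst centroid.
    tx = dcx - (c * scx - s * scy)
    ty = dcy - (s * scx + c * scy)
    return theta, (tx, ty)
```

The 3D analogue solves for the rotation via an SVD of the cross-covariance matrix, but the structure (center, estimate rotation, recover translation) is identical.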
Ideally, the integration should cover all data associated with the points, not only the geometry. While radiometric calibration of ALS can be performed to some extent even with natural targets [98], the radiometric calibration of TLS (and MLS) is more challenging, both due to instrumental effects and the highly varying environments and ranges encountered [99]. Further, if imaging sensors are used to produce spectral information, their performance limitations affect the final result [100]. While multispectral systems are already available for ALS, they are still absent from TLS and MLS. Applications that rely on multispectral data [42,101] cannot utilize the monospectral parts of a multi-sensor data set; the same applies to full-waveform Lidar data.
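To illustrate why range and incidence angle matter, a frequently used first-order intensity correction normalizes the raw return to a reference range (inverse-square loss) and to zero incidence (Lambertian cosine falloff). This is a deliberately simplified model; practical TLS/MLS calibration must additionally handle instrumental effects such as near-range receiver behaviour [99]. The function and its parameters are illustrative.

```python
import math

def correct_intensity(i_raw, rng, incidence_deg, r_ref=10.0):
    """Normalize a raw intensity value to reference range r_ref and zero
    incidence, assuming inverse-square range loss and a Lambertian target."""
    return i_raw * (rng / r_ref) ** 2 / math.cos(math.radians(incidence_deg))
```

For example, a return of 100 recorded at 20 m normalizes to 400 at the 10 m reference range, and a return at 60° incidence is doubled relative to a normal-incidence hit.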
Finally, even the geometric quality assurance of multi-sensor data may become problematic. If, for example, an indoor data set is registered using correspondences with a georeferenced MLS data set, the absolute accuracy of the indoor data depends on both the accuracy of the indoor measuring method and the accuracy of the georeferenced point cloud. This creates a point cloud whose accuracy varies from region to region and depends on the accuracy of other data.
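Under the common assumption that the two error sources are independent, their variances add, so the accuracy of the chained data set can be characterized as follows (the numeric values below are illustrative, not measured):

```python
import math

def combined_sigma(sigma_method, sigma_reference):
    """Standard deviation of the combined positional error, assuming the
    measurement and reference errors are independent (variances add)."""
    return math.hypot(sigma_method, sigma_reference)

# E.g., a 3 cm indoor method registered to a 4 cm reference cloud
# yields roughly 5 cm absolute accuracy for the indoor points.
sigma = combined_sigma(0.03, 0.04)
```

Tracking such per-region sigma values alongside the points would make the varying accuracy explicit to downstream applications.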

6.4. Emerging Applications

The point cloud allows accurate distance measurements for arbitrary visible objects. In addition, the visualization capability of a dense colored point cloud is significantly higher than that of polygon data only. From the application development point of view, the emergence of 3D game engines [102] that utilize mesh models has been a strong driving force for the development of other 3D applications as well. For point clouds, some libraries [103] and platforms that allow the storage and maintenance of large point cloud data sets [69] have emerged. If the point clouds are to contain a large amount of additional data, the choice of platforms is even more limited. For GIS data, there are also standardized APIs for requesting and transferring data online, e.g., WMS and WFS [104]. For point clouds, suitable, easily applicable development environments, open tools, and standardized interfaces have not yet fully emerged. These, along with the increasing availability of data, can be expected to stimulate application development in the future. Whole new applications would be enabled by the availability of dense, frequently updated point cloud data with large coverage. We would be able to document the impact of climate change on Finnish nature, e.g., using laser-based indicators of global change. We would be able to create intelligent services for road safety, such as warnings about visual barriers and bad road conditions, combined with information on where wildlife lives. Numerous applications could be built on such country-wide data.
Point clouds can also be applied for the localization of imaging sensors [65]. In augmented reality applications, point clouds are already being applied for markerless localization of mobile devices in indoor environments [66]. A nationwide point cloud would also enable GNSS-free localization. In a similar manner, autonomous vehicles, which commonly utilize LS and SLAM [105], may also utilize an existing point cloud for path planning [63]. In this respect, dense MLS data from road environments is becoming an asset for future autonomous vehicles.
Furthermore, the emergence of national, openly available point cloud data would stimulate application business. If the national multispectral point cloud were released to the public as an open data set, like the current national laser scanning data, companies would be able to create an industry ecosystem achieving significant turnover. The data would also have an impact on individual industry sectors: it would enable the precision forestry concept in Finland, optimizing gross stumpage earnings and wood product yields. The electronic wood trade could be based on accurate species-specific data from multispectral ALS. In the same way, national road point clouds could be made a national infrastructure.

7. Conclusions

Topographic databases maintained by national mapping agencies and based on aerial images are currently the most common nationwide data sets in geo-information. Currently, the maintenance of these includes significant amounts of manual work. However, the application of laser scanning as source data for surveying is increasing. Along with this development, several analysis methods that utilize dense point clouds as source data have been developed. As more applications utilize these dense point clouds, they become more important. We suggest the use of a dense multi-sensor point cloud as the national core data set in geo-information. Upcoming single-photon technology has the most potential as a sensor solution for providing dense point clouds with low unit costs for country-level data acquisition.
Multiplatform laser scanning should be applied to obtain data from both airborne and terrestrial perspectives. Processing that includes co-registration, integration of temporal data, and segmentation is required to produce a data set applicable to analysis, automated reconstruction, and change detection. In some applications, the dense colored point cloud can also be used directly, without modeling. It should be noted that for many applications, the visualization capability of a dense colored point cloud is significantly higher than that of polygon data only. As a national data set, a dense multispectral point cloud could produce significant cost savings via improved automation in mapping, and a reduction in overlapping surveying efforts by cities and the state. Potentially, a large amount of data in a topographic DB could be produced and updated automatically. As open data, dense, high-quality point clouds possess a significant business potential for improved forestry and road infrastructure maintenance, and they operate as a “platform” for several novel applications.
To realize the concept, further research and development is needed for coping with data amounts in MLS, for determining the needed point densities for more detailed reconstruction tasks, and for ensuring and characterizing the dimensional quality of multi-sensor point clouds. In addition, the development of analysis methods utilizing point clouds further increases their applicability.

Acknowledgments

We are grateful to the Academy of Finland for their support of the Center of Excellence in Laser Scanning Research (CoE-LaSR, 292735 and 307362) and project “COMBAT” (293389). The Finnish Funding Agency for Innovation is acknowledged for support in project “VARPU” (7031/31/2016). Solid Potato Ltd. is acknowledged for Figure 1 and Figure 4. Leena Matikainen is acknowledged for Figure 2.

Author Contributions

Juho-Pekka Virtanen and Juha Hyyppä envisioned and wrote the article. Antero Kukko, Harri Kaartinen, and Anttoni Jaakkola contributed to passages concerning MLS systems, autonomous vehicles, and road and forest environments. Tuomas Turppa contributed to the section on point cloud visualization. All authors participated in finalizing the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. The Topographic Database. Available online: http://www.maanmittauslaitos.fi/en/maps-and-spatial-data/expert-users/product-descriptions/maastotietokanta (accessed on 4 July 2017).
  2. Matikainen, L. Object-Based Interpretation Methods for Mapping Built-Up Areas. Ph.D. Dissertation, Aalto University School of Science, Espoo, Finland, 28 September 2012. [Google Scholar]
  3. Kukko, A.; Kaartinen, H.; Hyyppä, J.; Chen, Y. Multiplatform mobile laser scanning: Usability and performance. Sensors 2012, 12, 11712–11733. [Google Scholar] [CrossRef]
  4. Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm. Remote Sens. 2010, 65, 514–522. [Google Scholar] [CrossRef]
  5. Yu, X.; Hyyppä, J.; Litkey, P.; Kaartinen, H.; Vastaranta, M.; Holopainen, M. Single-Sensor Solution to Tree Species Classification Using Multispectral Airborne Laser Scanning. Remote Sens. 2017, 9, 108. [Google Scholar] [CrossRef]
  6. Fernandez-Diaz, J.C.; Carter, W.E.; Glennie, C.; Shrestha, R.L.; Pan, Z.; Ekhtari, N.; Singhania, A.; Hauser, D.; Sartori, M. Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar. Remote Sens. 2016, 8, 936. [Google Scholar] [CrossRef]
  7. Karila, K.; Matikainen, L.; Puttonen, E.; Hyyppä, J. Feasibility of Multispectral Airborne Laser Scanning Data for Road Mapping. IEEE Geosci. Remote Sens. Lett. 2017, 14, 294–298. [Google Scholar] [CrossRef]
  8. Matikainen, L.; Karila, K.; Hyyppä, J.; Litkey, P.; Puttonen, E.; Ahokas, E. Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating. ISPRS J. Photogramm. Remote Sens. 2017, 128, 298–313. [Google Scholar] [CrossRef]
  9. Khoshelham, K.; Elberink, S.O. Accuracy and resolution of kinect depth data for indoor mapping applications. Sensors 2012, 12, 1437–1454. [Google Scholar] [CrossRef] [PubMed]
  10. Hyyppä, J.; Karjalainen, M.; Liang, X.; Jaakkola, A.; Yu, X.; Wulder, M.; Hollaus, M.; White, J.C.; Vastaranta, M.; Karila, K.; et al. Remote Sensing of Forests from Lidar and Radar. In Land Resources Monitoring, Modeling, and Mapping with Remote Sensing; CRC Press: Boca Raton, FL, USA, 2015; pp. 397–427. [Google Scholar]
  11. Vaaja, M.T.; Kurkela, M.; Virtanen, J.P.; Maksimainen, M.; Hyyppä, H.; Hyyppä, J.; Tetri, E. Luminance-Corrected 3D Point Clouds for Road and Street Environments. Remote Sens. 2015, 7, 11389–11402. [Google Scholar] [CrossRef]
  12. Hyyppä, J.; Yu, X.; Hyyppä, H.; Vastaranta, M.; Holopainen, M.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Vaaja, M.; Koskinen, J.; et al. Advances in forest inventory using airborne laser scanning. Remote Sens. 2012, 4, 1190–1207. [Google Scholar] [CrossRef]
  13. Jaakkola, A.; Hyyppä, J.; Hyyppä, H.; Kukko, A. Retrieval algorithms for road surface modelling using laser-based mobile mapping. Sensors 2008, 8, 5238–5249. [Google Scholar] [CrossRef] [PubMed]
  14. Pu, S.; Rutzinger, M.; Vosselman, G.; Elberink, S.O. Recognizing basic structures from mobile laser scanning data for road inventory studies. ISPRS J. Photogramm. Remote Sens. 2011, 66, S28–S39. [Google Scholar] [CrossRef]
  15. Vosselman, G.; Dijkman, S. 3D building model reconstruction from point clouds and ground plans. ISPRS Arch. 2001, 34, 37–44. [Google Scholar]
  16. Bosché, F. Automated recognition of 3D CAD model objects in laser scans and calculation of as-built dimensions for dimensional compliance control in construction. Adv. Eng. Inf. 2010, 24, 107–118. [Google Scholar] [CrossRef] [Green Version]
  17. Vaaja, M.; Hyyppä, J.; Kukko, A.; Kaartinen, H.; Hyyppä, H.; Alho, P. Mapping topography changes and elevation accuracies using a mobile laser scanner. Remote Sens. 2011, 3, 587–600. [Google Scholar] [CrossRef]
  18. Heritage, G.; Hetherington, D. Towards a protocol for laser scanning in fluvial geomorphology. Earth Surf. Process. Landf. 2007, 32, 66–74. [Google Scholar] [CrossRef]
  19. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692. [Google Scholar] [CrossRef]
  20. Zhu, L.; Hyyppä, J.; Kukko, A.; Kaartinen, H.; Chen, R. Photorealistic building reconstruction from mobile laser scanning data. Remote Sens. 2011, 3, 1406–1426. [Google Scholar] [CrossRef]
  21. Boyko, A.; Funkhouser, T. Extracting roads from dense point clouds in large scale urban environment. ISPRS J. Photogramm. Remote Sens. 2011, 66, S2–S12. [Google Scholar] [CrossRef]
  22. Nilsson, M. Estimation of tree heights and stand volume using an airborne lidar system. Remote Sens. Environ. 1996, 56, 1–7. [Google Scholar] [CrossRef]
  23. Richter, R.; Döllner, J. Out-of-core real-time visualization of massive 3D point clouds. In Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, Franschhoek, South Africa, 21–23 June 2010; ACM: New York, NY, USA, 2010; pp. 121–128. [Google Scholar]
  24. Koppula, H.S.; Anand, A.; Joachims, T.; Saxena, A. Semantic labeling of 3D point clouds for indoor scenes. In Advances in Neural Information Processing Systems, Proceedings of the Neural Information Processing Systems Conference, Granada, Spain, 12–15 December 2011; Curran Associates Inc.: Red Hook, NY, USA, 2011; pp. 244–252. [Google Scholar]
  25. van Oosterom, P.; Martinez-Rubi, O.; Ivanova, M.; Horhammer, M.; Geringer, D.; Ravada, S.; Tijssen, T.; Kodde, M.; Gonçalves, R. Massive point cloud data management: Design, implementation and execution of a point cloud benchmark. Comput. Graph. 2015, 49, 92–125. [Google Scholar] [CrossRef]
  26. Nebiker, S.; Bleisch, S.; Christen, M. Rich point clouds in virtual globes–A new paradigm in city modeling? Comput. Environ. Urban Syst. 2010, 34, 508–517. [Google Scholar] [CrossRef]
  27. Vosselman, G. Automated planimetric quality control in high accuracy airborne laser scanning surveys. ISPRS J. Photogramm. Remote Sens. 2012, 74, 90–100. [Google Scholar] [CrossRef]
  28. Jakubowski, M.K.; Guo, Q.; Kelly, M. Tradeoffs between lidar pulse density and forest measurement accuracy. Remote Sens. Environ. 2013, 130, 245–253. [Google Scholar] [CrossRef]
  29. Lerma, J.L.; Navarro, S.; Cabrelles, M.; Villaverde, V. Terrestrial laser scanning and close range photogrammetry for 3D archaeological documentation: the Upper Palaeolithic Cave of Parpalló as a case study. J. Archaeol. Sci. 2010, 37, 499–507. [Google Scholar] [CrossRef]
  30. Hyyppä, J.; Inkinen, M. Detecting and estimating attributes for single trees using laser scanner. Photogramm. J. Finl. 1999, 16, 27–42. [Google Scholar]
  31. Lehtola, V.V.; Virtanen, J.P.; Kukko, A.; Kaartinen, H.; Hyyppä, H. Localization of mobile laser scanner using classical mechanics. ISPRS J. Photogramm. Remote Sens. 2015, 99, 25–29. [Google Scholar] [CrossRef]
  32. Fritz, A.; Kattenborn, T.; Koch, B. UAV-based photogrammetric point clouds—Tree stem mapping in open stands in comparison to terrestrial laser scanner point clouds. ISPRS Arch. 2013, 40, 141–146. [Google Scholar] [CrossRef]
  33. Nocerino, E.; Menna, F.; Remondino, F.; Toschi, I.; Rodríguez-Gonzálvez, P. Investigation of indoor and outdoor performance of two portable mobile mapping systems. SPIE Opt. Metrol. 2017, 103320I. [Google Scholar] [CrossRef]
  34. Diakité, A.A.; Zlatanova, S. First experiments with the tango tablet for indoor scanning. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 4, 67–72. [Google Scholar] [CrossRef]
  35. Reitberger, J.; Schnörr, C.; Krzystek, P.; Stilla, U. 3D segmentation of single trees exploiting full waveform LIDAR data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 561–574. [Google Scholar] [CrossRef]
  36. Hug, C. Extracting artificial surface objects from airborne laser scanner data. In Automatic Extraction of Man-Made Objects from Aerial and Space Images (II); Gruen, A., Baltsavias, E., Henricson, O., Eds.; Birkhäuser Verlag: Basel, Switzerland, 1997; pp. 203–212. [Google Scholar]
  37. Guo, L.; Chehata, N.; Mallet, C.; Boukir, S. Relevance of airborne lidar and multispectral image data for urban scene classification using Random Forests. ISPRS J. Photogramm. Remote Sens. 2011, 66, 56–66. [Google Scholar] [CrossRef]
  38. Ahokas, E.; Kaasalainen, S.; Hyyppä, J.; Suomalainen, J. Calibration of the Optech ALTM 3100 laser scanner intensity data using brightness targets. ISPRS Arch. 2006, 36, 1–6. [Google Scholar]
  39. Höfle, B.; Pfeifer, N. Correction of laser scanning intensity data: Data and model-driven approaches. ISPRS J. Photogramm. Remote Sens. 2007, 62, 415–433. [Google Scholar] [CrossRef]
  40. Briese, C.; Pfennigbauer, M.; Ullrich, A.; Doneus, M. Multi-wavelength airborne laser scanning for archaeological prospection. ISPRS Arch. 2013, 40, 119–124. [Google Scholar] [CrossRef]
  41. Wang, C.K.; Tseng, Y.H.; Chu, H.J. Airborne dual-wavelength lidar data for classifying land cover. Remote Sens. 2014, 6, 700–715. [Google Scholar] [CrossRef]
  42. Matikainen, L.; Hyyppä, J.; Litkey, P. Multispectral Airborne Laser Scanning for Automated Map Updating. ISPRS Arch. 2016, 323–330. [Google Scholar] [CrossRef]
  43. Breidenbach, J.; Næsset, E.; Lien, V.; Gobakken, T.; Solberg, S. Prediction of species specific forest inventory attributes using a nonparametric semi-individual tree crown approach based on fused airborne laser scanning and multispectral data. Remote Sens. Environ. 2010, 114, 911–924. [Google Scholar] [CrossRef]
  44. Degnan, J. Scanning, Multibeam, Single Photon Lidars for Rapid, Large Scale, High Resolution, Topographic and Bathymetric Mapping. Remote Sens. 2016, 8, 958. [Google Scholar] [CrossRef]
  45. Tang, H.; Swatantran, A.; Barrett, T.; DeCola, P.; Dubayah, R. Voxel-Based Spatial Filtering Method for Canopy Height Retrieval from Airborne Single-Photon Lidar. Remote Sens. 2016, 8, 771. [Google Scholar] [CrossRef]
  46. Stoker, J.M.; Abdullah, Q.A.; Nayegandhi, A.; Winehouse, J. Evaluation of Single Photon and Geiger Mode Lidar for the 3D Elevation Program. Remote Sens. 2016, 8, 767. [Google Scholar] [CrossRef]
  47. Rönnholm, P. Registration quality-towards integration of laser scanning and photogrammetry. In EuroSDR Official Publication No. 59; EuroSDR: Leuven, Belgium, 2011. [Google Scholar]
  48. Hauglin, M.; Lien, V.; Næsset, E.; Gobakken, T. Geo-referencing forest field plots by co-registration of terrestrial and airborne laser scanning data. Int. J. Remote Sens. 2014, 35, 3135–3149. [Google Scholar] [CrossRef]
  49. Teo, T.A.; Huang, S.H. Surface-based registration of airborne and terrestrial mobile LiDAR point clouds. Remote Sens. 2014, 6, 12686–12707. [Google Scholar] [CrossRef]
  50. Puttonen, E.; Lehtomäki, M.; Kaartinen, H.; Zhu, L.; Kukko, A.; Jaakkola, A. Improved sampling for terrestrial and mobile laser scanner point cloud data. Remote Sens. 2013, 5, 1754–1773. [Google Scholar] [CrossRef]
  51. Wang, J.; Lindenbergh, R.C.; Menenti, M. Evaluating voxel enabled scalable intersection of large point clouds. In Proceedings of the ISPRS Geospatial Week 2015, La Grande Motte, France, 28 September–3 October 2015. [Google Scholar]
  52. Yu, X.; Hyyppä, J.; Kaartinen, H.; Hyyppä, H.; Maltamo, M.; Rönnholm, P. Measuring the growth of individual trees using multi-temporal airborne laser scanning point clouds. In Proceedings of the ISPRS Workshop Laser Scanning; Copernicus GmbH: Göttingen, Germany, 2005; pp. 204–208. [Google Scholar]
  53. Rieg, L.; Wichmann, V.; Rutzinger, M.; Sailer, R.; Geist, T.; Stötter, J. Data infrastructure for multitemporal airborne LiDAR point cloud analysis–Examples from physical geography in high mountain environments. Comput. Environ. Urban Syst. 2014, 45, 137–146. [Google Scholar] [CrossRef]
  54. Béland, M.; Widlowski, J.L.; Fournier, R.A.; Côté, J.F.; Verstraete, M.M. Estimating leaf area distribution in savanna trees from terrestrial LiDAR measurements. Agric. For. Meteorol. 2011, 151, 1252–1266. [Google Scholar] [CrossRef]
  55. Lichti, D.D.; Gordon, S.J.; Tipdecho, T. Error models and propagation in directly georeferenced terrestrial laser scanner networks. J. Surv. Eng. 2005, 131, 135–142. [Google Scholar] [CrossRef]
  56. Béland, M.; Baldocchi, D.D.; Widlowski, J.L.; Fournier, R.A.; Verstraete, M.M. On seeing the wood from the leaves and the role of voxel size in determining leaf area distribution of forests with terrestrial LiDAR. Agric. For. Meteorol. 2014, 184, 82–97. [Google Scholar] [CrossRef]
  57. Yang, B.; Fang, L.; Li, Q.; Li, J. Automated extraction of road markings from mobile LiDAR point clouds. Photogramm. Eng. Remote Sens. 2012, 78, 331–338. [Google Scholar] [CrossRef]
  58. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304. [Google Scholar] [CrossRef]
  59. Anil, E.B.; Tang, P.; Akinci, B.; Huber, D. Deviation analysis method for the assessment of the quality of the as-is Building Information Models generated from point cloud data. Autom. Constr. 2013, 35, 507–516. [Google Scholar] [CrossRef]
  60. Kim, C.; Son, H.; Kim, C. Automated construction progress measurement using a 4D building information model and 3D data. Autom. Constr. 2013, 31, 75–82. [Google Scholar] [CrossRef]
  61. Gaulton, R.; Malthus, T.J. LiDAR mapping of canopy gaps in continuous cover forests: A comparison of canopy height model and point cloud based techniques. Int. J. Remote Sens. 2010, 31, 1193–1211. [Google Scholar] [CrossRef]
  62. Maas, H.G.; Bienert, A.; Scheller, S.; Keane, E. Automatic forest inventory parameter determination from terrestrial laser scanner data. Int. J. Remote Sens. 2008, 29, 1579–1593. [Google Scholar] [CrossRef]
  63. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonom. Robots 2013, 34, 189–206. [Google Scholar] [CrossRef]
  64. Serna, A.; Marcotegui, B. Urban accessibility diagnosis from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013, 84, 23–32. [Google Scholar] [CrossRef] [Green Version]
  65. Lim, H.; Sinha, S.N.; Cohen, M.F.; Uyttendaele, M. Real-time image-based 6-DOF localization in large-scale environments. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; IEEE: New York, NY, USA, 2012; pp. 1043–1050. [Google Scholar]
  66. Woodward, C.; Kuula, T.; Honkamaa, P.; Hakkarainen, M.; Kemppi, P. Implementation and evaluation of a mobile augmented reality system for building maintenance. In Proceedings of the 14th International Conference on Construction Applications of Virtual Reality (CONVR2014), Sharjah, UAE, 16–18 November 2014; Dawood, N., Alkass, S., Eds.; Teesside University: Middlesbrough, UK, 2014; pp. 306–315. [Google Scholar]
  67. Point-cloud Processing Software. Available online: https://www.bentley.com/en/products/product-line/reality-modeling-software/bentley-pointools (accessed on 30 May 2017).
  68. Euclideon Geoverse MDM. Available online: http://www.euclideon.com/home/geoverse-mdm/ (accessed on 6 July 2017).
  69. Potree 1.3. Available online: http://www.potree.org/ (accessed on 30 May 2017).
  70. De Haan, G. Scalable visualization of massive point clouds. In Management of Massive Point Cloud Data: Wet and Dry; van Oosterom, P.J.M., Vosselman, M.G., van Dijk, Th.A.G.P., Uitentuis, M., Eds.; Nederlandse Commissie voor Geodesie: Delft, The Netherlands, 2009; Volume 49, p. 59. [Google Scholar]
  71. Kreylos, O.; Bawden, G.W.; Kellogg, L.H. Immersive Visualization and Analysis of LiDAR Data. In Advances in Visual Computing ISVC 2008; Lecture Notes in Computer Science; Bebis, G., Boyle, R., Parvin, B., Koracin, D., Remagnino, P., Porikli, F., Peters, J., Klosowski, J., Arns, L., Chun, Y.K., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5358. [Google Scholar]
  72. Maas, H.G.; Vosselman, G. Two algorithms for extracting building models from raw laser altimetry data. ISPRS J. Photogramm. Remote Sens. 1999, 54, 153–163. [Google Scholar] [CrossRef]
  73. Suveg, I.; Vosselman, G. Reconstruction of 3D building models from aerial images and maps. ISPRS J. Photogramm. Remote Sens. 2004, 58, 202–224. [Google Scholar] [CrossRef]
  74. Kraus, K.; Pfeifer, N. Advanced DTM generation from LIDAR data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2001, 34, 23–30. [Google Scholar]
  75. Kaartinen, H.; Hyyppä, J. EuroSDR/ISPRS Project, Commission II, “Tree Extraction”. Final Report. In EuroSDR Official Publication No. 53; EuroSDR: Leuven, Belgium, 2008. [Google Scholar]
  76. Musialski, P.; Wonka, P.; Aliaga, D.G.; Wimmer, M.; Gool, L.V.; Purgathofer, W. A survey of urban reconstruction. Comput. Graph. Forum 2013, 6, 146–177. [Google Scholar] [CrossRef]
  77. Haala, N.; Brenner, C. Generation of 3D city models from airborne laser scanning data. In Proceedings of the EARSEL Workshop on LIDAR Remote Sensing on Land and Sea, Tallinn, Estonia, 17–19 July 1997; EARSeL: Münster, Germany, 1997. [Google Scholar]
  78. Gröger, G.; Plümer, L. CityGML–Interoperable semantic 3D city models. ISPRS J. Photogramm. Remote Sens. 2012, 71, 12–33. [Google Scholar] [CrossRef]
  79. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584. [Google Scholar] [CrossRef]
  80. Rodríguez-Cuenca, B.; García-Cortés, S.; Ordóñez, C.; Alonso, M.C. Morphological operations to extract urban curbs in 3D MLS point clouds. ISPRS Int. J. Geo-Inf. 2016, 5, 93. [Google Scholar] [CrossRef]
  81. Cabo, C.; Kukko, A.; García-Cortés, S.; Kaartinen, H.; Hyyppä, J.; Ordoñez, C. An Algorithm for Automatic Road Asphalt Edge Delineation from Mobile Laser Scanner Data Using the Line Clouds Concept. Remote Sens. 2016, 8, 740. [Google Scholar] [CrossRef]
  82. Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sens. 2010, 2, 641–664. [Google Scholar] [CrossRef]
  83. Yang, B.; Dong, Z.; Zhao, G.; Dai, W. Hierarchical extraction of urban objects from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2015, 99, 45–57. [Google Scholar] [CrossRef]
  84. Cabo, C.; Ordoñez, C.; García-Cortés, S.; Martínez, J. An algorithm for automatic detection of pole-like street furniture objects from Mobile Laser Scanner point clouds. ISPRS J. Photogramm. Remote Sens. 2014, 87, 47–56. [Google Scholar] [CrossRef]
  85. Kaartinen, H.; Hyyppä, J.; Yu, X.; Vastaranta, M.; Hyyppä, H.; Kukko, A.; Holopainen, M.; Heipke, C.; Hirschugl, M.; Morsdorf, F.; et al. An International Comparison of Individual Tree Detection and Extraction Using Airborne Laser Scanning. Remote Sens. 2012, 4, 950–974. [Google Scholar] [CrossRef] [Green Version]
  86. Vauhkonen, J.; Ene, L.; Gupta, S.; Heinzel, J.; Holmgren, J.; Pitkänen, J.; Solberg, S.; Wang, Y.; Weinacker, H.; Hauglin, K.; et al. Comparative testing of single-tree detection algorithms under different types of forest. Forestry 2011, 85, 27–40. [Google Scholar] [CrossRef]
  87. Wang, Y.; Hyyppä, J.; Liang, X.; Kaartinen, H.; Yu, X.; Lindberg, E.; Holmgren, J.; Qin, Y.; Mallet, C.; Ferraz, A.; et al. International benchmarking of the individual tree detection methods for modelling 3-D canopy structure for silviculture and forest ecology using airborne laser scanning. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5011–5027. [Google Scholar] [CrossRef]
  88. Liang, X.; Hyyppä, J.; Kaartinen, J.; Holopainen, M.; Melkas, T. Detecting Changes in Forest Structure over Time with Bi-Temporal Terrestrial Laser Scanning Data. ISPRS Int. J. Geo-Inf. 2012, 1, 242–255. [Google Scholar] [CrossRef]
  89. Liang, X.; Hyyppä, J.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Yu, X. The use of a mobile laser scanning for mapping large forest plots. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1504–1508. [Google Scholar] [CrossRef]
  90. Liang, X.; Hyyppä, J.; Kankare, V.; Holopainen, M. Stem curve measurement using terrestrial laser scanning. In Proceedings of the Silvilaser, Tasmania, Australia, 16–20 October 2011. [Google Scholar]
  91. Haala, N.; Kada, M. An update on automatic 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2010, 65, 570–580.
  92. Schall, G.; Mendez, E.; Kruijff, E.; Veas, E.; Junghanns, S.; Reitinger, B.; Schmalstieg, D. Handheld augmented reality for underground infrastructure visualization. Pers. Ubiquitous Comput. 2009, 13, 281–291.
  93. Jazayeri, I.; Rajabifard, A.; Kalantari, M. A geometric and semantic evaluation of 3D data sourcing methods for land and property information. Land Use Policy 2014, 36, 219–230.
  94. Richter, R.; Behrens, M.; Döllner, J. Object class segmentation of massive 3D point clouds of urban areas using point cloud topology. Int. J. Remote Sens. 2013, 34, 8408–8424.
  95. Dorninger, P.; Pfeifer, N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008, 8, 7323–7343.
  96. Kedzierski, M.; Fryskowska, A. Methods of laser scanning point clouds integration in precise 3D building modelling. Measurement 2015, 74, 221–232.
  97. Chuang, T.Y.; Jaw, J.J. Multi-Feature Registration of Point Clouds. Remote Sens. 2017, 9, 281.
  98. Kaasalainen, S.; Pyysalo, U.; Krooks, A.; Vain, A.; Kukko, A.; Hyyppä, J.; Kaasalainen, M. Absolute radiometric calibration of ALS intensity data: Effects on accuracy and target classification. Sensors 2011, 11, 10586–10602.
  99. Kaasalainen, S.; Jaakkola, A.; Kaasalainen, M.; Krooks, A.; Kukko, A. Analysis of incidence angle and distance effects on terrestrial laser scanner intensity: Search for correction methods. Remote Sens. 2011, 3, 2207–2221.
  100. Kehl, C.; Buckley, S.J.; Viseur, S.; Gawthorpe, R.L.; Howell, J.A. Automatic illumination-invariant image-to-geometry registration in outdoor environments. Photogramm. Rec. 2017.
  101. Gaulton, R.; Danson, F.M.; Ramirez, F.A.; Gunawan, O. The potential of dual-wavelength laser scanning for estimating vegetation moisture content. Remote Sens. Environ. 2013, 132, 32–39.
  102. Trenholme, D.; Smith, S.P. Computer game engines for developing first-person virtual environments. Virtual Real. 2008, 12, 181–187.
  103. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA); IEEE: New York, NY, USA, 2011; pp. 1–4.
  104. Chen, D.; Shams, S.; Carmona-Moreno, C.; Leone, A. Assessment of open source GIS software for water resources management in developing countries. J. Hydro-Environ. Res. 2010, 4, 253–264.
  105. Cole, D.M.; Newman, P.M. Using laser range data for 3D SLAM in outdoor environments. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006); IEEE: New York, NY, USA, 2006; pp. 1556–1563.
Figure 1. An example of a point cloud collected from an urban environment by an autonomous car using a Velodyne VLP-16 sensor (Velodyne LIDAR, San Jose, CA, USA). The wall of a nearby building, a passing vehicle, and city trees can be observed in the sample point cloud. By 2030, such data will be collected by several sensors in each car, and such cars are expected to be produced at a rate of more than 10 million units per year.
Figure 2. Example raw data of multispectral ALS for built areas. (The plot is located at WGS84 coordinates 60°09′03″ N, 24°38′08″ E).
Figure 3. A game-engine-based visualization of a dense point cloud, showing the octree structure used (in red).
Figure 4. Example road environment data collected with the Riegl VUX-1HA (RIEGL Laser Measurement Systems GmbH, Horn, Austria) with NovAtel SPAN (UIMU-LCI and Flexpak6) IMU (NovAtel Inc., Calgary, AB, Canada). The point cloud has been colorized with laser backscatter intensity.
Figure 5. UAV-based point cloud from a forest. (The depicted plot is located at WGS84 coordinates 61°11′ N, 25°07′ E).
Figure 6. Example of changes that can be mapped at the national level: laser scanning data from 1998 (a) and from 2000 with a powerline (b), and the corresponding change (c) (two trees and some branches removed).
Figure 7. Overview: acquisition, processing, and application of a dense point cloud.

Virtanen, J.-P.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Turppa, T.; Hyyppä, H.; Hyyppä, J. Nationwide Point Cloud—The Future Topographic Core Data. ISPRS Int. J. Geo-Inf. 2017, 6, 243. https://doi.org/10.3390/ijgi6080243