2. Background
Within the field of vision-based structural health monitoring, damage detection from point clouds has been investigated as a nondestructive assessment technique through multiple workflows. These workflows are built on pattern recognition, supervised or unsupervised machine learning, or change detection analysis; accordingly, the studies discussed in this section can be classified into four broad groups based on the approach used to detect damaged areas. In addition, multiple studies use both active (e.g., lidar) and passive (e.g., imagery) remote sensing technologies to collect data from damaged areas, process these data to detect and quantify the damage, and compare the technologies. As one of the early studies, Olsen et al. collected lidar point clouds of a full-scale beam-column joint [2]. Within the study, the authors quantified the volumetric losses by summing the cross-sectional areas at multiple locations and studied the application of a ground-based lidar (GBL) platform to perform crack mapping via collected colored images (i.e., color information) and intensity return values. However, as Olsen et al. reported, the crack mapping developed through color and intensity information could not represent the exact location of the cracks due to parallax [
2]. Laefer et al. investigated the application of GBL and high-resolution photogrammetry to detect cracking and compared the results with manual visual inspection. To collect lidar data, the authors positioned the GBL platform at various distances, which resulted in point clouds with point-to-point distances of 1.00 to 1.75 mm within the region of interest (ROI). The research concluded that lidar-derived point clouds with such point-to-point spacing were neither reliable nor efficient data sources for detecting cracking in comparison to digital images [
11]. Laefer et al. further investigated the application of GBL-derived point clouds in detecting cracks by developing mathematical equations that characterize the minimum detectable crack width based on the orthogonal distance between the GBL platform and the ROI, the scan angle interval, the crack orientation, and the crack depth [
12]. Based on the developed equations and the verification study, the authors reported that GBL-derived point clouds collected at a distance of less than 10.0 m can reliably detect vertical cracks 5 mm wide or larger, which was supported by the earlier study of Laefer et al. [11]. However, the authors observed that the quantified crack widths were consistently overestimated; in addition, the crack detection was performed using a semi-manual process. More recently, Chen et al. developed an experiment similar to that of Laefer et al. (2014) to evaluate Structure-from-Motion (SfM)-derived point clouds [
13]. Within this study, the authors varied the camera locations to study the effect of different angles and distances to the ROI on the final SfM-derived point cloud. To assess the accuracy of the SfM-derived point clouds, five features on the test specimen were selected and then shifted, and the displacement of these features was measured and compared to the values computed from the lidar-derived point clouds. The authors reported that the SfM-derived point cloud created from images collected at multiple distances and angles resulted in the most accurate measurements.
The studies that incorporate a pattern recognition approach comprise two main components. The first component is a feature extractor that analyzes the input data based on its properties and provides features useful for classifying damaged and undamaged areas. The second component is a classifier that analyzes the extracted features and assigns a label to each input instance. The main difference between pattern recognition and machine learning approaches is that machine learning workflows employ feedback from the predicted labels to the classifier and, in some cases, to the feature extractors (e.g., a convolutional neural network (CNN)) to develop the classifier. Several studies have used pattern recognition to detect damage from point clouds. For example, Torok et al. introduced a damage detection method based on a point cloud of a column with planar surfaces [
14]. Within the introduced workflow, a mesh representation of the input point cloud was identified and transformed such that the vertical direction of the column was parallel with the global vertical direction. The features were computed by calculating the angle between each mesh element's normal vector and a selected reference normal vector. Torok et al. used a straightforward classifier to identify damaged regions, where a region was considered damaged if the corresponding calculated angle fell within a predefined threshold limit. Kim et al. presented a method to detect spalling damage on a flat concrete surface [
15]. Like Torok et al., the proposed workflow used the variation of normal vectors with respect to a reference vector as one of the damage-sensitive features. However, the normal vectors were computed for each point through a Principal Component Analysis (PCA) approach. In addition to normal vectors, Kim et al. used the variation of vertical distances between each point and the plane best fitted to the concrete block surface as the second damage-sensitive feature. Lastly, the classifier combined the result of each damage-sensitive feature through an equation to identify the damaged areas. Valenca et al. presented a more advanced workflow to detect cracks on a concrete surface by combining detection results from 2D images with the computed distance variation of points to a reference plane [
16]. To detect damaged areas, the proposed method initially evaluated the distance variations to identify damaged areas based on the point clouds; then, image-based damage analysis results were combined with the distance variation results to improve the detection accuracy. Erkal and Hajjar proposed a workflow that identified damaged areas through various methods, including the variation of point normal vectors, as well as supplementing the normal-vector-based results with damage detection results based on color and/or intensity information [
3]. Within this study, the normal-vector-based damage detection followed a procedure similar to that of Kim et al. [
15]. However, Erkal and Hajjar recommended three different options to determine the reference vector, which provides more flexibility than previous studies [
3]. The damage identification using intensity or color information was conducted by classifying each point based on a threshold applied to the values of its selected neighbors.
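The per-point PCA normal estimation and the normal-deviation feature used in several of these workflows can be sketched as follows. This is a minimal numpy illustration, not any cited author's implementation; the neighborhood size `k` and the reference vector are arbitrary assumptions.

```python
import numpy as np

def pca_normals(points, k=10):
    """Estimate per-point normals as the eigenvector associated with the
    smallest eigenvalue of each point's k-nearest-neighbor covariance."""
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # k nearest neighbors by Euclidean distance (brute force for clarity)
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        normals[i] = eigvecs[:, 0]  # smallest-eigenvalue direction
    return normals

def normal_deviation(normals, reference):
    """Angle (radians) between each estimated normal and a reference vector,
    a damage-sensitive feature in normal-based workflows."""
    reference = reference / np.linalg.norm(reference)
    cosang = np.abs(normals @ reference)  # abs: a normal's sign is ambiguous
    return np.arccos(np.clip(cosang, -1.0, 1.0))
```

For an undamaged planar patch, the deviations are near zero; points on spalled or cracked geometry produce larger angles that a threshold classifier can flag.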
The next group of studies is developed based on supervised machine learning approaches, where the input instances are associated with known labels. The main task in machine learning-based workflows is to develop a classifier that correctly learns the mapping from the input instances, based on the extracted features, to the output labels by updating the learnable classifier parameters. The features within supervised machine learning approaches are either computed by engineered feature extractors or learned while developing the classifier (i.e., during training). For example, Vetrivel et al. used SfM-derived point clouds and oblique aerial images to detect damaged areas using a multiple-kernel supervised learning approach [
17]. Within this study, Vetrivel et al. extracted features from point cloud instances through PCA and further analyzed the features using a CNN. In parallel, the images corresponding to the 3D point cloud instances were analyzed by a support vector machine classifier to identify the damaged areas. This proposed workflow was developed and tested at the building level [
18]. More recently, Nasrollahi et al. utilized a well-established deep learning network, PointNet [
19], to detect cracks in a flat concrete block [
20]. While deep learning-based models eliminate the need to design damage-sensitive features, their success is tied to the availability of a large number of training instances. To train the classifier, Nasrollahi et al. normalized the coordinates and incorporated the color information to improve the detection results and concluded that the developed model could achieve a detection accuracy of 88%. More recently, Haurum et al. investigated the application of two well-established deep learning classifiers, namely Dynamic Graph CNN (DGCNN) [
21] and PointNet [
19], to detect damaged areas within synthetic point cloud datasets that simulate damage within sewer pipelines [
22]. In this study, the point cloud instances were initially preprocessed to remove erroneous points using the statistical outlier removal presented by Barnett and Lewis [
23]. As the machine learning-based models require each input instance to have a consistent number of points, the input instances were either downsampled or upsampled to meet this criterion. Haurum et al. labeled the points based on three types of defects predominantly observed within sewer pipelines and reported that DGCNN could detect damage labels with a precision and recall of 60%.
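The fixed-input-size requirement mentioned above is usually met by random resampling. A minimal numpy sketch (a generic illustration, not the preprocessing of any cited study) could look like this:

```python
import numpy as np

def resample_to_fixed_size(points, n, rng=None):
    """Down- or upsample a point cloud to exactly n points: random
    selection without replacement when shrinking, and duplication of
    randomly chosen points when growing."""
    rng = np.random.default_rng(rng)
    m = len(points)
    if m >= n:
        idx = rng.choice(m, size=n, replace=False)       # downsample
    else:
        extra = rng.choice(m, size=n - m, replace=True)  # duplicate points
        idx = np.concatenate([np.arange(m), extra])      # upsample
    return points[idx]
```

Note that upsampling only duplicates existing points, which is one reason it can degrade instance accuracy more than downsampling, as discussed later in this section.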
A fourth group of studies is developed based on unsupervised learning algorithms. Contrary to supervised learning, the labels are not known in an unsupervised learning approach; therefore, the main component of an unsupervised learning algorithm is identifying the criteria or rules that categorize the input instances into separate groups or clusters. In these workflows, the spatial locations, color information, and/or intensity return values of points are utilized directly as features or processed to provide a more robust set of discriminative features to classify points within the damaged areas. For example, Kashani and Graettinger (2015) introduced a damage detection method based on the k-means clustering algorithm [
24]. The authors used various rules to cluster points based on their intensity information and on features computed from color information and reported that the developed method could achieve an accuracy of 80%. Hou et al. (2017) used a set of features similar to that of Kashani and Graettinger to detect metal corrosion, loss of section within walls or structural elements, and water staining marks on walls [
7,
24]. However, Hou et al. evaluated multiple clustering algorithms, including k-means, fuzzy c-means, subtractive, and density-based spatial clustering algorithms. The authors reported that the k-means and fuzzy c-means clustering algorithms outperformed the other unsupervised learning algorithms and that intensity information was more sensitive than color information for detecting damaged points within their dataset under varying lighting conditions [
7].
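The intensity-based clustering underlying these studies can be illustrated with a minimal Lloyd's k-means on scalar features. This is a numpy-only sketch; the two-cluster assumption and the deterministic quantile initialization are illustrative choices, not those of the cited works.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Minimal Lloyd's k-means on scalar features (e.g., lidar intensity
    returns); returns per-point labels and the cluster centers."""
    # deterministic initialization at evenly spaced quantiles
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned values
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

With k = 2, one cluster can be interpreted as candidate damaged points and the other as intact surface, which is the essence of the intensity-based unsupervised approaches described above.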
The last group of studies uses change detection analysis to identify temporal changes for a selected ROI by comparing point cloud datasets collected at different intervals or times. The main components of a change detection analysis are aligning the datasets from different times to the reference dataset with the desired level of accuracy and quantifying changes by comparing the corresponding areas in the two datasets. One of the early studies that investigated the application of change detection analysis to identify temporal changes was conducted by Girardeau-Montaut et al. [
25]. Within this study, the point clouds of different time intervals were initially organized through an octree data structure by assigning each point a code calculated based on the maximum subdivision level of the octree. Afterward, the corresponding cells were compared based on three methods: average distance, best-fitting plane orientation, and Hausdorff distance. Girardeau-Montaut et al. reported that the change analysis based on Hausdorff distances resulted in the most accurate change detections. Following Girardeau-Montaut et al., Lague et al. introduced a change detection algorithm to quantify temporal changes based on a direct comparison between two point clouds [
26]. To perform a direct comparison between the two point cloud datasets, the point cloud data were initially divided into multiple small segments, and the normal vector of each segment and its orientation were identified. Afterward, the corresponding segments were compared to identify the surface changes along the direction of the normal vector. Lague et al. reported that the developed method could detect changes as small as 6 mm over a distance of 50 m. Olsen introduced a more comprehensive change detection workflow based on georeferenced point clouds collected at different time intervals [
8]. Within this study, the point cloud datasets were initially segmented into smaller cells and organized using a hashtable data structure for efficient access during the comparison process. Afterward, the datasets were transformed into a unified coordinate system through a georeferencing process, the corresponding cells were compared, and changes within the cells were identified. The cells' dimensions can be adjusted based on the desired level of detail (LOD) and the accuracy of the georeferencing process. Lastly, the author reported that the developed workflow can detect changes at the millimeter level within a controlled environment.
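The per-cell comparison common to these change detection workflows can be illustrated in a few lines. This is a deliberately simplified sketch under strong assumptions (not Olsen's implementation): the two epochs are already co-registered, cells are indexed in the horizontal plane, and only the mean point height per cell is compared.

```python
import numpy as np

def cell_change_map(cloud_t0, cloud_t1, cell=0.05):
    """Compare two co-registered point clouds on a common horizontal grid
    of cell size `cell`; flag cells whose mean height differs by more
    than the cell size between the two epochs."""
    def cell_means(cloud):
        keys = np.floor(cloud[:, :2] / cell).astype(int)
        means = {}
        for key, z in zip(map(tuple, keys), cloud[:, 2]):
            means.setdefault(key, []).append(z)
        return {k: float(np.mean(v)) for k, v in means.items()}
    m0, m1 = cell_means(cloud_t0), cell_means(cloud_t1)
    # only cells observed in both epochs can be compared
    return {k for k in m0.keys() & m1.keys() if abs(m0[k] - m1[k]) > cell}
```

Real workflows replace the mean-height test with richer per-cell statistics (plane orientation, Hausdorff distance), but the align-partition-compare structure is the same.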
As discussed, previous studies have proposed various methods based on different approaches and properties of lidar- and SfM-derived point clouds to detect damaged areas, with varying degrees of efficiency, flexibility, and scalability for real-world applications. The studies that use pattern recognition or unsupervised learning workflows are mainly limited by the features used as the damage-sensitive features. In contrast, supervised machine learning and deep learning-based approaches are mainly limited by the number of training instances and by limitations associated with the classifier or model. For example, the number of points in the input instances must be consistent for supervised learning classifiers, requiring either a downsampling or upsampling process. This process can affect the instance accuracy, particularly during upsampling. As for the damage-sensitive features themselves, studies that use color information can be limited by environmental and lighting conditions. Intensity information has been reported to be less affected by environmental or lighting conditions, but across multiple lidar scans the intensity values can differ for the same object, which requires a calibration process. Besides color and intensity information, multiple studies use local geometric features as the damage-sensitive features. However, the methods based on these features are limited to evaluating planar surfaces or, at most, point clouds representing a single geometry (e.g., a cylindrical shape). Additionally, the point density varies throughout a dataset and may produce geometric features similar to those representing damaged areas, reducing the overall accuracy of the workflow. Damaged areas are commonly characterized by random and unique shapes and dimensions; therefore, instances that represent damage are rarely represented in a dataset.
As pattern recognition approaches can be efficiently optimized for the classification of datasets with rare instances, these methods are well suited to the task of damage detection.
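A common mitigation for the point density variation noted above is voxel-grid downsampling, in which all points inside each voxel are replaced by their centroid, equalizing density across the cloud. A minimal numpy sketch (the grid step is an arbitrary parameter, and this is a generic illustration rather than any cited study's preprocessing):

```python
import numpy as np

def voxel_downsample(points, step):
    """Replace the points in each (step x step x step) voxel with their
    centroid, producing a cloud of roughly uniform density."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / step).astype(int)
    # sort so that points sharing a voxel key become contiguous
    order = np.lexsort(keys.T)
    keys, points = keys[order], points[order]
    # group boundaries occur where the voxel key changes
    boundaries = np.any(np.diff(keys, axis=0) != 0, axis=1)
    starts = np.r_[0, np.nonzero(boundaries)[0] + 1]
    ends = np.r_[starts[1:], len(points)]
    return np.array([points[s:e].mean(axis=0) for s, e in zip(starts, ends)])
```

The choice of `step` trades off density equalization against the smallest geometric change that remains detectable, which is the resolution trade-off revisited in the Conclusions.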
5. Conclusions
This manuscript analyzed the lidar-derived point clouds of a culturally prominent structure,
Palazzo Vecchio, through evaluating and expanding the damage detection and characterization workflow proposed by Mohammadi et al. [
9]. The lidar data were collected from one room, known as
Sala degli Elementi. The damage detection method evaluated within this study uses three damage-sensitive features to detect surface damage and cracks from the point clouds at three separate resolutions. The method combines three surface feature descriptors that are invariant to the point cloud's underlying geometry, which results in a more scalable damage detection algorithm that does not rely on color or intensity data from a lidar scanner or supplemental sources. Furthermore, the developed method classifies the detected damaged areas into a selected number of confidence intervals. Through a detailed evaluation of the results for each damage-sensitive feature, it was observed that only the normal-based damage-sensitive features were able to detect the cracking while minimizing the presence of surface anomalies and bulges; therefore, only the normal-based results were used within this study. Furthermore, to separate the detected cracking from surface anomalies with feature values similar to those of cracking areas, a workflow based on the
OPTICS clustering method was used.
To validate the workflow’s performance and scalability, two point cloud segments of the east wall of
Sala degli Elementi that sustained heavy cracking were evaluated at two resolutions and qualitatively compared with crack mapping conducted manually from images. This was done because the team could not perform an in-person crack assessment and quantification due to access limitations. The selected segments represent a predominately planar surface. As reported by Laefer et al. and as observed in the analysis results of the wall segments within this study, GBL-derived point clouds can be used to detect cracking of 5 mm or larger that results in a change in local geometry [
11,
12]. However, various nonuniformities existed throughout the surface. The damage detection analysis was performed at two spatial resolutions of 5 mm and 10 mm, and the detected damaged areas were further classified into multiple confidence intervals. The initial results demonstrated that the developed method detected not only the cracking but also all minor surface anomalies, making it difficult to visualize and detect the cracking at this stage. As a result, the various confidence intervals were investigated, and it was determined that the first three intervals for both spatial resolutions could depict the cracking while minimizing the presence of detected surface anomalies. Moreover, the identified damaged areas, which included cracking and other surface nonuniformities, were further separated using the
OPTICS clustering algorithm for both segments at a 10 mm resolution, which enabled a direct comparison of the detected cracks with the manual crack mapping results.
The analysis results demonstrated that if the cracking results in local geometric changes equal to or greater than the voxelization grid step, the proposed method can be used to detect these defective areas. However, it was noted that minor surface anomalies with feature values similar to those of cracking could be classified with the cracking, in particular for the resolution or voxelization grid step of 5 mm. A segmentation step was proposed to combat this issue and isolate the larger cracks of interest. This workflow enables a more objective comparison between the manual crack mapping and the areas identified using the workflow. However, a number of limitations were identified within this study. The first limitation corresponds to the type of features used. As the damage-sensitive features used within this study are developed based on identifying changes in local geometric features, they are susceptible to point density and its variation within a point cloud. While the effect of point density variation is minimized through the voxelization process (as suggested by Mohammadi et al. [9]), smaller voxelization grid steps may not reduce the point density variation as effectively as a larger grid step. This can be observed by comparing the damage detection results of segment A at the 5 mm and 10 mm resolutions. The second limitation of this study corresponds to isolating the cracks from surface nonuniformities. While the
OPTICS clustering algorithm separated the primary shear cracking from other defects, most sparse detections were grouped with the cracked areas. As a result, the primary future research direction includes developing a method to improve the clusters via region growing, considering point locations and other surface similarities, and enabling direct quantification of the damaged areas, including their area, depth, length, and width, based on the segmented damaged regions.
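The crack-isolation step described above can be approximated with the OPTICS implementation in scikit-learn. This is a sketch rather than the study's exact configuration: `min_samples`, `eps`, and the DBSCAN-style cluster extraction are assumed parameters chosen for illustration.

```python
import numpy as np
from sklearn.cluster import OPTICS

def isolate_dense_detections(points, min_samples=5, eps=1.0):
    """Cluster detected damage points with OPTICS; sparse detections that
    never become density-reachable are labeled -1 (noise), separating
    connected crack regions from scattered false positives."""
    optics = OPTICS(min_samples=min_samples, max_eps=eps,
                    cluster_method="dbscan", eps=eps)
    return optics.fit_predict(points)
```

Points labeled -1 correspond to the sparse detections discussed above; a subsequent region-growing step could reassign those that lie close to an existing crack cluster.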