
Study of Damage Quantification of Concrete Drainage Pipes Based on Point Cloud Segmentation and Reconstruction

1 School of Water Conservancy Engineering, Zhengzhou University, Zhengzhou 450001, China
2 National Local Joint Engineering Laboratory of Major Infrastructure Testing and Rehabilitation Technology, Zhengzhou 450001, China
3 Collaborative Innovation Center of Water Conservancy and Transportation Infrastructure Safety, Zhengzhou 450001, China
4 School of Civil Engineering, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Buildings 2022, 12(2), 213; https://doi.org/10.3390/buildings12020213
Submission received: 3 January 2022 / Revised: 9 February 2022 / Accepted: 11 February 2022 / Published: 15 February 2022
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract
The urban drainage system is an important part of the urban water cycle. However, owing to the aging of drainage pipelines and other external causes, damages such as cracks, corrosion, and deformation of underground pipelines can lead to serious consequences such as urban waterlogging and road collapse. At present, the detection of underground drainage pipelines mostly focuses on the qualitative identification of pipeline damage and cannot quantitatively analyze it. Therefore, a method to quantify the damage volume of concrete pipes that combines surface segmentation and reconstruction is proposed. An RGB-D sensor is used to collect the damage information of the drainage pipeline, and the collected depth frames are registered to generate the pipeline’s surface point cloud. Voxel sampling and Gaussian filtering are used to improve data processing efficiency and reduce noise, respectively, and the RANSAC algorithm is used to remove the pipeline’s surface information. The ball-pivoting algorithm is used to reconstruct the surface of the segmented damage data together with the pipe’s surface information, and finally to obtain the damage volume. To evaluate the method, we conducted experiments on real-world materials. The results show that the proposed method measures the external damage volume of concrete pipes with an average relative error of 7.17% and the internal damage volume with an average relative error of 5.22%.

1. Introduction

As an important guarantee of urban development and healthy human life, urban drainage systems separate sewage from clean water, thereby improving the quality of human life [1]. With economic growth and the continuous expansion of urban scale, the total length of drainage pipelines in China continues to grow. Consequently, the aging of the pipeline system is becoming more and more serious. Road collapse and environmental pollution caused by pipeline aging have had considerable social impact [2]. Therefore, municipal departments need to spend large amounts of money and resources on the maintenance of sewage pipelines [3]. To reduce the serious consequences caused by drainage pipe defects, it is very important to detect and evaluate the defects as early as possible [4]. Currently, the main methods of pipeline inspection include sonar inspection [5], pipeline periscope inspection [6], ground-penetrating radar systems [7], and closed-circuit television inspection [4]. Sonar detection can detect mud and foreign matter in pipelines well, but its ability to detect structural defects is poor. Pipeline periscope detection has the advantages of low cost and high speed and can obtain clear image data for short drainage pipelines, but it cannot operate in pipelines with high water levels. Ground-penetrating radar (GPR) images can represent subsidence or collapse around a pipeline well, but the method performs poorly for other pipeline defects. Closed-circuit television (CCTV) inspection has been widely used in pipeline detection both domestically and abroad [5]. Compared with traditional detection methods, CCTV has a higher level of intelligence and provides more concise and legible image results than laser detection and radar tests. The above four methods have the advantages of good accuracy and little influence from the testing environment. However, due to the limitations of the instruments themselves, they can only support qualitative analysis of pipeline damage, not quantitative assessment.
In recent years, the use of 3D information for structural health analysis has become a new trend. 2D images can be used for some structural damage detection tasks, such as crack detection, while 3D information provides solutions for other types of detection; for example, the depth and volume of a damaged part can be calculated from its 3D information. Some authors have used Structure-from-Motion (SfM) [8] to fuse two-dimensional images with scale parameters and establish a 3D virtual model. Mahami et al. [9] combined SfM with an MVS algorithm to detect targets and thereby automatically monitor the progress of an entire project. Golparvar-Fard et al. [10] verified that SfM can support remote assessment of infrastructure before and after disasters. Torok et al. [11] developed a new crack recognition algorithm based on the SfM method, in which robots collect post-disaster information to complete 3D surface damage detection and analysis. Nowak et al. [12] used TLS to scan an entire historic building structure and obtained a nearly complete architectural geometry.
Other authors have used 3D laser scanning equipment to quickly capture structural information and build 3D point cloud models of infrastructure. Youn et al. [13] used 3D scanners and Revit to build a platform containing various kinds of historical information that can serve as a digital twin to record wood deformation and crack information. Zeibak-Shini et al. [14] used laser scanning to compare a generated as-damaged BIM model with an as-built BIM model to obtain a preliminary estimate of the damaged parts of a reinforced concrete frame. Wang et al. [15] used 3D point clouds for the quality inspection of prefabricated building wall panels, quickly and efficiently classifying and segmenting the panels that need correction. Turkan et al. [16] detected concrete cracks by combining a wavelet neural network with 3D terrestrial laser scanning data. Liu et al. [17] used a 3D camera combined with a classical edge detection algorithm and a fuzzy logic detection algorithm to perform sharp edge detection on installed panels.
Another way to obtain 3D point clouds is to use a depth camera (RGB-D) that incorporates depth information. Some authors have proposed obtaining 3D point cloud data from cheap depth cameras to quantify the damage of road potholes [18]. All of these methods quantify the damage by fixing the distance and angle between the camera and the measured object, which limits their use in other scenes and makes them unreliable for automatic detection.
3D background removal is an important problem in surface damage research. At present, 3D point cloud segmentation methods mainly include methods based on region growing, methods based on clustering features, and methods based on model fitting. The region growing method proposed by Besl and Jain (1988) [19] proceeds in two stages: first, seed points are selected; second, adjacent points are merged according to certain criteria (normal vectors within a certain threshold range). Tóvári and Pfeifer [20] proposed a point-based region growing algorithm that merges adjacent points into the same set according to their normal vectors and distance thresholds. Methods based on clustering features divide the data set into different classes according to certain criteria (distance or normal vectors). Biosca and Lerma [21] developed a fuzzy clustering segmentation method that merges neighboring points whose distance is less than a set threshold into the nearest cluster. Methods based on model fitting mainly rely on two algorithms, namely, the Hough Transform (HT) algorithm proposed by Ballard et al. [22] and the random sample consensus (RANSAC) algorithm proposed by Fischler et al. [23]. The HT algorithm uses a voting scheme to identify parameterized models. Rabbani et al. [24] automatically detected cylinder models in point clouds based on the HT algorithm. Although the HT algorithm can segment 3D point clouds well, it consumes a large amount of memory and computing time [25]. The RANSAC algorithm first randomly selects data points, estimates model parameters from the selected points, then tests the remaining points against the model, and finally selects the model with the maximum number of inliers as the best model. Chen et al. [26] segmented polyhedral roofs using an improved RANSAC algorithm and classified primitive elements using a region growing algorithm.
Surface reconstruction is an important method for obtaining dimensional data from the damage data after segmentation. At present, the common surface reconstruction methods are polygonal mesh reconstruction, parametric surface reconstruction, and implicit surface reconstruction. The most widely used is polygonal mesh reconstruction, which describes object surfaces with points, lines, and planes using a simple mathematical model. Delaunay triangulation [27] is a classical method of polygonal mesh reconstruction that was first proposed by the Russian mathematician Boris Delaunay. Delaunay triangle networks have two characteristics: the circumcircle of each Delaunay triangle contains no other points in the planar domain (the empty circumcircle property), and flipping the shared edge of two adjacent triangles does not increase the minimum of the six interior angles (the max-min angle property). Delaunay triangulation has the advantages of regularity and optimality, and many researchers have developed polygonal mesh reconstruction methods based on it. Boissonnat and Cazals [28] used natural neighbor interpolation to construct smooth surfaces based on Delaunay triangulation and Voronoi diagrams. Amenta et al. [29] reconstructed surfaces from unordered point clouds based on Voronoi diagrams. Bernardini et al. [30] proposed a further polygonal mesh reconstruction method, the ball-pivoting algorithm (BPA). The basic principle of the algorithm is to pivot a ball of radius ρ that touches exactly three data points, which form a triangle; the ball continues to roll over the surface of the point cloud and generates the next triangle until all the data points in the data set have been processed. The BPA method has the advantages of strong robustness and high efficiency.
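To make the BPA step concrete, the following is a minimal sketch using Open3D's built-in ball-pivoting reconstruction; the input file name and the radius values are illustrative assumptions, not parameters from this study.

```python
# Minimal ball-pivoting reconstruction sketch with Open3D.
# "damage.ply" and the radii below are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("damage.ply")   # point cloud to be meshed
pcd.estimate_normals()                        # BPA requires oriented normals

# Several ball radii so that both fine and coarse regions get triangulated.
radii = o3d.utility.DoubleVector([0.005, 0.01, 0.02])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
o3d.io.write_triangle_mesh("damage_mesh.ply", mesh)
```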
In order to quantify the damage volume of underground pipelines under the interference of a complex environment, we propose a quantitative method for assessing the damage volume of underground drainage pipelines that integrates 3D point cloud surface segmentation and reconstruction. On the basis of damage segmentation, damage reconstruction and surface reconstruction were carried out with the help of pipeline surface information, and the algorithm is highly portable. The method consists of four parts: (1) conversion from 2D depth frames to 3D point clouds according to the transformation between the instrument's internal coordinates and world coordinates; (2) preprocessing of the data set by combining voxel sampling and a Gaussian filter; (3) estimation of the surface model parameters by the random sample consensus algorithm and removal of the pipeline surface point cloud; (4) reconstruction of the damage data together with the surface point cloud, followed by surface reconstruction with the BPA algorithm to obtain the real damage volume. The rest of this paper comprises Section 2: Concrete Pipeline Damage Volume Quantitative Detection Framework; Section 3: Experiment; Section 4: Performance Analysis; Section 5: Discussion; and Section 6: Conclusions.

2. Concrete Pipeline Damage Volume Quantitative Detection Framework

The main objective of this study was to accurately quantify concrete pipe spalling damage using an inexpensive depth camera. As shown in Figure 1, the algorithm can be divided into four steps: (1) depth data acquisition based on an RGB-D sensor; (2) preprocessing of the point cloud data; (3) pipeline surface segmentation and damage clustering; (4) surface reconstruction of the entire damage point cloud from the surface and damage point clouds provided in Step 3, and quantification of the volume.

2.1. Data Acquisition

2.1.1. RGB-D Camera

A Microsoft Azure Kinect DK depth camera was used in this study; its technical parameters are shown in Table 1. The device provides 3840 × 2160 pixel RGB images and 1024 × 1024 pixel depth images. The depth camera is based on the time-of-flight (ToF) principle and is composed of an infrared emitter and an infrared sensor. The infrared emitter transmits a light pulse toward the observed object, and the infrared sensor receives the pulse reflected from the object. Since the distance between the infrared sensor and the infrared emitter is known, the equipment can compute the 3D coordinates of each pixel from the travel time of the infrared light.

2.1.2. Depth Frame-Mapping 3D Point Cloud

3D point cloud data were obtained through a coordinate-system transformation of the original depth data. A space point Q(x, y, z) and its mapping point q(u, v, d) on the depth image (d is the depth value at that point) are shown in Figure 2. O–X_C Y_C Z_C is the camera coordinate system, O_W–X_W Y_W Z_W is the world coordinate system, and UV is the depth image coordinate system. Formulas (1)–(3) follow from the positional relationships in space.
u = \frac{f_x x}{z} + c_x \quad (1)
v = \frac{f_y y}{z} + c_y \quad (2)
d = z \cdot s \quad (3)
where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis, c_x and c_y give the position of the camera's principal point (aperture center), and s is the scale factor of the depth map. Formulas (1)–(3) can be written in matrix form as Formula (4).
z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = C \, [R \mid T] \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \quad (4)
C = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \quad (5)
R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (6)
T = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^{\mathsf{T}} \quad (7)
As shown in Formulas (5)–(7), the matrix C is the internal parameter matrix of the camera. Because the world coordinate origin coincides with the camera origin, the rotation matrix R is the identity and the translation vector T is zero. MATLAB provides convenient access to the raw depth data, and the depth data are transformed from a 2D image in the depth camera coordinate system into a 3D point cloud in the world coordinate system.
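As a language-agnostic illustration of this back-projection (the authors used MATLAB), the following NumPy sketch inverts Formulas (1)–(3) per pixel; the intrinsic values in the usage comment are placeholders, not the calibration of the camera used in this study.

```python
# Back-projection of a depth frame into a 3D point cloud, inverting
# Formulas (1)-(3). Intrinsics fx, fy, cx, cy and the depth scale s are
# placeholders; real values come from the Azure Kinect calibration.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, s=1000.0):
    """depth: (H, W) array of raw depth values; s: depth units per metre."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth / s                                    # invert Formula (3): d = z * s
    x = (u - cx) * z / fx                            # invert Formula (1)
    y = (v - cy) * z / fy                            # invert Formula (2)
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop pixels with no depth return

# Example with assumed intrinsics:
# cloud = depth_to_point_cloud(depth_img, fx=504.0, fy=504.0, cx=512.0, cy=512.0)
```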

2.2. Data Preprocessing

2.2.1. Voxel Sampling

In the process of scanning, 3D scanners generate point clouds of different densities depending on the scanning distance, which complicates subsequent 3D point cloud processing (such as point cloud registration). Moreover, an excessive number of sampled points makes subsequent calculation complicated and reduces computational efficiency. The main methods for downsampling point clouds are random downsampling and the voxel grid method. Random downsampling has high computational efficiency but preserves point cloud shape poorly. The focus of this paper is obtaining the volume of pipeline damage, which places higher demands on the detailed representation of the 3D point cloud. Therefore, the voxel grid method [31], which better preserves the 3D point cloud shape, was selected in this study.
The original point cloud M = {m_i}, i = 1, …, p, m_i = (x_i, y_i, z_i) was taken as input, its extent in space was determined, and a spatial voxel grid of appropriate size was constructed. After the division, the point cloud was wrapped by the cubic grid. Figure 3 is a schematic diagram of the voxel segmentation of the point cloud data, and Figure 4 shows the point cloud within a voxel.
After the voxel grid was divided, the complete damage point cloud was partitioned into several point cloud subsets. The point cloud data in each voxel were then compressed according to a criterion that retains the regional characteristics of the subset. This algorithm selects the point m_j(x, y, z) closest to the voxel's center of gravity G(x, y, z) as the characteristic information of the voxel. As shown in Figure 5, D_X, D_Y, and D_Z are the length, width, and height of a unit voxel, respectively. Within a single voxel, the distances from the points m_p, m_q, m_k, m_s, and m_j to the center of gravity G are compared, and m_j is finally selected as the feature point describing that voxel.
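A minimal sketch of this nearest-to-centroid voxel downsampling is given below; the voxel size is an assumed parameter.

```python
# Sketch of the voxel downsampling described above: each occupied voxel is
# represented by the point closest to the voxel's centre of gravity G.
import numpy as np

def voxel_downsample_nearest(points, voxel_size):
    """points: (N, 3) array; returns one representative point per voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    order = np.lexsort(keys.T)                              # sort so voxels are contiguous
    keys, points = keys[order], points[order]
    splits = np.flatnonzero(np.any(np.diff(keys, axis=0), axis=1)) + 1
    out = []
    for group in np.split(points, splits):                  # one group per voxel
        g = group.mean(axis=0)                              # centre of gravity G
        out.append(group[np.argmin(np.linalg.norm(group - g, axis=1))])
    return np.asarray(out)
```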

2.2.2. Gaussian Filtering Reduces Surface Noise

Due to the limits of equipment accuracy and operator experience, environmental factors, the diffraction characteristics of electromagnetic waves, changes in the surface properties of the measured object, and the effects of data stitching and registration, some noise points inevitably appear when acquiring point cloud data. In addition to random noise caused by external interference such as line-of-sight occlusion and obstacles, point cloud data usually contain discrete points far away from the main body of the measured object, namely, outliers. Different acquisition devices produce different point cloud noise structures. Other tasks that can be accomplished through filtering and resampling include hole repair, minimization of information loss, and compression of massive point cloud data.
In order to eliminate outlier points that do not conform to their neighborhood, introduced by the 3D point cloud acquisition equipment when sampling the environment, a statistical outlier elimination method [32] was used. First, for the input point cloud M = {m_i}, i = 1, …, p, m_i = (x_i, y_i, z_i), the distance d_ij from each point to each of its k neighborhood points was calculated. Second, the distances were modeled by a Gaussian distribution d ~ N(μ, δ²), and the mean μ and standard deviation δ of the point-to-neighborhood distances were calculated, as shown in Formulas (8) and (9). Finally, the average nearest-neighbor distance of each point was checked; if it exceeded the threshold determined by μ and δ, the point was treated as an outlier and removed from the point cloud data set. In short, this method reduces the number of points, shortens the processing time of subsequent steps, and improves processing accuracy.
\mu = \frac{1}{nk} \sum_{i=1}^{n} \sum_{j=1}^{k} d_{ij} \quad (8)
\delta = \sqrt{\frac{1}{nk} \sum_{i=1}^{n} \sum_{j=1}^{k} \left( d_{ij} - \mu \right)^2} \quad (9)
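The following sketch implements this statistical removal with a SciPy KD tree; the neighborhood size k and the multiplier α applied to δ are assumed values, since the paper does not report them.

```python
# Statistical outlier removal following Formulas (8)-(9): model each point's
# mean distance to its k nearest neighbours as Gaussian and drop points whose
# mean distance exceeds mu + alpha * delta. k and alpha are assumed values.
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=20, alpha=1.0):
    tree = cKDTree(points)
    # Query k+1 neighbours because the nearest neighbour is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)       # mean neighbour distance per point
    mu, delta = mean_d.mean(), mean_d.std()  # Formulas (8) and (9)
    return points[mean_d <= mu + alpha * delta]
```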

2.3. Surface Segmentation

2.3.1. RANSAC Algorithm to Remove Surface Point Clouds

The RANSAC algorithm considers a sampled data set of size n for initial model building. For the plane model, three points are randomly sampled to determine the parameters a, b, c, and d of the plane equation in Formula (10). The cylinder model equation is shown in Formula (11); the parameters to be determined are the point on the cylinder axis (x_0, y_0, z_0), the axis direction vector (a, b, c), and the cylinder radius r_0, which are determined by randomly sampling seven points.
ax + by + cz + d = 0 \quad (10)
(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - \left[ a(x - x_0) + b(y - y_0) + c(z - z_0) \right]^2 = r_0^2 \quad (11)
According to the model parameters obtained from random sampling, the remaining data points were substituted into the model, the error was calculated, and the remaining points were screened by an appropriate distance threshold. For the plane model, the distance D is given in Formula (12); if the error D of a point was less than the threshold σ, the point was an interior point and was included in the model. The cylinder model is treated similarly, with the error f shown in Formula (13); f is the difference between the squared distance from the point to the cylinder axis, obtained by substituting the point into the cylinder equation, and the squared radius. If the number of interior points was greater than 60% of the total data set, the parameters were kept as a candidate model. We repeated the above operations until the end of the iterations and finally selected the model with the most interior points as the segmentation model.
D = \frac{\left| ax + by + cz + d \right|}{\sqrt{a^2 + b^2 + c^2}} \quad (12)
f = (x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - \left[ a(x - x_0) + b(y - y_0) + c(z - z_0) \right]^2 - r_0^2 \quad (13)
The RANSAC algorithm is sensitive to the threshold setting: if the threshold is set too small, the algorithm becomes unstable, and if it is set too large, the algorithm fails. For the cost function used in the RANSAC algorithm, when the error exceeded the threshold, the cost contribution was changed from a constant value of 1 to the threshold value itself.
Figure 6 and Figure 7 are the schematic diagrams of surface segmentation based on planes and cylinders, respectively.
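As an illustration of the surface removal step, the sketch below uses Open3D's built-in RANSAC plane segmentation (cylinder fitting is not built into Open3D and would follow the same inlier-counting pattern with Formula (13)); the threshold, iteration count, and file name are assumptions.

```python
# Plane removal with RANSAC, as in Section 2.3.1, using Open3D's built-in
# segment_plane. Threshold, iterations, and file name are illustrative.
import open3d as o3d

pcd = o3d.io.read_point_cloud("pipe_surface.ply")             # hypothetical input
plane, inliers = pcd.segment_plane(distance_threshold=0.005,  # sigma in Formula (12)
                                   ransac_n=3,                # 3 points define a plane
                                   num_iterations=1000)
a, b, c, d = plane                                            # Formula (10) coefficients
surface = pcd.select_by_index(inliers)                        # fitted pipe surface
damage = pcd.select_by_index(inliers, invert=True)            # remaining damage points
```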

2.3.2. Damage Clustering Segmentation

The Euclidean clustering method was used for damage clustering, with a KD tree accelerating the algorithm by grouping and indexing the 3D point cloud data. The algorithm aggregates points lying within a set distance threshold into the same cluster through the KD tree.
The damage clustering results obtained by Euclidean clustering are shown in Figure 8 and Figure 9.
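A compact sketch of KD-tree-accelerated Euclidean clustering is shown below; the distance tolerance and minimum cluster size are assumed values.

```python
# Euclidean clustering via breadth-first growth over a KD tree, as described
# above. The distance tolerance and minimum cluster size are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.01, min_size=50):
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:                       # grow the cluster by neighbourhood search
            nbrs = tree.query_ball_point(points[frontier.pop()], r=tol)
            new = set(nbrs) & unvisited
            unvisited -= new
            cluster |= new
            frontier.extend(new)
        if len(cluster) >= min_size:          # discard tiny noise clusters
            clusters.append(np.fromiter(cluster, dtype=int))
    return clusters
```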

2.4. Volume Quantization

2.4.1. Damage Reconstruction

After the damage point cloud was obtained, it was reconstructed according to the surface segmentation model. This paper adopts a parametric model projection method. For the plane model, the damage point cloud is mapped according to Formula (10) with the fitted parameters a, b, and c, projecting the 3D data onto the two-dimensional plane. If the damage point cloud was segmented according to the cylinder model, the parameters x_0, y_0, z_0, a, b, c, and r_0 from Formula (11) are used to project it onto the cylinder surface. The damaged surface data and the damage data were then combined to form complete damage data.
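A minimal sketch of the plane-model projection is given below; it assumes the plane coefficients come from the RANSAC fit of Formula (10).

```python
# Projection of damage points onto the fitted plane, to rebuild the missing
# surface patch over the damage region (plane-model case).
import numpy as np

def project_to_plane(points, a, b, c, d):
    """Project (N, 3) points onto the plane a*x + b*y + c*z + d = 0."""
    n = np.array([a, b, c], dtype=float)
    norm = np.linalg.norm(n)
    dist = (points @ n + d) / norm            # signed point-plane distance
    return points - np.outer(dist, n / norm)  # slide each point along the unit normal
```

Merging the projected patch with the damage points closes the region, so that the subsequent surface reconstruction can yield a watertight volume.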

2.4.2. Surface Reconstruction

The key step of volume quantization was 3D point cloud surface reconstruction, using the Alpha Shapes algorithm as the 3D point cloud contour detection algorithm. As shown in Figure 10, the main flow of the algorithm is as follows:
(1) The algorithm selects any point q in the point cloud data and sets the rolling circle radius α. All data points within a distance of 2α from point q are recorded as the set Q.
(2) A data point q1 is selected in the set Q. The segment between q and q1 is a chord, and there are two circles of radius α passing through both points; the equations of these circles can be calculated from the positional relationships in space.
(3) The distances between the remaining points in Q and the centers of the two circles are calculated. If every distance is greater than α (there are no other points inside either circle), point q is a boundary point, and the next point is judged.
(4) If the distance from some point in Q to a circle center is less than α (there are other points inside both circles), the data point q1 is replaced by another point in Q and the above calculation and comparison are repeated. If some q1 makes (2) and (3) hold, point q is a boundary point, and the next point is judged.
(5) If no such point exists, point q is not a boundary point, and the next point is judged.
After the contour of the data set was detected by the Alpha Shapes algorithm, the edge points were connected to generate a triangulated mesh, completing the 3D point cloud surface reconstruction, as shown in Figure 11. The damage volume was finally obtained from the reconstructed mesh.
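To illustrate the final quantification step, the sketch below reconstructs an alpha-shape mesh with Open3D and reads off its volume; the alpha value and file name are assumptions, and get_volume() requires a watertight mesh.

```python
# Alpha-shape surface reconstruction and volume quantification with Open3D.
# "closed_damage.ply" and the alpha value are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("closed_damage.ply")   # damage + projected surface patch
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha=0.02)
if mesh.is_watertight():                             # get_volume() needs a closed mesh
    print("damage volume:", mesh.get_volume())
```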

3. Experiment

3.1. Damage Real Volume Measurement Experiment

3.1.1. Damage Setting

In order to measure the errors of the depth camera and the algorithm, we fabricated damages of different materials, surface shapes, and sizes. First, a polystyrene foam board with a size of 200 cm × 100 cm × 5 cm and a concrete board with a size of 50 cm × 40 cm × 8 cm were used as the base materials for the flat-plate damage volume measurements. Eight and five damages were made in the foam board and the concrete board, respectively, with randomly assigned sizes. The damaged inner surface of the foam board required heat treatment to eliminate roughness, as shown in Figure 12. Second, a concrete pipe with a diameter of 800 mm and a length of 1000 mm was used as the base material for the quantitative experiment on concrete pipe damage volume. Three damages each were set outside and inside the pipe, with randomly set sizes, as shown in Figure 13.

3.1.2. Measurement of Real Damage Volume

To measure the true volume of the damage, a suitable impression material had to be chosen first. In addition to good elasticity, fluidity, and plasticity, the ideal impression material must be chemically stable and easy to separate from the model. An alginate material was selected as the damage impression material; it is mainly composed of alginate, talc, diatomite, and other inert fillers. Its impressions are clear and precise, making it well suited to measuring the real volume of the damage.
A drainage (water displacement) method was used to measure the real damage volume. After the alginate material was fully mixed with water, it was pressed into the damage to generate an impression. The impression was then placed into an overflow cup, and the volume of water displaced from the cup was taken as the damage volume. In order to prevent measurement errors caused by surface smoothing of the material and air bubbles introduced during impression preparation, the impression and drainage measurements were repeated three times.

3.2. Volume Quantization of 3D Point Cloud Damage Test

3.2.1. 3D Point Cloud Damage Shooting

The depth camera was used to photograph the foam board damage and the concrete board damage under controlled lighting and distance conditions. The distance was set to 100–200 cm, with a camera position every 25 cm. Because the depth camera is sensitive to sunlight, the experiments were carried out under indoor lighting.
On the basis of the distance and light conditions, angle settings were added for the damage shooting of the concrete pipe.
We adjusted the depth camera to the same height as the damage site and aligned the infrared device on its head with the damage center of the pipeline. First, we set a camera position every 25 cm at distances of 50–175 cm, as shown in Figure 14. Second, at each camera position, we set shooting angles of −6°, −3°, 0°, 3°, and 6°, where 0° directly faces the damaged part and positive angles are toward the right; the angles refer to the rotation of the depth camera head relative to the damage. Finally, at each angle, measurements were taken with and without lighting.
Due to the space limitations inside the concrete pipe, damage shooting inside the pipe was varied mainly over angle and light conditions. Hot-melt adhesive was used to fix the depth camera to the inner wall of the pipe on the side opposite the damaged part; the depth camera was moved a certain distance along the inner wall into the pipe, and the damaged part was photographed at angles of 0°, 6°, 12°, −6°, and −12°, as shown in Figure 15 and Figure 16.

3.2.2. Volume Quantization Results of 3D Point Cloud Damage Test

In this paper, about 270 experiments were carried out on 19 pits of different materials and sizes at different distances, angles, and illumination conditions. Table 2 shows the total test volumes for the foam board damage, concrete board damage, damage outside the concrete pipe, and damage inside the concrete pipe when the shooting distance was 100 cm (50 cm inside the concrete pipe), the shooting angle was 0°, and lighting was present. The table shows that the method performs best for the damage outside the concrete pipe, with a total error of 0.87%, followed by the damage inside the concrete pipe with a total error of 2.17%. The damage test results for the foam board are similar to those for the concrete board.

4. Performance Analysis

4.1. Performance of Foam Board

The shooting distance had a great influence on point cloud sparsity. Table 3 and Figure 17 show the results for the foam board at different shooting distances. For shooting distances from 100 cm to 200 cm, the total relative error ranged from 7.56% at 100 cm to 24.65% at 200 cm, and the mean percentage error (MPE) across distances was 16.45%. Damage 6 had the smallest error: at the 100 cm shooting distance, its test volume was only 0.28% below the real volume. As can be seen from Figure 17, with other conditions unchanged, the relative error of the same damage kept increasing with distance. As the shooting distance increased while the resolution of the depth camera remained unchanged, the imaged area grew and the number of pixels describing the damage decreased, so fewer points were generated and the error of the final test volume gradually increased. In the figure, the relative error of the damage volume changes greatly between shooting distances of 100 cm and 175 cm, but changes little between 175 cm and 200 cm. A possible reason is that, as the shooting distance increased, the representational ability of the 3D point cloud weakened: at close range, the point cloud clearly characterized the small tongues and grooves on the damaged surface of the polyethylene foam, whereas at longer range these smaller grooves could no longer be resolved. Once all the small pits were erased from the 3D point cloud data, the relative error of the damage remained stable.

4.2. Performance of Concrete Slab

Further, we photographed the different damages of the concrete slab at distances of 100 cm to 200 cm. The test volumes obtained from the 3D point cloud and their relative errors with respect to the real volumes are shown in Table 4 and Figure 18. As shown in Table 4, for shooting distances from 100 cm to 200 cm, the total relative error ranged from 7.68% at 100 cm to 28.13% at 200 cm, and the mean percentage error (MPE) across distances was 17.38%. The damage test results for the concrete slab were similar to those for the polyethylene foam board, with the relative error increasing with shooting distance. Overall, the gap between the damage errors for the concrete slab and the foam board was small. Errors arose mainly from the material's surface: a relatively coarse surface requires a larger segmentation threshold, which can enlarge the damage quantification error. Small grooves on the damaged inner surface and distortion of the captured surface also affected the volume quantization results.

4.3. Performance of Outside the Concrete Pipe

The damage shooting outside the concrete pipe differed from that of the foam board and the concrete board: shooting angle and lighting conditions were varied in addition to distance. The real and test damage volumes are shown in Table 5 and Figure 19. The total relative error ranged from 2.43% to 14.41%, with the minimum of 2.43% obtained at a shooting distance of 100 cm with light.
With other conditions unchanged, the relative error increased with the shooting angle. When the shooting distance was 50 cm with light, the relative error of the damage was 3.63% at a shooting angle of 0°, increased to 5.63% at 3°, and reached a maximum of 10.79% at 6°. This is because, as the shooting angle rises, missing regions in the point cloud degrade the measurement accuracy: the larger the angle of incidence between the camera and the concrete surface, the sparser the point cloud, which increases the relative error.
With the angle and illumination held constant, the relative error of the damage volume first decreased and then increased with distance, and was smallest for shooting distances of 75–100 cm. The main reason is that the depth camera used in this paper misses parts of the point cloud when the working distance is too small, owing to its narrow field of view at close range, which introduces error; as the distance increases, point cloud sparsity again increases the error.
For the same shooting distance and angle, the specific effects of different lighting conditions on the measured damage volume require further testing.
In general, the measured volume of the external damage to the concrete pipeline was more accurate than that of the foam board and the concrete board, as the external surface of the concrete pipeline was smoother. In addition, normal estimation was added to the segmentation of the external surface of the concrete pipeline to improve the segmentation and fitting accuracy.

4.4. Internal Performance of Concrete Pipes

The real and test damage volumes inside the concrete pipeline are shown in Table 6 and Figure 20. Under light conditions, the average relative error of the three damages was 6.03%; under no-light conditions, it was 4.41%.
As shown in Figure 20, when the shooting angle was 0°, that is, when the depth camera directly faced the damage, the relative error was minimal; as the shooting angle increased, the relative error increased accordingly. The causes are as analyzed above: on the one hand, point clouds were missing due to occlusion by the damaged edges; on the other hand, the acquired point clouds became sparser as the shooting angle increased.

5. Discussion

This paper presents a method that uses an inexpensive depth sensor as a pothole scanner. Compared with the existing literature on measuring pits using Kinect, the main contributions of this paper are as follows.
Different materials and shapes were used as reference materials to evaluate the performance of sensors.
Joubert et al. [33] used RANSAC for surface fitting and then manually selected damage locations for size calculation. In this paper, surface reconstruction was carried out on the basis of segmentation, combined with the drainage pipe surface information, to improve damage detection efficiency.
Compared with Kamal et al. [18], who calculated volume from average-filtered depth images by integrating depth distances over pixels, this paper uses improved RANSAC surface segmentation to separate the damaged point cloud, and the Alpha Shapes algorithm to detect the external contour of the point cloud and reconstruct the surface, finally completing the volume calculation.
As for the collection and processing of concrete pipeline damage data, this paper adopted fixed camera shooting distances and angles to collect concrete pipeline damage measurements and combined the RANSAC segmentation algorithm with the Alpha Shapes surface reconstruction algorithm to complete static data processing. Our future research direction is to develop pipeline robots equipped with depth cameras for dynamic acquisition of damage data and real-time detection and processing, combined with deep learning methods, in view of the complex state of in-service concrete pipelines.

6. Conclusions

With the continuous improvement of vision sensors, volume quantization in 3D point clouds has become feasible. Breakage of concrete drainage pipes is a common form of structural damage; to evaluate it accurately, the broken volume should be quantified. In this paper, we proposed a 3D point cloud volume quantification method for concrete drainage pipe damage that integrates surface segmentation and reconstruction. We tested the accuracy of the Microsoft Azure Kinect DK depth camera with an RGB-D sensor for quantifying concrete pipe damage volumes. The equipment has the advantages of high precision, real-time data transmission, and low price and can be used to detect and quantify the damage volume of concrete pipelines. The method also provides ideas for quantifying damage volume in concrete pipelines with other depth cameras. The experimental results show that this method has great potential for measuring the damage volumes of drainage pipelines and can support maintenance decisions and the estimation of repair material quantities.
Although this study has demonstrated the potential of automatically quantifying damage volumes in drainage lines, there are still some limitations. For example, only drainage pipes with a single diameter were studied, so the damage of drainage pipes with different diameters requires further study. In addition, the underground drainage pipeline service environment is complex; sewage, uneven light, fog, and blockages will affect data collection. The automatic segmentation and adaptive reconstruction of drainage pipe surface point clouds is a challenging task. In this regard, the development of a calculation system that can automatically identify, segment, and quantify drainage pipeline damage in complex working environments is our future research direction.

Author Contributions

Conceptualization, N.W.; methodology, G.P. and N.W.; software, F.H.; validation, G.P. and F.H.; formal analysis, N.W.; investigation, G.P.; resources, H.F.; data curation, G.P.; writing—original draft preparation, G.P.; writing—review and editing, G.P. and H.L.; visualization, G.P.; supervision, N.W.; project administration, H.F.; funding acquisition, H.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (No. 2017YFC1501200) and the National Natural Science Foundation of China (No. 51978630, 52108289). This project was supported by the Outstanding Young Talent Research Fund of Zhengzhou University (1621323001); the Postdoctoral Science Foundation of China (2020M672276, 2021T140620); the Key Scientific Research Projects of Higher Education in Henan Province (21A560013); and the Open Fund of Changjiang Institute of Survey, Planning, Design and Research (CX2020K10).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank the funding bodies listed in the Funding section for their financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Meeker, E. The improving health of the United States, 1850–1915. Explor. Econ. Hist. 1971, 9, 353–373.
2. Xu, M.; Shen, D.; Rakitin, B. The longitudinal response of buried large-diameter reinforced concrete pipeline with gasketed bell-and-spigot joints subjected to traffic loading. Tunn. Undergr. Space Technol. 2017, 64, 117–132.
3. Sinha, S.K.; Knight, M.A. Intelligent system for condition monitoring of underground pipelines. Comput.-Aided Civ. Infrastruct. Eng. 2004, 19, 42–53.
4. Cheng, J.C.P.; Wang, M. Automated detection of sewer pipe defects in closed-circuit television images using deep learning techniques. Autom. Constr. 2018, 95, 155–171.
5. Duran, O.; Althoefer, K.; Seneviratne, L.D. State of the art in sensor technologies for sewer inspection. IEEE Sens. J. 2002, 2, 73–81.
6. Jun, Z. The Detection, Evaluation, and Repair Technology Application of Drainage Pipeline. In Proceedings of the International Conference on Pipelines and Trenchless Technology, Wuhan, China, 19–22 October 2012.
7. Ékes, C.; Neducza, B.; Takacs, P. Pipe Penetrating Radar inspection of large diameter underground pipes. In Proceedings of the 15th International Conference on Ground Penetrating Radar, Brussels, Belgium, 30 June–4 July 2014.
8. Snavely, N. Scene reconstruction and visualization from internet photo collections: A survey. IPSJ Trans. Comput. Vis. Appl. 2011, 3, 44–66.
9. Mahami, H.; Nasirzadeh, F.; Ahmadabadian, A.H.; Nahavandi, S. Automated progress controlling and monitoring using daily site images and building information modelling. Buildings 2019, 9, 70.
10. Golparvar-Fard, M.; Thomas, J.; Peña-Mora, F.; Savarese, S. Remote assessment of pre- and post-disaster critical physical infrastructures using mobile workstation chariot and D4AR models. In Proceedings of the International Conference on Computing in Civil and Building Engineering, Nottingham, UK, 30 June 2010; pp. 63–69.
11. Torok, M.M.; Golparvar-Fard, M.; Kochersberger, K.B. Image-based automated 3D crack detection for post-disaster building assessment. J. Comput. Civ. Eng. 2014, 28, A4014004.
12. Nowak, R.; Orłowicz, R.; Rutkowski, R. Use of TLS (LiDAR) for building diagnostics with the example of a historic building in Karlino. Buildings 2020, 10, 24.
13. Youn, H.C.; Yoon, J.S.; Ryoo, S.L. HBIM for the Characteristics of Korean Traditional Wooden Architecture: Bracket Set Modelling Based on 3D Scanning. Buildings 2021, 11, 506.
14. Zeibak-Shini, R.; Sacks, R.; Ma, L.; Filin, S. Towards generation of as-damaged BIM models using laser-scanning and as-built BIM: First estimate of as-damaged locations of reinforced concrete frame members in masonry infill structures. Adv. Eng. Inform. 2016, 30, 312–326.
15. Wang, M.; Wang, C.C.; Zlatanova, S.; Sepasgozar, S.; Aleksandrov, M. Onsite Quality Check for Installation of Prefabricated Wall Panels Using Laser Scanning. Buildings 2021, 11, 412.
16. Turkan, Y.; Hong, J.; Laflamme, S.; Puri, N. Adaptive wavelet neural network for terrestrial laser scanner-based crack detection. Autom. Constr. 2018, 94, 191–202.
17. Liu, C.; Shirowzhan, S.; Sepasgozar, S.M.E.; Kaboli, A. Evaluation of classical operators and fuzzy logic algorithms for edge detection of panels at exterior cladding of buildings. Buildings 2019, 9, 40.
18. Kamal, K.; Mathavan, S.; Zafar, T.; Moazzam, I.; Ali, A.; Ahmad, S.U.; Rahman, M. Performance assessment of Kinect as a sensor for pothole imaging and metrology. Int. J. Pavement Eng. 2018, 19, 565–576.
19. Besl, P.J.; Jain, R.C. Segmentation through variable-order surface fitting. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 167–192.
20. Tóvári, D.; Pfeifer, N. Segmentation based robust interpolation—A new approach to laser data filtering. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, 79–84.
21. Biosca, J.M.; Lerma, J.L. Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS J. Photogramm. Remote Sens. 2008, 63, 84–98.
22. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122.
23. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
24. Rabbani, T.; Van Den Heuvel, F. Efficient hough transform for automatic detection of cylinders in point clouds. In Proceedings of the ISPRS WG III/3, III/4, V/3 Workshop "Laser Scanning 2005", Enschede, The Netherlands, 12–14 September 2005; pp. 60–65.
25. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100.
26. Chen, D.; Zhang, L.; Mathiopoulos, P.T.; Huang, X. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4199–4217.
27. Lee, D.T.; Schachter, B.J. Two algorithms for constructing a Delaunay triangulation. Int. J. Comput. Inf. Sci. 1980, 9, 219–242.
28. Boissonnat, J.D.; Cazals, F. Smooth surface reconstruction via natural neighbour interpolation of distance functions. Comput. Geom. 2002, 22, 185–203.
29. Amenta, N.; Bern, M.; Kamvysselis, M. A new Voronoi-based surface reconstruction algorithm. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA, 19–24 July 1998; pp. 415–421.
30. Bernardini, F.; Mittleman, J.; Rushmeier, H.; Silva, C.; Taubin, G. The ball-pivoting algorithm for surface reconstruction. IEEE Trans. Vis. Comput. Graph. 1999, 5, 349–359.
31. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
32. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3D point cloud based object maps for household environments. Robot. Auton. Syst. 2008, 56, 927–941.
33. Joubert, D.; Tyatyantsi, A.; Mphahlehle, J.; Manchidi, V. Pothole tagging system. In Proceedings of the 4th Robotics and Mechatronics Conference of South Africa (RobMech 2011), CSIR International Conference Centre, Pretoria, South Africa, 23–25 November 2011.
Figure 1. Quantitative detection framework of concrete pipe damage volume.
Figure 2. Mapping relationship between spatial points and depth map.
Figure 3. Schematic diagram of voxel segmentation.
Figure 4. Point cloud of unit voxel.
Figure 5. Downsampling diagram of 3D point cloud.
Figure 6. Foam board segmentation. (a) Point cloud of foam board before segmentation. (b) Point cloud of foam board after segmentation.
Figure 7. Concrete pipe segmentation. (a) Point cloud outside the concrete pipe before segmentation. (b) Point cloud outside the concrete pipe after segmentation.
Figure 8. Foam panel damage clustering. (a) Foam panel damage clustering 3D diagram. (b) Foam panel damage clustering plane diagram.
Figure 9. Concrete pipe damage clustering. (a) Concrete pipe damage clustering 3D diagram. (b) Concrete pipe damage clustering plan diagram.
Figure 10. Alpha Shapes algorithm.
Figure 11. Reconstruction results of damaged surface.
Figure 12. (a) Foam panel damage. (b) Concrete panel damage.
Figure 13. (a) Concrete pipe outside damage. (b) Concrete pipe inside damage.
Figure 14. Camera layout outside concrete pipe.
Figure 15. Camera arrangement in concrete pipe.
Figure 16. Schematic diagram of camera layout in concrete pipe.
Figure 17. Relative error of each damage of polyethylene foam board at different distances.
Figure 18. Relative errors of concrete slab damage at different distances.
Figure 19. Relative errors of damage outside the concrete pipe: (a) damage 1; (b) damage 2; (c) damage 3.
Figure 20. Relative error of damage in concrete pipe.
Table 1. Microsoft Azure Kinect DK parameters.
Technical Specification | Microsoft Azure Kinect DK
RGB camera | 3840 × 2160 pixels
Depth camera | 1024 × 1024 pixels
Maximum depth range | 5.46 m
Minimum depth distance | 0.25 m
Vertical field of view | 120°
Horizontal field of view | 120°
Table 2. Actual and measured volumes of total damage.
Damage of Materials | Real Volume (cm3) | Calculated Value (cm3) | Error (%)
Foam board | 1297.3 | 1199.20 | 7.56
Concrete slab | 1014.7 | 936.76 | 7.68
Outside the concrete pipe | 390.7 | 387.30 | 0.87
Inside the concrete pipe | 604.3 | 617.42 | 2.17
Table 3. Quantization results of foam board damage.
Damage | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Total
Real volume (cm3) | 37.7 | 52.4 | 207.8 | 43.2 | 51.9 | 216.2 | 405.8 | 282.3 | 1297.3
100 cm, test volume | 37.40 | 52.85 | 201.45 | 41.33 | 48.75 | 213.90 | 348.90 | 254.62 | 1199.20
100 cm, error (%) | 0.80 | 0.86 | 3.06 | 4.33 | 6.07 | 0.28 | 13.85 | 9.80 | 7.56
125 cm, test volume | 35.36 | 45.81 | 180.68 | 39.35 | 45.37 | 201.92 | 313.37 | 264.53 | 1126.39
125 cm, error (%) | 6.21 | 12.58 | 13.05 | 8.91 | 12.58 | 6.60 | 22.62 | 6.29 | 13.17
150 cm, test volume | 35.15 | 41.91 | 168.71 | 35.63 | 43.74 | 185.43 | 285.68 | 235.75 | 1032.00
150 cm, error (%) | 6.76 | 20.02 | 18.81 | 17.52 | 15.72 | 14.23 | 29.46 | 16.49 | 20.45
175 cm, test volume | 31.75 | 40.11 | 152.49 | 30.34 | 38.64 | 160.75 | 314.95 | 212.73 | 981.76
175 cm, error (%) | 15.78 | 23.45 | 26.62 | 29.77 | 25.55 | 25.65 | 22.23 | 24.64 | 24.33
200 cm, test volume | 30.73 | 38.62 | 155.43 | 31.25 | 38.92 | 158.63 | 313.54 | 210.43 | 977.55
200 cm, error (%) | 18.49 | 26.30 | 25.20 | 27.66 | 25.01 | 26.63 | 22.74 | 25.46 | 24.65
Table 4. Quantification results of concrete slab damage.
Damage | 1 | 2 | 3 | 4 | 5 | Total
Real volume (cm3) | 74.85 | 80.3 | 166.85 | 350.2 | 342.5 | 1014.7
100 cm, test volume | 69.30 | 78.27 | 147.27 | 327.35 | 314.57 | 936.76
100 cm, error (%) | 7.41 | 2.53 | 11.74 | 6.52 | 8.15 | 7.68
125 cm, test volume | 59.01 | 70.17 | 147.05 | 302.15 | 288.16 | 866.54
125 cm, error (%) | 21.16 | 12.62 | 11.87 | 13.72 | 15.87 | 14.60
150 cm, test volume | 65.77 | 73.07 | 145.27 | 298.70 | 280.56 | 863.37
150 cm, error (%) | 12.13 | 9.00 | 12.93 | 14.71 | 18.08 | 14.91
175 cm, test volume | 56.71 | 56.36 | 138.61 | 238.65 | 267.97 | 758.30
175 cm, error (%) | 24.24 | 29.81 | 16.93 | 31.85 | 21.76 | 25.27
200 cm, test volume | 55.43 | 59.75 | 137.31 | 226.28 | 250.47 | 729.24
200 cm, error (%) | 25.95 | 25.59 | 17.70 | 35.39 | 26.87 | 28.13
Table 5. Quantified results of concrete pipe damage outside the pipe. For each damage, the five values correspond to shooting angles of 0°, 3°, 6°, −3°, and −6°.
Real volume (cm3): damage 1 = 50.1; damage 2 = 126.1; damage 3 = 214.5; total over the five angles = 1953.5.
Distance, lighting | Damage 1 (0°/3°/6°/−3°/−6°) | Damage 2 (0°/3°/6°/−3°/−6°) | Damage 3 (0°/3°/6°/−3°/−6°) | Total
50 cm, light, test volume (cm3) | 48.70/48.61/44.94/47.30/55.25 | 121.52/119.00/112.50/119.58/114.40 | 217.88/205.17/194.32/199.32/190.60 | 1839.09
50 cm, light, error (%) | 2.79/2.97/10.30/5.59/10.28 | 3.63/5.63/10.79/5.17/9.28 | 1.58/4.35/9.41/7.08/11.14 | 5.86
50 cm, dark, test volume (cm3) | 48.39/45.78/44.32/47.00/56.16 | 130.59/119.90/113.39/119.15/109.34 | 204.72/207.19/190.41/200.60/239.75 | 1876.69
50 cm, dark, error (%) | 3.41/8.62/11.54/6.19/12.10 | 3.56/4.92/10.08/5.51/13.29 | 4.56/3.41/11.23/6.48/11.77 | 3.93
75 cm, light, test volume (cm3) | 46.58/45.58/58.80/43.57/59.60 | 129.71/118.37/119.13/121.31/118.55 | 215.14/218.43/199.75/204.41/198.22 | 1897.15
75 cm, light, error (%) | 3.20/4.11/7.92/5.94/8.64 | 2.86/6.13/5.53/3.80/5.99 | 0.30/1.83/6.88/4.70/7.59 | 2.88
75 cm, dark, test volume (cm3) | 48.67/47.91/51.89/48.47/48.01 | 122.50/132.03/117.90/120.50/133.00 | 217.50/202.70/199.48/203.05/199.52 | 1893.13
75 cm, dark, error (%) | 2.85/4.37/3.57/3.25/4.17 | 2.85/4.70/6.50/4.44/5.47 | 1.40/5.50/7.00/5.34/6.98 | 3.09
100 cm, light, test volume (cm3) | 50.82/49.78/53.68/50.19/54.35 | 125.70/126.57/127.10/126.48/127.06 | 210.78/206.37/201.53/205.34/190.28 | 1906.03
100 cm, light, error (%) | 1.44/0.64/7.15/0.18/8.48 | 0.32/0.37/0.79/0.30/0.76 | 1.73/3.80/6.05/4.27/11.29 | 2.43
100 cm, dark, test volume (cm3) | 51.85/46.37/45.68/54.46/55.48 | 123.54/122.73/118.65/120.73/115.68 | 205.49/202.70/190.47/200.82/193.25 | 1847.90
100 cm, dark, error (%) | 3.49/7.45/8.82/8.70/10.74 | 2.03/2.67/5.91/4.25/8.26 | 4.20/5.50/11.20/6.38/9.91 | 5.41
125 cm, light, test volume (cm3) | 48.28/45.63/55.72/54.63/55.72 | 117.96/111.48/110.28/110.27/110.79 | 202.68/185.11/181.75/187.34/180.23 | 1757.87
125 cm, light, error (%) | 3.63/8.92/11.22/9.04/11.22 | 6.46/11.59/12.54/12.55/12.14 | 5.51/13.70/15.27/12.66/15.98 | 10.01
125 cm, dark, test volume (cm3) | 53.00/44.63/44.56/55.38/43.28 | 115.36/113.52/107.43/111.74/109.48 | 200.43/192.64/176.33/196.38/250.49 | 1814.65
125 cm, dark, error (%) | 5.79/10.92/11.06/10.54/13.61 | 8.52/9.98/14.81/11.39/13.18 | 6.56/10.19/17.79/8.45/16.78 | 7.11
150 cm, light, test volume (cm3) | 54.75/56.34/41.17/44.80/58.26 | 115.76/108.44/104.95/105.96/147.45 | 198.32/196.72/183.55/193.75/178.43 | 1788.65
150 cm, light, error (%) | 9.28/12.46/17.82/10.57/16.29 | 8.20/14.00/17.57/15.97/16.93 | 7.54/8.29/14.43/9.67/16.82 | 8.44
150 cm, dark, test volume (cm3) | 53.40/44.89/57.42/45.02/43.14 | 115.27/110.99/145.79/108.78/106.79 | 196.04/192.57/185.90/194.84/181.69 | 1782.53
150 cm, dark, error (%) | 6.59/10.40/14.61/10.14/13.89 | 8.59/11.98/15.61/13.74/15.31 | 8.60/10.22/13.33/9.17/15.30 | 8.75
175 cm, light, test volume (cm3) | 42.86/57.68/41.29/42.34/40.34 | 114.00/108.43/102.63/105.72/101.41 | 194.75/185.65/171.91/187.31/175.63 | 1671.95
175 cm, light, error (%) | 14.45/15.12/17.58/15.49/19.48 | 9.60/14.01/18.61/16.16/19.58 | 9.21/13.45/19.86/12.68/18.12 | 14.41
175 cm, dark, test volume (cm3) | 44.23/43.39/42.06/43.75/40.69 | 114.85/109.27/105.44/106.39/104.31 | 195.05/188.37/178.62/190.56/178.35 | 1685.33
175 cm, dark, error (%) | 11.72/13.45/16.05/12.67/18.78 | 8.92/13.35/16.38/15.63/17.28 | 9.07/12.18/16.73/11.16/16.85 | 13.73
Table 6. Quantification results of concrete pipe internal damage. For each damage, the five values correspond to shooting angles of 0°, 6°, 12°, −6°, and −12°.
Real volume (cm3): damage 1 = 265; damage 2 = 155.1; damage 3 = 184.2; total over the five angles = 3021.5.
Distance, lighting | Damage 1 (0°/6°/12°/−6°/−12°) | Damage 2 (0°/6°/12°/−6°/−12°) | Damage 3 (0°/6°/12°/−6°/−12°) | Total
50 cm, light, test volume (cm3) | 276.95/273.41/228.20/256.83/235.65 | 153.43/148.85/145.15/130.64/106.25 | 186.94/199.63/163.43/165.47/168.33 | 2839.16
50 cm, light, error (%) | 4.51/3.17/13.89/3.08/11.08 | 1.08/4.03/10.93/4.81/12.80 | 1.49/8.38/11.28/10.17/8.62 | 6.03
50 cm, dark, test volume (cm3) | 271.72/294.88/240.29/259.31/238.39 | 148.76/134.02/129.81/133.65/114.18 | 185.65/194.20/207.84/168.13/167.33 | 2888.16
50 cm, dark, error (%) | 2.54/11.28/9.32/2.15/10.04 | 4.09/6.50/11.79/5.45/13.49 | 0.79/5.43/12.83/8.72/9.16 | 4.41