Article

Algorithm-Driven Extraction of Point Cloud Data Representing Bottom Flanges of Beams in a Complex Steel Frame Structure for Deformation Measurement

1
School of Intelligent Manufacturing and Intelligent Transportation, Suzhou City University, Suzhou 215104, China
2
Design School, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
3
Suzhou SITRI Integrated Infrastructure Technology Research Institute Co., Ltd., Suzhou 215131, China
*
Author to whom correspondence should be addressed.
Buildings 2024, 14(9), 2847; https://doi.org/10.3390/buildings14092847
Submission received: 2 August 2024 / Revised: 30 August 2024 / Accepted: 8 September 2024 / Published: 10 September 2024
(This article belongs to the Special Issue Big Data and Machine/Deep Learning in Construction)

Abstract:
Laser scanning has become a popular technology for monitoring structural deformation due to its ability to rapidly obtain 3D point clouds that provide detailed information about structures. In this study, the deformation of a complex steel frame structure is estimated by comparing the associated point clouds captured at two epochs. To measure its deformations, it is essential to extract the bottom flanges of the steel beams in the captured point clouds. However, manual extraction of numerous bottom flanges is laborious, and the separation of beam bottom flanges and webs is especially challenging. This study presents an algorithm-driven approach for extracting the bottom flanges of all beams of a complex steel frame. RANdom SAmple Consensus (RANSAC), Euclidean clustering, and an originally defined point feature are sequentially used to extract the beam bottom flanges. The beam bottom flanges extracted by the proposed method are used to estimate the deformation of the steel frame structure before and after the removal of temporary supports to the beams. Compared to manual extraction, the proposed method achieved an accuracy of 0.89 in extracting the beam bottom flanges while saving hours of time. The maximum observed deformation of the steel beams is 100 mm, at a location where a temporary support was unloaded. The proposed method significantly improves the efficiency of the deformation measurement of steel frame structures using laser scanning.

1. Introduction

Three-dimensional (3D) laser scanning has emerged as a powerful technology for rapidly capturing the as-is conditions of real-world objects and scenes [1]. The dense 3D point clouds obtained by laser scanners enable detailed analyses and measurements of scanned structures, making the technology highly suitable for structural health monitoring applications [2]. Compared to traditional monitoring techniques using strain gauges or total stations, which provide merely sparse measurements, laser scanning can provide a comprehensive assessment of the full-field deformation of a structure. However, raw point clouds contain millions of 3D points and require further processing to extract actionable information. Therefore, one of the key challenges is extracting salient structural components that serve as accurate and reliable deformation markers. In this study, beam bottom flanges are the salient structural components to be extracted.
The extraction of beam bottom flanges is especially challenging. The point cloud of a cross-section of a steel beam encountered in this study is shown in Figure 1. The beam top flange (in red) and the beam bottom flange (in green) are similar in dimension and shape, so it is difficult to define a geometric feature that distinguishes between them; manual extraction therefore appears necessary. Although the beam web and beam bottom flange look separated in Figure 1 due to data occlusion, the space between the beam web (in blue) and the beam bottom flange is merely 0.064 m, which is minimal compared to the dimensions of the whole steel frame structure. Manual separation of the beam webs and bottom flanges would therefore require precise observation and control for each individual beam; it cannot simply be conducted on multiple beams in batch. The steel frame structure investigated in this study consists of many individual beams, and repeatedly conducting manual extraction for such a large number of beams is very time-consuming. In short, manually separating the beam webs and beam bottom flanges is both challenging and laborious.
In many previous studies (e.g., [3,4,5,6,7]), the point clouds of structural components were manually extracted. However, such manual extraction is laborious when repetitively applied to a large number of structural components.
Many algorithms have also been devised to extract representations of structural components from point clouds of various structures, such as stones from masonry walls [8]; piers and slabs from bridges [9]; beam lines from steel buildings [10]; struts, connection plates, and chords from steel structures [11]; building facades from masonry buildings [12]; ceilings, walls, and floors from buildings [13,14,15,16,17]; decks from steel girder bridges [18]; and stones from stone columns [19]. These algorithms typically involve downsampling, clutter removal, and the extraction of geometric primitives, and these general procedures are followed in this study. However, a specific algorithm must be defined for extracting specific structural components from a structure, according to their geometric characteristics; the algorithms differ in each of the aforementioned studies. Therefore, those methods are only applicable to the structures for which they were originally designed. Moreover, these methods merely extract instances of structural components; none of them further breaks a structural element down into distinctive parts (i.e., beam webs and beam bottom flanges), as required in this study. Hence, an algorithm to extract the beam bottom flanges from a steel frame structure must be originally proposed here.
Recent advancements in deep learning for point cloud segmentation have enabled the extraction of certain structural components [20,21,22,23,24,25]. Again, these methods do not apply to the extraction of beam bottom flanges because they are trained to extract other kinds of structural components. Among them, the work of Lee, Rashidi, Talei, and Kong [25] is most relevant to this study: they used a deep neural network to semantically segment a light steel framing system consisting of c-beams into studs, top tracks, bottom tracks, noggins, and bracings. Their classification is not specific enough to extract the bottom flange of steel beams, as required in this study. Deep learning models could potentially be adapted via transfer learning for the extraction of beam bottom flanges from steel frame structures, but such adaptation requires abundant labelled data of beam bottom flanges and other parts of steel frame structures as training data. In our study, data from only one steel frame structure were acquired, which cannot sufficiently serve as training data; more generally, the application of deep learning segmentation methods is limited by the scarcity of labelled training data for specific structural elements [25]. In summary, existing deep learning methods are developed for other scenarios and are not directly applicable to the extraction of beam bottom flanges from a steel frame structure.
To avoid the necessity of manual extraction, this paper aims to propose an algorithm-driven approach that specifically targets the extraction of the bottom flanges of steel beams from a complex steel framework for measuring vertical deformations. The algorithm incorporates an originally designed point feature, namely the 'local difference in z-axis', to separate beam bottom flanges and beam webs. The proposed method significantly improves the efficiency of the extraction of beam bottom flanges compared to manual extraction and makes monitoring steel frame deformation using laser scanning affordable.
The method is demonstrated using point clouds captured at two stages: before and after the unloading of the temporary supporting lattice columns. Initially, the RANdom SAmple Consensus (RANSAC) [26] algorithm is used to perform coarse extraction of the planes representing data points at the approximate level of the bottom flange of the steel beams. Euclidean clustering [27] is then applied both globally and locally to eliminate noise. Finally, filtering based on point normals and local differences in the z-axis is employed to accurately separate the bottom flanges and webs of the steel beams. The accuracy of the extraction is assessed by visually comparing the algorithm-driven results with manually extracted data. Deformations are calculated by measuring the distances between corresponding bottom flanges.
The structure of the paper is as follows: Section 2 covers the site conditions and data acquisition. Section 3 details the proposed method. Section 4 compares the bottom flanges extracted using the proposed method with those obtained through manual extraction. Section 5 presents the deformation calculation results. Section 6 discusses the implications and limitations of the research and suggests future research directions. Finally, Section 7 concludes the paper.

2. Site Condition and Data Collection

The measured object is a level of a steel frame structure under construction, initially supported by fourteen lattice columns, as shown in Figure 2a. These temporary supporting lattice columns are labelled and their locations are marked with lowercase letters in yellow squares in Figure 2b. The columns were progressively unloaded, leading to expected downward vertical deformations of the beams following the start of the unloading process. The steel frame structure was also supported vertically by concrete-filled steel tubes, marked with numbers 1 to 6 in black circles in Figure 2b, and the main reinforced concrete structure surrounding it. However, a detailed structural analysis is beyond the scope of this research.
In this study, point clouds of the site were acquired using a Leica P40 laser scanner [28]. The 3D laser scanner was positioned beneath the steel frame structure, as shown in Figure 2c. Three terrestrial laser scanning stations labelled A, B, and C in red squares in Figure 2b, were used. Each station collected approximately 5 billion points with a scanning resolution of 2.8 mm dot-spacing at a distance of 10 m.
This data acquisition process was repeated twice: once before and once after the start of unloading two temporary supporting columns. By the time of the second data acquisition, columns h and m (see Figure 2b) had been unloaded, while the others remained loaded.
Point clouds collected from multiple stations were registered using reference targets. Circular black-and-white targets, 6 inches in diameter, were printed on A4 paper and adhered to the concrete-filled steel tube columns with structural adhesive, as shown in Figure 2d. These tubes were selected for their minimal deformation. A total of six concrete-filled steel tube columns were used, marked 1 to 6 in black circles as seen in Figure 2b. Each tube had two targets attached, resulting in twelve targets overall. These targets were strategically placed to ensure good spatial distribution both horizontally and vertically, covering the entire steel frame structure when projected onto a horizontal plane. According to Fan, et al. [29], registration error increases with distance from the centre of mass of reference targets. Therefore, we endeavoured to distribute the reference targets evenly around the observed steel frame structure. In this case, the centre of mass of the reference targets is approximately in the middle of the steel frame structure, thus minimising registration error.
Initial coarse registration of the multi-station point clouds was performed using the Leica Cyclone 9.1.3 software [30], based on the twelve target points from each station. Fine registration was then conducted using the concrete-filled steel tube columns as reference, applying the Iterative Closest Point (ICP) method [31]. Additionally, the point cloud of the primary reinforced concrete beams, as indicated in Figure 2d, was used to verify the registration accuracy due to its minimal deformation.

3. Method for Extracting Beam Bottom Flanges

3.1. Overview

With over 15 billion points collected at each epoch, sub-sampling was necessary to manage the data volume before applying the proposed method. The downsampling tool in CloudCompare 2.9.1 [32] was used to reduce the point count from 15 billion to 121 million.
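The voxel-grid reduction step can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the CloudCompare implementation; the function name and the synthetic cloud are ours:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel by their centroid."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # unique voxel ids; 'inverse' maps each point to its voxel's row
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inverse, points)          # per-voxel coordinate sums
    return sums / counts[:, None]             # per-voxel centroids

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(10000, 3))
reduced = voxel_downsample(cloud, 0.1)        # at most 10 x 10 x 10 voxels
```

Averaging within each voxel (rather than keeping one raw point) also suppresses a little measurement noise, at the cost of slightly smoothing sharp edges.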
The workflow of the method is illustrated in Figure 3. The proposed method involves four main steps. First, RANSAC is applied to the entire point cloud to coarsely extract the steel frame structure through plane fitting. This process is iterated until a plane fitting the steel frame structure is identified. However, the point clouds extracted by this plane still contain many clutter points, as RANSAC includes all points that fit within a specified threshold. Second, Euclidean clustering is applied to segregate the point clouds within the fitted plane into clusters representing the steel frame structure and clusters of clutter, removing the latter. This clustering is done at two scales: globally on the initial RANSAC planes and locally on subsets to preserve detailed features. Third, the bottom flanges and webs of the beams are distinguished based on their orientation and vertical position. This step finalizes the extraction of the bottom flanges. Finally, the extracted bottom flanges are compared between the two epochs to determine the vertical deformations of the beams.

3.2. Coarse Extraction of Steel Frame Structure Using RANSAC

The point cloud from the first epoch, shown in Figure 4, represents a multistorey structure with elevations ranging from −3.81 m to 11.46 m. Given RANSAC’s effectiveness with noisy datasets [33], it was chosen for plane extraction to isolate the storey containing the steel frame structure. Key parameters for the algorithm need to be set before application.
First, a distance threshold, d_R, must be determined. This threshold defines the maximum allowable distance from a point to the fitted plane; points exceeding it are considered outliers and removed. The total thickness of the point cloud extracted by RANSAC plane-fitting is therefore 2 × d_R. The rationale for determining the value of d_R is twofold: (a) the thickness of the extracted point cloud must be sufficient to incorporate all beam bottom flanges; (b) the thickness must be limited, to avoid incorporating unnecessary points. Accordingly, the spatial distribution of the beam bottom flanges is examined in Figure 5. Three parts of the beam bottom flanges are segmented out in Figure 5a and their side views are zoomed in on in Figure 5b. It can be seen that the beam bottom flanges are not all coplanar: the vertical distance from the green parts to the red parts is approximately 0.25 m. Therefore, it is necessary to set d_R ≥ 0.125 m. Moreover, due to the random sampling strategy of RANSAC, a certain tolerance for uncertainty in d_R should be allowed. Therefore, d_R = 0.2 m is set.
Second, the maximum number of iterations needs to be specified. This parameter affects the robustness and accuracy of the extraction. While increasing the number of iterations improves robustness, it also lengthens processing time. In this study, the maximum number of iterations was set to 1000.
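The plane extraction with these two parameters can be sketched as a minimal NumPy RANSAC. The study's own pipeline is not published as code, so this `ransac_plane` is an illustrative sketch using the d_R and iteration count stated above, demonstrated on a synthetic noisy plane plus clutter:

```python
import numpy as np

def ransac_plane(points, d_R=0.2, num_iterations=1000, seed=0):
    """Best-supported plane n.x + d = 0; returns (inlier_mask, n, d)."""
    rng = np.random.default_rng(seed)
    best_mask, best_n, best_d, best_count = None, None, None, -1
    for _ in range(num_iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) <= d_R   # inliers within the threshold
        if mask.sum() > best_count:
            best_count, best_mask, best_n, best_d = mask.sum(), mask, n, d
    return best_mask, best_n, best_d

# synthetic check: a noisy horizontal plane plus scattered clutter above it
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.01, 500)]
clutter = rng.uniform(0, 10, (100, 3)) + np.array([0.0, 0.0, 2.0])
mask, n, d = ransac_plane(np.vstack([plane, clutter]), d_R=0.2)
```

In practice, a library routine such as Open3D's `segment_plane(distance_threshold, ransac_n, num_iterations)` performs the same search far faster than pure Python.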
RANSAC is applied iteratively to reject irrelevant planes and extract the one containing the steel frame structure. For this instance, it took three iterations to extract the steel frame successfully. The detailed process is described in the remainder of Section 3.2.
The first plane extracted, shown in Figure 6a, does not include the steel frame structure and is therefore discarded. After removing this plane, RANSAC is applied again to the remaining point cloud, resulting in a second plane, depicted in Figure 6b. This plane partially captures the steel frame but also includes significant clutter. A closer view of this plane, shown in Figure 7, reveals that it contains the parts of the top flanges (in green), webs (in blue), and safety nets (in red), which are not of interest. Consequently, this plane is removed from the dataset.
Following the removal of unnecessary data, RANSAC is applied a third time on the remaining points, as illustrated in Figure 8. This iteration successfully extracts the plane containing the bottom flanges and webs of the beams. To optimize the results, the distance threshold d_R was varied and the outcomes compared.
The plane extracted with a distance threshold of d_R = 0.05 m is shown in Figure 8a. It reveals missing sections of the beams and some discontinuities; some beams are missed entirely, as indicated by the white arrow in Figure 8a. Figure 8b displays the plane with d_R = 0.5 m, which captures a continuous majority of the beams but has excessive thickness, incorporating more safety nets, as indicated by the white arrow in Figure 8b. The plane segmented with d_R = 0.2 m, shown in Figure 8c, provides the most accurate result. It effectively maintains the steel frame's shape while minimizing clutter and beam webs, making it ideal for further deformation calculations.

3.3. Removal of Clutters Using Euclidean Clustering

Despite the careful selection of the distance threshold for the RANSAC algorithm, some clutter points from safety nets remain, as indicated by the red rectangle in Figure 9. To further enhance segmentation quality, additional refinements are necessary. In the proposed method, Euclidean clustering is applied after RANSAC plane extraction to remove clutter and refine the extraction of steel structures. This combination leverages the strengths of both algorithms for robust and accurate feature extraction from the point cloud data.
Clustering is a fundamental method for point cloud segmentation, dividing a point cloud into groups based on similar characteristics. Euclidean clustering is effective for segmenting point clouds based on the Euclidean distance between points [34].
The quality of Euclidean clustering segmentation depends on the distance threshold d_E, which controls the segmentation granularity. A large d_E can preserve the continuity of the steel frame structure but might under-segment, merging points of structural components and clutter into one cluster. Conversely, a small d_E can finely separate the clutter from the steel frame structure but might over-segment, splitting a single structural component into multiple parts. Therefore, both global clustering and local clustering are conducted to take advantage of large and small values of d_E. In global clustering, a large threshold d_E,G is used to partially separate the steel frame structure from the clutter while preserving the whole steel frame structure. In local clustering, a small threshold d_E,L is adopted to finely separate the steel frame structure and clutter. The advantage of local Euclidean clustering is the ability to set a smaller distance threshold, achieving more accurate segmentation without affecting other parts of the structure. Additionally, local clustering reduces computation time by involving only a small portion (approximately 1.5%) of the original point cloud. The values of d_E,G and d_E,L are decided empirically.
Figure 10a presents the clustering result with a distance threshold of d_E = 1 m, where some internal points are not effectively segmented. To address this, the threshold is decreased to d_E = 0.05 m, resulting in over-segmentation, where the entire steel frame structure is divided into several parts, as shown in Figure 10b. Therefore, the segmentation result with d_E = 1 m is retained, but further refinement is needed to eliminate clutter between the steel beams.
In total, nine clusters are generated. The cluster with the greatest number of points is preserved, representing the largest volume of the beam structure. All points in the remaining eight clusters are removed to improve segmentation accuracy. The point cloud after removing irrelevant clusters is shown in Figure 11.
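The global clustering step can be sketched as a single-linkage flood fill over a k-d tree. This is an illustrative reimplementation, not the study's code; SciPy's `cKDTree` stands in for a PCL-style Euclidean clustering routine:

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def euclidean_clusters(points, d_E):
    """Label points by flood-filling neighbours closer than d_E (single linkage)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    n_clusters = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = n_clusters
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in tree.query_ball_point(points[i], d_E):
                if labels[j] == -1:
                    labels[j] = n_clusters
                    queue.append(j)
        n_clusters += 1
    return labels

def keep_largest_cluster(points, d_E):
    """Global step: retain only the cluster with the most points."""
    labels = euclidean_clusters(points, d_E)
    return points[labels == np.bincount(labels).argmax()]

# synthetic check: a 'beam' of 100 points and a detached clutter blob of 10
frame = np.c_[np.arange(0, 10, 0.1), np.zeros(100), np.zeros(100)]
clutter = np.c_[5 + 0.05 * np.arange(10), np.full(10, 4.0), np.zeros(10)]
kept = keep_largest_cluster(np.vstack([frame, clutter]), d_E=1.0)
```

Keeping only the most populous cluster mirrors the step above, where eight of the nine clusters were discarded.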
To apply local clustering, the data points are divided into smaller regions based on their x and y coordinates. Two preprocessing steps are necessary to better separate the point clouds into each region. First, the point cloud is rotated until the bottom side aligns with the x-axis. Since most primary and secondary beams are orthogonal to each other, adjusting the entire point cloud’s orientation helps evenly distribute features across regions, as shown in Figure 12. Second, the x and y values of the point cloud are redefined, setting both lower bounds to 0. This adjustment simplifies data processing and arrangement. The box slice tool in CloudCompare 2.9.1 is used to ensure the region sizes are appropriate before processing. This tool automatically divides the point cloud into small pieces of equal dimensions. The window size is set based on an x-to-y ratio of approximately 80:50, with dimensions of 8 m on the x-axis and 5 m on the y-axis. Consequently, the point cloud is sliced into 70 local regions, excluding those with zero points.
Figure 13a highlights one representative local region containing the highest volume of safety net points before local clustering. The Euclidean clustering algorithm is applied to separate the point cloud in this region. The result, shown in Figure 13b, indicates the successful separation of the beam and safety net points with a distance threshold of 0.05 m, confirming that the dimensions of 5 m × 8 m for each local region are suitable. When the Euclidean clustering algorithm is applied to these local regions, most of the safety net points are effectively eliminated. The point cloud of the representative slice after removing the safety net points is shown in Figure 13c.
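The slicing of the cloud into local regions can be sketched as follows. The `tile_xy` helper is a hypothetical name of ours; the 8 m × 5 m window size is the one stated above:

```python
import numpy as np

def tile_xy(points, dx=8.0, dy=5.0):
    """Split a cloud into dx-by-dy rectangular regions in the xy plane."""
    shifted = points[:, :2] - points[:, :2].min(axis=0)   # lower bounds -> 0
    keys = np.floor(shifted / np.array([dx, dy])).astype(int)
    tiles = {}
    for row, key in zip(points, map(tuple, keys)):
        tiles.setdefault(key, []).append(row)              # empty tiles never appear
    return {k: np.array(v) for k, v in tiles.items()}

# synthetic check: an 80 m x 50 m footprint yields a 10 x 10 grid of tiles
pts = np.array([[x, y, 0.0] for x in range(80) for y in range(50)], dtype=float)
tiles = tile_xy(pts)
```

Each tile can then be clustered independently with the small threshold d_E,L, which keeps the per-region point count (and thus the run time) low.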

3.4. Separation of the Bottom Flanges and Webs of Beams

For accurate deformation calculation, the point cloud must be refined to include only the bottom flanges of beams. Despite the processes described in Section 3.2 and Section 3.3, some points from the webs of beams may remain. To achieve a precise separation of the web and the bottom flange points, filtering based on point normals and an originally defined point feature is employed.
Intuitively, the normals of web points should be approximately horizontal, while the normals of bottom flange points should be vertical. For each point p_i, its normal n_i = (n_ix, n_iy, n_iz) is estimated and normalised to unit length. Under this condition, the majority of web points, whose |n_ix| or |n_iy| exceeds a threshold T_n = 0.1, are filtered out, as shown in Figure 14.
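The normal-based filtering can be sketched as below. This is a minimal PCA-based normal estimator of our own (library routines such as Open3D's `estimate_normals` perform the same k-neighbourhood covariance analysis); T_n = 0.1 is the threshold stated above:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Unit normal per point: the least-variance principal axis of its k-neighbourhood."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        local = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(local, full_matrices=False)
        normals[i] = vt[-1]              # right-singular vector of least variance
    return normals

def drop_near_vertical_surfaces(points, T_n=0.1):
    """Keep points whose normals are near-vertical (|n_x|, |n_y| <= T_n)."""
    n = estimate_normals(points)
    return points[(np.abs(n[:, 0]) <= T_n) & (np.abs(n[:, 1]) <= T_n)]

# synthetic check: a horizontal 'flange' patch is kept, a vertical 'web' patch is dropped
flange = np.array([[x, y, 0.0] for x in np.arange(0, 1.01, 0.1)
                   for y in np.arange(0, 1.01, 0.1)])
web = np.array([[20.0, y, z] for y in np.arange(0, 1.01, 0.1)
                for z in np.arange(0, 1.01, 0.1)])
kept = drop_near_vertical_surfaces(np.vstack([flange, web]))
```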
Due to uncertainties in point normal estimation, some web points may remain after this initial filtering. Therefore, an additional point feature, namely the local difference in the z-axis, z_diff,i, is defined to further filter out the remaining web points. Note that, for this step, the point clouds are voxel-downsampled using a voxel size of 0.05 m to reduce the computational cost.
It is assumed that, in a local neighbourhood, the number of remaining web points is small compared to the number of bottom flange points, and that the z_i value of a web point p_i(x_i, y_i, z_i) is greater than that of the majority of other points in its neighbourhood by a certain threshold T_z. Based on this assumption, the point feature 'local difference in z-axis', z_diff,i, is defined as follows:
z_diff,i = z_i − z_neighbourhood,i
where z_i is the z-coordinate of the point p_i and z_neighbourhood,i is the median z-coordinate of all points in the neighbourhood of p_i.
As presented in Figure 15, a cylindrical neighbourhood is constructed for each point p_i to calculate z_diff,i. The axis of the cylinder is vertical and passes through p_i; the radius of the cylinder is R_i and its height is H_i. By further filtering out all points with z_diff,i > T_z, the remaining web points are successfully removed. The threshold T_z, the radius R_i, and the height H_i are empirically set to 0.05 m, 0.1 m, and 1 m, respectively.
In the example shown in Figure 1, there is a vertical space of 0.064 m between the beam webs and beam bottom flanges. Therefore, T_z should be smaller than 0.064 m. T_z is further determined by assessing the cumulative distribution of z_diff, as shown in Figure 16. By setting T_z = 0.05 m, 99% of the points are preserved while the remaining web points are filtered out.
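The 'local difference in z-axis' feature can be sketched with a 2D k-d tree, since a vertical cylinder is simply a disc search in xy plus a height cap in z. This is an illustrative implementation; whether the cylinder height H is centred on p_i is our assumption, and R, H, and T_z take the empirical values stated above:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_z_diff(points, R=0.1, H=1.0):
    """z_diff,i = z_i - median z of the vertical cylindrical neighbourhood
    (radius R, height H, axis through p_i) around each point."""
    tree_xy = cKDTree(points[:, :2])           # disc search in xy = cylinder side
    out = np.empty(len(points))
    for i, p in enumerate(points):
        idx = tree_xy.query_ball_point(p[:2], R)
        z = points[idx, 2]
        z = z[np.abs(z - p[2]) <= H / 2.0]     # height cap (assumed centred on p_i)
        out[i] = p[2] - np.median(z)
    return out

def drop_residual_web_points(points, T_z=0.05, R=0.1, H=1.0):
    """Remove points sitting above the local median surface by more than T_z."""
    return points[local_z_diff(points, R, H) <= T_z]

# synthetic check: a dense flange at z = 0 with three stray 'web' points above it
flange = np.array([[x, y, 0.0] for x in np.arange(0, 1.0, 0.02)
                   for y in np.arange(0, 1.0, 0.02)])
web = np.array([[0.5, 0.5, 0.10], [0.3, 0.7, 0.12], [0.7, 0.3, 0.15]])
filtered = drop_residual_web_points(np.vstack([flange, web]))
```

The median (rather than the mean) makes the reference height robust to the few residual web points themselves.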
Finally, Statistical Outlier Removal [27] is applied to further remove clutter points.

3.5. Experimental Setting

The process described in Section 3 involves many parameters. The values of these parameters are summarised in Table 1.

4. Comparison of the Proposed Method and Manual Extraction

In this study, manual extraction results are essential to evaluate the quality of the proposed algorithm’s extraction results. The CloudCompare 2.9.1 software is used to manually extract data representing the bottom flanges of beams. This is found to be a time-consuming process.
Due to the large size and complex shape of the original point cloud, directly locating the bottom flange is challenging. Therefore, a rough segmentation is performed first. This point cloud includes the entire beam structure (top flange, bottom flange, and web) and a significant volume of the safety nets directly contacting the beams. Since some safety nets are fully enclosed by many beams, manually eliminating points corresponding to these safety nets is difficult. Moreover, beam webs and beam bottom flanges are spatially close and are difficult to separate. Through careful observation and manipulation, it took approximately four hours to manually extract the beam bottom flanges. During the manual extraction process, the separation of the beam webs and beam bottom flanges took the most time at three hours.
Since the point cloud from algorithm-driven extraction has been voxel-downsampled by a voxel size of 0.05 m in the procedure described in Section 3.4, the manually extracted point clouds are also voxel-downsampled by a voxel size of 0.05 m to enable a fair comparison.
Figure 17a,b compares the beam bottom flanges extracted using the proposed method with those obtained through manual extraction. Based on qualitative observation, the bottom flanges extracted by our method closely match those extracted manually.
A quantitative comparison is also conducted between the manually extracted point cloud and that derived using our algorithm. The point cloud extracted by the proposed method is named P, containing 134,413 points, and the manually extracted point cloud is named Q, containing 135,053 points. The total numbers of points in P and Q are denoted by |P| and |Q|. Q is assumed to be the ground truth, and P is compared with Q to determine how many points in P are correctly extracted by the proposed method.
For any point p_m in P, if there exists a point q_n in Q such that ‖p_m − q_n‖ ≤ d, then p_m is deemed correctly extracted. Since both P and Q are voxel-downsampled with a voxel size of 0.05 m, it is fair to set d = 0.05 m. Using Python code based on SciPy [35], it is found that 120,079 points are correctly extracted by the proposed method.
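The correctness count can be reproduced with SciPy's `cKDTree`. This is a sketch of the kind of SciPy-based Python code the study refers to (the function name is ours, and the tolerance d is assumed to equal the 0.05 m voxel size):

```python
import numpy as np
from scipy.spatial import cKDTree

def count_true_positives(P, Q, d=0.05):
    """Points of P having a point of Q within distance d count as correct."""
    dist, _ = cKDTree(Q).query(P, k=1)   # nearest-neighbour distance in Q
    return int(np.sum(dist <= d))

# tiny synthetic check: 9 matched points plus 1 far outlier
Q = np.array([[x, y, 0.0] for x in (0.0, 0.1, 0.2) for y in (0.0, 0.1, 0.2)])
P = np.vstack([Q + np.array([0.0, 0.0, 0.01]), [[5.0, 5.0, 5.0]]])
tp = count_true_positives(P, Q)
```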
The number of correctly extracted points is termed True Positives (TP). False Positives (FP) are the number of points incorrectly extracted by the proposed method. False Negatives (FN) are the number of points missed by the proposed method. FP and FN are determined as follows:
FP = |P| − TP
FN = |Q| − TP
The results are shown in Table 2.
Following the convention of previous studies on point cloud segmentation, three metrics, i.e., precision, recall, and F1 score are adopted to quantify the accuracy of the algorithm-driven extraction. Precision is the ratio of correctly extracted points to the total points extracted by the proposed method. Recall is the ratio of correctly extracted points to the total ground truth points. F1 score is the harmonic mean of precision and recall, balancing the two metrics. The three metrics can be calculated as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
The results are shown in Table 3. The results of all three metrics show that bottom flanges extracted by our method closely match those extracted manually.
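The three metrics follow directly from the counts. The small helper below reproduces the figures reported in this section (TP = 120,079, |P| = 134,413, |Q| = 135,053), all three of which round to 0.89:

```python
def extraction_metrics(tp, n_extracted, n_ground_truth):
    """Precision, recall, and F1 from TP and the cloud sizes |P| and |Q|."""
    fp = n_extracted - tp                 # FP = |P| - TP
    fn = n_ground_truth - tp              # FN = |Q| - TP
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# counts reported in this study
precision, recall, f1 = extraction_metrics(120079, 134413, 135053)
```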
The processing time and memory usage of the proposed method are also recorded to demonstrate its effectiveness compared to manual extraction. For the record, the proposed method was run on a desktop with an Intel i5-11400F CPU and 16.0 GB of RAM. The procedures described in Section 3.2 and Section 3.3 require human observation and manipulation, so their time usage cannot be accurately recorded. The separation of beam bottom flanges and beam webs is fully automated by the algorithm coded in Python, utilising the Python libraries Open3D and NumPy. The time and memory usage of each sub-step are recorded and presented in Table 4. The number of points in the input point cloud is also presented in Table 4; the time and memory usage are expected to increase with the amount of input data. Peak memory usage throughout the process, for an input point cloud initially containing 11,047,383 points, is merely 69.22 MB, which is easily manageable on modern desktops and laptops. The total run time is 746.49 s, i.e., less than 13 min. In contrast, it took approximately 180 min, varying with the operator's skills, to manually separate the beam bottom flanges and the beam webs in the CloudCompare 2.9.1 software. As a result, the proposed method took only 7% of the time required by manual extraction to separate the beam bottom flanges and webs.

5. Deformation Calculation

The procedures outlined in Section 3 were also applied to the point cloud data from the second scan to obtain the bottom flange point clouds of the steel beams. The distance between the processed point clouds from the two scans represents the deformation of beams between the two epochs. The distance was then calculated using the Cloud-to-Cloud (C2C) method with least-square planes as local models [36].
The principle of the deformation calculation is illustrated in Figure 18. The point cloud of the beam bottom flanges acquired in the second scan is treated as the query point cloud, while the point cloud acquired in the first scan is treated as the reference point cloud. The query point cloud and reference point cloud are represented by rectangles and circles, respectively. The two dashed lines represent the assumed true positions of the beam bottom flanges at the two epochs. For a point q_1 in the query point cloud, the closest point, r_1, in the reference point cloud is identified. q_1 and r_1 are represented by a solid rectangle and a solid circle, respectively. A least-squares plane is fitted to r_1 and its six nearest neighbours in the reference point cloud; the fitted plane is represented by the solid line. Finally, the distance d_1 from q_1 to the fitted plane is regarded as the distance from q_1 to the reference point cloud. It can be seen that d_1 is close to the assumed true distance d_t.
However, the main limitation of this distance calculation method is that the distance is overestimated wherever the reference point cloud is occluded. Such an example is also shown in Figure 18. For a point q_2 (represented by a dashed rectangle) in the query point cloud, the corresponding part of the reference cloud is occluded. The closest point to q_2 in the reference cloud is r_2, represented by a dashed circle. The plane fitted at r_2 is the same as the plane fitted at r_1, because r_1 and r_2 share the same neighbourhood. In this case, the estimated distance d_2 is significantly larger than the assumed true distance d_t. Therefore, deformation results at occlusions of the reference point cloud should be disregarded.
As shown in Figure 19, the maximum observed deformation of the steel beams is 100 mm, with an average deformation of 9 mm, excluding unrealistic deformation values caused by occlusion. The deformation map indicates that the greatest deformation occurs at column h (see Figure 2b), which was unloaded, with deformation increasing towards this area. Although column m (see Figure 2b) was also unloaded, the steel frame structure above it is near other vertical supports, including loaded temporary lattice columns, concrete-filled steel tube columns, and the main reinforced concrete structure. Consequently, minimal deformation is observed at column m.
The steel frame structure investigated in this study is a cantilever steel frame structure for roof coverings. According to the Standard for Design of Steel Structures [37], the upper limit of allowable deformation of such a structure is L/150, where L is the cantilever span of the whole steel frame structure, as indicated by the red dashed line in Figure 20. In our case, L = 42 m, and hence the upper limit of allowable deformation is 280 mm. The observed deformation is therefore well within the allowable range.

6. Discussion

At present, the use of 3D scanning technology for monitoring structural deformation is still developing. This study successfully extracts the beam bottom flanges of a steel frame and measures the deformation of the structure. The proposed method can be adapted to similar steel frame structures with minor adjustments of the parameters summarised in Table 1. In this study, only I-beams are investigated; their bottom flanges and webs are separated by the difference in their normal orientations. Besides I-beams, C-beams, L-beams, T-beams, box-beams, and round beams are commonly used in steel frame structures. If the cross-section of a beam has orthogonal sides, those sides can be separated by their difference in normal orientation. The proposed method is therefore also applicable to steel frame structures consisting of I-beams, C-beams, L-beams, T-beams, and box-beams, but not to those consisting of round beams. It is also likely unsuitable for other structures with different geometric patterns, as noted in the previous research discussed in Section 1. Future improvements should focus on broadening the applicability of the method to other civil structures.
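This generalisation of the normal-orientation criterion can be sketched in a few lines of NumPy: for any cross-section with orthogonal sides, points whose unit normals have a noticeable horizontal component lie on near-vertical web surfaces, while the rest lie on near-horizontal flanges. The sketch below assumes normals are already estimated and reuses the 0.1 threshold from Figure 14; absolute values are taken because estimated normals may point to either side of a surface:

```python
import numpy as np

def split_flanges_and_webs(points, normals, t=0.1):
    """Separate flange points from web points by normal orientation.

    points  : (N, 3) array of coordinates.
    normals : (N, 3) array of unit normals.
    t       : threshold on the horizontal normal components (0.1 in
              Figure 14); assumed to transfer to other orthogonal sections.
    Returns (flange_points, web_points)."""
    # Web surfaces are near-vertical, so their normals are near-horizontal.
    web_mask = (np.abs(normals[:, 0]) > t) | (np.abs(normals[:, 1]) > t)
    return points[~web_mask], points[web_mask]
```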
While our method eliminates the need for manual extraction of beam bottom flanges, there are areas for improvement. Specifically, the RANSAC plane-fitting and clustering process still relies on human observation to determine the correct plane and clusters, requiring manual verification of results. Future work should focus on developing a fully automatic algorithm capable of recognising data points of interest based on unique geometric features.
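For reference, the Euclidean clustering step whose results currently require manual verification can be sketched with SciPy's k-d tree. This is an illustrative reimplementation of the principle (the study itself used Open3D); it groups points into connected components where neighbours closer than the distance threshold d_E are linked:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, d_e):
    """Label each point with a cluster id; points within d_e of each
    other (transitively) share a label."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue  # already assigned to a cluster
        stack = [seed]
        labels[seed] = current
        while stack:  # flood-fill the connected component
            i = stack.pop()
            for j in tree.query_ball_point(points[i], d_e):
                if labels[j] == -1:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```

Small clusters (clutter) can then be discarded by thresholding the label counts; selecting which surviving clusters are relevant is the step that still requires human judgement.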
The threshold values were decided empirically, which requires a manual assessment of the geometry of the steel frame structure and the characteristics of the acquired point cloud. In the future, automatically adaptive thresholding should be considered.
It is suggested that deep learning point cloud segmentation models [38,39,40,41] be utilised to extract the beam bottom flanges. Deep learning models usually operate in an end-to-end manner, which avoids manual judgement during the process. Moreover, their parameters are determined during training, which alleviates the need for manual parameter adjustment. However, deep learning models can only be realised after appropriate training data have been prepared. Our method can be used to label the training data for such models.
Overall, the accuracy of the proposed method is similar to that of manual extraction, but it is considerably more efficient. The proposed method thus accelerates the extraction of beam bottom flange point clouds, ultimately making the monitoring of steel frame deformation using laser scanning more desirable.

7. Conclusions

This study introduces an algorithm-driven method for extracting 3D point clouds of beam bottom flanges from a complex steel frame structure. It uses RANSAC plane-fitting to coarsely extract the level of the steel frame structure and clustering algorithms to remove clutter. The method then distinguishes between the webs and bottom flanges of beams based on their normal orientation and vertical position.
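The coarse RANSAC plane extraction summarised above can be illustrated with a minimal NumPy implementation of the principle [26] (a sketch, not the authors' Open3D implementation; the iteration count is arbitrary and d_R = 0.2 m follows Table 1):

```python
import numpy as np

def ransac_plane(points, d_r=0.2, iters=500, seed=0):
    """Find the plane with the most inliers within distance d_r.
    Returns a boolean inlier mask over the input points."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Hypothesise a plane from three random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate sample (collinear points)
        n /= norm
        # Score by the number of points within d_r of the plane.
        mask = np.abs((points - sample[0]) @ n) < d_r
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Repeatedly extracting the best plane and removing its inliers yields the successive planes shown in Figures 6 and 8.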
Taking the manually extracted beam bottom flanges as ground truth, the proposed method achieved an accuracy of 0.89. For the same objective of separating beam bottom flanges and beam webs, it took only about 7% of the time required for manual extraction. The deformation estimated from the point clouds extracted by the proposed method agrees with the site condition. The maximum deformation, 100 mm, is observed at one of the positions where a temporary support was unloaded.
There are two main limitations of the proposed method. First, it still requires manual judgement during the RANSAC plane-fitting and clustering processes. Second, all three steps, namely RANSAC plane-fitting, clustering, and the separation of beam bottom flanges and webs, require manual adjustment of the parameter values.
To address these limitations, future studies should focus on developing fully automatic methods, with automated recognition of the data of interest and automated adjustment of parameter values. End-to-end deep learning point cloud segmentation models can potentially meet these requirements. However, deep learning models can only be utilised after appropriate training data are prepared. Our method can be used to label the training data for deep learning models, contributing to the development of more advanced methods.
The proposed method efficiently extracts the point clouds of beam bottom flanges, making the monitoring of steel frame deformation using laser scanning more affordable. Given the common use of steel frame structures, which often require deformation monitoring, the proposed method offers an effective and valuable solution for this frequently encountered task.

Author Contributions

Conceptualization, L.F. and Y.Z.; methodology, D.W. and Y.Z.; software, Y.Z. and D.W.; validation, Y.Z.; formal analysis, Y.Z. and D.W.; investigation, Y.Z.; resources, L.F.; data curation, D.W. and Y.Z.; writing—original draft preparation, D.W., Q.Z., and Y.Z.; writing—review and editing, Q.Z., Y.Z. and L.F.; visualisation, D.W., Q.Z. and Y.Z.; supervision, L.F.; project administration, Y.B. and L.F.; funding acquisition, L.F. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Xi’an Jiaotong-Liverpool University Research Enhancement Fund (grant number REF-21-01-003) and the Suzhou City University Research Startup Fund (grant number 3110710923).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Yuanfeng Bao was employed by the company Suzhou SITRI Integrated Infrastructure Technology Research Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Zhu, Q.; Fan, L.; Weng, N. Advancements in point cloud data augmentation for deep learning: A survey. Pattern Recognit. 2024, 153, 110532. [Google Scholar] [CrossRef]
  2. Rabi, R.R.; Vailati, M.; Monti, G. Effectiveness of Vibration-Based Techniques for Damage Localization and Lifetime Prediction in Structural Health Monitoring of Bridges: A Comprehensive Review. Buildings 2024, 14, 1183. [Google Scholar] [CrossRef]
  3. Olsen, M.J.; Kuester, F.; Chang, B.J.; Hutchinson, T.C. Terrestrial Laser Scanning-Based Structural Damage Assessment. J. Comput. Civ. Eng. 2010, 24, 264–272. [Google Scholar] [CrossRef]
  4. Yang, H.; Xu, X.; Xu, W.; Neumann, I. Terrestrial Laser Scanning-Based Deformation Analysis for Arch and Beam Structures. IEEE Sens. J. 2017, 17, 4605–4611. [Google Scholar] [CrossRef]
  5. Oskouie, P.; Becerik-Gerber, B.; Soibelman, L. Automated Measurement of Highway Retaining Wall Displacements Using Terrestrial Laser Scanners. Autom. Constr. 2016, 65, 86–101. [Google Scholar] [CrossRef]
  6. Acikgoz, S.; Soga, K.; Woodhams, J. Evaluation of the Response of a Vaulted Masonry Structure to Differential Settlements Using Point Cloud Data and Limit Analyses. Constr. Build. Mater. 2017, 150, 916–931. [Google Scholar] [CrossRef]
  7. Kalenjuk, S.; Lienhart, W.; Rebhan, M.J. Processing of Mobile Laser Scanning Data for Large-Scale Deformation Monitoring of Anchored Retaining Structures Along Highways. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 678–694. [Google Scholar] [CrossRef]
  8. Valero, E.; Bosché, F.; Forster, A. Automatic segmentation of 3D point clouds of rubble masonry walls, and its application to building surveying, repair and maintenance. Autom. Constr. 2018, 96, 29–39. [Google Scholar] [CrossRef]
  9. Lu, R.; Brilakis, I.; Middleton, C.R. Detection of Structural Components in Point Clouds of Existing RC Bridges. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 191–212. [Google Scholar] [CrossRef]
  10. Smith, A.; Sarlo, R. Automated extraction of structural beam lines and connections from point clouds of steel buildings. Comput.-Aided Civ. Infrastruct. Eng. 2022, 37, 110–125. [Google Scholar] [CrossRef]
  11. Yang, L.; Cheng, J.C.P.; Wang, Q. Semi-automated generation of parametric BIM for steel structures based on terrestrial laser scanning data. Autom. Constr. 2020, 112, 103037. [Google Scholar] [CrossRef]
  12. Hamid-Lakzaeian, F. Point cloud segmentation and classification of structural elements in multi-planar masonry building facades. Autom. Constr. 2020, 118, 103232. [Google Scholar] [CrossRef]
  13. Dimitrov, A.; Golparvar-Fard, M. Segmentation of building point cloud models including detailed architectural/structural features and MEP systems. Autom. Constr. 2015, 51, 32–45. [Google Scholar] [CrossRef]
  14. Maalek, R.; Lichti, D.D.; Ruwanpura, J.Y. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites. Sensors 2018, 18, 819. [Google Scholar] [CrossRef] [PubMed]
  15. Maalek, R.; Lichti, D.D.; Ruwanpura, J.Y. Automatic Recognition of Common Structural Elements from Point Clouds for Automated Progress Monitoring and Dimensional Quality Control in Reinforced Concrete Construction. Remote Sens. 2019, 11, 1102. [Google Scholar] [CrossRef]
  16. Xu, Y.; Ye, Z.; Huang, R.; Hoegner, L.; Stilla, U. Robust segmentation and localization of structural planes from photogrammetric point clouds in construction sites. Autom. Constr. 2020, 117, 103206. [Google Scholar] [CrossRef]
  17. Cai, Y.; Fan, L. An Efficient Approach to Automatic Construction of 3D Watertight Geometry of Buildings Using Point Clouds. Remote Sens. 2021, 13, 1947. [Google Scholar] [CrossRef]
  18. Yan, Y.; Hajjar, J.F. Automated extraction of structural elements in steel girder bridges from laser point clouds. Autom. Constr. 2021, 125, 103582. [Google Scholar] [CrossRef]
  19. Galanakis, D.; Maravelakis, E.; Pocobelli, D.P.; Vidakis, N.; Petousis, M.; Konstantaras, A.; Tsakoumaki, M. SVD-based point cloud 3D stone by stone segmentation for cultural heritage structural analysis—The case of the Apollo Temple at Delphi. J. Cult. Herit. 2023, 61, 177–187. [Google Scholar] [CrossRef]
  20. Perez-Perez, Y.; Golparvar-Fard, M.; El-Rayes, K. Scan2BIM-NET: Deep Learning Method for Segmentation of Point Clouds for Scan-to-BIM. J. Constr. Eng. Manag. 2021, 147, 04021107. [Google Scholar] [CrossRef]
  21. Lee, J.S.; Park, J.; Ryu, Y.-M. Semantic segmentation of bridge components based on hierarchical point cloud model. Autom. Constr. 2021, 130, 103847. [Google Scholar] [CrossRef]
  22. Jing, Y.; Sheil, B.; Acikgoz, S. Segmentation of large-scale masonry arch bridge point clouds with a synthetic simulator and the BridgeNet neural network. Autom. Constr. 2022, 142, 104459. [Google Scholar] [CrossRef]
  23. Mirzaei, K.; Arashpour, M.; Asadi, E.; Masoumi, H.; Mahdiyar, A.; Gonzalez, V. End-to-end point cloud-based segmentation of building members for automating dimensional quality control. Adv. Eng. Inform. 2023, 55, 101878. [Google Scholar] [CrossRef]
  24. Jing, Y.; Sheil, B.; Acikgoz, S. A lightweight Transformer-based neural network for large-scale masonry arch bridge point cloud segmentation. Comput.-Aided Civ. Infrastruct. Eng. 2024, 39, 2427–2438. [Google Scholar] [CrossRef]
  25. Lee, Y.S.; Rashidi, A.; Talei, A.; Kong, D. Innovative Point Cloud Segmentation of 3D Light Steel Framing System through Synthetic BIM and Mixed Reality Data: Advancing Construction Monitoring. Buildings 2024, 14, 952. [Google Scholar] [CrossRef]
  26. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. In Readings in Computer Vision; Fischler, M.A., Firschein, O., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1987; pp. 726–740. [Google Scholar]
  27. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar]
  28. Leica ScanStation P30/P40 Product Specifications. Available online: https://leica-geosystems.com/en-us/products/laser-scanners/scanners/p-series-details-matter-white-paper (accessed on 16 June 2024).
  29. Fan, L.; Smethurst, J.A.; Atkinson, P.M.; Powrie, W. Error in target-based georeferencing and registration in terrestrial laser scanning. Comput. Geosci. 2015, 83, 54–64. [Google Scholar] [CrossRef]
  30. Leica Geosystems. Cyclone; Version 9.1.3; Windows. Leica Geosystems: Heerbrugg, Switzerland, 2015. [Google Scholar]
  31. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  32. CloudCompare (Version 2.9.1). GPL Software. Available online: https://www.cloudcompare.org/ (accessed on 17 June 2024).
  33. Gallo, O.; Manduchi, R.; Rafii, A. CC-RANSAC: Fitting planes in the presence of multiple surfaces in range data. Pattern Recognit. Lett. 2011, 32, 403–410. [Google Scholar] [CrossRef]
  34. Gamal, A.; Wibisono, A.; Wicaksono, S.B.; Abyan, M.A.; Hamid, N.; Wisesa, H.A.; Jatmiko, W.; Ardhianto, R. Automatic LIDAR building segmentation based on DGCNN and euclidean clustering. J. Big Data 2020, 7, 102. [Google Scholar] [CrossRef]
  35. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  36. Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change detection on points cloud data acquired with a ground laser scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, W19. [Google Scholar]
  37. GB 50017-2017; Standard for Design of Steel Structures. Ministry of Housing and Urban-Rural Development of PRC: Beijing, China, 2017.
  38. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the CVPR, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  39. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph cnn for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef]
  40. Thomas, H.; Qi, C.R.; Deschaud, J.; Marcotegui, B.; Goulette, F.; Guibas, L. KPConv: Flexible and Deformable Convolution for Point Clouds. In Proceedings of the ICCV, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6410–6419. [Google Scholar]
  41. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11105–11114. [Google Scholar]
Figure 1. Example cross-section of an H-beam in point cloud data (both faces of the web were scanned, so the web appears as two surfaces; the web and the bottom flange look disconnected due to occlusion near their junction).
Figure 2. Site condition and laser scanning setup: (a) temporary lattice columns, (b) site layout, (c) a scanning station, (d) targets attached to columns. (For clarity, only the level of the steel frame structure is shown in Figure 2b, and the point cloud has been rotated to align the principal axes of the steel frame structure with the coordinate axes.)
Figure 3. Overall procedure of the proposed method.
Figure 4. Original point cloud data of the first epoch.
Figure 5. Examination of the spatial distribution of the beam bottom flanges: (a) top view, (b) side view. For better visibility, only the beam bottom flanges, which are extracted using the proposed method, are shown in this figure. The perspective of the side view is indicated by the purple arrow in Figure 5a.
Figure 6. Planes extracted by RANSAC: (a) The first plane. (b) The second plane.
Figure 7. Visualization of a small part of the second plane.
Figure 8. The third extracted plane using RANSAC, when (a) d_R = 0.05, (b) d_R = 0.5, and (c) d_R = 0.2.
Figure 9. Irrelevant points after using RANSAC.
Figure 10. Segmentation result using Euclidean clustering for (a) d_E = 1 and (b) d_E = 0.05.
Figure 11. Obtained point cloud data after global Euclidean clustering.
Figure 12. Rotated point cloud data.
Figure 13. A representative local region (a) before clustering, (b) clustering results, (c) after the removal of the clusters.
Figure 14. Separation of points in webs and flange based on point normals. Points with n_{i,x} > 0.1 or n_{i,y} > 0.1 are shown in green and the others in red.
Figure 15. Construction of the local cylindrical neighbourhood. P_i is shown in red, the extent of the cylindrical neighbourhood in light grey, the points inside the neighbourhood in green, and the points outside it in black.
Figure 16. Cumulative probability of T_z.
Figure 17. Point cloud data of extracted plane using (a) the proposed method and (b) manual extraction.
Figure 18. Principles and limitations of C2C method with least-square planes as local models.
Figure 19. Beam deformations after the temporary columns h and m are removed.
Figure 20. Illustration of the cantilever span of the steel frame structure.
Table 1. Summary of the values of the parameters.

Parameter | Value
d_R, distance threshold for RANSAC | 0.2 m
d_{E,G}, distance threshold for global Euclidean clustering | 1 m
d_{E,L}, distance threshold for local Euclidean clustering | 0.05 m
R_i, radius of the cylindrical neighbourhood | 0.1 m
H_i, height of the cylindrical neighbourhood | 1 m
T_z, threshold for local difference in z-axis | 0.05 m
Table 2. Summary of TP, FP, and FN.

TP | FP | FN
120,079 | 14,334 | 14,974
Table 3. Metrics for the comparison between beam bottom flanges extracted using the proposed method and those obtained through manual extraction.

Precision | Recall | F1
0.89 | 0.89 | 0.89
Table 4. Summary of runtime and memory usage.

Procedure | Input Point Number | Runtime (s) | Peak Memory Usage (MB)
Filtering based on normal orientation | 11,047,383 | 26.81 | 69.22
Voxel-downsample | 5,389,404 | 1.70 | 0.54
Filtering based on local difference in z-axis | 139,048 | 717.98 | 50.52
In total | N/A | 746.49 | N/A
