Article

3D Point Cloud Dataset of Heavy Construction Equipment

1 Department of Railroad Convergence System, Korea National University of Transportation, Uiwang-si 16106, Republic of Korea
2 Department of Railroad Infrastructure Engineering, Korea National University of Transportation, Uiwang-si 16106, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3599; https://doi.org/10.3390/app14093599
Submission received: 20 March 2024 / Revised: 16 April 2024 / Accepted: 21 April 2024 / Published: 24 April 2024

Abstract
Object recognition algorithms and datasets based on point cloud data have been designed mainly for autonomous vehicles. When applied to the construction industry, they face challenges because point cloud data originating from large earthwork sites are high in both volume and density. This research prioritized the development of 3D point cloud datasets specifically for heavy construction equipment, including dump trucks, rollers, graders, excavators, and dozers, all of which are extensively used in earthwork sites. The aim was to enhance the efficiency and productivity of machine learning (ML) and deep learning (DL) research that relies on 3D point cloud data in the construction industry. Notably, unlike conventional approaches that acquire point cloud data using UAVs (Unmanned Aerial Vehicles) and UGVs (Unmanned Ground Vehicles), the datasets for the five types of heavy construction equipment established in this research were generated by 3D scanning diecast models of the equipment.

1. Introduction

1.1. Research Background and Objectives

According to a 2017 labor productivity analysis report on the construction industry, the annual growth rate of global labor productivity in the manufacturing sector over the preceding 12 years was 3.6%, whereas the construction industry averaged only 1% [1]. To address this, numerous efforts have been made worldwide to improve labor productivity in the construction industry [1,2,3,4]. Previous research proposed various strategies for enhancing construction labor productivity, including regulatory innovation and increased transparency, improved construction capabilities, and the adoption of new construction materials and digital technologies [1,2,3,4]. Furthermore, recent advancements in Industry 4.0, driven by the global development of Information Technology (IT) tools, have led to active research in construction automation that applies digital technology to increase labor productivity in the construction industry [5].
Previous construction automation research primarily focused on hardware advancements, such as highway road surface crack inspection and sealing equipment [6,7], bricklaying robots [8], autonomous robotic excavation for dry stone construction [9], and machine guidance (MG) and machine control (MC) for excavators [10,11]. However, the rapid development of Building Information Modeling (BIM) and Artificial Intelligence (AI) technologies has led to a significant increase in software-centric construction automation research [12,13,14,15,16,17,18,19]. Furthermore, the technological progress of AI-equipped autonomous vehicles and various sensors has paved the way for AI applications in construction, particularly object recognition research [20]. Simultaneously, data collection equipment such as Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), equipped with a variety of sensors including 2D image cameras, depth cameras, and 2D and 3D LiDAR, has made it easier to obtain terrain data at construction sites, allowing object recognition technology to be applied in construction [21,22]. Together, the development of AI and of data collection sensors and equipment has provided a solid foundation for active research into utilizing AI in construction.
To date, AI research in the construction industry has primarily centered on computer vision technology using 2D images and videos, focusing on construction safety and process monitoring as well as concrete damage measurement and assessment [23]. Recent studies have applied image- and video-based object detection and tracking to evaluate the use of safety gear, including helmets and safety belts [24]; to monitor construction workers, the general public, and forklifts [25]; to recognize cracks in concrete structures such as roads, buildings, and bridges; and to assess potential hazards [26]. While 2D image-based object detection shows excellent accuracy and speed, it struggles in environments where 2D cameras are susceptible to changing lighting conditions, particularly in areas without direct sunlight [15]. Consequently, there has been a growing trend in the construction industry to adopt 3D LiDAR sensors to overcome the limitations of existing data collection sensors: 3D LiDAR sensors can collect data without being affected by lighting conditions, making them suitable for construction environments exposed to varying conditions [15]. Furthermore, data collected on construction sites with 2D cameras and videos may lack Z-coordinate values, or contain errors in them, due to the sensors' characteristics. In contrast, 3D LiDAR sensors collect 3D point cloud data with accurate measurements of the X, Y, and Z coordinates, making them more suitable for the construction industry, which demands accuracy [27].
However, despite these advantages, there are currently no representative 3D point cloud-based object recognition datasets available for construction sites, so research institutions have been creating datasets individually. Constructing 3D point cloud data for construction sites requires significantly more time and computing resources than building 2D image datasets. Construction sites are primarily outdoors and involve large-scale structures such as roads, bridges, and buildings; as a result, the 3D point cloud data collected in the field tend to be large in volume and high in point density, and preprocessing tasks such as noise removal and adjustment pose significant challenges. Furthermore, when 3D scanning heavy construction equipment such as dump trucks and excavators on a construction site using equipment such as UAVs and Terrestrial Laser Scanners (TLS), obtaining complete and undistorted point cloud data is difficult because the equipment moves, rotates, and travels back and forth during operation. Data collected on construction sites from heavy construction equipment in operation often contain overlapping points, creating artifacts akin to residual images in photographs. Using such point cloud data to train ML and DL models can degrade the performance of the resulting heavy construction equipment prediction model. For these reasons, obtaining high-quality 3D point cloud object data of heavy construction equipment is crucial to enhancing the performance of prediction models in 3D point cloud-based ML and DL research.
As discussed, individually constructing 3D point cloud data for construction sites demands significant time and effort from each research institution seeking to build high-performing machine learning and deep learning models. This challenge is not limited to the construction field; it is common to research institutions across all areas that conduct ML and DL research based on 2D images and 3D point cloud data. In autonomous driving research, the KITTI dataset [28] was developed as a representative benchmark to facilitate efficient ML and DL research, and it remains the basis of active research. Therefore, to enhance the productivity and efficiency of 3D point cloud-based ML and DL research in the construction industry, this research developed the 3D-ConHE (3D Point Clouds of Heavy construction Equipment) dataset, targeting heavy construction equipment.

1.2. Research Scope and Methods

This research delineated its scope to encompass 3D laser scanning and the construction of point cloud datasets for five distinct types of heavy construction equipment: dump trucks, rollers, graders, excavators, and dozers. These five types were selected because they are commonly used in earthwork construction sites. Diverse manufacturers and models were used for each equipment type to ensure a varied collection of point cloud data. In this research, diecast models of each equipment type were 3D scanned to form a diverse dataset: two models of graders, five models of crawler-type excavators, three models of dump trucks, five models of crawler- and wheel-type dozers, and two models of rollers.
Furthermore, the research was conducted in a structured sequence, as depicted in Figure 1: planning the dataset construction, generating and collecting point cloud data, preprocessing the point cloud data, and constructing and reviewing the dataset. To establish the point cloud dataset for heavy construction equipment, the research began by reviewing prior studies on classification and segmentation datasets in the construction domain, with a focus on large 3D point cloud-based datasets for object recognition. The research then assessed efficient data collection methods for building the dataset and evaluated their practicality, which led to the adoption of diecast models for 3D laser scanning; the diecast models were selected for their ability to faithfully replicate the movements of actual heavy construction equipment, enabling efficient collection of 3D point cloud data. The next phase involved preprocessing the 3D-scanned point cloud data and organizing them into datasets of mesh files and PCD files, with the scales of data scanned from diecast models of varying scales adjusted to match the actual scale of heavy construction equipment. Lastly, the research categorized the point density of the scanned data into three levels, resulting in the 3D-ConHE dataset for the five types of heavy construction equipment, and concluded with a comprehensive review and modification of the constructed dataset.

2. Related Works

2.1. Review of Classification and Segmentation Datasets in the Construction Field

Most recently released classification and segmentation datasets for heavy construction equipment are constructed primarily from 2D images, such as ImageNet [29], the AIM (Advanced Infrastructure Management group (Lake Ariel, PA, USA)) dataset [30,31], the ACID (Alberta Construction Image Dataset) [32], and the Microsoft COCO (Common Objects in Context) dataset [33]. Additionally, research institutions often construct their own data [23,34] by collecting substantial image datasets from various sources, both online and offline, including construction sites. Consequently, in 2D image-based ML and DL research in the construction industry, there are various ways to easily acquire the datasets needed for training and testing.
However, well-established benchmark 3D point cloud-based datasets of heavy construction equipment for classification and segmentation research remain hard to find. Therefore, previous object recognition research on heavy construction equipment based on 3D point cloud data often generated the point cloud data from CAD files or scanned the equipment using methods such as TLS or UAV for model training and testing [35,36]. Although not specific to heavy construction equipment, semantic segmentation research based on 3D point cloud data has been conducted with datasets of large-scale bridge components such as piers, slabs, abutments, and girders [37]. Additionally, point cloud datasets were created for scaffolds on construction sites to facilitate 3D point cloud-based semantic segmentation research [38]. Beyond 3D scanning of large-scale objects with TLS or UAV, research has also 3D scanned scale models such as architectural design models, cultural heritage objects, and rendering models of automobile designs to extract 3D point cloud data and construct datasets [39,40].
Furthermore, studies utilizing pseudo-LiDAR to perform classification based on 3D point cloud data of heavy construction equipment [39], or conducting classification research with a self-constructed 2D image dataset for steel plate surface detection [40], demonstrate innovative approaches to leveraging specialized datasets for advancing construction equipment analysis. As discussed above, various well-established (benchmark) datasets are available for 2D image-based heavy construction equipment research. Conversely, finding representative 3D point cloud-based datasets for heavy construction equipment, construction materials, the workforce, and other aspects of the construction industry is not easy. The absence of such representative training and testing datasets for ML and DL affects research efficiency across the construction industry [30,32,41,42], and individual research institutions must invest significant time and effort to independently construct high-quality 3D point cloud data of heavy construction equipment for ML and DL research.

2.2. Review of Large-Scale 3D Point Cloud Datasets for Classification and Segmentation

Currently, benchmark datasets constructed from 3D point cloud data focus primarily on indoor objects; these include ModelNet40 [43], S3DIS (Stanford 3D Indoor Scene Dataset) [44], and ShapeNet [45], as shown in Table 1. More recently, datasets targeting urban environments, such as Semantic3D [46], Paris-Lille-3D [47], and Toronto-3D [48], have been developed, enabling ML and DL research on large-scale 3D point cloud data. In contrast to the indoor datasets, these large-scale classification and segmentation datasets are constructed by scanning large urban facilities such as churches, streets, villages, soccer fields, and castles using static Terrestrial Laser Scanning (TLS) and Mobile Laser Scanning (MLS); they therefore share characteristics with 3D scans of large outdoor construction sites.
In addition to the aforementioned datasets, recent developments in large-scale 3D point cloud datasets include the HLS (Helmet Laser Scanning) dataset and the MCD (Multi-Campus Dataset) [49,50]. The HLS dataset was constructed by installing multiple sensors on helmets and applying Simultaneous Localization and Mapping (SLAM) technology in real time to build large-scale 3D datasets of forest areas, underground facilities, and infrastructure [49]. The MCD dataset represents the latest in large-scale 3D point cloud datasets, generating and providing 3D point cloud data for semantic segmentation of various campuses by applying SLAM technology [50].
Among these large-scale 3D point cloud datasets, the Semantic3D dataset was the pioneer and has since been actively employed in classification and semantic segmentation research (Table 1). It consists of eight class labels and over 4 billion points, making it ideal for high-performance large-scale 3D point cloud semantic segmentation tasks. In contrast, SemanticKITTI [46] was acquired for semantic segmentation research on autonomous vehicles, using devices such as a Velodyne HDL-64E LiDAR sensor and an Inertial Measurement Unit (IMU) to gather data from objects on the road. While both Semantic3D and SemanticKITTI were collected outdoors, their data collection methods differ, resulting in variations in the information they provide: SemanticKITTI includes IMU information because it uses the Mobile Laser Scanning (MLS) approach, with LiDAR and IMU mounted on vehicle roofs, as opposed to the static TLS method adopted by Semantic3D.
Additionally, the HLS dataset includes infrastructure and heritage data and, like SemanticKITTI, contains IMU information, offering different data from datasets acquired by static Terrestrial Laser Scanning (TLS). The same characteristics are evident in the MCD (Multi-Campus Dataset), which was developed for autonomous navigation of vehicles and robots [49,50].
Consequently, dataset characteristics can affect the suitability of different object recognition algorithms and influence research outcomes, and these attributes were considered in developing the 3D-ConHE dataset. Table 1 summarizes the static large-scale 3D point cloud datasets reviewed in this study, with the representative SemanticKITTI dataset added for the autonomous vehicle field.

3. Development of 3D-ConHE Dataset

3.1. 3D Point Cloud Data Generation and Collection

Collecting a large 3D point cloud dataset by scanning real heavy construction equipment working on earthwork sites, and utilizing it for ML and DL, is not easy. To solve this problem, this research established the 3D-ConHE dataset using heavy construction equipment diecast models with the same design and mechanical characteristics as equipment operated at actual construction sites. Accordingly, this research procured diecast models representing five distinct types of heavy construction equipment in preparation for 3D scanning. These diecast models replicate their real counterparts in terms of design, mechanical features, and motion joints. However, each model comes in a different scale, as detailed in Table 2, necessitating scale adjustments in post-processing after the 3D scanning procedure to match real-life dimensions.
Each heavy construction equipment diecast model chosen for this research has multiple motion joints (Figure 2). The five selected model types feature motion joints that enable movements up, down, left, and right, offering a range of motions. For simplicity, this research streamlined each joint's range of motion to a few discrete positions (such as MAX and MIN) to represent the movements of heavy construction equipment during on-site operations. For example, Figure 2a depicts an excavator with four motion joints: Motion Joint 1 connects the bucket and the arm; Motion Joint 2 connects the arm and the boom; Motion Joint 3 connects the boom and the body (the bucket, arm, and boom cylinders were not considered); and Motion Joint 4 corresponds to the section where the swing motor and swing bearing are installed. For the rollers in Figure 2b,c, the articulation joint was identified as Motion Joint 1, although the two models were scanned separately because their scanning directions for Motion Joint 1 differ. The dump truck in Figure 2d was scanned with Motion Joint 1 at varying heights. The graders in Figure 2e,f have different numbers of motion joints and were classified distinctly. Lastly, the dozers in Figure 2g–i were scanned separately, as they come in three forms with varying numbers of motion joints and movement directions.
Subsequently, the diecast model of an excavator, as depicted in Figure 2, was scanned with combinations of its motion joint movements, as showcased in Figure 3. Specifically, this research sequentially combined the motions of the model's joints during scanning to create data matching the movements of heavy construction equipment on construction sites. Figure 3 presents examples of the scanning methods employed. Figure 3a1–a3 represent combinations of the movements of Motion Joints 1 to 3 while keeping Motion Joint 4 stationary, and Figure 3a4–a6 illustrate scans at 0°, 45°, 90°, and 135° of Motion Joint 4's 360° rotation. Because the geometry at these four angles overlaps with that at 180°, 225°, 270°, and 315°, no additional scans were conducted beyond these four angles.
As shown in Table 3, the number of scans for each model was calculated from the model's motion joints and the joint combination characteristics described earlier. As a result, the 3D scanning process was streamlined to a total of 669 joint-movement combinations, as detailed in Table 3. The number of scans for excavators was notably higher than for the other diecast models because excavators have more motion joints.
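As a concrete check of this counting scheme, the short script below (an illustrative sketch, not the authors' code) applies the formula from Table 3 to the excavator group: five diecast models, three vertical positions (MAX, MID, MIN) for each of Motion Joints 1–3, and four rotation angles (0°, 45°, 90°, 135°) for Motion Joint 4.

```python
# Illustrative sketch: reproduce the excavator row of Table 3.
# Number of Scans = number of models x (1) x (2) x (3) x (4),
# where (i) is the number of discrete positions of motion joint i.
from math import prod

n_models = 5                        # five excavator diecast models (Table 2)
positions_per_joint = [3, 3, 3, 4]  # joints 1-3: MAX/MID/MIN; joint 4: four angles

n_scans = n_models * prod(positions_per_joint)
print(n_scans)  # -> 540, matching the excavator row of Table 3
```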
In this research, the SHINING 3D EinScan-SP laser scanner (Hangzhou, China) was used to scan the diecast models (Figure 4). This indoor portable scanner consists of a 3D laser scanner, a stand, and a turntable, as illustrated in Figure 4. It has an accuracy within 0.05 mm and can scan objects up to 1200 mm (width) × 1200 mm (depth) × 1200 mm (height). The scanning process involved placing a diecast model on the turntable and performing one scan for every 40° of horizontal rotation, with the scan data saved as ASC files. The scan data generated by rotating each diecast model through 360° were then merged to create mesh data in STL format. This scanning method produced 669 STL files for the five heavy construction equipment types.

3.2. Preprocessing of 3D Point Cloud Data

In the preprocessing phase, the scanned ASC files were converted to STL and PCD file formats; point cloud density calculations and adjustments, as well as removal of the ground plane from the PCD files, were then performed, as shown in Figure 5. The Open3D library [51] was used for file conversion and point density adjustment, and CloudCompare 2.6.3 (Windows 64-bit) [52] was used for scale adjustment and ground plane removal.
  • Converting Files: To generate PCD files from the ASC files produced during scanning, the ASC files for each piece of equipment were merged to create mesh files in STL format. These mesh files were further processed into the final PCD files via PLY files. The 3D scanner's software (EinScan SE/SP Software, Version 3.1.3.0) handled the conversion between ASC, STL, and PLY files, and Open3D was then used to convert the PLY files into PCD files. This process yielded 669 mesh-converted PLY files and 669 PCD files, all at the scale of the diecast models.
  • Adjusting Scale: The diecast models used for 3D scanning are scaled down to between 1:32 and 1:87 of their real-size counterparts. To allow these differently scaled models to be used together in this research, a scale adjustment was executed in CloudCompare, ensuring that the scanned data match the actual scale of heavy construction equipment used on-site.
  • Removing Ground Plane: After the scale adjustment, the ground planes in the PCD files of the diecast models were removed using CloudCompare. This step was included because the ground plane is not captured in the equipment point cloud data extracted from actual UAV and UGV scans; removing it makes the data more closely resemble on-site scans and thus improves the accuracy of ML and DL object detection models. This operation also reduced the size of the 3D-ConHE dataset.
  • Adjusting Point Density: This research examined the point cloud data of heavy construction equipment collected by UAV in order to adjust the point density of the scanned data. Typically, point cloud data of heavy construction equipment captured by UAV, as displayed in Figure 6, has a significantly lower point density than data generated by scanning diecast models, a discrepancy arising from the equipment's characteristics and the data capture method. In Figure 6, the data were acquired using a DJI PHANTOM 4 RTK (Shenzhen, China) equipped with a 20-megapixel camera, flying at an altitude of 100 m. Figure 6a shows point cloud data of heavy construction equipment extracted from a UAV scan of a construction site, while Figure 6b displays data generated from scanning diecast models indoors. The lower point density in Figure 6a is likely attributable to the drone capturing data from 100 m above the ground. Furthermore, a drone capturing data at 100 m can be affected by wind, causing its altitude to fluctuate, sometimes between 80 and 120 m above the ground. This variability means that UAV point cloud data have lower point densities than data generated from static indoor scanning of diecast models. Consequently, this research used the point density at a UAV capture height of 100 m as the reference value for the diecast models (Figure 6d), and additionally accounted for capture altitudes of 80 and 120 m by introducing variances of +20% and −20%, as depicted in Figure 6d,e. A minimal code sketch of these preprocessing steps follows below.
Figure 6. Comparison of UAV heavy construction equipment point cloud data and diecast model scan data.
Using this method, this research constructed a total of 2007 PCD files by adapting the point density of the 669 PCD files into three variations. Figure 7 provides examples of the five types of heavy construction equipment with adjusted point densities from the 2007 PCD files. This point density adjustment is anticipated to enhance the versatility of the 3D-ConHE dataset, enabling it to cater to diverse point density requirements for research purposes.
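To make the pipeline above concrete, the following is a minimal sketch of the four preprocessing steps using only the Open3D library. It is not the authors' published code: the paper performed the scale and ground-plane steps in CloudCompare, so the corresponding Open3D calls here are stand-ins, and the file names, the 1:50 scale factor, and the voxel sizes are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

# 1. Convert: read the mesh-derived PLY vertices and write them out as a PCD.
pcd = o3d.io.read_point_cloud("excavator_scan.ply")  # placeholder file name
o3d.io.write_point_cloud("excavator_scan.pcd", pcd)

# 2. Adjust scale: enlarge a 1:50 diecast scan to real-world dimensions
#    by scaling all coordinates about the origin.
pcd.scale(50.0, center=np.zeros(3))

# 3. Remove the ground plane: fit the dominant plane with RANSAC and keep
#    only the points that are NOT plane inliers (i.e., the equipment itself).
_, ground_idx = pcd.segment_plane(distance_threshold=0.05,
                                  ransac_n=3,
                                  num_iterations=1000)
equipment = pcd.select_by_index(ground_idx, invert=True)

# 4. Adjust point density: voxel downsampling thins the dense diecast scan
#    toward the sparser density of UAV data; three voxel sizes stand in for
#    the 100 m reference density and its +20% (80 m) / -20% (120 m) variants.
for label, voxel_size in [("080m", 0.08), ("100m", 0.10), ("120m", 0.12)]:
    down = equipment.voxel_down_sample(voxel_size=voxel_size)
    o3d.io.write_point_cloud(f"excavator_{label}.pcd", down)
```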
Figure 7. Part of our dataset ((top left) dump truck, (top right) dozer, (bottom left) excavator, (bottom middle) roller, (bottom right) grader).

3.3. Dataset Construction

Our dataset is presented in Figure 8 and is available in STL, PLY, and PCD formats. This dataset encompasses 3D scan data for five types of heavy construction equipment: dump truck, roller, grader, excavator, and dozer; and is accessible in both mesh and PCD file formats. Mesh files are provided in ‘.stl’ and ‘.ply’ formats, while PCD files are categorized into three different types of ‘.pcd’ files. Within the 3D-ConHE dataset, there are 669 STL files with a combined size of 13.6 GB, along with 669 PLY files totaling 31.4 GB. Furthermore, the ‘Before Ground Removal & Before Point Density Adjustment’ PCD files consist of 669 files, with a collective size of 8.9 GB, while the ‘After Ground Removal & Before Point Density Adjustment’ PCD files encompass 669 files amounting to 6.79 GB. The ‘After Ground Removal & After Point Density Adjustment’ PCD files include 2007 files with a total size of 809 MB. It should be noted that the variations in the sizes of PCD files before and after point density adjustments are attributable to aligning the point cloud density of diecast model scan data with the data collected from UAV scans. The 3D-ConHE dataset—dedicated to heavy construction equipment point cloud data—as developed in this research, will be made publicly available for research purposes. Our benchmarks are available online at: http://rb.gy/sli2mw/ (accessed on 18 March 2024).
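Because the dataset is released in standard formats, its files can be loaded directly with common open-source tools. The snippet below is a minimal usage sketch with placeholder file names (the actual names in the release may differ), again using Open3D.

```python
import open3d as o3d

# Load one sample in each released representation (placeholder file names).
mesh = o3d.io.read_triangle_mesh("dump_truck_001.stl")  # mesh (.stl / .ply)
pcd = o3d.io.read_point_cloud("dump_truck_001.pcd")     # point cloud (.pcd)

print(mesh)                        # vertex and triangle counts
print(len(pcd.points), "points")   # number of points in the cloud

# Quick visual sanity check of the point cloud.
o3d.visualization.draw_geometries([pcd])
```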

4. Conclusions

The KITTI dataset, a representative dataset for autonomous vehicles, was introduced in 2012 and has led research in areas such as 3D object detection and 3D tracking for autonomous vehicles; it continues to be actively utilized in recent studies [49]. Furthermore, the KITTI dataset has significantly influenced the development of datasets for artificial intelligence research worldwide [50]. Thus, in the construction industry as well, the development of datasets based on 3D point cloud data, similar to the KITTI dataset, is necessary for advancing AI research.
For this reason, our research developed the 3D-ConHE dataset, focusing on five types of heavy construction equipment, and made it publicly available to advance related research. The dataset was constructed efficiently by scanning diecast models of the equipment. Furthermore, the data were scaled to match the actual size of equipment used in the field, scanned with the movement directions of the motion joints taken into account, and provided at three different point cloud densities reflecting the point density of equipment captured by UAV. Considering and incorporating these conditions increases the utility of our dataset.
This study, notwithstanding the advantages described above, has certain limitations. Primarily, although the 3D-ConHE dataset was developed, it has not yet been verified and validated through field experiments; future research is planned to do so. Additionally, the dataset does not encompass the entire range of heavy construction equipment used in the construction industry but is limited to five types due to the nature of the research, which constrains the scope of the current study. Consequently, plans are in place to progressively increase the variety of heavy construction equipment included in the 3D-ConHE dataset in future studies.
Despite these limitations, the 3D-ConHE dataset is anticipated to be useful for research in areas such as classification, part segmentation, and semantic segmentation related to heavy construction equipment. Furthermore, it is expected to serve as a key technology for autonomous driving in the development of unmanned heavy construction equipment, in conjunction with the Machine Guidance (MG) and Machine Control (MC) systems actively implemented in the construction industry. We anticipate that the 3D-ConHE dataset will be extensively utilized in point cloud-based AI research within the construction industry in the future.

Author Contributions

Literature review, S.P.; writing—original draft preparation, S.P.; proposal of the overall framework and direction of the research, S.K.; writing—review and editing, S.K.; dataset construction, S.P.; dataset review, S.P. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available at http://rb.gy/sli2mw/ (accessed on 18 March 2024).

Acknowledgments

This research was conducted with the support of the “National R&D Project for Smart Construction Technology (No. RS-2020-KA158708)” funded by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport, and managed by the Korea Expressway Corporation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Barbosa, F.; Mischke, J.; Parsons, M. Improving Construction Productivity; McKinsey & Company: Chicago, IL, USA, 2017.
  2. Durdyev, S.; Ismail, S. Offsite Manufacturing in the Construction Industry for Productivity Improvement. EMJ Eng. Manag. J. 2019, 31, 35–46.
  3. Cho, Y.K.; Leite, F.; Behzadan, A.; Wang, C. State-of-the-Art Review on the Applicability of AI Methods to Automated Construction. In ASCE International Conference on Computing in Civil Engineering; American Society of Civil Engineers: Reston, VA, USA, 2019; pp. 105–113.
  4. Bamfo-Agyei, E.; Thwala, D.W.; Aigbavboa, C. Performance Improvement of Construction Workers to Achieve Better Productivity for Labour-Intensive Works. Buildings 2022, 12, 1593.
  5. Cai, S.; Ma, Z.; Skibniewski, M.J.; Bao, S. Construction Automation and Robotics for High-Rise Buildings over the Past Decades: A Comprehensive Review. Adv. Eng. Inform. 2019, 42, 100989.
  6. Lasky, T.A.; Ravani, B. Sensor-Based Path Planning and Motion Control for a Robotic System for Roadway Crack Sealing. IEEE Trans. Control Syst. Technol. 2000, 8, 609–622.
  7. Bennett, D.A.; Feng, X.; Velinsky, S.A. Robotic Machine for Highway Crack Sealing. Transp. Res. Rec. 2003, 1827, 18–26.
  8. Dakhli, Z.; Lafhaj, Z. Robotic Mechanical Design for Brick-Laying Automation. Cogent Eng. 2017, 4.
  9. Johns, R.L.; Wermelinger, M.; Mascaro, R.; Jud, D.; Hurkxkens, I.; Vasey, L.; Chli, M.; Gramazio, F.; Kohler, M.; Hutter, M. A Framework for Robotic Excavation and Dry Stone Construction Using On-Site Materials. Sci. Robot. 2023, 8, eabp9758.
  10. Lee, M.S.; Shin, Y.; Choi, S.J.; Kang, H.B.; Cho, K.Y. Development of a Machine Control Technology and Productivity Evaluation for Excavator. J. Drive Control 2020, 17, 37–43.
  11. Yeom, D.J.; Yoo, H.S.; Kim, J.S.; Kim, Y.S. Development of a Vision-Based Machine Guidance System for Hydraulic Excavators. J. Asian Archit. Build. Eng. 2022, 22, 1564–1581.
  12. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data. Autom. Constr. 2013, 31, 325–337.
  13. Pu, S.; Vosselman, G. Knowledge Based Reconstruction of Building Models from Terrestrial Laser Scanning Data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584.
  14. Park, S.; Kim, S.; Seo, H. Study on Representative Parameters of Reverse Engineering for Maintenance of Ballasted Tracks. Appl. Sci. 2022, 12, 5973.
  15. Park, S.Y.; Kim, S. Analysis of Overlap Ratio for Registration Accuracy Improvement of 3D Point Cloud Data at Construction Sites. J. KIBIM 2021, 11, 1–9.
  16. Park, S.Y.; Kim, S. Performance Evaluation of Denoising Algorithms for the 3D Construction Digital Map. J. KIBIM 2020, 10, 32–39.
  17. Choi, Y.; Park, S.; Kim, S. GCP-Based Automated Fine Alignment Method for Improving the Accuracy of Coordinate Information on UAV Point Cloud Data. Sensors 2022, 22, 8735.
  18. Choi, Y.; Park, S.; Kim, S. Development of Point Cloud Data-Denoising Technology for Earthwork Sites Using Encoder-Decoder Network. KSCE J. Civ. Eng. 2022, 26, 4380–4389.
  19. Zhang, H.; Wang, L.; Shi, W. Seismic Control of Adaptive Variable Stiffness Intelligent Structures Using Fuzzy Control Strategy Combined with LSTM. J. Build. Eng. 2023, 78, 107549.
  20. Singh, K.B.; Arat, M.A. Deep Learning in the Automotive Industry: Recent Advances and Application Examples. arXiv 2019, arXiv:1906.08834.
  21. Axelsson, M.; Holmberg, M.; Serra, S.; Ovren, H.; Tulldahl, M. Semantic Labeling of Lidar Point Clouds for UAV Applications. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 4309–4316.
  22. Yoon, S.; Kim, J. Efficient Multi-Agent Task Allocation for Collaborative Route Planning with Multiple Unmanned Vehicles. IFAC-PapersOnLine 2017, 50, 3580–3585.
  23. Mostafa, K.; Hegazy, T. Review of Image-Based Analysis and Applications in Construction. Autom. Constr. 2021, 122, 103516.
  24. Li, H.; Lu, M.; Hsu, S.C.; Gray, M.; Huang, T. Proactive Behavior-Based Safety Management for Construction Safety Improvement. Saf. Sci. 2015, 75, 107–117.
  25. Jeong, I.; Kim, J.; Chi, S.; Roh, M.; Biggs, H. Solitary Work Detection of Heavy Equipment Using Computer Vision. KSCE J. Civ. Environ. Eng. Res. 2021, 41, 441–447.
  26. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput. Civ. Infrastruct. Eng. 2017, 32, 361–378.
  27. Mirzaei, K.; Arashpour, M.; Asadi, E.; Masoumi, H.; Bai, Y.; Behnood, A. 3D Point Cloud Data Processing with Machine Learning for Construction and Infrastructure Applications: A Comprehensive Review. Adv. Eng. Inform. 2022, 51, 101501.
  28. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision Meets Robotics: The KITTI Dataset. Int. J. Rob. Res. 2013, 32, 1231–1237.
  29. Fei-Fei, L.; Deng, J.; Li, K. ImageNet: Constructing a Large-Scale Image Database. J. Vis. 2010, 9, 1037.
  30. Kim, H.; Kim, H.; Hong, Y.W.; Byun, H. Detecting Construction Equipment Using a Region-Based Fully Convolutional Network and Transfer Learning. J. Comput. Civ. Eng. 2018, 32, 04017082.
  31. Arabi, S.; Haghighat, A.; Sharma, A. A Deep Learning Based Solution for Construction Equipment Detection: From Development to Deployment. arXiv 2019, arXiv:1904.09021.
  32. Xiao, B.; Kang, S.-C. Development of an Image Data Set of Construction Machines for Deep Learning Object Detection. J. Comput. Civ. Eng. 2021, 35, 05020005.
  33. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context; Springer: Cham, Switzerland, 2014; pp. 740–755.
  34. Kim, J.; Chi, S.; Seo, J. Automated Vision-Based Construction Object Detection Using Active Learning. KSCE J. Civ. Environ. Eng. Res. 2019, 39, 631–636.
  35. Chen, J.; Fang, Y.; Cho, Y.K.; Kim, C. Principal Axes Descriptor for Automated Construction-Equipment Classification from Point Clouds. J. Comput. Civ. Eng. 2017, 31, 04016058.
  36. Chen, J.; Fang, Y.; Cho, Y.K. Performance Evaluation of 3D Descriptors for Object Recognition in Construction Applications. Autom. Constr. 2018, 86, 44–52.
  37. Kim, H.; Kim, C. Deep-Learning-Based Classification of Point Clouds for Bridge Inspection. Remote Sens. 2020, 12, 3757.
  38. Kim, J.; Chung, D.; Kim, Y.; Kim, H. Deep Learning-Based 3D Reconstruction of Scaffolds Using a Robot Dog. Autom. Constr. 2022, 134, 104092.
  39. Dai, A.; Ritchie, D.; Bokeloh, M.; Reed, S.; Sturm, J.; Nießner, M. ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4578–4587.
  40. Skabek, K.; Kowalski, P. Building the Models of Cultural Heritage Objects Using Multiple 3D Scanners. Theor. Appl. Inform. 2009, 21, 115–129.
  41. Hackel, T.; Wegner, J.D.; Savinov, N.; Ladicky, L.; Schindler, K.; Pollefeys, M. Large-Scale Supervised Learning for 3D Point Cloud Labeling: Semantic3d.Net. Photogramm. Eng. Remote Sens. 2018, 84, 297–308.
  42. Shi, S.; Wang, Z.; Shi, J.; Wang, X.; Li, H. From Points to Parts: 3D Object Detection from Point Cloud with Part-Aware and Part-Aggregation Network. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2647–2664.
  43. Sun, J.; Zhang, Q.; Kailkhura, B.; Yu, Z.; Xiao, C.; Mao, Z.M. Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions. arXiv 2022, arXiv:2201.12296.
  44. Armeni, I.; Sener, O.; Zamir, A.R.; Jiang, H.; Brilakis, I.; Fischer, M.; Savarese, S. 3D Semantic Parsing of Large-Scale Indoor Spaces. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1534–1543.
  45. Chang, A.X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. ShapeNet: An Information-Rich 3D Model Repository. arXiv 2015, arXiv:1512.03012.
  46. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.net: A New Large-Scale Point Cloud Classification Benchmark. arXiv 2017, arXiv:1704.03847.
  47. Roynard, X.; Deschaud, J.E.; Goulette, F. Paris-Lille-3D: A Large and High-Quality Ground-Truth Urban Point Cloud Dataset for Automatic Segmentation and Classification. Int. J. Rob. Res. 2018, 37, 545–557.
  48. Tan, W.; Qin, N.; Ma, L.; Li, Y.; Du, J.; Cai, G.; Yang, K.; Li, J. Toronto-3D: A Large-Scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 797–806.
  49. Li, J.; Wu, W.; Yang, B.; Zou, X.; Yang, Y.; Zhao, X.; Dong, Z. WHU-Helmet: A Helmet-Based Multisensor SLAM Dataset for the Evaluation of Real-Time 3-D Mapping in Large-Scale GNSS-Denied Environments. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
  50. Nguyen, T.-M.; Yuan, S.; Nguyen, T.H.; Yin, P.; Cao, H.; Xie, L.; Wozniak, M.; Jensfelt, P.; Thiel, M.; Ziegenbein, J.; et al. MCD: Diverse Large-Scale Multi-Campus Dataset for Robot Perception. arXiv 2024, arXiv:2403.11496.
  51. Zhou, Q.-Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. arXiv 2018, arXiv:1801.09847.
  52. Girardeau-Montaut, D. CloudCompare. 2019. Available online: https://www.cloudcompare.org/ (accessed on 10 November 2023).
Figure 1. 3D-ConHE construction method and process.
Figure 2. Motion joint locations of heavy construction equipment diecast models.
Figure 3. Examples of motion joint combinations of an excavator when scanned.
Figure 4. 3D scanner used to scan heavy construction equipment diecast models.
Figure 5. Data preprocessing process.
Figure 8. Comprehensive configuration of the 3D-ConHE dataset.
Table 1. 3D point cloud-based classification and segmentation benchmark datasets.

| Name | Year | Environment | Primary Fields | File Format | Objects/Points | Number of Classes | Application Technology | Data Acquisition Type |
|---|---|---|---|---|---|---|---|---|
| ShapeNet | 2015 | Indoor | X, Y, Z, R, G, B | .obj | 51,300 objects (ShapeNetCore), 12,000 objects (ShapeNetSem) | 55 (ShapeNetCore), 270 (ShapeNetSem) | Classification | Conversion of CAD models |
| ModelNet | 2015 | Indoor | X, Y, Z, number of vertices, edges, faces | .off | 12,311 objects (ModelNet40), 4899 objects (ModelNet10) | 10 (ModelNet10), 40 (ModelNet40) | Classification | Conversion of CAD models |
| S3DIS | 2016 | Indoor | X, Y, Z, R, G, B | .h5 | 695 million points | 13 | Segmentation | Converting CAD files into mesh files |
| Semantic3D | 2017 | Urban | X, Y, Z, intensity, R, G, B, class | .txt | 4 billion points | 8 | Segmentation | TLS |
| Paris-Lille-3D | 2018 | Urban | X, Y, Z, intensity, class | .ply | 143 million points | 50 | Segmentation | MLS |
| Toronto-3D | 2020 | Urban | X, Y, Z, R, G, B, intensity, GPS time, scan angle rank, label | .ply | 78 million points | 8 | Segmentation | MLS |
| SemanticKITTI | 2019 | Urban | X, Y, Z, intensity, label | .bin | 23,201 objects | 28 | Segmentation | MLS |
| 3D-ConHE (ours) | 2023 | Indoor | X, Y, Z | .stl, .ply, .pcd | 4683 objects | - | Classification, segmentation | Portable scanner |
Table 2. List of 3D scanned heavy construction equipment diecast models.

| Types of Equipment | Classification in Figure 2 | Diecast Model Product Number | Scale | Number of Motion Joints | Number of Scan Files |
|---|---|---|---|---|---|
| Excavator | (a) | Doosan DX225LCA | 1:40 | 4 | 108 |
|  | (a) | Doosan DX380LC-9C | 1:50 | 4 | 108 |
|  | (a) | Doosan DH220 | 1:50 | 4 | 108 |
|  | (a) | Komatsu PC210LC-10 | 1:50 | 4 | 108 |
|  | (a) | Hyundai R215-9 | 1:40 | 4 | 108 |
| Roller | (b) | Caterpillar Cat 85630 | 1:64 | 1 | 5 |
|  | (c) | Huina 1715 | 1:50 | 1 | 3 |
| Dump Truck | (d) | Huina 1718 | 1:50 | 1 | 3 |
|  | (d) | Hyundai Xcient | 1:32 | 1 | 3 |
|  | (d) | Hyundai HD370 | 1:32 | 1 | 3 |
| Grader | (e) | Caterpillar Cat 12M3 | 1:87 | 2 | 18 |
|  | (f) | Caterpillar Cat 120M | 1:50 | 4 | 54 |
| Dozer | (g) | Caterpillar Cat D8R | 1:50 | 3 | 8 |
|  | (h) | Caterpillar Cat D7E | 1:50 | 2 | 4 |
|  | (i) | Caterpillar Cat 924H | 1:50 | 3 | 8 |
|  | (i) | Caterpillar Cat D11T | 1:50 | 3 | 8 |
|  | (j) | Caterpillar Cat 854G | 1:50 | 3 | 12 |
| Total |  |  |  |  | 669 |
Table 3. Heavy construction equipment diecast models' motion joint movement directions and the number of scans.

| Type | Number of Motion Joints | ① Movement Direction of Motion Joint 1 | ② Movement Direction of Motion Joint 2 | ③ Movement Direction of Motion Joint 3 | ④ Movement Direction of Motion Joint 4 | Number of Scans 1 |
|---|---|---|---|---|---|---|
| Excavator | 4 | Vertical: MAX, MID, MIN | Vertical: MAX, MID, MIN | Vertical: MAX, MID, MIN | Horizontal Rotation: 0°, 45°, 90°, 135° | 540 |
| Roller | 1 | Horizontal: LEFT, LEFT MID, MID, RIGHT MID, RIGHT | - | - | - | 5 |
|  | 1 | Horizontal: LEFT, MID, RIGHT | - | - | - | 3 |
| Dump Truck | 1 | Vertical: MAX, MID, MIN | - | - | - | 9 |
| Grader | 2 | Horizontal: LEFT, MID, RIGHT | Vertical: MAX, MIN | - | - | 18 |
|  | 4 | Horizontal: LEFT, MID, RIGHT | Vertical: MAX, MIN | Horizontal: LEFT, MID, RIGHT | Horizontal: LEFT, MID, RIGHT | 54 |
| Dozer | 3 | Vertical: MAX, MIN | Vertical: MAX, MIN | Vertical: MAX, MIN | - | 24 |
|  | 2 | Vertical: MAX, MIN | Vertical: MAX, MIN | - | - | 4 |
|  | 3 | Horizontal: LEFT, MID, RIGHT | Vertical: MAX, MIN | Vertical: MAX, MIN | - | 12 |
| Total |  |  |  |  |  | 669 |

1 Number of Scans = {Number of Heavy Construction Equipment × ① × ② × ③ × ④}.
