Article

An Analysis of Potential Uses, Limitations and Barriers to Implementation of 3D Scan Data for Construction Management-Related Use—Are the Industry and the Technical Solutions Mature Enough for Adoption?

Faculty of Civil Engineering, University of Zagreb, 10000 Zagreb, Croatia
* Author to whom correspondence should be addressed.
Buildings 2023, 13(5), 1184; https://doi.org/10.3390/buildings13051184
Submission received: 4 April 2023 / Revised: 20 April 2023 / Accepted: 24 April 2023 / Published: 29 April 2023
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract

The potential uses of 3D scan data in the construction industry have been extensively researched in the last 20 years, with many benefits over traditional methods proclaimed by researchers. However, despite their advocated benefits, their implementation in actual construction sites remains low. This research aims to discover the potential uses of 3D scan data for construction management purposes and the limitations and barriers to their implementation and widespread adoption. Previous research into the topic was analysed to discover what technologies were used for generating 3D scan data, for what purpose and what issues were identified. These discoveries were then used to specify the potential uses of 3D scan data for, primarily, progress monitoring and quality control, which were then cross-referenced with all known limitations and barriers from the literature and the researchers’ own experience. Research has shown that, currently, there are numerous issues with both the capabilities of current technical solutions and with the construction industry’s readiness, which hinder mass adoption. Potential for breakthroughs, fortunately, does exist; however, greater impetus from the construction industry is needed to drive forward the demand for better technical solutions, which would resolve current issues and lead to the widespread adoption of 3D scan data for construction management-related uses.

1. Introduction

The construction industry is resistant to change. This has been stated many times in scientific papers covering most aspects of construction management research. It also seems to hold true for the large-scale adoption of 3D scan data of construction objects. There is a myriad of research, as the remainder of this paper will show, that has used various vision-sensing technologies for various construction management purposes, but all studies seem to stop at the level of a technical demonstration that the product appears to work.
Perhaps there is no better proof that the construction industry is slow to adopt change than the fact that Majid et al. [1] described the issues of automated progress monitoring and proposed a method to deal with them; seventeen years later, at the time of writing, no substantial progress has been made. Many researchers have published their own results, methods, case studies and reviews, but the question is why have we not yet seen a widespread commercial application of 3D scan data for construction management-related uses? What is the issue that remains unsolved? Is it the lack of faith of software developers that it will be adopted by the architecture, engineering and construction (AEC) industry? Is it technically impossible or infeasible? Are the tools inadequate or not accurate enough? Would it be more expensive than traditional progress monitoring techniques?
The goal of this research, therefore, is to determine whether the construction industry and the proposed solutions themselves are mature enough to be utilised and all their supposed benefits applied for the betterment of construction management practices.
To start, a few key terms need to be defined. Three-dimensional scan data are the point cloud models of buildings, objects or terrain collected through one of, or a combination of, vision-based sensing technologies. Vision-based sensing technologies are a subtype of remote sensing technology that collect information from visual data gathered from photographs, video or laser scanners with varying degrees of complexity in the sensor system [2]. Remote sensing technologies, as a broader term, refer to the practice of obtaining data and information through the analysis of data acquired by devices that are not in contact with the object of interest [3,4]. They differ in their sensor types and in what data are collected and how [5].
Vision-based sensing technologies include photogrammetry, videogrammetry, range cameras and laser scanning. Of those, the most common are photogrammetry and laser scanning [6,7], and both can be either aerial or terrestrial. Photogrammetry and videogrammetry can be used for the generation of point clouds and for computer vision, while laser scanning can only be used to generate point cloud data.
As mentioned earlier, different technologies can create point clouds. Even though its source can differ, in practice, the end result is a set of data points in a 3D coordinate system [8], usually defined by the x, y and z coordinates of points that are present on the external surfaces of an object. In essence, it is a 3D model of an existing object or an entire building, whose model elements are generated either by combining the individual points of a laser scan in a line or surface, or from surfaces recognised as such from photographs. The raw point cloud model shows only the surface of the scanned objects, with elements themselves being hollow in the point cloud model before postprocessing in other software.
Laser scanning or LIDAR (light detection and ranging) is a technology that uses data collected from terrestrial laser scanners (TLS) or from drone-mounted LIDAR scanners to generate 3D point clouds. It works by measuring the time it takes for an emitted pulse of laser light to be reflected back and then calculating the distance to the target [6]. After the scanner receives the reflected signal (relative locations of the surrounding surfaces from the base station are calculated based on the time taken for the signal to return [5]), a data point is created and given x, y and z coordinates [9]. The coordinated points from each of the scans are then used to construct a 3D point cloud [9]. The accuracy of individual points collected by TLS is at the level of a few millimetres [10], and the level of detail of the model depends on how many points are collected and, consequently, how many points the model consists of [11]. Generally, LIDAR is the most accurate of the vision-based sensing technologies, able to produce extremely high-resolution models [12], which makes LIDAR-made point clouds suitable for a wide range of applications in the construction sector [12], such as creating as-built/as-is documentation, monitoring construction activities, dimensional quality control, asset monitoring, reverse engineering, cultural heritage recording and urban planning [13,14,15,16,17,18,19,20,21].
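As a simple illustration of the time-of-flight principle described above, the following minimal sketch (Python; the pulse timing and beam angles are hypothetical example values, not taken from the cited literature) converts one round-trip time measurement into a distance and then into the x, y and z coordinates of a data point in the scanner's local frame:

```python
import math

C = 299_792_458.0  # speed of light in m/s


def point_from_pulse(t_return_s, azimuth_deg, elevation_deg):
    """Convert one time-of-flight measurement plus the scanner's beam angles
    into a 3D point in the scanner's local coordinate frame.
    t_return_s is the round-trip travel time of the laser pulse in seconds."""
    distance = C * t_return_s / 2.0  # the pulse travels out and back
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)


# Hypothetical example: a return after ~66.7 ns corresponds to a target ~10 m away.
print(point_from_pulse(66.7e-9, azimuth_deg=45.0, elevation_deg=10.0))
```

Repeating this calculation for millions of pulses at different beam angles, and then merging the individual scans into a common coordinate system during registration, yields the 3D point cloud.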
Photogrammetry is the other most popular vision-based sensing technology, and it can generate 3D point cloud models from 2D images [22]. Through the use of complex algorithms, it is able to reconstruct the position, orientation, shape and size of objects from pictures, which may be obtained via conventional or digital photography [23,24]. Photogrammetry is not just the process of collecting photographs, but also “the processing of images; the development of 2D and 3D model reconstruction; the classification of objects for mapping or thematic applications; and the visualization of maps” [25]. For a photogrammetric 3D point cloud to be usable, the images need to be of adequate quality in terms of resolution and blurriness, and consecutive images need to overlap by at least 50–60% for the reconstruction to succeed. If these conditions are met, photogrammetry can be used for progress monitoring [26,27,28,29,30], quality control [31,32,33] and 3D model generation [7,30,34].
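The overlap requirement mentioned above translates directly into a constraint on how far apart consecutive photographs may be taken. The following sketch illustrates that relationship under simplified assumptions (a flat surface and images of equal ground footprint); the numbers are hypothetical:

```python
def max_station_spacing(footprint_m: float, target_overlap: float) -> float:
    """Maximum distance between consecutive camera positions so that two
    successive images still share at least `target_overlap` of their ground
    footprint, measured along the direction of travel."""
    if not 0.0 < target_overlap < 1.0:
        raise ValueError("target_overlap must be a fraction between 0 and 1")
    return footprint_m * (1.0 - target_overlap)


# Hypothetical example: images covering 30 m of ground/facade with a 60% overlap
# target should be taken no more than 12 m apart.
print(max_station_spacing(30.0, 0.60))  # -> 12.0
```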
Videogrammetry is similar to photogrammetry. The difference is that 3D data are extracted from sequential frames of a video, which, in essence, is a string of at least 24 photographs taken in one second. The advantage lies in having considerable overlap, because frames are sequential; thus, the pixels in each frame are reconstructed based on the previous frame [6]. Still, videogrammetry is not as often used for generating 3D models of buildings [6,35,36,37] and quality control [38,39,40,41], as it is used mostly for computer vision purposes.
For all three of the previously mentioned vision-based sensing technologies, data collection can be both aerial and terrestrial, with aerial meaning that the sensor is most likely mounted on a UAV (unmanned aerial vehicle) and terrestrial meaning that the sensor is at ground level. For terrestrial photogrammetry and videogrammetry, both consumer-grade and professional-grade hardware can be used. There are requirements regarding the resolution of the images and the resolution and framerate of videos, but most modern smartphones are equipped with camera sensors of sufficient quality. Recently, LIDAR sensors have also started to appear on some smartphones and tablets, but they are not accurate enough to generate point clouds that could be used for quality and quantity control. Therefore, for terrestrial measurements, professional terrestrial laser scanners are used. For aerial photogrammetry and videogrammetry, either integrated cameras on UAVs or cameras attached as payloads are used, depending on the quality of the integrated cameras and the required properties of the output photographs. A UAV (unmanned aerial vehicle, also known as a drone) is defined as an aerial vehicle that does not rely on an on-board human operator for flight [42].
UAV photogrammetry can be used to create 3D point clouds by geo-tagging images (storing their approximate coordinates) taken in flight and then utilizing special software to process the imagery into 3D models [43]. The images should be taken at regular intervals and with substantial overlap for the software to be able to mathematically calculate the elevations and positions of each data point [43]. The accuracy of a photogrammetric point cloud model can also depend on the accuracy of geo-tagging, which can be increased by ground control points or using Real-Time Kinematic (RTK)-enabled UAVs for more precise GPS coordinates [43]. LIDAR can also be used to collect spatial data from UAVs. However, its accuracy is not as high, ranging from 1 cm up to 30 cm [43], depending on the characteristics of the object being scanned.
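The relationship between flight height, camera parameters and the achievable level of detail is often expressed through the ground sample distance (GSD), i.e., the ground footprint of a single image pixel. The sketch below computes it from commonly used parameters; the camera and flight values are illustrative assumptions only:

```python
def ground_sample_distance_cm(sensor_width_mm: float, focal_length_mm: float,
                              flight_height_m: float, image_width_px: int) -> float:
    """Ground footprint of one pixel (cm per pixel) for a nadir-looking camera."""
    gsd_m = (sensor_width_mm / 1000.0) * flight_height_m / (
        (focal_length_mm / 1000.0) * image_width_px)
    return gsd_m * 100.0


# Hypothetical example: a 13.2 mm wide sensor, an 8.8 mm lens and 5472 px wide
# images flown at 60 m give a GSD of roughly 1.6 cm/pixel.
print(round(ground_sample_distance_cm(13.2, 8.8, 60.0, 5472), 2))
```

The accuracy of the reconstructed point cloud is typically on the order of the GSD or a small multiple of it, which is why flight height, together with geo-tagging accuracy, bounds the usefulness of UAV photogrammetry for precise measurements.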
Most of the UAVs used in construction are multirotor drones due to their distinct advantages, such as their robustness, high manoeuvrability, low purchase and maintenance costs, hovering ability, vertical take-off and landing, controllability by various devices and capacity to carry different payloads [9,44].
Depth cameras (also called time-of-flight cameras) are another vision-based sensing technology. Unlike photo/videogrammetry and laser scanning, while they are able to create 3D models of buildings in innovative ways [45], depth cameras are more widely used for automated recognition purposes [46]. The difference between depth cameras and regular cameras lies in their additional ability to judge the distance of objects from the camera’s sensor. They work by illuminating the scene with a light source, observing the reflected light and translating the phase shift between illumination and reflection into distance [47]. Their advantages are that they can generate 3D depth maps at video rate, they do not interfere with the scene, they are relatively low cost (cheaper than laser scanning, but more expensive than photogrammetry [45]) and, for practical applications, they are no different to video cameras and can be easily used without special training [48]. However, only a few applications of depth cameras were found for the 3D model generation of construction sites; therefore, depth cameras are not the focus of the remainder of this research.
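As an illustration of the phase-shift principle mentioned above, the following sketch (with a hypothetical modulation frequency) shows how a continuous-wave time-of-flight sensor converts a measured phase shift into a distance, and why the modulation frequency limits the unambiguous measurement range:

```python
import math

C = 299_792_458.0  # speed of light in m/s


def distance_from_phase(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Distance implied by the phase shift between emitted and reflected
    modulated light in a continuous-wave time-of-flight camera."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)


def unambiguous_range(modulation_freq_hz: float) -> float:
    """Maximum distance measurable without phase wrap-around."""
    return C / (2.0 * modulation_freq_hz)


# Hypothetical example: a 20 MHz modulated sensor has a ~7.5 m unambiguous range;
# a phase shift of pi radians corresponds to half of that (~3.75 m).
print(unambiguous_range(20e6), distance_from_phase(math.pi, 20e6))
```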
As this research is a qualitative analysis of previous research, previous literature review papers also need to be covered. Martinez et al. [49] conducted a scientometric review of the global research published between 1999 and 2019 on computer vision applications for construction. They used VOSviewer software to conduct clustering analyses to identify trends, sub-fields and their interconnections, as well as citation patterns, key publications, key research institutions, key researchers and key journals, along with the extent to which these interact with each other in research networks, which provided input to identify the deficiencies in current research and propose future trends [49]. Wang and Kim [8] focused on capturing all of the potential applications of 3D point cloud data from 2004 to 2018, because, to that date, there had been no systematic reviews summarising these applications and pointing out the research gaps and future research directions [8].
Some reviews [36] took a broader approach and investigated the capabilities and limitations of a wider range of remote sensing technologies and their possible uses for automated data acquisition on construction job sites. Duarte-Vidal et al. [37] studied previous research on the possibilities of the interoperability of digital tools for monitoring and controlling construction projects, given how individual technologies have certain limitations, and found that the most often cited interoperability in the previous research was between UAVs, BIM and photogrammetry. While some researchers [2,8] conducted general reviews of the adoption of sensing technologies in the construction industry, others, such as Omar and Nehdi [6], Alaloul et al. [50], Maalek et al. [5], Kopsida and Brilakis [51] and Vick and Brilakis [52], were more specific and reviewed data acquisition technologies for construction progress tracking and productivity monitoring, while Golizadeh et al. [53], Motawa and Kardakou [9], Sanchez [54] and Li and Liu [44] all reviewed potential applications of UAVs and, specifically, multirotor drones for construction management activities [55].
It can be seen from previous research efforts that, while various areas of the problem at hand have been studied either on a more global or more specific level, no research has undertaken the task of performing a comprehensive qualitative analysis of existing research to determine what the current and future possible applications of 3D scan data in construction management practices are and what the barriers and reasons for low adoption in construction are. This research fills this gap by providing such an analysis of the potential causes of such low adoption and, therefore, contributing to the body of knowledge in the field. This paper can offer practical contributions to practitioners by informing them of the current state of vision-based sensing technologies for progress monitoring and quality control, specifically what they could use and for what purpose. The research presented in this paper, therefore, aims to show what the potential uses, limitations and barriers to the implementation of 3D scan data are as a basis for further research in the application of construction progress monitoring for the purposes of an EU-funded research project named “Development of an automated system for standardization of resources in energy efficient construction” (NORMENG).

2. Materials and Methods

The journal and conference papers for this research were collected by searching through academic databases, libraries, indexing sites and search engines, such as Scopus, Web of Science and Google Scholar. The keywords used in the search were: “Photogrammetry”, “LIDAR”, “Laser scanning”, “Videogrammetry” or “Computer vision”, coupled with: “Construction industry” and “Progress monitoring” or “Quality control”. No range in years of publication was chosen—papers published as early as the 1990s were included—and no results were discarded based on whether they were conference papers or journal articles.
Progress monitoring and quality control were the focus of this research. If papers regarding health and safety (H&S) or other uses were found, they were still read to gain insight and offer clarification, but were not specifically sought out for the purpose of this analysis. Therefore, it is possible that valuable research exists that was not included in this paper. This is one of the limitations of this study and, consequently, an avenue for further research. The same is true for vision-based technologies other than photogrammetry and laser scanning, and for non-vision-based sensing technologies, such as RFID (radio frequency identification), UWB (ultra-wideband) radio and GPS. If they were found through the literature review search or as references in detected papers, they were read to gain a broader understanding, but were not specifically studied.
Due to the large number of search results, the papers were first screened by their titles for relevance to the topic at hand. Second, the abstracts and keywords of the remaining papers were read to further improve the relevance of the remaining papers. Third, as the papers often mentioned more than one of the technologies and/or applications, there were duplicate entries that needed to be removed. Finally, 159 papers remained that dealt either with a specific application or were review papers useful for a broader understanding of the topic.
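For illustration only, the screening logic described above (duplicate removal followed by relevance filtering on titles and abstracts) can be expressed as a short script; the record structure and keywords below are assumptions, and the actual screening in this study was performed manually:

```python
def screen_papers(records, topic_keywords):
    """Drop duplicate records (by normalised title) and keep those whose title
    or abstract mentions at least one of the topic keywords.
    `records` is a list of dicts with 'title' and 'abstract' keys."""
    seen_titles = set()
    kept = []
    for rec in records:
        key = rec["title"].strip().lower()
        if key in seen_titles:  # duplicate entry returned by multiple searches
            continue
        seen_titles.add(key)
        text = (rec["title"] + " " + rec.get("abstract", "")).lower()
        if any(kw.lower() in text for kw in topic_keywords):
            kept.append(rec)
    return kept


# Hypothetical usage with two toy records, one of which is a duplicate.
papers = [
    {"title": "Laser scanning for progress monitoring", "abstract": "..."},
    {"title": "Laser scanning for progress monitoring", "abstract": "..."},
]
print(len(screen_papers(papers, ["progress monitoring", "quality control"])))  # -> 1
```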
These remaining papers were read in full and were the basis of the qualitative content analysis. First, previous uses of vision-based sensing technologies were identified and structured based on what technology was used and for what specific purpose. Following that, the advantages, disadvantages and barriers to implementation were identified from the literature and discussed in the context of their suitability and usability.
Based on the previous analysis of the literature and on the authors’ personal experience with using the technologies and extensive experience in the construction industry in practice, potential uses of 3D scan data for progress monitoring and quality control for each of the construction work types were proposed through inductive reasoning and explained. Finally, each of the issues and limitations for wide-scale adoption were discussed and potential uses that could be applied in the current maturity state of the technologies were inferred. The flowchart of the steps taken in this research is presented in Figure 1 below.

3. Results

3.1. Previous Uses of Vision-Based Sensing Technologies for Construction Management Purposes

A review of the existing literature revealed that vision-based sensing technologies have been extensively researched. There are numerous review papers, as well as those describing the development and application of real examples. Vision-based sensing technologies of particular interest to this paper included photogrammetry and laser scanning, with videogrammetry, computer vision and depth cameras being secondary. Regarding applications, quality control and progress monitoring were of primary concern, while 3D point cloud generation, H&S and object detection were included when encountered. The reasoning behind these choices is prioritization. Laser scanning and photogrammetry are the most widely used and show the most promise. Additionally, the authors have the most personal experience with these two technologies, as both equipment and software were purchased and used for the research activities of the previously mentioned NORMENG project. As for potential applications, quality control and quantity control were found to be the most prevalent and could have the most impact on construction management.
Table 1 below shows the identified and already researched use cases. In the leftmost column, each of the remote sensing technologies is presented, while in the remaining columns, the references to the research that uses the remote sensing technology for the purpose listed in the top row of the table are included. The rightmost column lists review papers on the topic of that particular remote sensing technology.
Some of the rather unsurprising findings from this research are that, by far, most papers focused on the application of laser scanning for purposes of quality control and that photogrammetry was most widely used for progress monitoring. Given that photogrammetry is cheaper, faster and simpler to use for gathering data and that progress monitoring for certain works does not require great precision, it was expected that photogrammetry would be researched quite often and, in turn, often used. On the other hand, quality control demands greater accuracy, which can best be provided by laser scanning, and price and time are less important in comparison. Owing to precision requirements, it is not surprising that UAV-mounted sensors were not used for quality control. Computer vision was generally less prevalent than photogrammetry and laser scanning, and was mostly used for progress monitoring, but also for quality control and, in one case each, for 3D model creation and H&S. Surprisingly, there has not been much research on videogrammetry as opposed to photogrammetry. When video data were used, they were used in combination with detection algorithms and not necessarily for point cloud creation. Depth cameras were seldom used, although each of the observed applications has been addressed with them at least once. Regarding other non-vision-based sensing technologies, UWB, RFID and GPS were mostly used for the location tracking of people and/or objects on the construction site, as well as for progress monitoring and H&S.
If we were to summarise the number of papers dealing with each of the applications, those on quality control are the most numerous, followed by progress monitoring and then 3D model creation. The numbers of papers on other applications are far behind. However, it needs to be taken into consideration that this research focused on photogrammetry and laser scanning, and on progress monitoring and quality control, so there likely are additional papers detailing the use of other technologies for other purposes, which were outside of the primary scope of this paper.
There also were papers that either used a combination of technologies or used one technology for more than one purpose. The most notable combinations of technologies were photogrammetry and laser scanning, and photogrammetry and computer vision. Research with more than one application was scarce, with only a few studies combining progress monitoring and quality control. The studies dealing with more than two of the technologies were all review papers.

3.2. Advantages, Disadvantages and Barriers to the Implementation of Vision-Based Remote Sensing Technologies

Previous research praised the possibilities that automation could provide for construction progress monitoring [36,37,52], and quite a lot of papers [5,6,11,12,26,37,50,51,52,56,72,73,74,129,154] stated how traditional methods are time-consuming, labour-intensive, subjective and error-prone. While, naturally, automation does generally bring advantages, it might not be the perfect solution, at least not immediately. For quality control, such statements were not found, indicating a more restrained approach to the topic.
Using digital technologies to control quality and measure progress could indeed automate, speed up and standardise the process, but research and experience indicate that we are not yet at that point. Previous hardware-related issues are no longer as important: storage for relatively large point cloud datasets has become more affordable, and processing power has likewise become greater and more financially accessible.
The purchase, operating and maintenance costs of the equipment remain high, as do software licences, but these are solvable problems if the savings in time and gains in accuracy outweigh the costs.
The obstacles that remain are related both to the technologies themselves and to people. There still remains the need for human intervention in point clouds to scrub the data of erroneous or unnecessary data points, making this process also time-consuming and potentially error-prone due to the required skill of the technician. Consequently, a further obstacle is the need to train personnel [37] to work with both the hardware and software for gathering and using 3D scan data, respectively.
Furthermore, even if data have been collected and processed correctly, technical issues remain. One such issue is that current progress monitoring approaches are unable to identify objects that are not in their exact locations or within a certain tolerance, as they are in the BIM model [36]. Some solutions for progress monitoring have issues with registering that an activity is partly completed when the only possible states are either fully constructed or not constructed [73]. Regarding both quality and quantity monitoring, for a large number of construction works, the accuracy of measurement is probably the largest problem. Most of the technologies have accuracy tolerances of a few centimetres, which is fine for earthworks or large concrete works, but not accurate enough to recognise all MEP (mechanical, electrical and plumbing) and finishing works.
Finally, for most construction sites, it would take more time to collect and process 3D scan data than a manual inspection would take. The accuracy of a manual inspection could be lower and its report would need to be digitised for further automation, but in many cases, it might still be accurate enough.
The remainder of this subsection will briefly describe the advantages, disadvantages and barriers to the implementation of the most common vision-based sensing technologies so that their potential uses can be argumentatively discussed.

3.2.1. Photogrammetry

In contrast to other vision-based sensing technologies, photogrammetry is one of the most cost-effective approaches because data can be gathered using a consumer-grade digital camera, including those in smartphones [7], and it also provides a nonintrusive, easy and rapid mechanism for generating a body of operational information and knowledge on the progress of a construction activity [27]. However, although it provides reasonably precise information, there are certain limitations. First and foremost, due to weather conditions (rain, snow, strong winds and cloudiness) [9,27], potential occlusions [29] and other adverse site conditions, the photos taken may be of insufficient quality to ensure adequate point cloud generation [27,29]. Weather conditions can be planned for and somewhat mitigated, but progress monitoring can be severely impacted if the weather is unfavourable for an extended period of time, notably in the winter months. As for occlusion, it can be classified as static occlusion (scaffolding, rebar, materials and tools left on the site) and dynamic occlusion (moving labourers and machines) [29]. To address the issue of dynamic occlusion, measurements could be taken after working hours, and for some static occlusions, improvement in site housekeeping could be beneficial, but not all elements (such as scaffolds) could be removed for the purpose of taking unobstructed images.
Next, although the process of taking photographs is quick and simple, the process of converting them into 3D models requires significantly more effort and a large number of overlapping images [27,51]. This, in turn, means that the accuracy and completeness of the model depend on the skills of the person taking the photographs. Too few photographs with insufficient overlap can prevent the creation of point clouds, while too many photographs increase storage needs and computational times without improving the quality of the resulting 3D point cloud.

3.2.2. Laser Scanning

Similar to photogrammetry, the efficiency of the laser scanning process depends on the engineer’s experience, and the number and locations of laser scans need to be carefully considered and optimised [8]. On one hand, there is a risk of having incomplete and insufficiently detailed data, and on the other hand, by increasing the number of scanning locations and/or accuracy of the scanner, there can be redundancies in collected data [10,155,156]. Redundancies themselves cause data storage problems, but the larger issue is in the time and effort it takes to complete the scans. TLS can only scan what it can see (what is in its line of sight) and what is within the minimum and maximum range (depth of field) [10]. Increasing the accuracy of measurement increases the number of measurements (resolution of the scan), but also increases the time it takes for a scan to complete [36]. Laser scanning is also sensitive to occlusions, even more so than photogrammetry. Scanning locations need to be carefully planned to ensure model completeness with as few scans as possible, because, while taking a photo can take several seconds, completing a scan can take up to several minutes.
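To make the time trade-off discussed above concrete, the following back-of-the-envelope sketch estimates the on-site duration of a scanning campaign; the per-station times and the number of stations are purely hypothetical and vary strongly with the scanner, the chosen resolution and the site:

```python
def scanning_campaign_minutes(num_setups: int, setup_minutes: float,
                              scan_minutes_per_setup: float,
                              registration_overhead_minutes: float = 0.0) -> float:
    """Rough on-site time budget for a terrestrial laser scanning campaign:
    moving and levelling the scanner plus scanning at each station, plus any
    extra time spent placing and measuring registration targets."""
    return (num_setups * (setup_minutes + scan_minutes_per_setup)
            + registration_overhead_minutes)


# Hypothetical example: 15 stations, ~5 min of setup and ~6 min of scanning each,
# plus ~30 min for targets -> roughly 195 minutes (over 3 hours) on site.
print(scanning_campaign_minutes(15, 5, 6, 30))
```

Raising the scan resolution increases the scan time per setup, while reducing occlusions usually means adding stations, so both accuracy and completeness are paid for directly in on-site time.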
Unlike photogrammetry, TLSs are not recommended for use in below-freezing temperatures [74], a serious limitation, as scans cannot be taken during winter months in colder climates. In addition, the scanning results are also affected by rain, but fortunately not by lighting, because scans can be taken in complete darkness.
The equipment cost is higher than that of any other vision-based sensing technology [6,12] and it is more time-consuming to both collect scans and process them [12,29,51]. Additionally, the use of laser scanning requires trained personnel [51], driving the cost up even further. Moreover, certain works, most notably interior works such as painting and tiling, cannot be accurately scanned and identified [74].
The most significant issue remains in the postprocessing stage. Although data processing has become faster and easier, the automation of current software solutions remains limited [72] and requires an extensive amount of human intervention, making the process time-consuming and ill-suited for repetitive progress monitoring tasks [157]. Additionally, current software packages do not enable the automated organization of data at the object level, meaning that the point cloud by itself does not differentiate objects in a scan. While some manual and, sometimes, semi-automated approaches exist, they are very time-consuming, must be used by experts and are thus very expensive [74]. Similarly, Omar and Nehdi [6] stated that the full potential of this technology has not yet been achieved due to the complexity of commercial software packages. It stands to reason that, with increased automation and user-friendliness of software, the potential for application could rise exponentially.

3.2.3. Use of UAVs

The characteristics of UAVs are discussed separately from photogrammetry and laser scanning, as they add both advantages and disadvantages on top of those mentioned earlier. Multirotor drones have the advantages of high manoeuvrability, low cost, the ability to carry various sensors and the ability to be piloted from smartphones, tablets or computers [44]. These advantages translate into significant time, cost and safety improvements for monitoring structures [67]. Their manoeuvrability allows them to collect visual data from locations that were previously impossible to reach [9], without the need for expensive equipment or endangering the quality surveying engineer and without significant impact on the surrounding environment [158,159]. The main contributions of using UAVs, as compared with human operators and other flying equipment such as helicopters, are work safety from the social perspective, cost-effectiveness from the economic perspective and lower carbon dioxide emissions from the environmental perspective [44].
There are, of course, certain disadvantages to using UAVs. Image capturing is limited to external elements because, due to GPS and safety issues, indoor flight can be difficult. Inclement weather also adversely impacts the quality of the collected data and even the possibility of flight taking place: UAVs are mostly not waterproof and cannot fly in rain or strong wind. Additionally, the accuracy of 3D models is correlated with the accuracy of the UAV’s position when taking photographs. With regard to conducting aerial monitoring of any kind, battery life is also one of the most significant limitations [57].

3.3. Potential Uses of 3D Scan Data for Various Construction Work Types

For the purposes of this research, due to the breadth of technologies, construction works and potential uses, only progress monitoring and quality control were analysed in detail. All other applications are outside of the scope of this research and could be the topic of a separate paper. Additionally, there might be some small differences based on the building type, but due to the sheer number of different structures, this comparison could also be a topic of a separate paper.

3.3.1. Potential for Use of 3D Scan Data for Progress Monitoring

There is a large number of different construction works that need to be completed for a building or structure to be finished. Some of them are conducted in the open, but the majority of works are conducted once the building has already received its shape (under a roof), where the necessary communication between UAVs and GPS satellites is weak or nonexistent. Table 2 below lists various types of construction works, aggregated at a higher level by mutual similarity so that all works in a group provide the same answer regarding whether their progress can be tracked by the methods listed in the table. For the purpose of this research, it would, for example, make little difference if bricklaying were further divided by brick material type or brick size, by the type of waterproofing used or by whether the flooring is parquet, tiles, linoleum or something else.
Table 2 above divides all construction works, which are listed in the leftmost column, into several categories that can be grouped together for ease of view. The four other columns contain either “Yes” or “No” answers to the question “Can this technology be used to accurately collect 3D scan data to be later used for progress monitoring?” There is, of course, no clear-cut answer for all of the combinations, as some have varying conditions that could affect the answer.
Generally, massive earthworks, such as excavations, backfilling and the construction of embankments, can be identified by all technologies, as millimetre accuracy is not of great importance. This is why these are the only type of works that can also be measured by UAV LIDAR. Various geotechnical works that are mostly underground cannot be measured by any of the technologies (except for tunnelling, which could be modelled from photogrammetry and laser scanning). Some elements of these works are visible from the surface, but their total quantity cannot be measured. For example, the top of a foundation pile can be identified and its width measured, but there is no way to measure its depth and, by extension, its total volume.
Scaffolding and formwork can, in theory, be monitored by photogrammetry and laser scanning, but the result might not be accurate enough to recognise the exact quantities of each individual element, only whether it exists or not at the place where it was planned to be. Slab and beam formwork can only be identified by UAVs from the top side, which does not provide enough visual information to create a model.
Reinforcement work could also be monitored in the same way as scaffolds and formwork: either something similar to what is planned exists in the area in which it is supposed to exist or it does not. Accuracy and occlusion issues are even more pronounced for concrete reinforcements, as there are a lot more bars that need to be accurately scanned and modelled.
For concreting works, only the elements that have a larger surface area open to scan can be recognised. Slabs and beams have larger open surface areas, while walls and columns do not, and it would be harder for the model to register that concrete was poured.
Bricklaying progress can be recognised from 3D scan data, but for rough plastering, it depends on the thickness of the layer and the accuracy of the scan.
Construction works that constitute an assembly of various wooden and/or metal elements, for example, interior columns and beams, wall substructures and stairways, can be identified by photogrammetry, laser scanning and even aerial photogrammetry if the area is visible to the UAV, i.e., not indoors. The same goes for the erection of larger prefabricated structural elements.
Modern façades, such as curtain wall façades, are easily recognisable from 3D scan data, while the recognition of traditional façade systems depends on the thickness of the layers and the accuracy of the scan, the same as with any other thermal insulation works. Waterproofing works would hardly be recognisable due to the small layer thickness, which is of around the same dimension as the expected deviation from the accurate position in the scan data.
None of the interior finishing works, with the exception of the installation of windows and external doors, can be modelled from a scan made by a UAV for the obvious reason that UAVs are not suited to fly indoors. Even if they were, the resulting model would be no better or worse than that obtained by terrestrial photogrammetry. Drywall works could be recognised from scan data, as could interior doors and sanitary facilities. Other finishing works, such as tiling, flooring, plastering and painting, would be hard to scan and identify in the 3D model, as they are thin layers comparable to the accuracy of the scans.
Mechanical, electrical and plumbing installations could be identified in 3D models depending on their size. Generally, most of the electrical wiring is thinner than a few centimetres, while mechanical and plumbing installations have larger diameters. Other elements, such as radiators, heating and cooling systems, vents, breaker boxes, etc., could be recognised in the 3D model. None of the above are identifiable by aerial photogrammetry, as they are almost exclusively on the inside of buildings.
External landscaping elements, on the other hand, are on the outside and can be recognised by aerial photogrammetry, as well as terrestrial photogrammetry and laser scanning. UAV LIDAR could be used, depending on the required accuracy.
Finally, for road construction and utilities construction, all methods besides UAV LIDAR could be used to monitor the progress of road base course construction. For road paving, the situation becomes more difficult, as the layers can be only a few centimetres thick, which is near the limit of the scan accuracy. Similarly, for pipe and cable laying, feasibility depends on the diameter of the pipe or cable: the larger the diameter, the more we can be sure that the model will correctly show the element and that the construction progress could be monitored.
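The recurring criterion in the assessments above, namely whether an element or layer is thick enough relative to the scan accuracy to be reliably recognised, can be summarised as a simple rule of thumb. The sketch below only illustrates that reasoning; the safety factor and accuracy values are assumptions, not values taken from the cited literature:

```python
def likely_detectable(element_thickness_m: float, scan_accuracy_m: float,
                      safety_factor: float = 3.0) -> bool:
    """Rule of thumb: treat an element as reliably recognisable in a point cloud
    only if its thickness is several times larger than the scan accuracy."""
    return element_thickness_m >= safety_factor * scan_accuracy_m


# Hypothetical examples, assuming a scan accuracy of about 1 cm:
print(likely_detectable(0.25, 0.01))   # 25 cm masonry wall -> True
print(likely_detectable(0.004, 0.01))  # 4 mm paint/adhesive layer -> False
```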

3.3.2. Potential for Use of 3D Scan Data for Quality Control

This subsection will list the possibilities of using 3D scan data for the quality control of the construction works identified in Table 2. Quality control can, in general, be divided into [8] dimensional quality inspection, surface quality inspection and displacement inspection. Dimensional inspection indicates whether the element is of the right size and displacement inspection indicates whether the element is at the right place. Surface quality inspection checks whether the surface has any deformations or cracks, whether the surface is flat enough or if spalling has occurred.
All of the construction work types whose progress can be monitored from 3D scan data can also be tested for dimensional quality inspection and for displacement inspection by the same vision-based sensing technology. This is naturally due to the fact that the same properties of the observed element are being checked and with the same precision requirements. For example, if we are to successfully monitor the progress of masonry wall construction, we need to know the relatively precise dimensions of that wall. Those same dimensions can be checked against the existing BIM model to see if they match. Additionally, as we have the wall as part of a scan of the construction site, and if the scan is registered and overlaid on the BIM model, we can check whether the position of the wall is correct, or if not, by how much the wall has been displaced. Perhaps it has been moved by a centimetre, which should not matter much, but perhaps it has been constructed 10 cm away from the required place, in which case the contractor might have to tear down the wall and construct it again.
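A minimal sketch of the kind of dimensional and displacement check described in the wall example above is given below. It assumes the scanned wall has already been segmented from the point cloud and registered to the BIM coordinate system; the tolerance values are hypothetical and would in practice come from the contract or the applicable standards:

```python
import math


def check_wall(planned_length_m, measured_length_m,
               planned_position_xy, measured_position_xy,
               length_tol_m=0.03, position_tol_m=0.03):
    """Compare a scanned wall's length and plan position against the BIM model
    and report whether each is within the given tolerance."""
    length_dev = abs(measured_length_m - planned_length_m)
    displacement = math.dist(planned_position_xy, measured_position_xy)
    return {
        "length_deviation_m": round(length_dev, 3),
        "length_ok": length_dev <= length_tol_m,
        "displacement_m": round(displacement, 3),
        "position_ok": displacement <= position_tol_m,
    }


# Hypothetical example: a 4.50 m wall measured at 4.512 m but displaced by 10 cm.
print(check_wall(4.50, 4.512, (12.00, 3.00), (12.10, 3.00)))
# length_ok is True (1.2 cm deviation), position_ok is False (10 cm displacement).
```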
Surface quality inspection, on the other hand, could be of most use for concrete elements and for masonry walls. It could also be used for damage assessment following natural disasters, such as earthquakes.

4. Discussion

In the previous section, the current use cases, barriers and potential uses of vision-based sensing technologies (primarily photogrammetry and laser scanning) were described. This section will aim to discuss whether the industry and the existing solutions are mature enough for more widespread adoption.
Regarding progress monitoring, there are various issues that still need to be addressed before automated progress tracking surpasses and, in the end, replaces traditional manual progress tracking. One of them, which is not insurmountable, is the price. If the technical solutions were good enough and if they offered benefits of accuracy and time saved, early adopters in the construction industry would start using them. This would prompt competitors to develop other products and software, increasing market competition and driving down the price.
Other significant issues identified from the literature, which would take more effort to solve, are accuracy, occlusions, the time to take scans/photos, time for postprocessing to create usable 3D scan data, need for skilled workers, clear weather requirements, warm weather requirements, making sure that all scans are registered correctly, the need for measured work to be at the same location as planned, etc.
Accuracy is a technical limitation. All current technologies have an accuracy of, at best, a few centimetres, or less than a centimetre for extremely detailed scanning processes. This is not a problem for works of large quantities, such as earthworks, where one centimetre is insignificant as a percentage of the total. Masonry and concrete works can also be measured, but for what purpose? It is rare that the quantity of concrete or masonry works deviates from the planned quantity, and it is enough to simply check whether the element has been constructed or not, whether it is at the correct spot and whether the length is approximately accurate. If the length is planned to be 450 cm, for example, and the wall is constructed to be 451.2 cm or 448.6 cm long, it would not matter and, for payment purposes, it would be listed as 450 cm. On the other hand, if it is 20 cm shorter or longer, or if the location deviates by more than a few centimetres, that would be a quality problem rather than a progress monitoring one, and the contractor would need to mitigate the damage. For this case of masonry and concrete works, the accuracy is good enough, but the time and effort it takes for a novice engineer or a foreman to measure the length is far less than that which would be required by using 3D scan data. However, there are a large number of construction works whose dimensions fall well below the accuracy of both photogrammetry and laser scanning. Plastering, painting, tiling, flooring, electrical wires, etc. cannot be accurately identified from the 3D point cloud, and these works constitute a large percentage of both the cost and the duration of a project.
Another technical limitation is occlusion. Both photogrammetry and laser scanning can only gather visual information that they can see. Construction sites are often cluttered with materials, scaffolds, tools and moving workers and machines, and even if scans were to take place on tidy construction sites and after hours, the formwork would still block the view of concrete works; additionally, due to the density of steel bars, concrete reinforcement is still technically impossible to scan completely.
Time-related issues were briefly touched upon earlier. It still takes a substantial amount of time to collect the data for 3D model creation (the only exception being aerial photogrammetry and LIDAR of large earthworks), especially for indoor measurements and even more so for laser scanning; and if greater accuracy is desired, even more time is needed to complete the scans. Depending on the size of the construction site, it could take hours to collect data. Additionally, there is a risk of the data being incomplete, so scan locations need to be planned beforehand. The consequence of having insufficient data would be not being able to use the scan and having to go back to perform the missing scans, meaning that even more time is needed. On the other hand, the existence of redundant scans means that more time has already been spent than was needed.
The second time-related issue is postprocessing. It currently takes a lot of effort from skilled technicians or engineers to take raw data and make them usable for comparison with as-planned BIM models or previous scans. While time alone might not be an issue (there might, for example, be ample time to complete the report), the issue is the hourly wage of the people scanning and postprocessing the point cloud. It would take longer and cost more to create 3D scan data than the alternative, which would take a novice engineer or a foreman a few minutes to measure and write down the measurements. Moreover, for elements measured in pieces, it would be faster for someone with a checklist to state that a certain window, door, sink, faucet, etc. was installed than to establish this from a 3D model. Additionally, all of the previously mentioned elements need to be quality-tested, i.e., whether the water runs or whether the windows and doors open and close properly, which is also a pass/fail type of check, most suitable for a checklist.
The third time-related issue is a consequence of how long it takes to scan the site. Data collection, especially laser scanning, takes a long time and is generally considered invasive and disruptive to regular construction activities, thus reducing the productivity of the site. It would therefore be logical to conduct it after working hours. This, on the other hand, raises the problem of overtime for the workers performing the scanning and brings us to the next set of issues, which are weather- and time-of-day-dependent.
For photogrammetry to yield the best results, the weather needs to be sunny and consistent. Cloudy weather leads to darker photographs, and during rain, both outdoor photogrammetry and laser scanning are impossible, as is, of course, the use of UAVs. Moreover, wind and cold weather cause UAV batteries to drain faster, and strong winds make take-off impossible. Finally, previous research even stated that terrestrial laser scanners cannot be used in below-freezing temperatures. All of this taken together means that there are quite a lot of factors that influence whether scan data can even be collected in the first place. If data cannot be collected for days or even weeks during the wintertime, then effective progress monitoring is impossible.
When all of these limitations are taken together, the result is that it is infeasible (or even impossible) to use 3D scan data to continually monitor progress on construction sites. It would be even less applicable for productivity monitoring, as the frequency of the scans can be daily at the very best, and while quantities could be relatively simply calculated, the distribution of time and reasons for low or high productivity could not be identified.
Thus, when would the creation of 3D scan data be useful for progress monitoring? It would be useful on a monthly basis to have irrefutable proof of what works were conducted during the last month and what works the contractor needs to be paid for. The same could be used in court-ordered evidence collection of what works a contractor had completed prior to contract termination. Other use cases include the collection of proof that certain works were completed, such as the laying of drainage pipes that will later be covered by soil. For these purposes, it is most important to have impartial and solid evidence as a basis for payment or for legal dispute resolution.
Quality control, again, has the same issues as progress monitoring and additional exclusive issues. Quantity control or progress monitoring is generally a contract issue between the contractor and the client. Quality control is, on the other hand, of great interest to the construction authority. In the construction industry, there are many strict requirements for what can be constructed, how it can be constructed and from what materials. While 3D scan data can provide location information, such as position and deformation, and surface quality information, it would take a long testing time under various conditions on a large number of construction sites before such automated measurements become acceptable to the construction authority.
A quality control issue for which 3D scans could be used is the regular quality assessment of buildings. Photogrammetry and laser scanning are already used to detect cracks or unusual displacements or deflections when inspecting infrastructure, such as bridges, and this could be expanded to residential and other buildings. Moreover, a 3D point cloud could be generated to identify unauthorised modifications that tenants may have made to buildings, i.e., which interior walls may have been torn down and which new openings may have been created or walled up. These modifications are a significant issue in older buildings, which have low resistance to loads and are not up to current standards, and such alterations make them even weaker to both regular loads and, especially, earthquakes.
So far, the potential uses and conditions under which 3D scan data could be used to monitor progress and quality have been discussed based on the issues identified in existing research. One unanswered and even unasked question is who would be collecting and processing scan data or paying for the creation of 3D scan data? If we are talking about a monthly progress scan to be used as a basis for payment calculation, then it makes sense for the contractor to be providing it. However, how would the client know if the scans are accurate or if they have been modified in the contractor’s favour? In that case, the client would need to perform their own scans to verify the contractor’s scans. In the opposite case, the client would calculate how much they needed to pay the contractor. The contractor would then, of course, want to know whether the quantities were correct and would, again, need their own scans to confirm or to counter the client’s quantities. The third solution would be an impartial third party, but trust issues could still arise, and the question remains of who would pay for this third party’s services. In the other potential case of gathering evidence for disputes, the same trust issue arises. As a result, the status would be the same as that without using 3D scan data, but with more money spent on both sides. The same could be said for any quality control issues between the contractor and the client. Either one would need to trust the other, or both would need to have separate models. If quality control is between the contractor and the construction authority, then the process would be similar to other quality control processes. Either the contractor is certified or a third party is responsible for quality control conformance.
After posing additional questions for relatively feasible applications, we can go into further detail about the applications that we deemed unfeasible. We could ask: who would have an interest in collecting daily progress reports from 3D scan data? It would most likely be the contractor, as they stand to lose the most if they fall behind schedule. However, even if we disregard the issues of occlusion, accuracy, time and cost, it would still be difficult to measure progress, as most plans on-site are either knowingly overoptimistic and/or not detailed enough for any comparison to make sense. Time schedules often have only summarised activities, such as “concrete walls, 2nd floor”, lasting four days, for example. What could progress monitoring using 3D scan data compare the data to? Regardless of the optimism and lack of detail in the schedule, any decent contractor knows well where they are ahead and where they are falling behind without the need for costly, time-intensive, unreliable and, in certain scenarios, unusable technologies.
If the client were to demand daily progress reports from the contractor, the contractor would most certainly bill the client for the effort, as would any other party employed by the client, such as the project manager, quantity surveyor or specialised consultant. The client would then have daily progress reports, but the benefit they would gain from them is unclear, as they would have already contractually obligated the contractor to finish the works by a set date.

5. Conclusions

This research began by examining existing literature to determine what uses had previously been established, what benefits, drawbacks and implementation challenges each technology and use had and what each technology could be used for objectively at its current stage of development. Following that, the issues regarding large-scale implementation were discussed, both those identified through the literature and those from personal experience.
It was found that there are numerous issues with the technologies themselves that hinder their adoption on a wider scale. Only certain work types under certain circumstances can be scanned with reliable accuracy, or scanned at all during inclement weather. Furthermore, the necessary data collection and processing are more complicated, require more time and more skilled workers than traditional methods and yield more questionable results. On the other hand, there is no imminent need for the use of such technologies from either contractors or clients, and issues exist that make their use even more complicated and costly than traditional methods. As a consequence, there is little interest from product and software developers in creating tools for which they would have a small number of customers. From this, it can be concluded that, at least for now, neither the industry nor the solutions themselves are mature enough for adoption.
The paradox is that if either side of the problem were to move towards the solution, the other would follow. If developers created better, more useful tools, adoption would rise; on the other hand, if the promise of potential benefits drove demand up, developers would flock to the problem and create better tools. This is a stalemate situation in which both sides are at a potential loss.
This study focused on photogrammetry and laser scanning as vision-based sensing technologies, and progress monitoring and quality control as potential applications of said technologies. There are other vision- and non-vision-based sensing technologies that can be used not just for progress monitoring and quality control, but for other purposes as well. These were only briefly touched upon in this paper and deserve a follow-up investigation to determine whether the same limitations also hold true in their case. Additionally, explorative research is needed to determine the best ways to overcome the identified barriers and issues, and how 3D scan data could best be utilised for construction management purposes.
Additionally, from a financial standpoint, a thorough cost–benefit analysis is needed to define whether the use of 3D scan data could be profitable and under what circumstances, and what could be conducted to facilitate their mass adoption for construction management uses in the construction industry.

Author Contributions

Conceptualization, M.M. and Z.S.; methodology, M.M. and L.L.B.; validation, L.L.B. and I.Z.; formal analysis, Z.S.; investigation, M.M. and Z.S.; writing—original draft preparation, M.M. and Z.S.; writing—review and editing, L.L.B. and I.Z.; supervision, I.Z.; project administration, L.L.B.; funding acquisition, I.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the European Union through the Competitiveness and Cohesion Operational Program, European Regional Development Fund, as a part of the project KK.01.1.1.07.0057 Development of automated resource standardization system for energy-efficient construction (NORMENG). The content of the publication is the sole responsibility of The University of Zagreb Faculty of Civil Engineering.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Abd Majid, M.Z.; Mustaffar, M.; Memon, Z. A systematic approach for monitoring and evaluating the construction project progress. J. Inst. Eng. 2006, 67, 7. [Google Scholar]
  2. Arabshahi, M.; Wang, D.; Sun, J.; Rahnamayiezekavat, P.; Tang, W.; Wang, Y.; Wang, X. Review on Sensing Technology Adoption in the Construction Industry. Sensors 2021, 21, 8307. [Google Scholar] [CrossRef] [PubMed]
  3. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation, 7th ed.; Wiley: Hoboken, NJ, USA, 2015. [Google Scholar]
  4. Campbell, J.B.; Wynne, R.H. Introduction to Remote Sensing, 5th ed.; Guilford Publications: New York, NY, USA, 2011. [Google Scholar]
  5. Maalek, R.; Ruwanpura, J.; Ranaweera, K. Evaluation of the State-of-the-Art Automated Construction Progress Monitoring and Control Systems. In Construction Research Congress 2014; American Society of Civil Engineers: Reston, VA, USA, 2014; pp. 1023–1032. [Google Scholar]
  6. Omar, T.; Nehdi, M.L. Data acquisition technologies for construction progress tracking. Autom. Constr. 2016, 70, 143–155. [Google Scholar] [CrossRef]
  7. Liu, Y.; Kang, J. Application of Photogrammetry: 3D Modeling of a Historic Building. In Proceedings of the Construction Research Congress 2014; American Society of Civil Engineers: Reston, VA, USA, 2014; pp. 219–228. [Google Scholar]
  8. Wang, Q.; Kim, M.-K. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319. [Google Scholar] [CrossRef]
  9. Motawa, I.A.; Kardakou, A. Unmanned aerial vehicles (UAVs) for inspection in construction and building industry. In Proceedings of the 16th International Operation & Maintenance Conference, Cairo, Egypt, 18–20 November 2018; p. 10. [Google Scholar]
  10. Aryan, A.; Bosché, F.; Tang, P. Planning for terrestrial laser scanning in construction: A review. Autom. Constr. 2021, 125, 103551. [Google Scholar] [CrossRef]
  11. Trucco, E.; Kaka, A.P. A framework for automatic progress assessment on construction sites using computer vision. Int. J. IT Archit. Eng. Constr. 2004, 2, 18. [Google Scholar]
  12. Golparvar-Fard, M.; Bohn, J.; Teizer, J.; Savarese, S.; Peña-Mora, F. Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques. Autom. Constr. 2011, 20, 1143–1155. [Google Scholar] [CrossRef]
  13. Akinci, B.; Boukamp, F.; Gordon, C.; Huber, D.; Lyons, C.; Park, K. A formalism for utilization of sensor systems and integrated project models for active construction quality control. Autom. Constr. 2006, 15, 124–138. [Google Scholar] [CrossRef]
  14. Dadi, G.B.; Goodrum, P.M.; Saidi, K.S.; Brown, C.U.; Betit, J.W. A Case Study of 3D Imaging Productivity Needs to Support Infrastructure Construction. In Construction Research Congress 2012; American Society of Civil Engineers: Reston, VA, USA, 2012; pp. 1052–1062. [Google Scholar]
  15. Turkan, Y.; Bosche, F.; Haas, C.T.; Haas, R. Automated progress tracking using 4D schedule and 3D sensing technologies. Autom. Constr. 2012, 22, 414–421. [Google Scholar] [CrossRef]
  16. Park, C.-S.; Lee, D.-Y.; Kwon, O.-S.; Wang, X. A framework for proactive construction defect management using BIM, augmented reality and ontology-based data collection template. Autom. Constr. 2013, 33, 61–71. [Google Scholar] [CrossRef]
  17. Thomson, C.; Apostolopoulos, G.; Backes, D.; Boehm, J. Mobile Laser Scanning for Indoor Modelling. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2013, II-5/W2, 289–293. [Google Scholar] [CrossRef]
  18. Bosché, F.; Guillemet, A.; Turkan, Y.; Haas, C.T.; Haas, R. Tracking the Built Status of MEP Works: Assessing the Value of a Scan-vs-BIM System. J. Comput. Civ. Eng. 2014, 28, 05014004. [Google Scholar] [CrossRef]
  19. Zhang, C.; Tang, P. A divide-and-conquer algorithm for 3D imaging planning in dynamic construction environments. In Proceedings of the ICSC15: The Canadian Society for Civil Engineering 5th International/11th Construction Specialty Conference, Vancouver, BC, Canada, 7–10 June 2015. [Google Scholar]
  20. Pătrăucean, V.; Armeni, I.; Nahangi, M.; Yeung, J.; Brilakis, I.; Haas, C. State of research in automatic as-built modelling. Adv. Eng. Inform. 2015, 29, 162–171. [Google Scholar] [CrossRef]
  21. Wang, Q.; Kim, M.-K.; Cheng, J.C.P.; Sohn, H. Automated quality assessment of precast concrete elements with geometry irregularities using terrestrial laser scanning. Autom. Constr. 2016, 68, 170–182. [Google Scholar] [CrossRef]
  22. Dering, G.M.; Micklethwaite, S.; Thiele, S.T.; Vollgger, S.A.; Cruden, A.R. Review of drones, photogrammetry and emerging sensor technology for the study of dykes: Best practises and future potential. J. Volcanol. Geotherm. Res. 2019, 373, 148–166. [Google Scholar] [CrossRef]
  23. Kraus, K. Photogrammetry: Geometry from Images and Laser Scans; De Gruyter: Berlin, Germany, 2007. [Google Scholar]
  24. Hackl, J.; Adey, B.T.; Woźniak, M.; Schümperlin, O. Use of Unmanned Aerial Vehicle Photogrammetry to Obtain Topographical Information to Improve Bridge Risk Assessment. J. Infrastruct. Syst. 2018, 24, 04017041. [Google Scholar] [CrossRef]
  25. Baltsavias, E.P. A comparison between photogrammetry and laser scanning. ISPRS J. Photogramm. Remote Sens. 1999, 54, 83–94. [Google Scholar] [CrossRef]
  26. Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U. Acquisition and Consecutive Registration of Photogrammetric Point Clouds for Construction Progress Monitoring Using a 4D BIM. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2017, 85, 3–15. [Google Scholar] [CrossRef]
  27. Bügler, M.; Borrmann, A.; Ogunmakin, G.; Vela, P.A.; Teizer, J. Fusion of Photogrammetry and Video Analysis for Productivity Assessment of Earthwork Processes. Comput. Aided Civ. Infrastruct. Eng. 2017, 32, 107–123. [Google Scholar] [CrossRef]
  28. Bügler, M.; Ongunmakin, G.; Teizer, J.; Vela, P.; Borrmann, A. A Comprehensive Methodology for Vision-Based Progress and Activity Estimation of Excavation Processes for Productivity Assessment. In Proceedings of the EG-ICE Workshop on Intelligent Computing in Engineering, Cardiff, Wales, 16–18 July 2014. [Google Scholar]
  29. Omar, H.; Mahdjoubi, L.; Kheder, G. Towards an automated photogrammetry-based approach for monitoring and controlling construction site activities. Comput. Ind. 2018, 98, 172–182. [Google Scholar] [CrossRef]
  30. Dai, F.; Peng, W.B. Reality Capture in Construction Engineering Applications Using Close-Range Photogrammetry. Appl. Mech. Mater. 2013, 353–356, 2795–2798. [Google Scholar] [CrossRef]
  31. Ordóñez, C.; Martínez, J.; Arias, P.; Armesto, J. Measuring building façades with a low-cost close-range photogrammetry system. Autom. Constr. 2010, 19, 742–749. [Google Scholar] [CrossRef]
  32. Ordóñez, C.; Arias, P.; Herráez, J.; Rodríguez, J.; Martín, M.T. Two photogrammetric methods for measuring flat elements in buildings under construction. Autom. Constr. 2008, 17, 517–525. [Google Scholar] [CrossRef]
  33. Riveiro, B.; Jauregui, D.V.; Arias, P.; Armesto, J.; Jiang, R. An innovative method for remote measurement of minimum vertical underclearance in routine bridge inspection. Autom. Constr. 2012, 25, 34–40. [Google Scholar] [CrossRef]
  34. Karsch, K.; Golparvar-Fard, M.; Forsyth, D. ConstructAide: Analyzing and Visualizing Construction Sites through Photographs and Building Models. ACM Trans. Graph. 2014, 33, 176. [Google Scholar] [CrossRef]
  35. Feng, Y.; Golparvar-Fard, M. Image-Based Localization for Facilitating Construction Field Reporting on Mobile Devices. In Advances in Informatics and Computing in Civil and Construction Engineering: Proceedings of the 35th CIB W78 2018 Conference: IT in Design, Construction, and Management; Springer: Cham, Switzerland, 2019; pp. 585–592. [Google Scholar]
  36. Moselhi, O.; Bardareh, H.; Zhu, Z. Automated Data Acquisition in Construction with Remote Sensing Technologies. Appl. Sci. 2020, 10, 2846. [Google Scholar] [CrossRef]
  37. Duarte-Vidal, L.; Herrera, R.F.; Atencio, E.; Muñoz-La Rivera, F. Interoperability of Digital Tools for the Monitoring and Control of Construction Projects. Appl. Sci. 2021, 11, 10370. [Google Scholar] [CrossRef]
  38. Koch, C.; Jog, G.M.; Brilakis, I. Automated Pothole Distress Assessment Using Asphalt Pavement Video Data. J. Comput. Civ. Eng. 2013, 27, 370–378. [Google Scholar] [CrossRef]
  39. Zhu, Z.; Brilakis, I. Machine vision-based concrete surface quality assessment. J. Constr. Eng. Manag. 2010, 136, 210–218. [Google Scholar] [CrossRef]
  40. Brilakis, I.; Lourakis, M.; Sacks, R.; Savarese, S.; Christodoulou, S.; Teizer, J.; Makhmalbaf, A. Toward automated generation of parametric BIMs based on hybrid video and laser scanning data. Adv. Eng. Inform. 2010, 24, 456–465. [Google Scholar] [CrossRef]
  41. Koch, C.; Brilakis, I. Pothole detection in asphalt pavement images. Adv. Eng. Inform. 2011, 25, 507–515. [Google Scholar] [CrossRef]
  42. Rao, B.; Gopi, A.G.; Maione, R. The societal impact of commercial drones. Technol. Soc. 2016, 45, 83–90. [Google Scholar] [CrossRef]
  43. Kwon, S.; Park, J.-W.; Moon, D.; Jung, S.; Park, H. Smart Merging Method for Hybrid Point Cloud Data using UAV and LIDAR in Earthwork Construction. Procedia Eng. 2017, 196, 21–28. [Google Scholar] [CrossRef]
  44. Li, Y.; Liu, C. Applications of multirotor drone technologies in construction management. Int. J. Constr. Manag. 2019, 19, 401–412. [Google Scholar] [CrossRef]
  45. Pučko, Z.; Šuman, N.; Rebolj, D. Automated continuous construction progress monitoring using multiple workplace real time 3D scans. Adv. Eng. Inform. 2018, 38, 27–40. [Google Scholar] [CrossRef]
  46. Escorcia, V.; Dávila, M.A.; Golparvar-Fard, M.; Niebles, J.C. Automated Vision-Based Recognition of Construction Worker Actions for Building Interior Construction Operations Using RGBD Cameras. In Proceedings of the Construction Research Congress 2012, West Lafayette, IN, USA, 21–23 May 2012; pp. 879–888. [Google Scholar]
  47. Li, L. Time-of-Flight Camera—An Introduction; Technical White Paper; Texas Instruments: Dallas, TX, USA, 2014. [Google Scholar]
  48. Cui, Y.; Schuon, S.; Chan, D.; Thrun, S.; Theobalt, C. 3D shape scanning with a time-of-flight camera. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1173–1180. [Google Scholar]
  49. Martinez, P.; Al-Hussein, M.; Ahmad, R. A scientometric analysis and critical review of computer vision applications for construction. Autom. Constr. 2019, 107, 102947. [Google Scholar] [CrossRef]
  50. Alaloul, W.S.; Alzubi, K.M.; Malkawi, A.B.; Al Salaheen, M.; Musarat, M.A. Productivity monitoring in building construction projects: A systematic review. Eng. Constr. Archit. Manag. 2022, 29, 2760–2785. [Google Scholar] [CrossRef]
  51. Kopsida, M.; Brilakis, I.K.; Vela, P.A. A Review of Automated Construction Progress Monitoring and Inspection Methods. In Proceedings of the 32nd CIB W78 Conference on Construction IT, Eindhoven, The Netherlands, 26–29 October 2015. [Google Scholar]
  52. Vick, S.M.; Brilakis, I. A review of linear transportation construction progress monitoring techniques. In Proceedings of the 16th International Conference on Computing in Civil and Building Engineering, Osaka, Japan, 6–8 July 2016; p. 8. [Google Scholar]
  53. Golizadeh, H.; Hosseini, M.R.; Martek, I.; Edwards, D.; Gheisari, M.; Banihashemi, S.; Zhang, J. Scientometric analysis of research on “remotely piloted aircraft”. Eng. Constr. Archit. Manag. 2020, 27, 634–657. [Google Scholar] [CrossRef]
  54. Sanchez, J. Applications of Drone Technology with BIM to Increase Productivity; California Polytechnic State University: San Luis Obispo, CA, USA, 2019. [Google Scholar]
  55. Mihić, M. Incorporation of Health and Safety into Building Information Modelling through Hazard Integration System. Ph.D. Thesis, University of Zagreb Faculty of Civil Engineering, Zagreb, Croatia, 2018. [Google Scholar]
  56. Bayrak, T.; Kaka, A. Evaluation of digital photogrammetry and 3D CAD modelling applications in construction management. In Proceedings of the 20th Annual ARCOM Conference, Edinburgh, UK, 1–3 September 2004; pp. 613–619. [Google Scholar]
  57. Riyanto, F.; Setyandito, O.; Pramudya, A. Realtime monitoring study for highway construction using Unmanned Aerial Vehicle (UAV) technology. IOP Conf. Ser. Earth Environ. Sci. 2021, 729, 012040. [Google Scholar]
  58. Pushkar, A.; Senthilvel, M.; Varghese, K. Automated Progress Monitoring of Masonry Activity Using Photogrammetric Point Cloud. In Proceedings of the 35th International Symposium on Automation and Robotics in Construction (ISARC), Berlin, Germany, 20–25 July 2018; pp. 897–903. [Google Scholar]
  59. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Automated Progress Monitoring Using Unordered Daily Construction Photographs and IFC-Based Building Information Models. J. Comput. Civ. Eng. 2015, 29, 04014025. [Google Scholar] [CrossRef]
  60. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Monitoring changes of 3D building elements from unordered photo collections. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 249–256. [Google Scholar]
  61. Ahmed, M.; Haas, C.T.; Haas, R. Using digital photogrammetry for pipe-works progress tracking. Can. J. Civ. Eng. 2012, 39, 1062–1071. [Google Scholar] [CrossRef]
  62. Kim, C.; Son, H.; Kim, C. The Effective Acquisition and Processing of 3D Photogrammetric Data from Digital Photogrammetry for Construction Progress Measurement. In Computing in Civil Engineering; American Society of Civil Engineers: Miami, FL, USA, 2011; pp. 178–185. [Google Scholar] [CrossRef]
  63. Marzouk, M.; Zaher, M. Tracking construction projects progress using mobile hand-held devices. In Proceedings of the ICSC15: The Canadian Society for Civil Engineering 5th International/11th Construction Specialty Conference, Vancouver, BC, Canada, 7–10 June 2015. [Google Scholar]
  64. Kim, C.; Son, H.; Kim, C. Fully automated registration of 3D data to a 3D CAD model for project progress monitoring. Autom. Constr. 2013, 35, 587–594. [Google Scholar] [CrossRef]
  65. Ahmed, M.; Haas, C.; Haas, R. Toward Low-Cost 3D Automatic Pavement Distress Surveying: The Close Range Photogrammetry Approach. Can. J. Civ. Eng. 2011, 38, 1301–1313. [Google Scholar] [CrossRef]
  66. Bhatla, A.; Choe, S.Y.; Fierro, O.; Leite, F. Evaluation of accuracy of as-built 3D modeling from photos taken by handheld digital cameras. Autom. Constr. 2012, 28, 116–127. [Google Scholar] [CrossRef]
  67. Jacob-Loyola, N.; Muñoz-La Rivera, F.; Herrera, R.F.; Atencio, E. Unmanned Aerial Vehicles (UAVs) for Physical Progress Monitoring of Construction. Sensors 2021, 21, 4227. [Google Scholar] [CrossRef]
  68. Takahashi, N.; Wakutsu, R.; Kato, T.; Wakaizumi, T.; Ooishi, T.; Matsuoka, R. Experiment on UAV photogrammetry and terrestrial laser scanning for ICT-integrated construction. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W6, 371–377. [Google Scholar] [CrossRef]
  69. Siebert, S.; Teizer, J. Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system. Autom. Constr. 2014, 41, 1–14. [Google Scholar] [CrossRef]
  70. Brilakis, I.; Fathi, H.; Rashidi, A. Progressive 3D reconstruction of infrastructure with videogrammetry. Autom. Constr. 2011, 20, 884–895. [Google Scholar] [CrossRef]
  71. Rashidi, A.; Karan, E. Video to BrIM: Automated 3D As-Built Documentation of Bridges. J. Perform. Constr. Facil. 2018, 32, 04018026. [Google Scholar] [CrossRef]
  72. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.T.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213. [Google Scholar] [CrossRef]
  73. Puri, N.; Turkan, Y. Bridge construction progress monitoring using lidar and 4D design models. Autom. Constr. 2020, 109, 102961. [Google Scholar] [CrossRef]
  74. Turkan, Y.; Bosché, F.; Haas, C.T.; Haas, R.C.G. Automated Progress Tracking of Erection of Concrete Structures. In Proceedings of the 3rd International/9th Construction Specialty Conference, Ottawa, ON, Canada, 14–17 June 2011. [Google Scholar]
  75. Mengiste, E.; García de Soto, B. Using the Rate of Color Evolution of a Point Cloud to Monitor the Performance of Construction Trades. In Proceedings of the 18th International Conference on Construction Applications of Virtual Reality (CONVR2018), Auckland, New Zealand, 21–23 November 2018; pp. 345–354. [Google Scholar]
  76. Shahi, A.; Safa, M.; Haas, C.T.; West, J.S. Data Fusion Process Management for Automated Construction Progress Estimation. J. Comput. Civ. Eng. 2015, 29, 04014098. [Google Scholar] [CrossRef]
  77. Bosche, F.; Haas, C.T. Automated retrieval of 3D CAD model objects in construction range images. Autom. Constr. 2008, 17, 499–512. [Google Scholar] [CrossRef]
  78. Maalek, R.; Lichti, D.D.; Ruwanpura, J. Robust classification and segmentation of planar and linear features for construction site progress monitoring and structural dimension compliance control. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2015, II-3/W5, 129–136. [Google Scholar] [CrossRef]
  79. Bosche, F.; Haas, C.T.; Akinci, B. Automated Recognition of 3D CAD Objects in Site Laser Scans for Project 3D Status Visualization and Performance Control. J. Comput. Civ. Eng. 2009, 23, 311–318. [Google Scholar] [CrossRef]
  80. Shih, N.-J.; Huang, S.-T. 3D Scan Information Management System for Construction Management. J. Constr. Eng. Manag. 2006, 132, 134–142. [Google Scholar] [CrossRef]
  81. Shih, N.J.; Wang, P.H. Point-cloud-based comparison between construction schedule and as-built progress: Long-range three-dimensional laser scanner’s approach. J. Archit. Eng. 2004, 10, 98–102. [Google Scholar] [CrossRef]
  82. El-Omari, S.; Moselhi, O. Integrating 3D laser scanning and photogrammetry for progress measurement of construction work. Autom. Constr. 2008, 18, 1–9. [Google Scholar] [CrossRef]
  83. Kim, C.; Son, H.; Kim, C. Automated construction progress measurement using a 4D building information model and 3D data. Autom. Constr. 2013, 31, 75–82. [Google Scholar] [CrossRef]
  84. Turkan, Y.; Bosché, F.; Haas, C.T.; Haas, R. Toward Automated Earned Value Tracking Using 3D Imaging Tools. J. Constr. Eng. Manag. 2013, 139, 423–433. [Google Scholar] [CrossRef]
  85. Zhang, C.; Arditi, D. Automated progress control using laser scanning technology. Autom. Constr. 2013, 36, 108–116. [Google Scholar] [CrossRef]
  86. Turkan, Y.; Bosché, F.; Haas, C.T.; Haas, R. Tracking of secondary and temporary objects in structural concrete work. Constr. Innov. 2014, 14, 145–167. [Google Scholar] [CrossRef]
  87. Kim, M.-K.; Sohn, H.; Chang, C.-C. Automated dimensional quality assessment of precast concrete panels using terrestrial laser scanning. Autom. Constr. 2014, 45, 163–177. [Google Scholar] [CrossRef]
  88. Kim, M.-K.; Wang, Q.; Park, J.-W.; Cheng, J.C.P.; Sohn, H.; Chang, C.-C. Automated dimensional quality assurance of full-scale precast concrete elements using laser scanning and BIM. Autom. Constr. 2016, 72, 102–114. [Google Scholar] [CrossRef]
  89. Nahangi, M.; Haas, C.T. Skeleton-based discrepancy feedback for automated realignment of industrial assemblies. Autom. Constr. 2016, 61, 147–161. [Google Scholar] [CrossRef]
  90. Nahangi, M.; Haas, C.T. Automated 3D compliance checking in pipe spool fabrication. Adv. Eng. Inform. 2014, 28, 360–369. [Google Scholar] [CrossRef]
  91. Rausch, C.; Nahangi, M.; Haas, C.; West, J. Kinematics chain based dimensional variation analysis of construction assemblies using building information models and 3D point clouds. Autom. Constr. 2017, 75, 33–44. [Google Scholar] [CrossRef]
  92. Bosché, F. Automated recognition of 3D CAD model objects in laser scans and calculation of as-built dimensions for dimensional compliance control in construction. Adv. Eng. Inform. 2010, 24, 107–118. [Google Scholar] [CrossRef]
  93. Wang, Q.; Cheng, J.C.P.; Sohn, H. Automated Estimation of Reinforced Precast Concrete Rebar Positions Using Colored Laser Scan Data. Comput. Aided Civ. Infrastruct. Eng. 2017, 32, 787–802. [Google Scholar] [CrossRef]
  94. Kashani, A.G.; Crawford, P.S.; Biswas, S.K.; Graettinger, A.J.; Grau, D. Automated Tornado Damage Assessment and Wind Speed Estimation Based on Terrestrial Laser Scanning. J. Comput. Civ. Eng. 2015, 29, 04014051. [Google Scholar] [CrossRef]
  95. Zhou, Z.; Gong, J.; Guo, M. Image-Based 3D Reconstruction for Posthurricane Residential Building Damage Assessment. J. Comput. Civ. Eng. 2016, 30, 04015015. [Google Scholar] [CrossRef]
  96. Kashani, A.G.; Graettinger, A.J. Cluster-Based Roof Covering Damage Detection in Ground-Based Lidar Data. Autom. Constr. 2015, 58, 19–27. [Google Scholar] [CrossRef]
  97. Sánchez-Aparicio, L.J.; Del Pozo, S.; Ramos, L.F.; Arce, A.; Fernandes, F.M. Heritage site preservation with combined radiometric and geometric analysis of TLS data. Autom. Constr. 2018, 85, 24–39. [Google Scholar] [CrossRef]
  98. Teza, G.; Galgaro, A.; Moro, F. Contactless recognition of concrete surface damage from laser scanning and curvature computation. NDT E Int. 2009, 42, 240–249. [Google Scholar] [CrossRef]
  99. Kim, M.-K.; Sohn, H.; Chang, C.-C. Localization and Quantification of Concrete Spalling Defects Using Terrestrial Laser Scanning. J. Comput. Civ. Eng. 2015, 29, 04014086. [Google Scholar] [CrossRef]
  100. Mizoguchi, T.; Koda, Y.; Iwaki, I.; Wakabayashi, H.; Kobayashi, Y.; Shirai, K.; Hara, Y.; Lee, H.-S. Quantitative scaling evaluation of concrete structures based on terrestrial laser scanning. Autom. Constr. 2013, 35, 263–274. [Google Scholar] [CrossRef]
  101. Tang, P.; Huber, D.; Akinci, B. Characterization of Laser Scanners and Algorithms for Detecting Flatness Defects on Concrete Surfaces. J. Comput. Civ. Eng. 2011, 25, 31–42. [Google Scholar] [CrossRef]
  102. Olsen, M.J.; Kuester, F.; Chang, B.J.; Hutchinson, T.C. Terrestrial Laser Scanning-Based Structural Damage Assessment. J. Comput. Civ. Eng. 2010, 24, 264–272. [Google Scholar] [CrossRef]
  103. Wang, Q.; Kim, M.-K.; Sohn, H.; Cheng, J.C.P. Surface flatness and distortion inspection of precast concrete elements using laser scanning technology. Smart Struct. Syst. 2016, 18, 601–623. [Google Scholar] [CrossRef]
  104. Nuttens, T.; Stal, C.; De Backer, H.; Schotte, K.; Van Bogaert, P.; De Wulf, A. Methodology for the ovalization monitoring of newly built circular train tunnels based on laser scanning: Liefkenshoek Rail Link (Belgium). Autom. Constr. 2014, 43, 1–9. [Google Scholar] [CrossRef]
  105. Monserrat, O.; Crosetto, M. Deformation measurement using terrestrial laser scanning data and least squares 3D surface matching. ISPRS J. Photogramm. Remote Sens. 2008, 63, 142–154. [Google Scholar] [CrossRef]
  106. Oskouie, P.; Becerik-Gerber, B.; Soibelman, L. Automated measurement of highway retaining wall displacements using terrestrial laser scanners. Autom. Constr. 2016, 65, 86–101. [Google Scholar] [CrossRef]
  107. González-Aguilera, D.; Gómez-Lahoz, J.; Sánchez, J. A New Approach for Structural Monitoring of Large Dams with a Three-Dimensional Laser Scanner. Sensors 2008, 8, 5866–5883. [Google Scholar] [CrossRef] [PubMed]
  108. Riveiro, B.; González-Jorge, H.; Varela, M.; Jauregui, D.V. Validation of terrestrial laser scanning and photogrammetry techniques for the measurement of vertical underclearance and beam geometry in structural inspection of bridges. Measurement 2013, 46, 784–794. [Google Scholar] [CrossRef]
  109. Teza, G.; Galgaro, A.; Zaltron, N.; Genevois, R. Terrestrial laser scanner to detect landslide displacement fields: A new approach. Int. J. Remote Sens. 2007, 28, 3425–3446. [Google Scholar] [CrossRef]
  110. Park, H.S.; Lee, H.M.; Adeli, H.; Lee, I. A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning. Comput. Aided Civ. Infrastruct. Eng. 2007, 22, 19–30. [Google Scholar] [CrossRef]
  111. Bu, L.; Zhang, Z. Application of point clouds from terrestrial 3D laser scanner for deformation measurements. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 545–548. [Google Scholar]
  112. Tang, P.; Anil, E.B.; Akinci, B.; Huber, D. Efficient and Effective Quality Assessment of As-Is Building Information Models and 3D Laser-Scanned Data. In Computing in Civil Engineering; American Society of Civil Engineers: Reston, VA, USA, 2011; pp. 486–493. [Google Scholar]
  113. Wang, Q.; Sohn, H.; Cheng, J.C.P. Automatic As-Built BIM Creation of Precast Concrete Bridge Deck Panels Using Laser Scan Data. J. Comput. Civ. Eng. 2018, 32, 04018011. [Google Scholar] [CrossRef]
  114. Kim, M.-K.; Cheng, J.C.P.; Sohn, H.; Chang, C.-C. A framework for dimensional and surface quality assessment of precast concrete elements using BIM and 3D laser scanning. Autom. Constr. 2015, 49, 225–238. [Google Scholar] [CrossRef]
  115. Yoon, S.; Wang, Q.; Sohn, H. Optimal placement of precast bridge deck slabs with respect to precast girders using 3D laser scanning. Autom. Constr. 2018, 86, 81–98. [Google Scholar] [CrossRef]
  116. Guo, Y.; Wan, J.; Lu, M.; Niu, W. A parts-based method for articulated target recognition in laser radar data. Opt. Int. J. Light Electron Opt. 2013, 124, 2727–2733. [Google Scholar] [CrossRef]
  117. Sepasgozar, S.M.E.; Lim, S.; Shirowzhan, S.; Kim, Y.M.; Nadoushani, Z.M. Utilisation of a New Terrestrial Scanner for Reconstruction of As-built Models: A Comparative Study. In Proceedings of the ISARC International Symposium on Automation and Robotics in Construction, Oulu, Finland, 15–18 June 2015; pp. 1–9. [Google Scholar]
  118. Rabbani, T.; Dijkman, S.; van den Heuvel, F.; Vosselman, G. An integrated approach for modelling and global registration of point clouds. ISPRS J. Photogramm. Remote Sens. 2007, 61, 355–370. [Google Scholar] [CrossRef]
  119. Ahmed, M.F.; Haas, C.T.; Haas, R. Automatic Detection of Cylindrical Objects in Built Facilities. J. Comput. Civ. Eng. 2014, 28, 04014009. [Google Scholar] [CrossRef]
  120. Lee, J.; Son, H.; Kim, C.; Kim, C. Skeleton-based 3D reconstruction of as-built pipelines from laser-scan data. Autom. Constr. 2013, 35, 199–207. [Google Scholar] [CrossRef]
  121. Valero, E.; Adán, A.; Cerrada, C. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners. Sensors 2012, 12, 16099–16115. [Google Scholar] [CrossRef]
  122. Jacobsen, E.L.; Teizer, J. Real-time LiDAR for Monitoring Construction Worker Presence Near Hazards and in Work Areas in a Virtual Reality Environment. In Proceedings of the EG-ICE 2021 Workshop on Intelligent Computing in Engineering, Berlin, Germany, 30 June–2 July 2021; pp. 592–602. [Google Scholar]
  123. Wang, J.; Zhang, S.; Teizer, J. Geotechnical and safety protective equipment planning using range point cloud data and rule checking in building information modeling. Autom. Constr. 2015, 49, 250–261. [Google Scholar] [CrossRef]
  124. Ray, S.J.; Teizer, J. Computing 3D blind spots of construction equipment: Implementation and evaluation of an automated measurement and visualization method utilizing range point cloud data. Autom. Constr. 2013, 36, 95–107. [Google Scholar] [CrossRef]
  125. Marks, E.D.; Cheng, T.; Teizer, J. Laser scanning for safe equipment design that increases operator visibility by measuring blind spots. J. Constr. Eng. Manag. 2013, 139, 1006–1014. [Google Scholar] [CrossRef]
  126. Cheng, T.; Teizer, J. Real-time resource location data collection and visualization technology for construction safety and activity monitoring applications. Autom. Constr. 2013, 34, 3–15. [Google Scholar] [CrossRef]
  127. Fang, Y.; Cho, Y.K.; Chen, J. A framework for real-time pro-active safety assistance for mobile crane lifting operations. Autom. Constr. 2016, 72, 367–379. [Google Scholar] [CrossRef]
  128. Vosselman, G.; Dijkman, S. 3D building model reconstruction from point clouds and ground plans. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2001, 34, 37–44. [Google Scholar]
  129. Chen, C.; Zhu, Z.; Hammad, A. Automated excavators activity recognition and productivity analysis from construction site surveillance videos. Autom. Constr. 2020, 110, 103045. [Google Scholar] [CrossRef]
  130. Memarzadeh, M.; Golparvar-Fard, M.; Niebles, J.C. Automated 2D detection of construction equipment and workers from site video streams using histograms of oriented gradients and colors. Autom. Constr. 2013, 32, 24–37. [Google Scholar] [CrossRef]
  131. Gong, J.; Caldas, C.H. An object recognition, tracking, and contextual reasoning-based video interpretation method for rapid productivity analysis of construction operations. Autom. Constr. 2011, 20, 1211–1226. [Google Scholar] [CrossRef]
  132. Rebolj, D.; Babič, N.Č.; Magdič, A.; Podbreznik, P.; Pšunder, M. Automated construction activity monitoring system. Adv. Eng. Inform. 2008, 22, 493–503. [Google Scholar] [CrossRef]
  133. Kim, C.; Kim, B.; Kim, H. 4D CAD model updating using image processing-based construction progress monitoring. Autom. Constr. 2013, 35, 44–52. [Google Scholar] [CrossRef]
  134. Yang, J.; Arif, O.; Vela, P.A.; Teizer, J.; Shi, Z. Tracking multiple workers on construction sites using video cameras. Adv. Eng. Inform. 2010, 24, 428–434. [Google Scholar] [CrossRef]
  135. Golparvar-Fard, M.; Peña-Mora, F.; Arboleda, C.A.; Lee, S. Visualization of Construction Progress Monitoring with 4D Simulation Model Overlaid on Time-Lapsed Photographs. J. Comput. Civ. Eng. 2009, 23, 391–404. [Google Scholar] [CrossRef]
  136. Iglesias, C.; Martínez, J.; Taboada, J. Automated vision system for quality inspection of slate slabs. Comput. Ind. 2018, 99, 119–129. [Google Scholar] [CrossRef]
  137. Cho, S.-H.; Lee, K.-T.; Kim, S.-H.; Kim, J.-H. Image Processing for Sustainable Remodeling: Introduction to Real-time Quality Inspection System of External Wall Insulation Works. Sustainability 2019, 11, 1081. [Google Scholar] [CrossRef]
  138. Kazemian, A.; Yuan, X.; Davtalab, O.; Khoshnevis, B. Computer vision for real-time extrusion quality monitoring and control in robotic construction. Autom. Constr. 2019, 101, 92–98. [Google Scholar] [CrossRef]
  139. Martinez, P.; Ahmad, R.; Al-Hussein, M. A vision-based system for pre-inspection of steel frame manufacturing. Autom. Constr. 2019, 97, 151–163. [Google Scholar] [CrossRef]
  140. Liu, Y.-F.; Cho, S.; Spencer, B.F.; Fan, J.-S. Concrete Crack Assessment Using Digital Image Processing and 3D Scene Reconstruction. J. Comput. Civ. Eng. 2016, 30, 04014124. [Google Scholar] [CrossRef]
  141. Chi, S.; Caldas, C.H. Image-Based Safety Assessment: Automated Spatial Safety Risk Identification of Earthmoving and Surface Mining Activities. J. Constr. Eng. Manag. 2012, 138, 341–351. [Google Scholar] [CrossRef]
  142. Sami Ur Rehman, M.; Shafiq, M.T.; Ullah, F. Automated Computer Vision-Based Construction Progress Monitoring: A Systematic Review. Buildings 2022, 12, 1037. [Google Scholar] [CrossRef]
  143. Son, H.; Kim, C. 3D structural component recognition and modeling method using color and 3D data for construction progress monitoring. Autom. Constr. 2010, 19, 844–854. [Google Scholar] [CrossRef]
  144. Nahangi, M.; Czerniawski, T.; Haas, C.T.; Walbridge, S. Pipe radius estimation using Kinect range cameras. Autom. Constr. 2019, 99, 197–205. [Google Scholar] [CrossRef]
  145. Teizer, J. 3D range imaging camera sensing for active safety in construction. Electron. J. Inf. Technol. Constr. 2008, 13, 103–117. [Google Scholar]
  146. Patel, T.; Guo, B.H.W.; Zou, Y. A scientometric review of construction progress monitoring studies. Eng. Constr. Archit. Manag. 2022, 29, 3237–3266. [Google Scholar] [CrossRef]
  147. Arashpour, M.; Heidarpour, A.; Akbar Nezhad, A.; Hosseinifard, Z.; Chileshe, N.; Hosseini, R. Performance-based control of variability and tolerance in off-site manufacture and assembly: Optimization of penalty on poor production quality. Constr. Manag. Econ. 2020, 38, 502–514. [Google Scholar] [CrossRef]
  148. Siddiqui, H. UWB RTLS for Construction Equipment Localization: Experimental Performance Analysis and Fusion with Video Data. Master’s Thesis, Concordia University, Montréal, QC, Canada, 2014. [Google Scholar]
  149. Maalek, R.; Sadeghpour, F. Accuracy assessment of Ultra-Wide Band technology in tracking static resources in indoor construction scenarios. Autom. Constr. 2013, 30, 170–183. [Google Scholar] [CrossRef]
  150. Li, H.; Chan, G.; Skitmore, M. Integrating real time positioning systems to improve blind lifting and loading crane operations. Constr. Manag. Econ. 2013, 31, 596–605. [Google Scholar] [CrossRef]
  151. Su, X.; Li, S.; Yuan, C.; Cai, H.; Kamat, V.R. Enhanced Boundary Condition-Based Approach for Construction Location Sensing Using RFID and RTK GPS. J. Constr. Eng. Manag. 2014, 140, 04014048. [Google Scholar] [CrossRef]
  152. Montaser, A.; Moselhi, O. RFID indoor location identification for construction projects. Autom. Constr. 2014, 39, 167–179. [Google Scholar] [CrossRef]
  153. Costin, A.; Teizer, J.; Schoner, B. RFID and bim-enabled worker location tracking to support real-time building protocol control and data visualization. ITcon 2015, 20, 495–517. [Google Scholar]
  154. Moeini, S.; Oudjehane, A.; Baker, T.; Hawkins, W. Application of an interrelated UAS-BIM system for construction progress monitoring, inspection and project management. PM World J. 2017, 6, 13. [Google Scholar]
  155. Zhang, C.; Kalasapudi, V.S.; Tang, P. Rapid data quality oriented laser scan planning for dynamic construction environments. Adv. Eng. Inform. 2016, 30, 218–232. [Google Scholar] [CrossRef]
  156. Tang, P.; Alaswad, F.S. Sensor Modeling of Laser Scanners for Automated Scan Planning on Construction Jobsites. In Proceedings of the Construction Research Congress 2012: Construction Challenges in a Flat World, West Lafayette, IN, USA, 21–23 May 2012; pp. 1021–1031. [Google Scholar]
  157. Ibrahim, Y.M.; Lukins, T.C.; Zhang, X.; Trucco, E.; Kaka, A.P. Towards automated progress assessment of workpackage components in construction projects using computer vision. Adv. Eng. Inform. 2009, 23, 93–103. [Google Scholar] [CrossRef]
  158. Ellenberg, A.; Kontsos, A.; Moon, F.; Bartoli, I. Bridge related damage quantification using unmanned aerial vehicle imagery. Struct. Control. Health Monit. 2016, 23, 1168–1179. [Google Scholar] [CrossRef]
  159. Irizarry, J.; Gheisari, M.; Walker, B.N. Usability Assessment of Drone Technology as Safety Inspection Tools. Electron. J. Inf. Technol. Constr. 2012, 17, 194–212. [Google Scholar]
Figure 1. Research flowchart.
Table 1. Previous uses of vision-based sensing technologies for construction management purposes.

| Remote Sensing Technology | Progress Monitoring | Quality Control | 3D Model Creation | H&S | Object Detection/Recognition | Location Tracking | Review |
|---|---|---|---|---|---|---|---|
| Photogrammetry | [8,12,27,28,29,30,49,56,57,58,59,60,61,62,63] | [31,32,33,64,65,66] | [7,30,34] | -/- | -/- | -/- | [5,8] |
| UAV Photogrammetry | [26,67] | -/- | [43,68,69] | -/- | -/- | -/- | [44] |
| Videogrammetry | -/- | -/- | [40,70,71] | -/- | [72] | -/- | |
| UAV Videogrammetry | -/- | -/- | -/- | -/- | -/- | -/- | [9,44,54] |
| Laser Scanning | [12,15,18,73,74,75,76,77,78,79,80,81,82,83,84,85,86] | [13,21,78,79,80,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116] | [40,43,68,117,118,119,120,121] | [122,123,124,125,126,127] | [77] | -/- | [5,8] |
| UAV Laser Scanning | -/- | -/- | [69,128] | -/- | -/- | -/- | [9,54] |
| Computer vision | [11,27,28,129,130,131,132,133,134,135] | [136,137,138,139,140] | [40] | [141] | -/- | -/- | [49,142] |
| Depth camera | [45,46,143] | [144] | [48] | [145] | -/- | -/- | [5,47] |
| All sensing technologies | -/- | -/- | -/- | -/- | -/- | -/- | [2,6,36,37,50,51,146] |
| All vision-based sensing technologies | -/- | [147] | -/- | -/- | -/- | -/- | [52] |
| UWB | [76] | -/- | -/- | -/- | -/- | [148,149] | -/- |
| RFID | -/- | -/- | -/- | [150] | -/- | [151,152,153] | -/- |
| GPS | -/- | -/- | -/- | [150] | -/- | [151] | -/- |
Table 2. Potential for use of 3D scan data for progress monitoring.

| Type of Construction Works | Photogrammetry | Laser Scanning | UAV Photogrammetry | UAV LIDAR |
|---|---|---|---|---|
| Earthworks | | | | |
| Excavation | Yes | Yes | Yes | Yes |
| Backfilling | Yes | Yes | Yes | Yes |
| Embankments | Yes | Yes | Yes | Yes |
| Subterranean geotechnical works | No | No | No | No |
| Pile foundations | No | No | No | No |
| Tunnelling | Yes | Yes | No | No |
| Formwork | | | | |
| Scaffolding | Yes * | Yes * | Yes * | No |
| Wall and column formwork | Yes * | Yes * | Yes * | No |
| Slab and beam formwork | Yes * | Yes * | No | No |
| Reinforcement work | | | | |
| Wall and column rebar | Yes * | Yes * | Yes * | No |
| Slab and beam rebar | Yes * | Yes * | Yes * | No |
| Concreting | | | | |
| Wall and column concrete pouring | No | No | No | No |
| Slab and beam concrete pouring | Yes | Yes | Yes | No |
| Masonry work | | | | |
| Bricklaying | Yes | Yes | Yes ** | No |
| Rough plastering | Yes *** | Yes *** | No | No |
| Assembly works | | | | |
| Steel elements | Yes | Yes | Yes ** | No |
| Wooden elements | Yes | Yes | Yes ** | No |
| Prefabricated elements installation | | | | |
| Steel elements | Yes | Yes | Yes ** | No |
| Wooden elements | Yes | Yes | Yes ** | No |
| Reinforced concrete elements | Yes | Yes | Yes ** | No |
| Façade | | | | |
| Modern facade systems | Yes | Yes | Yes | No |
| Traditional facade systems | Yes *** | Yes *** | Yes *** | No |
| Insulation | | | | |
| Thermal insulation | Yes *** | Yes *** | Yes *** | No |
| Waterproofing | No | No | No | No |
| Interior finishing works | | | | |
| Tiling | No | No | No | No |
| Flooring | No | No | No | No |
| Plastering | No | No | No | No |
| Painting | No | No | No | No |
| Drywall ceilings | Yes | Yes | No | No |
| Drywall partition walls | Yes | Yes | No | No |
| Sanitary facilities | Yes | Yes | No | No |
| Windows | Yes | Yes | Yes | No |
| Doors | Yes | Yes | Yes ** | No |
| MEP | | | | |
| Mechanical installations | Yes *** | Yes *** | No | No |
| Electrical installations | Yes *** | Yes *** | No | No |
| Plumbing | Yes *** | Yes *** | No | No |
| Landscaping | | | | |
| Paving | Yes | Yes | Yes | No * |
| Horticulture | Yes | Yes | Yes | No * |
| Road and utilities construction | | | | |
| Road base course | Yes | Yes | Yes | No |
| Road paving | Yes *** | Yes *** | Yes *** | No |
| Pipe and cable laying | Yes *** | Yes *** | Yes *** | No |
* Able to detect the existence of the works, but not yet able to identify each individual element with sufficient accuracy; ** Yes, but only if visible from the outside of the structure; *** Depending on the thickness of the layer/diameter of the element.
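One way such a feasibility matrix could be put to practical use, for instance to screen a schedule for activities that are worth scanning at all, is sketched below. This is a speculative illustration rather than a tool proposed in this study: the work-type keys, the "partial" shorthand for the asterisked cases and the monitorable() helper are illustrative assumptions derived from Table 2 above.

```python
# Speculative sketch: encoding a few rows of Table 2 as a lookup so that
# activities in a schedule could be screened for scan-based monitorability.
# Work-type keys are illustrative; the *, ** and *** qualifiers from the
# table are collapsed here into the single shorthand "partial".
SCAN_FEASIBILITY = {
    # work type:                  (photogrammetry, laser scanning, UAV photogrammetry, UAV LIDAR)
    "excavation":                 ("yes", "yes", "yes", "yes"),
    "wall and column formwork":   ("partial", "partial", "partial", "no"),
    "wall and column concreting": ("no", "no", "no", "no"),
    "bricklaying":                ("yes", "yes", "partial", "no"),
    "painting":                   ("no", "no", "no", "no"),
}


def monitorable(work_type: str) -> bool:
    """True if at least one technology can capture the work type at all."""
    row = SCAN_FEASIBILITY.get(work_type.lower())
    return row is not None and any(value != "no" for value in row)


for work in ("excavation", "painting", "bricklaying"):
    print(work, "->", "monitorable" if monitorable(work) else "not monitorable")
```

A real implementation would also need to encode the qualifiers behind the asterisks (occlusion, element size and outdoor visibility) rather than collapsing them into a single flag.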
