Review

Automated Computer Vision-Based Construction Progress Monitoring: A Systematic Review

by Muhammad Sami Ur Rehman 1, Muhammad Tariq Shafiq 1,* and Fahim Ullah 2
1 Department of Architectural Engineering, College of Engineering, United Arab Emirates University, Al Ain, Abu Dhabi 15551, United Arab Emirates
2 School of Surveying and Built Environment, University of Southern Queensland, Springfield, QLD 4300, Australia
* Author to whom correspondence should be addressed.
Buildings 2022, 12(7), 1037; https://doi.org/10.3390/buildings12071037
Submission received: 15 June 2022 / Revised: 12 July 2022 / Accepted: 14 July 2022 / Published: 18 July 2022
(This article belongs to the Special Issue ZEMCH—Zero Energy Mass Custom Home International Research 2021)

Abstract

The progress monitoring (PM) of construction projects is an essential aspect of project control that enables stakeholders to make timely decisions and ensure successful project delivery, yet current practices are largely manual and document-centric. The integration of technologically advanced tools into construction practices has shown the potential to automate construction PM (CPM) using real-time data collection, analysis, and visualization for effective and timely decision making. In this study, we assess the level of automation achieved through the various methods that enable automated computer vision (CV)-based CPM. A detailed literature review is presented, discussing the complete CV-based CPM process based on research conducted between 2011 and 2021. The CV-based CPM process comprises four sub-processes: data acquisition, information retrieval, progress estimation, and output visualization. Most techniques encompassing these sub-processes require human intervention to perform the desired tasks, and inter-connectivity among them is absent. We conclude that CV-based CPM research centers on technical feasibility studies using image-based processing of site data; these remain experimental and lack connectivity to construction management applications. This review highlights the most efficient techniques involved in CV-based CPM and accentuates the need for inter-connectivity between the sub-processes to provide an effective alternative to traditional practices.

1. Introduction

Traditional construction progress monitoring (CPM) is based on manual and labor-intensive procedures for periodically collecting, documenting, and reporting the status of a construction project [1]. These documented reports are used for project monitoring and control against the as-planned project schedule and act as an as-built record throughout the project lifecycle [2]. Accurate progress reporting keeps stakeholders informed about the state of a project and helps them make effective decisions to avoid construction delays and cost overruns by applying the required controls to slipping operations, and it prepares them for managing delay claims [3]. However, traditional progress reporting practices are tedious, error-prone, slow, and often report redundant information, preventing stakeholders from making proactive decisions [4]. More than 70% of contracting firms cite poor job site coordination as the primary cause of projects running over budget and past deadlines; as a result, fewer than 30% of contractors finish projects within the planned budget and on time [5].
Various emerging disruptive technologies are currently being explored in construction to address these issues [6]. Applications of emerging technologies in construction have shown great potential to digitalize project progress monitoring (PM) by providing the real-time status of site activities via automated capturing and reporting of site data using digital tools [7]. These technologies include barcodes, which collect real-time data on materials, equipment, and labor for calculating project progress [4]. Similarly, Radio Frequency Identification (RFID) has been implemented to capture live information from earthmoving equipment for accurate progress estimation [8]. Ultra-Wide Band (UWB) has been implemented for material tracking and activity-based progress monitoring, especially in remote and harsh environments [9]. A more advanced technology, three-dimensional (3D) laser scanning, has been deployed to collect as-built data, transform it into 3D point clouds, and estimate construction progress by comparing them with Building Information Models (BIMs) [10]. Furthermore, Augmented Reality (AR), by comparing the real environment with as-planned models, has enabled project teams to visualize progress and make the necessary decisions [11]. One such example of an advanced technological tool for automated CPM is Computer Vision (CV).
CV is a technology-driven process that accepts inputs in the form of visual media, either photos or videos, and generates outputs as either decisions or other forms of representation [12]. CV mimics human vision and can derive 3D objects or data from two-dimensional (2D) inputs, either photos or videos, providing the opportunity to automatically analyze captured images and measure construction progress [13,14]. Its integration into the construction field is an interdisciplinary endeavor that incorporates the computer science, architecture, construction engineering, and management disciplines. The full potential of CV-based CPM requires fully automated processes for capturing, processing, and retrieving useful information without human intervention [13,15].
CV-based CPM has been claimed to digitalize the monitoring and reporting of construction progress [16]; however, an initial literature review revealed that research on CV-based monitoring of construction progress is scattered across multidisciplinary areas, such as computer science, architecture, and construction management, and that a holistic research focus on the methods and techniques involved throughout the CV-based CPM process is missing. This called for an in-depth literature review of CV-based monitoring of construction projects to understand the extent of automation achieved—specifically, the automation of data acquisition (DAQ), information retrieval, progress estimation, and visualization of useful output—because the CV-based monitoring process needs to be automated to be considered a viable alternative to current CPM methods [17,18,19].
DAQ is the process of collecting visual as-built data using image sensors, which can be either daily photo logs or videos from handheld devices, fixed cameras, or unmanned aerial vehicles (UAVs) [7,20,21]. The information retrieval process corresponds to generating useful output from visual data in the form of 3D models or 3D points defined as ‘point clouds’ [8,22,23]. CPM is usually measured by comparing point cloud data with a four-dimensional (4D) building information model (BIM) that contains information about the planned schedule, and this comparison returns the progress status of various construction activities [24]. Output visualization corresponds to useful information, e.g., earned value management (EVM) analysis, retrieved from the point cloud data and 4D BIM comparison, required to make relevant decisions by the project management teams [25,26]. Previous studies have contributed to the automation of each process, and research efforts towards the automation of the complete process of CV-based CPM will allow its viability towards implementation in the construction process and reduce the efforts of construction management teams (CMTs) [27].
Currently, commercial applications of CV-based CPM are non-existent because it is an emerging technology, still in the experimental phase with few working demonstrations [28]. Several studies have proposed automated construction progress tracking and monitoring using CV-based methods [29,30,31,32]. Popular applications include point cloud generation [28,33], feature recognition [29,34,35], comparison between as-planned and as-built [36,37,38], and progress monitoring through material estimation [32,39,40]. However, they have mainly focused on only one or two of the following aspects of CV-based CPM: DAQ, information retrieval, and progress estimation. A holistic focus on the overall process is missing, and the integration of these sub-processes with one another, together with the level of automation provided by the techniques, is among the least researched topics. Researchers have suggested that CV-based CPM could automate the entire construction monitoring process, reduce the labor-intensiveness of traditional practices, and allow construction managers to act promptly to reduce losses due to construction delays [7,25,41,42]. However, while researchers have claimed many potential benefits of CV-based CPM, very few studies have analyzed and contributed toward an automated CV-based process and its viability in the actual construction environment. This presents the gap targeted in the current study.
In this study, relevant literature databases, including Scopus and Web of Science, were utilized to retrieve and analyze useful literature systematically. Several review articles on the application of CV-based tools for automating CPM were identified [7,16,24]. These review articles provide significant insight into the tools, methods, and applications of CV-based CPM. However, none of the existing reviews focused on automating the complete CV-based CPM process and its viability to replace existing construction monitoring practices. Accordingly, this systematic review presents the automation status of current CV-based tools, methods, and applications for CPM. It aims to investigate the related work on CV-based monitoring of construction projects to understand various aspects of the CV-based CPM process and highlight gaps in the existing literature. The objectives of this systematic review are as follows:
  • Identify the CV-based CPM process, key sub-processes, and enabling techniques for process automation.
  • Discuss the effectiveness and level of automation provided by the identified techniques for CV-based CPM.
  • Discuss the identified CV-based CPM process in comparison with the traditional techniques to understand the industry requirements and highlight key challenges.
These automated CPM processes aim either to be successfully integrated within existing construction practices or to replace the traditional manual and time-consuming practices; therefore, a holistic overview of CV-based CPM is necessary to understand its process and provide a roadmap toward its implementation. The novelty of this research study is its focus on the overall process and the assessment of its sub-processes, including the methods and techniques involved in the CV-based CPM process. This study presents its findings through a state-of-the-art literature review. As a result, a holistic overview of the said process is provided so that researchers and construction practitioners alike can understand the CV-based CPM process. Furthermore, this paper presents the purpose and aim of the techniques involved in the sub-processes, e.g., What is the data acquisition process in CV-based CPM? What kind of data can be acquired from the worksite? How are those data being managed and used to estimate progress? This will help readers from diverse backgrounds comprehend the overall concept behind this process.
Furthermore, the current study provides novel insight into integrating the sub-processes, highlights a disconnect between them, and guides the research focus towards their seamless integration to propose a viable solution for the construction industry. Another key novelty is the assessment of the available methods and techniques enabling the CV-based CPM process. Based on the findings of this research study, future research efforts can select the most relevant automated techniques for each sub-process and combine them into a fully integrated system to achieve the goal of automated CPM for the construction industry. Such an assessment has not been undertaken to date, and this research endeavor is a novel attempt to fulfill the said objectives.
The rest of the paper is organized as follows. The materials and methods used in this research are presented in Section 2, which further discusses the literature retrieval process and the process of this systematic literature review. The findings of this research are presented in Section 3, which is divided into four subsections: DAQ, information retrieval, progress estimation, and output visualization. The discussion is presented in Section 4, and Section 5 comprises the conclusion, limitations, and future directions.

2. Materials and Methods

A systematic review approach was adopted for this study following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, in line with recent studies [43,44]. Accordingly, the Scopus and Web of Science (WoS) databases were searched to retrieve the relevant research articles. The overall method of this study consists of three major steps: literature retrieval; systematic review and pertinent analyses; and presentation of the retrieved data in the form of an assessment of the CV-based CPM process. These steps are explained subsequently.

2.1. Literature Retrieval

A database search was performed to retrieve relevant literature using the Scopus and WoS online databases. The Google Scholar search engine was not consulted due to its incomplete support for Boolean operations in its advanced search features and the non-disclosure of the algorithm by which search results are presented [45]. Furthermore, Google Scholar presents location-specific results due to the inherent characteristics of the Google search engine; thus, results in one part of the world may differ from another even with the same keywords and strings. Pre-defined keywords and semantic search strings were used to search for the relevant literature. The selected keywords were CV, vision-based, real time, automated, and CPM. The asterisk (*) serves as a wildcard and was added to the keywords 'automat*' and 'construction progress*' in the search strings to capture similar keywords, such as 'automated' and 'automatic' and 'construction progress monitoring (CPM)' and 'construction progress tracking'. The details of the keywords and semantic search strings are shown in Table 1.
The search was restricted to the years 2011 to 2021. The initial search returned 233 and 121 research articles from Scopus and WoS, respectively. The search was then narrowed down to only two relevant research areas—engineering and computer science—to keep a relevant research focus; this restriction resulted in 195 and 103 research articles from Scopus and WoS, respectively, showing that most research articles related to the selected keywords belonged to these two research areas. Other restrictions included the selection of journal articles, conference papers, review articles, and book chapters. A final restriction was applied to the language of the retrieved literature, in addition to the 2011–2021 publication window. Later, a duplicate analysis was performed using the MS Excel feature for finding and removing duplicate entries. In the final step of literature retrieval, abstract screening and full-text screening were performed. Based on this screening process, irrelevant literature was discarded, and a final selection was made for further review and analysis.

2.2. Systematic Literature Review Process

A systematic review approach was adopted to achieve the objectives of this study. This approach is evidence-based and delivers a clear and comprehensive overview of available data on a given topic. The primary purpose of this method is to plan, identify, analyze and summarize the findings of all relevant studies. The systematic review approach is transparent, allowing other researchers to reproduce the results by repeating our methodology.
A search for available literature was performed according to the PRISMA guidelines. The PRISMA protocol aims to enable authors to improve the reporting of systematic reviews and meta-analyses. The PRISMA-recommended flowchart for this study is shown in Figure 1.
Observing the PRISMA 2020 protocols [46], as adopted by [43,44], the following guidelines were ensured:
  • Protocol and registration: This review aims at retrieving and reviewing literature from Scopus and WoS databases based on pre-defined keywords. Furthermore, the review is limited to the literature published from 2011 to 2021.
  • Eligibility criteria: The literature with the pre-defined keywords present in its title, abstract, and keyword sections are selected.
  • Information sources: Two renowned and reliable research databases, i.e., Scopus (scopus.com/search/form.uri?display=basic#basic, 15 May 2022) and WoS (https://www.webofscience.com/wos/woscc/basic-search, 15 May 2022) are consulted for the literature search and retrieval. Both databases can be accessed using the provided links.
  • Search: The complete search process including the limits used during the database search is presented in Table 1 of this manuscript.
  • Study selection: The study selection process involves screening the pre-defined keywords, identifying and removing the duplicates, and a qualitative analysis based on abstract and full-text screening for prominent codes and themes.
  • Data collection process: Relevant literature/data are collected by referring to the online scholarly databases, i.e., Scopus and WoS, using the most suitable pre-defined keywords.
  • Definition for data extraction: One author performed the independent data extraction using pre-defined data fields and processes and by ensuring the quality indicators.
  • Risk of bias and applicability: As the processes are not ranked or subjectively assessed, the risk of bias in individual studies affecting this systematic review is not applicable.
  • Diagnostic accuracy measures: Since no test is being applied and tested in this systematic review, the diagnostic accuracy measure does not apply to this research.
  • Synthesis of results: The collected information is properly analyzed and summarized into relevant categories to understand the evidence present. The results are also compared to other research studies for consistency of the findings.
Since the retrieved literature comprised 180 research articles, an MS Excel spreadsheet was utilized. The relevant information regarding all the studies was listed in different columns from left to right, with the main findings listed towards the end of the spreadsheet. Most of the required information was retrieved from the CSV files downloaded from the Scopus and WoS databases. The findings of this study were further divided according to the sub-processes involved in the CV-based CPM process. Apart from the sub-processes, this analysis also identified and listed all the methods and techniques involved in the overall process based on their ability to enable a specific sub-process, i.e., data acquisition, information retrieval, progress estimation, or output visualization. Finally, the 'sort' function was used intensively to filter the desired results throughout the research duration. Moreover, apart from identifying and categorizing the sub-processes and techniques involved in the CV-based CPM process, an assessment was performed to categorize the available techniques based on the level of automation they provide. The assessment was based on how the relevant research studies demonstrated the use of such techniques.
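For readers who prefer a scripted alternative to the spreadsheet workflow described above, the following minimal sketch performs the same merge, duplicate-removal, and sort steps in Python with pandas; the column names ("Title", "Year") and file names follow typical Scopus/WoS CSV exports but are assumptions here, not the exact files used in this review.

```python
import pandas as pd

# Merge the two database exports (hypothetical file names) into one screening sheet.
scopus = pd.read_csv("scopus_export.csv")
wos = pd.read_csv("wos_export.csv")
merged = pd.concat([scopus, wos], ignore_index=True)

# Duplicate analysis: a record retrieved from both databases is kept only once.
merged["title_key"] = merged["Title"].str.lower().str.strip()
deduplicated = merged.drop_duplicates(subset="title_key").drop(columns="title_key")

# 'Sort' step used throughout the analysis to filter and inspect the records.
screening_sheet = deduplicated.sort_values(["Year", "Title"])
screening_sheet.to_csv("screening_sheet.csv", index=False)
print(len(merged), "retrieved ->", len(deduplicated), "after duplicate removal")
```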

3. Assessment of the CV-Based CPM Process

CPM entails periodically measuring on-site progress and comparing the data with the planned schedule to obtain the actual status of a construction project [47,48]. Traditional CPM practices involve manual data collection, which requires human intervention and hence is slow, error-prone, and labor-intensive [26,49,50]. To overcome these issues, various automated CPM processes have been proposed [4,51,52]. These processes include, but are not limited to, the use of enhanced information, geospatial, and imaging technologies [7]. The imaging technologies comprise photogrammetry, videogrammetry, laser scanning, and range imaging. Laser scanning is a promising tool for as-built DAQ due to its accuracy; however, it requires expensive equipment, is technically complex, and requires experts to capture, model, and manipulate the data for meaningful interpretations [17]. An alternative is CV-based technology, which comprises photogrammetry, videogrammetry, and range images [53,54,55,56].
CV-based CPM comprises four sub-processes: DAQ, information retrieval, progress estimation, and output visualization [7,17]. Each sub-process involves various methods and techniques to achieve the desired output, each posing several benefits and limitations. Table 2 presents an overview of the CV-based CPM process, identifies the sub-processes involved, and summarizes the techniques that enable each sub-process. Furthermore, it presents a concise overview of the advantages and limitations of each identified technique with the required references. The following sections and subsections discuss the information presented in Table 2 in detail.

3.1. DAQ

Successful project management and delivery require control over all aspects of the project, e.g., resource usage, including labor hours, material, and equipment [8]. For efficient project control, project management teams require an accurate strategy for collecting data from the worksite and comparing it with as-planned data to stay aware of progress and deliver the project within the planned cost and time [88]. DAQ is the first sub-process of the CV-based CPM process and refers to the collection of vision datasets as inputs for the said process. Construction projects are complex and involve hundreds of activities, which create an unstructured and complex environment [89]. Various activities are performed simultaneously at a construction site, with hundreds of laborers, pieces of equipment, and materials present at all times. A CV-based system requires an accurate vision dataset—image and video datasets are collectively called vision datasets—to identify features or create a point cloud dataset. Owing to construction site complexity, it is challenging to obtain a clear vision dataset of either photos or videos for efficient DAQ [90]. Many studies have proposed DAQ techniques using various digital cameras [41,91]. The literature review revealed that the following methods are being used to capture site data that can provide input for a CV-based CPM system.

3.1.1. Unmanned Aerial Vehicles (UAVs)

A UAV is a generic aircraft design with no human pilot onboard to operate the aircraft [92]. Recently, UAVs have rapidly entered the architecture, engineering, and construction industry, and their use is expected to grow in the future [93]. For CV-based applications, UAVs are equipped with an optical sensor or a digital camera. Modern UAVs are also equipped with a communication system to transmit the captured vision dataset in real time [57]. UAVs are quick and cost-effective and allow data collection at places inaccessible to ground-based or manned vehicles [94]. To capture an accurate vision dataset, UAVs require an expert operator and a well-planned flight path with various data capturing angles. However, modern UAVs allow a pre-planned flight path to be programmed into them, allowing a certain degree of automation in DAQ [58,95].
Mahami et al. attempted to reduce the number of photos required to create an accurate point cloud model and experimented in a physical construction environment. A high-quality camera was attached to a UAV, which acquired the vision dataset to extract the measurements of as-built walls and calculate the volume of work achieved. The proposed method, with data acquired through UAVs, reported 99% accuracy for the volume of completed work [62]. Similarly, Kielhauser et al. [64] attempted to estimate the cost of UAV deployment for CPM and quality management and selected a mixed-use commercial building as a test project. The UAV was programmed for an automatic flight on a pre-determined path targeting only an external wall section and a concrete slab. The study acquired the volumetric as-built data and compared it with the as-planned model to estimate the percent completion of the targeted activities. The study successfully demonstrated the use of UAVs for progress monitoring; however, it reported the untidiness and clutter of construction sites as a potential hindrance to data acquisition through UAVs. The usefulness of UAVs for acquiring data to enable the CV-based CPM process is evident; however, studies have reported several limitations to their adoption: their use is mostly limited to external construction features, the overall process is time-consuming and requires expert manpower, and it requires costly equipment and is hence costlier than traditional practices in the field [28,64,93].
Useful 3D point clouds can be generated if all features of the construction process are visible throughout the UAV's flight path [65]. UAV-enabled vision DAQ has been compared with crane-mounted and terrestrial handheld digital cameras, showing that the UAV-enabled technique was more efficient and flexible and enabled better coverage [91]. Most studies have explored and reported the benefits of UAVs for exterior construction; hence, the use of UAVs for interior CPM is the least explored research area [16,65]. In summary, UAVs are very useful tools for capturing the vision dataset for an automated CV-based CPM process, provided that there is a good-quality digital camera, a global positioning system, a communication system, and a well-programmed automated flight path covering all possible elements of a construction project [26,96,97].

3.1.2. Handheld Devices

A handheld device is any compact and portable device that can be held, carried, or used with one or both hands. The use of handheld imaging devices, such as smartphones and digital cameras, is common at present [66]. Smartphone, digital single-lens reflex, mirrorless, film, and 360° cameras are well-known handheld devices for acquiring vision datasets. From setting out the acquisition geometry and collecting the vision datasets to transmitting the data for further processing, the workflow is manual [91]. Various studies have explored the potential of handheld devices for vision DAQ to measure construction progress based on feature detection, e.g., concrete walls, drywalls, and bricks [40,67,82]. Daily site photologs captured by handheld devices are useful for generating point clouds, identifying various construction features, and estimating construction progress [19,21,78]. For example, Golparvar-Fard [73] identified that construction site staff usually take more than 500 photos a day using several handheld and off-the-shelf cameras, and utilized the unordered daily site photologs to extract useful information to be compared with the as-planned model. The study successfully demonstrated the extraction of point cloud models for comparison and analysis after automatically ordering and calibrating the photos and removing occlusions. However, the study focused entirely on the technical feasibility of the proposed concept rather than on the progress monitoring and tracking of various construction features. Mahami et al. [19] photographed a real construction environment and extracted the measurements of external construction features. This study used a handheld camera, and the photographer moved around the entire site taking photos at specific intervals, varying angles, and fixed orientations, making the data acquisition process entirely manual and labor-intensive. Early research in CV-based CPM utilized unordered daily site photologs and other vision datasets captured specifically for extracting point cloud models, but the focus has shifted towards acquiring the vision datasets automatically without human intervention. Handheld devices provide certain flexibility during vision DAQ in adapting to site conditions and the types of data required; however, their coverage is limited, and they are not useful for an automated CV-based CPM process.

3.1.3. Fixed on Mounts

The term fixed on mounts refers to various camera systems mounted on camera stands, poles, formwork, cranes, robots, etc., for collecting the required vision datasets. These systems can be designed to capture vision data on a short- or long-term basis. They are mounted at a specific place to capture the required elements at the desired angles. These systems are sometimes connected to a wired or wireless communication system to transmit data for further processing [41].
Various studies have employed these systems for element recognition, 3D point cloud generation, and progress calculation [41,72,73]. A few studies have also mounted these systems on a crane to cover a large area and provided less occluded 3D point clouds for construction progress estimation [33,74]. One study [74] addressed the technical challenge of multi-building extraction and alignment of as-built point clouds. The study utilized data captured by two stereo cameras installed on a tower crane at a mixed-use construction project including a shopping mall, hotel, housing, and offices. The authors reported the successful acquisition of vision datasets using crane-mounted cameras and subsequent analyses to estimate construction progress. Tuttas et al. argued that despite the effort of installing a camera on a crane jib, the associated maintenance, the limited range of motion of the crane, and its fixed position in a single plane of view, data acquisition from crane-mounted cameras can be designed as a fully automated process [91]. A self-navigating robot-mounted camera system has been explored to create 3D point clouds of a building interior, and the usefulness of such systems for various construction management purposes has been reported [70]. A fixed-on-mount vision DAQ technique can be fully automated by equipping the camera system with a pan–tilt–zoom function and programming a pre-determined coverage area into it [41,91]. Despite several proofs-of-concept, cameras fixed on mounts do not provide complete coverage of the construction site or of all construction features, making the resulting point clouds fragmented. Future research should explore acquiring vision datasets using multiple cameras and integrating the outputs to obtain more detailed point cloud models and hence a more accurate comparison between as-planned and as-built.

3.1.4. Surveillance Cameras

Surveillance cameras are video cameras installed to observe an area for multiple security and monitoring-related purposes. These camera systems transmit video and audio signals to a digital video recorder where the video data can be viewed, recorded, or processed for the required purpose.
Few studies have attempted CV-based CPM using the video feed or video data from surveillance cameras installed on a construction site, as opposed to most available studies, which explored the viability of image data and retrieved information by image processing using various techniques [75]. For example, Wu et al. [59] recognized the work cycles of an earthmoving excavator by constructing its Stretching–Bending Sequential Patterns (SBSP). This study utilized long video sequences and recognized the complete cycle of an excavator, i.e., digging, hauling, swinging, and dumping; it accurately recognized the work cycle and estimated the progress of the equipment by multiplying the excavator's capacity by the number of cycles counted. Another study [32] presented a framework encompassing object detection, instance segmentation, and multiple object tracking to collect the location and temporal information of precast concrete wall installation on a construction site. However, that study reported that the movement and view range of the surveillance camera on a construction site significantly influence the effectiveness of the vision datasets. Video data from surveillance cameras have been successfully processed to obtain the progress of various prefabricated construction elements and the working of machinery at a construction site [32,59,75]. Surveillance cameras can be a potential DAQ technique for the automated CV-based CPM process, provided a well-planned layout is used and several cameras are installed throughout the vicinity of the construction site [75].
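To illustrate the cycle-counting idea mentioned above, the sketch below converts a counted number of excavator work cycles into an earthmoving progress estimate; all numbers (bucket capacity, fill factor, planned volume) are made-up assumptions for illustration only.

```python
# Hypothetical values: cycles come from the vision-based cycle recognition step,
# the remaining figures from the equipment specification and the baseline schedule.
cycles_counted = 420          # dig-haul-swing-dump cycles recognized in the video
bucket_capacity_m3 = 1.2      # assumed excavator bucket capacity
fill_factor = 0.9             # assumed average bucket fill
planned_volume_m3 = 2500.0    # total excavation volume in the baseline schedule

moved_m3 = cycles_counted * bucket_capacity_m3 * fill_factor
progress = 100 * moved_m3 / planned_volume_m3
print(f"earth moved ~ {moved_m3:.0f} m3, i.e. {progress:.1f}% of the planned volume")
```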
Table 3 presents the techniques that enable data acquisition for the CV-based CPM process identified through the literature and summarizes the purpose of the identified techniques for the said process. It also presents the information extracted from a detailed review of the retrieved literature on how these techniques were deployed and utilized by each referenced research study and categorizes them into manual, semi-automated, and fully automated based on the level of human intervention involved. This summary helps in understanding the level of automation provided by each technique for acquiring vision datasets from the complex construction environment.

3.2. Information Retrieval

The acquired vision datasets contain vital as-built information from the construction environment. In the construction environment, the data collected from the worksite hold significant importance as they help in analyzing and reporting the progress of the project and enable project management teams to gain valuable insights regarding the actual status of the project in terms of physical progress, earned labor hours, material consumed, equipment utilized, etc. [1]. Once the DAQ has been performed and data are transmitted or transferred to a storage medium, the next and most important sub-process is to extract useful information from the vision data. Information retrieval is performed through signal processing or, more precisely, image processing. Images from an image dataset or frames from a video dataset are the inputs, and the outputs are usually some characteristics or features associated with the inputs. For CPM, usually, the information retrieval sub-process aims to obtain an as-built model in the form of a 3D model or a 3D dataset, which is then compared with an as-planned model to estimate the progress of various activities of a construction process [80].
The articles retrieved and selected for this study have proposed various techniques to extract the required information from the data acquired. These can be grouped into four distinct categories, i.e., (1) classification, (2) edge detection, (3) quantification, and (4) object tracking [98]. In addition, each category has several other techniques to process the associated vision datasets. The key techniques are discussed below.

3.2.1. Structure from Motion (SfM)

SfM is a technique that reconstructs a 3D structure/model/point cloud using 2D images of a scene or an object. It is a photogrammetric imaging technique and lies in the quantification category along with digital image correlation [98]. The term quantification means a method of obtaining real-life measurements from a 2D image dataset [99]. SfM reconstructs 3D models by matching features in various images and estimating the relative position of a camera. The inputs are in the form of image data with recommended 60% side overlap and 80% forward overlap between images to realize high-quality and detailed 3D models or 3D point clouds as outputs [100]. This technique automatically detects and matches features from an image dataset of varying scales, angles, and orientations. Various studies have demonstrated the use of image-based reconstruction utilizing high-quality images taken from the construction environment for progress monitoring, productivity measurement, quality control, and safety management, providing the project management teams with a remarkable opportunity to visualize as-built data [19,31,38,62].
Unordered image collections from construction sites have been used in various studies to test the effectiveness of SfM, and high accuracy of the generated 3D as-built models has been reported [73,77]. Moreover, [38] utilized high-quality images taken from the interior scene of a construction project to demonstrate image-based 3D reconstruction through SfM and compared the result with the output of a laser scanner. The study concluded that the accuracy of the model generated from image-based reconstruction was lower than that of the laser scanner; however, the proposed approach automatically overlays the high-resolution images onto the 3D point cloud models, demonstrating its potential for progress monitoring through as-built visualization. Another study [19] collected several images from suitable positions at two real-life construction projects, i.e., one-story and two-story residential buildings. The SfM technique was deployed to generate a 3D point cloud model for the two case study projects, and quantities were calculated using the proposed technique. This study reported 99% accuracy and identified that the system becomes less accurate as the length of the building/element increases. The process of reconstructing a 3D model from an image dataset remains reliant on human intervention at various steps to improve the output quality.
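As a concrete illustration of the feature-matching and pose-estimation steps that underpin SfM, the sketch below reconstructs a sparse point cloud from just two overlapping site photos using OpenCV (4.4 or later); the file names and camera intrinsics K are illustrative assumptions, and full pipelines add many views and bundle adjustment.

```python
import cv2
import numpy as np

img1 = cv2.imread("site_photo_01.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical site photos
img2 = cv2.imread("site_photo_02.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[2400.0, 0, 2000], [0, 2400.0, 1500], [0, 0, 1]])  # assumed camera intrinsics

# Detect and match local features between the two overlapping images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
p1 = np.float32([kp1[m.queryIdx].pt for m in good])
p2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Estimate the relative camera pose, then triangulate matched points to 3D.
E, mask = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
points3d = (pts4d[:3] / pts4d[3]).T          # sparse as-built point cloud (up to scale)
print(points3d.shape[0], "points triangulated")
```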

3.2.2. Convolutional Neural Network (CNN)

CNN is a technique that identifies and differentiates various objects or features in an image by assigning weights and biases to them [101]. CNN is a Deep Learning (DL) algorithm and falls under the feature detection/classification category of CV-based analysis. Long Short-Term Memory (LSTM), which can analyze and obtain information from video frames, belongs to the same category. The term DL refers to Machine Learning (ML) in an artificial learning environment that is capable of learning from unlabeled or unstructured data without supervision. A CNN comprises convolutional and pooling layers; usually, a pooling layer is added after a convolutional layer. The input is an image; the convolutional layer scans over the image with matrix-based filters and identifies features. The pooling layer then reduces the number of parameters to learn and the computations required by the network, thereby reducing the size of the feature maps, which are summarized versions of the features detected in the input [102]. In recent years, CNN-based techniques have seen further development in the construction domain [103]. Object detection and tracking have attracted many researchers: unsafe behaviors were detected by tracking workers walking on formwork supports [104], diverse construction activities were recognized to save the valuable time of project management teams [105], a single worker and piece of equipment were tracked over long periods to calculate productivity [106], and multiple workers and machines were tracked for productivity estimation [107].
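The convolution-then-pooling structure described above can be made concrete with a minimal PyTorch sketch; the layer sizes and the two-class output (e.g., a construction element present or absent in a 64 × 64 image patch) are illustrative assumptions rather than any published model from the reviewed studies.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Minimal convolution + pooling stack for binary patch classification."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution scans for local features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling shrinks the feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                                # x: (batch, 3, 64, 64) image patches
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
dummy_batch = torch.randn(4, 3, 64, 64)                  # four random 64x64 RGB patches
print(model(dummy_batch).shape)                          # -> torch.Size([4, 2])
```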
A few researchers have used CNNs to monitor the progress of construction machinery and the installation of various prefabricated components [32,59,108]. In a recent study [32], the installation of precast concrete walls was monitored by detecting and tracking individual wall panels using the video feed from a surveillance camera installed on a construction site. This vision method was designed to obtain two types of information, i.e., time information and location information. The study reported the usability of such algorithms for CPM purposes and directed further research to extend this technique to detect other construction features as well. Similarly, another study [59] demonstrated a combination of CNN techniques to identify the work cycle of an earthmoving excavator using long video sequences. This study demonstrated the feasibility of calculating the stretching–bending cycles of the excavator to estimate the quantity of earth moved during the overall operation. However, the proposed technique was relatively simple, and further research was directed toward exploring the viability of such techniques for accurate measurement of work cycles and hence accurate measurement of progress. The CNN process requires pre-training of the algorithm to efficiently identify various features from the input and can automate the entire process.

3.2.3. Support Vector Machines (SVM)

SVM is a technique that classifies the features or information in an image by assigning positive and negative values to features across a hyperplane. SVM is a classifier that lies in the classification/feature detection category. Unlike artificial neural networks (ANNs), SVM is a supervised ML technique highly regarded for its two-group classification with a high degree of accuracy; however, multigroup classification can be achieved by dividing a problem into several two-group classification problems [30]. The input is an image, and a pre-trained SVM classifier performs a binary analysis, classifying various features by drawing a hyperplane between the two groups.
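The two-group hyperplane idea can be illustrated with scikit-learn; the synthetic eight-dimensional vectors below stand in for image descriptors (e.g., colour/texture statistics of material patches) and are purely an assumption for demonstration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-ins for descriptors of two material classes (hypothetical values).
rng = np.random.default_rng(0)
concrete = rng.normal(loc=0.3, scale=0.1, size=(200, 8))
drywall = rng.normal(loc=0.7, scale=0.1, size=(200, 8))
X = np.vstack([concrete, drywall])
y = np.array([0] * 200 + [1] * 200)          # 0 = concrete, 1 = drywall

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0)               # two-group classifier separated by a hyperplane
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```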
Various studies have implemented SVM to detect construction materials and estimate project progress [29,109]. For example, [109] inferred the construction activity of girder launching for a rail project. That study utilized the structural responses collected from the girder-launching equipment and identified the exact state of a girder, i.e., auto launching, segment lifting, post-tensioning, or span lowering. However, the study was a demonstration of such techniques; it highlighted the limitations of relying only on structural responses and directed future studies to integrate more sensors for accurate feedback. Another study [82] investigated the installation of drywall using a video feed from an interior construction environment. Based on the identification of three different states of a drywall panel during construction, i.e., installation, plastering, and painting, the progress of drywall installation was measured. The SVM was trained with the extracted features to demonstrate the success of the proposed technique. The learning of SVM can be significantly improved using the k-nearest neighbor algorithm [110]; however, few studies have tested its performance on a real-world construction site with a high degree of uncertainty, occlusion, and variability.

3.2.4. Simultaneous Localization and Mapping (SLAM)

SLAM is a technique for reconstructing or updating a 3D map of an unknown location while navigating through it [111]. SLAM is similar to SfM; however, SLAM maps an environment in real time. Like SfM, SLAM is a photogrammetric technique and lies in the quantification category. SLAM learns by moving through an environment and searching for known features, which can be achieved in a single pass or over multiple passes. The inputs are images obtained from video frames, and the outputs are feature points [70].
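A full SLAM system is beyond a short example, but the sketch below shows its front end—tracking features frame to frame in a video and chaining the recovered relative poses into a trajectory—using OpenCV; the video file name and intrinsics K are assumptions, and loop closure and map optimization, which distinguish true SLAM, are omitted.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("site_walkthrough.mp4")                 # hypothetical walkthrough video
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed camera intrinsics

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
pose = np.eye(4)                                               # accumulated camera pose
prev_kp, prev_des = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    kp, des = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
    if prev_des is not None and des is not None:
        matches = matcher.match(prev_des, des)
        if len(matches) > 50:
            p1 = np.float32([prev_kp[m.queryIdx].pt for m in matches])
            p2 = np.float32([kp[m.trainIdx].pt for m in matches])
            E, mask = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
            if E is not None and E.shape == (3, 3):
                _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
                T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t.ravel()
                pose = pose @ np.linalg.inv(T)                 # chain relative motion
    prev_kp, prev_des = kp, des

cap.release()
print("estimated final camera pose (up to scale):\n", pose)
```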
A preliminary study investigated the effectiveness of SLAM and reported its potential application in tracking construction equipment [60]. This pilot study demonstrated real-time 3D reconstruction of a construction environment using visual SLAM and a UAV and discussed the use of the proposed technique in three different applications: calculating the volume of earthwork between two instances, measuring the progress of pavement compaction by tracking the equipment on a job site, and tracking site assets, e.g., labor, equipment, and material. The study proposed a primitive SLAM algorithm and highlighted various limitations, i.e., limited performance in complex construction environments, the limited sensing range of visual sensors, memory management, and the difficulty of maneuvering UAVs through construction worksites. Despite this demonstration, the technique is less explored for construction progress estimation, and its effectiveness is subject to further research in this domain.

3.2.5. Cascading Classifiers (CC)

CC is a technique for detecting and tracking an object or a feature in an image. CC lies in the classification/detection category [81]. It is an ML-based approach in which the classifier is trained by inputting many positive and negative images. The positive images are those intended to be recognized by the classifier; all others are negative. The inputs are images from a construction environment, and a pre-trained CC identifies various features from the dataset and indicates or highlights them on the input images. The accuracy of this technique depends on a detailed algorithm and pre-training using a well-sorted image dataset.
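A minimal detection sketch with OpenCV's CascadeClassifier is shown below; since OpenCV only ships generic pre-trained cascades, the XML file for a construction feature (e.g., a drywall panel) is a hypothetical cascade that would first have to be trained from positive and negative samples, and the image file name is likewise an assumption.

```python
import cv2

cascade = cv2.CascadeClassifier("drywall_panel_cascade.xml")  # hypothetical trained cascade
image = cv2.imread("interior_view.jpg")                       # hypothetical interior image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Slide the cascade over the image at multiple scales and keep confident detections.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark detected features
cv2.imwrite("cascade_detections.jpg", image)
print(len(detections), "candidate features detected")
```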
A few studies have attempted to use CC in progress monitoring by detecting construction features such as drywall or concrete walls and have reported good performance [37,82]. For example, [37] attempted to automate progress monitoring for the interior construction environment and focused on visualization and computer vision techniques using an object-based approach. In the proposed approach, the study compares the as-built and as-planned conditions in a 3D walkthrough model. A rapid object detection scheme based on a Haar-like cascading classifier was deployed to detect features from the acquired vision dataset. The cascading classifier utilized in this study was first trained to detect specific construction features from the images using a couple of hundred positive and negative samples. However, the proposed algorithm was limited to specific construction features, and the study suggested that detecting multiple features in a complex construction environment requires further research towards modifying and improving such algorithms. The supervised training of this technique makes it less desirable for a fully automated CV-based CPM process.

3.2.6. Histogram of Oriented Gradients (HoG)

HoG is a feature descriptor used for object detection. HoG is a feature extraction technique and lies in the classification/detection category. HoG identifies features in an image by returning a descriptor for each cell that it creates when an input image is given to the algorithm. Each input is decomposed into small cells or blocks, and the algorithm computes the histogram of oriented gradients by counting the occurrences of gradient orientations in each cell or block, returning the detection of various features present in the image. This technique accurately detects various construction features by focusing on their shapes [112]. Detection methods that rely on visual features, e.g., shape and color, have been proposed and tested in construction scenarios, and the HoG feature is among the two most popular shape-based features used to detect construction workers and equipment [68].
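As an illustration of HoG-based detection, the sketch below uses OpenCV's built-in HoG descriptor with its default pedestrian (people) detector as a stand-in for detecting workers in a site image; the file name is an assumption, and detecting equipment would require a HoG + SVM model trained on equipment samples instead.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # pre-trained person model

frame = cv2.imread("site_view.jpg")                               # hypothetical site image
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # detected workers
cv2.imwrite("workers_detected.jpg", frame)
print(len(rects), "worker-sized detections")
```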
A few studies have explored the effectiveness of the HoG technique in CPM using CV-based datasets by identifying and tracking construction workers and equipment [81,83]. For example, [83] attempted to automate the estimation of earthmoving progress by monitoring the movement of dump trucks on large-scale construction projects. The study evaluated a combination of HoG algorithms to recognize off-highway dump trucks in a noisy video stream. It effectively demonstrated the ability of HoG algorithms to detect truck activity in an effective and timely manner and presented its usefulness in productivity measurement, performance control, and other safety-related applications on large-scale civil construction projects. The combination of HoG with tracking techniques has reported very precise detection and tracking of workers and equipment on a construction site for CPM purposes [68]. However, very few studies have explored this technique, and further research is required to ascertain its effectiveness in CPM-related applications.

3.2.7. Laplacian of Gaussian (LoG)

LoG is a well-known algorithm for detecting edges and is widespread in image processing and CV applications. The Laplacian algorithm is used to detect edges but is sensitive to noise; therefore, a Gaussian filter is commonly applied to images to remove noise, yielding LoG as their combination [113]. LoG lies in the edge detection category. LoG filters are derivative filters that work by detecting the rapid changes in an image. They detect objects and boundaries and extract features.
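The smoothing-then-differentiation step that defines LoG can be reproduced in a few lines of OpenCV; the façade image name is an assumption, and a complete brick-counting pipeline such as the one discussed below would add colour thresholding and contour analysis on top of these edges.

```python
import cv2

gray = cv2.imread("facade_wall.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical facade image
blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)         # Gaussian smoothing removes noise
log_response = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)   # Laplacian highlights rapid changes
edges = cv2.convertScaleAbs(log_response)                    # back to 8-bit for saving/viewing
cv2.imwrite("facade_edges.jpg", edges)
```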
CPM by counting bricks has been attempted using LoG, with relatively high precision values reported [39,40]. Hui and Brilakis [39] attempted to automatically count the number of bricks ordered versus consumed to eliminate manual surveys for this purpose. The study proposed a novel, automated method to count bricks during the façade construction of a building. The method utilized images and videos from the construction site and selected color thresholds based on the color of the bricks. LoG was deployed to detect the edges of the bricks in the constructed wall, and known features, i.e., shape and size, were compared to accurately detect the number of bricks. However, the implementation of LoG in CPM is one of the least explored areas. The technique requires various manual steps to achieve the desired accuracy, making it a less desirable option for automating the entire CV-based CPM process.

3.2.8. Speeded-Up Robust Features (SURF)

The SURF technique is a template matching technique that detects features from an image. It is a feature extraction or detection technique that lies in the classification category [114]. It can be used for object recognition, image registration and 3D point cloud generation. The SURF technique computes operators using a box filter that enables fast computation, thereby allowing real-time object detection and tracking.
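The sketch below illustrates SURF feature matching between two time-lapse frames, the kind of alignment step used to cancel camera vibration before comparing frames; note that SURF is patented and only available in opencv-contrib builds (cv2.xfeatures2d), with ORB as a free alternative, and the file names are assumptions.

```python
import cv2

img_t0 = cv2.imread("frame_0800.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical time-lapse frames
img_t1 = cv2.imread("frame_0900.jpg", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)      # requires opencv-contrib-python
kp1, des1 = surf.detectAndCompute(img_t0, None)
kp2, des2 = surf.detectAndCompute(img_t1, None)

# Ratio-test matching keeps only reliable correspondences between the two frames.
pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
print(len(good), "reliable correspondences")
# These correspondences can feed cv2.findHomography to align the frames before
# checking which prefabricated panels appeared between the two capture times.
```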
A recent study [75] attempted CPM of prefabricated timber construction using surveillance cameras and reported near-real-time monitoring. This study proposed an automated installation-rate measurement system using inexpensive digital cameras installed on the mast of a tower crane. The time-lapse footage of the construction sequence was processed and analyzed for precise progress information. The study successfully demonstrated the ability of SURF to align the images and remove visual differences caused by wind and tower crane vibrations, and it reported a 95% accuracy rate for detected timber panels during the observation period. The study also directed future research efforts towards the proper setup of the equipment to ensure a minimum level of noise in the footage and towards algorithm improvements for multiple camera feeds. The process of point cloud generation and registration can also be enhanced using this technique [70]. The implementation of this technique in CV-based CPM can automate the whole process; however, assessing its benefits in achieving the intended targets requires further research efforts.
Table 4 summarizes the techniques that enable the information retrieval process for the CV-based CPM process. The reviewed literature most prominently explored the SfM and SVM techniques in the early stages of the research in this domain. However, as evident from the number of studies, the CNN technique has seen increased interest from researchers and has been explored to enable the CV-based CPM process more efficiently. This table also highlights the fact that most of the studies manually extracted the information using the SfM technique as compared to SVM and CNN. Furthermore, the SVM technique provides a certain level of automation after being trained through a supervised learning process. However, the CNN technique has been providing the most automation for the said process as per the findings from the literature.

3.3. Progress Estimation

Progress estimation is the process of determining whether construction execution is proceeding according to a pre-planned or baseline schedule. In CV-based CPM, this process can also be termed the comparison between as-built and as-planned. This comparison indicates whether the intended construction activities are being executed according to the schedule, upon which construction managers can take the necessary actions to keep the project on track and avoid construction delays. Distinct techniques were proposed in the articles retrieved and selected for this study to obtain the necessary information on the progress status of various construction activities, either by comparing as-built and as-planned models or by other means. The following subsections discuss the most frequently used progress estimation techniques.

3.3.1. Building Information Models (BIMs) Registration

Building information modeling is a process of creating and managing digital representations of any built entity in a highly collaborative environment. BIMs are highly intelligent, data-rich, and object-oriented models; they not only represent the various objects and spaces of buildings but also contain knowledge of how these objects and spaces relate [115]. These qualities make BIMs efficient as-planned models against which as-built data can be compared to estimate progress [116]. Usually, 4D BIMs, which are 3D models integrated with the fourth dimension, i.e., time, are used as as-planned models to be compared with superimposed as-built models [17]. For automated CV-based CPM, many studies have attempted various techniques for acquiring 3D point clouds or as-built models and reported the intended results by comparing the as-built data with as-planned BIMs [84,85,117,118].
The process of superimposing an as-built model over an as-planned model is called registration. The registration process requires post-processing of the acquired CV-based data to remove noise. There are two distinct methods of image–model registration: coarse registration and fine registration. Coarse registration, along with post-processing, allows for rough alignment, whereas fine registration can achieve near-optimal alignment [74]. Coarse registration can be achieved through various approaches, such as plane-based matching, principal component analysis-based alignment [47], plane patch-based matching, 3D-to-2D transformation [119], and building extraction and alignment for multi-building point clouds [74]. For example, [84] proposed a semi-automated plane-based coarse registration approach, compared it with existing general-purpose registration software, and reduced the complexities and time requirements associated with this process. The system addressed the issues of self-similarity at the object and model level through a semi-automated matching stage and demonstrated resilience and robustness in challenging registration cases. Plane-based registration finds matching planes in the as-built and as-planned datasets and aligns both models [84]. The plane patch-based system, however, represents the state of the art of current practice, allowing automatic registration using a four-plane approach rather than the three-plane approach of the plane-based process [85]. Moreover, the Iterative Closest Point (ICP) algorithm is most frequently used for fine registration in CV-based CPM studies [47,74,84,119]. For example, [120] proposed a fully automated registration of 3D data to a 3D CAD model for CPM purposes. This study deployed a two-step global-to-local registration procedure, i.e., principal component analysis-based global registration followed by ICP-based local registration, and demonstrated that the proposed technique not only fully automates the process but is also beneficial for project progress monitoring. In conclusion, a few studies have proposed semi-automated and fully automated registration techniques and directed further research efforts towards exploring the usability of these techniques for CPM in the real construction environment.
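The coarse-then-fine registration idea can be sketched with the open-source Open3D library: a rough initial transform (standing in for a plane-based or manual coarse alignment) is refined with point-to-point ICP; the file names, the identity initial transform, and the 5 cm correspondence tolerance are all assumptions for illustration.

```python
import numpy as np
import open3d as o3d

as_built = o3d.io.read_point_cloud("as_built_scan.ply")      # hypothetical as-built cloud
as_planned = o3d.io.read_point_cloud("bim_sampled.ply")      # points sampled from the BIM surfaces

coarse_init = np.eye(4)                                       # placeholder coarse alignment
result = o3d.pipelines.registration.registration_icp(
    as_built, as_planned,
    max_correspondence_distance=0.05,                         # assumed 5 cm search radius
    init=coarse_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("ICP fitness (fraction of as-built points matched):", result.fitness)
as_built.transform(result.transformation)                     # fine-registered as-built model
o3d.io.write_point_cloud("as_built_registered.ply", as_built)
```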

3.3.2. Progress Estimation through Object Recognition/Matching

The progress estimation process requires the recognition of various features or objects present in the built environment following the model registration process. Mere superimposition of the as-built and as-planned datasets does not, by itself, yield useful progress information. It requires proper recognition, identification, or classification of construction features in the as-built point cloud, which can then be compared against the information available in the object-based as-planned BIMs to retrieve progress information.
Many scholars have proposed techniques to detect, classify, and recognize various construction features, such as walls, panels, tiles, ducts, doors, windows, and furniture [32,34,121]. A few of these techniques are Mask R-CNN [32], DeepSORT [32], voxel-based representation [73], OpenGL [37], probabilistic models [86], PointNet [99], surface-based recognition [84], timestamps [32], point calculation [122], segmentation by color thresholding [72], and color images [89]. These techniques retrieve useful information from object-based models, from which progress estimates are calculated. Apart from the as-built vs. as-planned comparison, a few studies have also estimated progress based on counting equipment cycles [59], material classification [78], material usage [79,82], and installation speed [75].
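As one concrete example of the listed techniques, segmentation by color thresholding can be sketched as follows with OpenCV; the HSV band shown for grey, low-saturation concrete-like surfaces, the minimum region area, and the file name are illustrative assumptions that would need calibration for a real site.

```python
import cv2
import numpy as np

image = cv2.imread("site_photo.jpg")          # placeholder site photograph
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Illustrative HSV band for grey, low-saturation concrete-like surfaces.
lower = np.array([0, 0, 80])
upper = np.array([180, 60, 220])
mask = cv2.inRange(hsv, lower, upper)

# Remove speckle noise and keep only sizeable connected regions.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 2000]

print(f"{len(candidates)} candidate concrete regions detected")
```

Such a rule-based detector is far simpler than the learning-based recognizers cited above, which is why color thresholding is typically confined to well-controlled viewpoints and materials with a distinctive appearance.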
Table 5 compares the various progress estimation methods used for CV-based CPM and categorizes them as manual, semi-automated, or fully automated based on the manual input required throughout the process, according to the information retrieved from the reviewed literature. There are two prevailing techniques. The first involves overlaying the as-built model over the as-planned BIM and comparing the overall models for the volume of work completed, converting the output into percent complete, or identifying specific construction features using a BIM model targeted at a specific type of construction feature, i.e., the RCC structure of a building. The latter technique involves identifying various construction features from the vision dataset and measuring their quantity or the relative position of their installation/construction. A minimal sketch of the volumetric comparison underlying the first technique is given below. In addition, it is evident from the table that BIM registration provides a varying level of automation depending on the complexity of the registration algorithm, and object recognition/matching likewise provides a certain level of automation depending on the type of technique utilized.
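The sketch below illustrates the volume-based comparison in its simplest form; it assumes both datasets have already been registered into a common coordinate frame and uses plain voxel-occupancy overlap as a stand-in for the more sophisticated occupancy reasoning applied in the reviewed studies. The function name, grid size, and toy data are assumptions for illustration only.

```python
import numpy as np

def percent_complete(as_built_xyz, as_planned_xyz, voxel_size=0.1):
    """Percentage of as-planned voxels also occupied by as-built points.

    Both inputs are N x 3 arrays of XYZ coordinates (metres) in the same,
    already-registered frame; voxel_size is an assumed 10 cm grid.
    """
    def occupied(points):
        return set(map(tuple, np.floor(points / voxel_size).astype(int)))

    planned = occupied(as_planned_xyz)
    built = occupied(as_built_xyz)
    return 100.0 * len(planned & built) / len(planned) if planned else 0.0

# Toy example: only the lower half of a 4 m x 0.3 m x 3 m wall has been built.
planned_pts = np.random.rand(10000, 3) * [4.0, 0.3, 3.0]
built_pts = planned_pts[planned_pts[:, 2] < 1.5]
print(f"Estimated progress: {percent_complete(built_pts, planned_pts):.1f}%")
```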

3.4. Output Visualization

Output visualization corresponds to the presentation of useful information or results obtained from the information retrieval or progress estimation. In the CV-based CPM process, output visualization is as essential as DAQ, information retrieval, and progress estimation. The results of this sub-process are crucial to CMTs, as they must make decisions based on the output extracted from the entire process. Traditionally, CMTs use reports, Gantt charts, or other visual techniques. The reviewed literature on the output of the CV-based construction management process suggests a few visualization techniques, which are discussed as follows.

3.4.1. Color Labels

Color labels are the most frequently used form for representing information on a vision dataset. In CV environments, these labels commonly take the form of bounding boxes. They can provide a range of information depending on the purpose of an algorithm or process: the output shown by these labels can be classification, identification, segmentation, verification, detection, recognition, etc. [35,74,123].
A few studies have also superimposed various forms of color labels on the input images to visualize the current state of the construction activities under consideration [39,73,77]. For example, the authors of [77] utilized color labels for performance monitoring: a color label was given to each construction component to indicate whether that component was built ahead of schedule (green label), on time (semi-transparent white label), or behind schedule (red label). Different color variations were also suggested to annotate other factors, e.g., a darker blue label to indicate that a component has not been built as planned. Similarly, another study [73] color-coded the construction elements to indicate whether the element under consideration was behind or on schedule: green labels annotated elements on schedule, red labels were assigned to elements behind schedule, and grey labels indicated elements whose progress had not been reported. However, the proposed thresholds were tailored to specific construction elements and require further research to be proven significant in different cases [33]. The size, shape, and description of these color labels depend on the selected technique or algorithm and can be modified as per the project requirements.
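The sketch below shows how such a color convention could be rendered onto a site image with OpenCV; the element list, bounding boxes, status values, and file names are hypothetical placeholders standing in for the output of an as-built vs. as-planned comparison.

```python
import cv2

# Hypothetical comparison output: detected elements with image-space bounding
# boxes and a schedule status derived from the as-built vs. as-planned check.
elements = [
    {"name": "Column C1", "bbox": (120, 340, 80, 260), "status": "on_schedule"},
    {"name": "Wall W3",   "bbox": (420, 300, 200, 300), "status": "behind"},
    {"name": "Slab S2",   "bbox": (700, 120, 260, 180), "status": "not_reported"},
]

# Colour convention adapted from the reviewed studies (BGR values):
# green = on schedule, red = behind schedule, grey = progress not reported.
colors = {"on_schedule": (0, 200, 0), "behind": (0, 0, 255), "not_reported": (128, 128, 128)}

image = cv2.imread("site_photo.jpg")  # placeholder input frame
for element in elements:
    x, y, w, h = element["bbox"]
    color = colors[element["status"]]
    cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
    cv2.putText(image, element["name"], (x, y - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)

cv2.imwrite("progress_overlay.jpg", image)
```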

3.4.2. Augmented Reality (AR) and Virtual Reality (VR)

AR is an interactive experience of the physical world in which useful information is overlaid on the video feed for multiple purposes and operations [124,125]. Output visualization of CV-based CPM can also be enabled by AR or VR after the vision datasets have been processed to extract the construction progress status. Some studies have explored the use of AR by linking it with processed BIMs for monitoring construction progress [37,42]. An object-based interior CPM approach was proposed in [37], utilizing common as-built construction photographs and displaying the interior construction progress by imposing color and pattern coding based on the actual status. The study reported difficulty in object detection and classification in the interior construction environment and directed future studies to improve the algorithm so that it automatically detects various types of interior objects without manual human intervention. Another study [42] proposed a real-time AR-based system for modular CPM. The proposed system demonstrated an automatic AR registration method based on relative coordinates and a fixed camera and successfully presented a live animation of the construction sequence. However, the study was conducted in a controlled lab environment using a simple mockup of a building. In summary, AR-based visualization requires accurate alignment of BIMs with real-world data. Such accurate positioning requires sophisticated surveying equipment; an alternative approach is to install fiducial markers to locate and estimate the exact position. Mobile-based AR systems can provide the required accuracy along with the construction status, but they require further research before they can be implemented for CV-based CPM [8].
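As a sketch of the fiducial-marker approach, the snippet below detects ArUco markers in a site frame using the cv2.aruco module shipped with opencv-contrib-python (the pre-4.7 interface is assumed); the marker dictionary, file names, and the omitted pose-estimation step are assumptions for illustration, not a reproduction of any reviewed system.

```python
import cv2

image = cv2.imread("site_frame.jpg")                 # placeholder video frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect ArUco fiducial markers placed at surveyed positions on site.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Each marker's known site coordinates plus its image corners give the pose
    # needed to align the BIM overlay with the live feed (pose step omitted here).
    cv2.aruco.drawDetectedMarkers(image, corners, ids)
    print("markers found:", ids.flatten().tolist())

cv2.imwrite("ar_registration_preview.jpg", image)
```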

3.4.3. Earned Value Management (EVM)

EVM is a project monitoring and control technique that integrates cost, time, and scope to calculate project performance [126]. It requires as-planned and actual information on all three constraints to calculate the schedule and cost variances and provides the schedule and cost performance indexes as an alternative means of assessing project performance. EVM is a valued output for CMTs, as it enables them to assess the current state of a project and make the necessary decisions to keep the project on track.
A recent study has suggested that the output of CV-based monitoring of construction projects can be integrated with EVM systems [16]. An automated calculation of EVM indicators can provide necessary project control information to identify potential delays and make useful decisions to control construction delays and cost overruns [16]. However, the retrieved literature does not provide practical evidence of EVM-based output from CV-based CPM.
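To make this integration concrete, the snippet below shows how the standard EVM indicators could be derived once a CV-based pipeline supplies the percent complete (and hence the earned value); the function name and the numbers in the example are invented for illustration.

```python
def evm_indicators(planned_value, earned_value, actual_cost):
    """Standard EVM variances and indexes.

    earned_value could be obtained as (CV-estimated percent complete) x
    (budget at completion); planned_value and actual_cost come from the
    baseline schedule and the cost records.
    """
    return {
        "SV": earned_value - planned_value,                              # schedule variance
        "CV": earned_value - actual_cost,                                # cost variance
        "SPI": earned_value / planned_value if planned_value else None,  # schedule performance index
        "CPI": earned_value / actual_cost if actual_cost else None,      # cost performance index
    }

# Illustrative values: 42% of a 1,000,000 budget detected as built (EV),
# 50% planned to date (PV), and 450,000 actually spent (AC).
print(evm_indicators(planned_value=500_000, earned_value=420_000, actual_cost=450_000))
# SV = -80,000 and SPI = 0.84 (behind schedule); CV = -30,000 and CPI ≈ 0.93 (over cost)
```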
Table 6 summarizes the techniques explored in the literature on the CV-based CPM process. It also states the concise purpose of the identified techniques in enabling the output visualization of the said process for construction project management teams. Furthermore, it compares the identified visualization techniques used for presenting the output of CV-based CPM and categorizes them as manual, semi-automated, or fully automated based on the human intervention required throughout the process. The majority of the research studies utilized color labels of varying shapes and sizes to showcase the outcome of the overall process; these color labels also provide a certain level of automation in presenting the outcome. However, AR and VR are still the least explored, and very few studies have implemented these techniques to present the outcome of the CV-based CPM process.
Figure 2 summarizes the findings of this systematic literature review and provides a holistic overview of the automated CV-based CPM process. The overall process comprises four distinct sub-processes, leading from vision datasets (photos and videos) to statistical outputs (histograms, line graphs, and Gantt charts). Starting from DAQ, the figure summarizes the techniques identified in the literature that enable the acquisition of vision datasets from the construction environment. The dotted line between DAQ and information retrieval represents the disconnect between the two processes: most studies manually transferred the vision dataset from the capturing device to the processing medium, i.e., personal computers, and only a few explored the automatic transfer of vision datasets via wireless media or wired connections. Information retrieval and progress estimation form a cyclical process, as these sub-processes comprise overlapping techniques and the information extraction step is shared between them. Most of the literature initially retrieved the information from the vision dataset using various algorithm-based techniques and later either overlaid this information on as-planned models via BIM registration or deployed object recognition and feature-matching algorithms to identify construction features from the as-built dataset efficiently. Lastly, this study also found a disconnect between progress estimation and output visualization: most of the reviewed studies required additional, complex processing to present the output of their selected technique in a form that helps construction teams make informed decisions about ongoing construction projects.

4. Discussion

This section addresses the final objective of this study, i.e., discussing the role of the identified techniques within the CV-based CPM process and providing an overview of how these techniques aim to improve or replace existing practices. Generally, project success is associated with completing the project within its planned cost and time while achieving an acceptable level of project specifications [127]. To achieve a project's pre-defined time, cost, and quality goals, proactive monitoring of its progress is required. The traditional CPM process requires manual data collection for the multiple activities happening on a single day, compiling the data into useful information, and reporting it to the stakeholders. Hence, traditional methods are labor-intensive, slow, error-prone, and inefficient. Alternatively, CV-based CPM techniques have shown potential in resolving the problems of traditional methods [24]. This section compares traditional practices with the CV-based CPM process across its stages: acquiring progress data from the construction environment, retrieving useful progress-related information from data collection instruments, i.e., Daily Progress Reports (DPRs), estimating progress and calculating the variance between baseline and as-built, and finally presenting the outcome of the CPM process to the consultant or client depending on the contractual obligations. The comparison is as follows.

4.1. DAQ

In a traditional construction setting, the planning engineer prepares a template and hands it over to the site supervisor to collect progress-related information for all the activities planned for that specific day [1]. The data acquired through this practice usually consist of the name of the activity, actual start and end dates, percent completion, number of earned manhours, and amount of work achieved in terms of quantities [128]. Although this process is manual, slow, and labor-intensive, as established earlier, it gives the construction project management teams the information required for further processing and decision making. The DAQ process of CV-based CPM, by contrast, has explored several techniques, i.e., UAVs, handheld devices, mount-fixed cameras, and surveillance cameras, which require specialized skills on the part of the planning department and site staff to prepare for progress-related data collection from the construction environment [129]. It also requires additional cost resources for the necessary equipment, manpower, and other accessories to successfully conduct the DAQ process on a construction site and cover all activities involved in a construction project [28]. Furthermore, a typical construction project comprises more than a hundred different types of activities throughout its lifecycle, and the CV-based CPM process has so far explored only a handful of them, i.e., structural elements [32,67,119,130] such as columns, beams, slabs, and walls, interior tile work [121], HVAC ducts [16,131], earthmoving activity [59,132], and a few others, which limits the usability of CV-based CPM for construction management teams. Apart from these limitations, the current literature demonstrated the usefulness of these techniques in ideal or near-ideal settings on construction projects, which require, but are not limited to, ideal lighting conditions, minimal physical obstructions, and favorable weather conditions. None of the literature attempted the DAQ process during bad weather or during night shifts of the construction execution process.

4.2. Information Retrieval

Like the traditional DAQ process, the traditional information retrieval process is also a manual activity. After filling out the provided DPR template, the construction supervisor hands it over to the planning engineer for further processing, after getting it approved by the construction manager [1]. The planning engineers then manually extract and transfer the necessary information into their computers and perform the necessary analysis as per the requirements of their organization and contractual obligations. In the case of the CV-based CPM process, however, information is retrieved from the vision datasets using algorithm-based techniques, which lie in the domain of Information Technology (IT) and are far from the expertise of a typical planning engineer, who is usually a graduate civil engineer. The information retrieval of CV-based CPM therefore requires an expert IT professional alongside a planning engineer with technical expertise in the construction domain. Moreover, it also requires investment in the tools and equipment necessary for performing this process efficiently [64]. Apart from the human and cost resources required for this process, the major problem that no study has yet addressed is deciding whether the activity under consideration is actually finished based on the quality of the completed work [24]. The CV-based CPM process takes only the physical existence of a structural element or other construction feature as completion, which is usually not sufficient in a traditional construction setting. For example, if the CV-based CPM process detects a concrete column at a certain point in time, it will mark it as complete, even if the column does not meet the finishing quality requirements or, worse, the minimum requirements of the desired structural strength. Moreover, no study has integrated the approval requirements from the various cadres in the construction management team to make them responsible and liable for delivering the desired outcome of the construction project.

4.3. Progress Estimation

In a traditional setting, after retrieving progress-related information from the DPR, the planning engineer usually inputs the data into commercially available software such as MS Project or Primavera P6 for further analysis, which compares as-planned and as-built and returns the variance between the two [133,134]. The CV-based CPM process has widely utilized BIM models for overlaying the information retrieved in the form of point clouds and estimating construction progress through volumetric comparison between the as-built point cloud model and the as-planned BIM [63,135,136,137,138]. This raises the need for BIM models of the construction project under consideration, and not every project management team utilizes and implements BIM on their construction projects [139,140]. Like the information retrieval process, progress estimation requires designated human resources with the specialized expertise needed for the advanced IT-enabled CV-based CPM process. Furthermore, CV-based CPM aims to improve traditional practices or to replace them altogether, which means replacing the currently utilized tools mandated by contracts, while clients and consultants still require feedback in specific forms [141].

4.4. Output Visualization

The output visualization in traditional construction management practice corresponds to the weekly and monthly construction progress reports submitted to consultants or clients as per the requirements mandated by the contract [141]. These reports usually comprise a detailed summary of activities planned vs. completed, the number of human resources deployed, the type and number of tools and equipment utilized, and a summary of cost, along with various charts and graphs summarizing the overall progress achieved during the reported period [1]. The visual charts and graphs usually comprise a manpower histogram, an S-curve, EVM charts, and a tracking Gantt chart showing the comparison between the baseline and as-built schedules [142]. Furthermore, periodic payments to the contractors are made based on the progress information provided through such progress reports. Currently, the CV-based CPM process has mainly explored and utilized color labels to showcase the outcome of the overall process, which do not portray useful information or quantify the outcome in a way that enables project stakeholders to take the necessary decisions. Furthermore, outcome visualization through AR and VR requires significant technical expertise and further investment on behalf of project management teams [143].
Despite the current limitations of existing techniques and the limited use cases, CV-based CPM has shown immense potential for automating progress information acquisition, processing, and disbursement. Owing to the importance of accurate progress information throughout the project lifecycle and the capabilities of CV-based techniques, a framework is needed that efficiently assesses the available automated techniques and determines their integration within project management processes using Technology Readiness Levels (TRLs). This hypothesized framework should measure the maturity level of technology for CV-based CPM applications. By involving construction industry practitioners, the framework should also be adopted in further research to support the understanding of the industry's requirements from the CPM process; a holistic comparison of requirements versus the capabilities of CV-based CPM techniques will provide an overview of the resources required to develop the technology and support its subsequent adoption. Moreover, enhanced and accurate progress information will significantly improve decision making by CMTs, which will further improve the adoption of such technologically advanced techniques. These efforts are likely to reduce the reliance of CMTs on manual, time-consuming, labor-intensive, and, most importantly, subjective assessments made by supervisory staff regarding the completion of construction activities and to improve overall project delivery.
The research efforts presented by the retrieved literature mainly focus on using CV-based processing by exploring various techniques for monitoring and measuring construction progress. Very few studies have presented the PM solution from the perspective of the construction management field. Further, most studies presented and experimented with ML-based techniques to retrieve information from the vision data collected from construction sites and addressed occlusion problems that occur due to the busy nature of construction sites. Data synthesis presented in this review has also highlighted the need for research efforts from the perspective of the construction management domain by involving construction management professionals and directing the output of these analyses towards helping CMTs in making useful decisions to keep projects on track.

5. Conclusions, Limitations, and Future Directions

The concept of an automated CV-based CPM process is still in its inception, as evidenced by research efforts in the form of proofs of concept, practical applications, and research advancements. To make it a viable and preferred alternative method for PM on construction projects, all sub-processes involved in CV-based CPM, i.e., DAQ, information retrieval, progress estimation, and output visualization, must achieve a certain level of automation, minimize labor-intensive tasks, and form a well-connected process. These sub-processes further contain various techniques that can be tested against multiple alternatives to provide an efficient process for automatic PM on construction sites. In summary, the CV-based CPM method encompasses four distinct processes: (1) data acquisition, (2) information retrieval, (3) progress estimation, and (4) output visualization.
This systematic review summarizes the entire CV-based CPM process and identifies the many techniques involved and the level of automation they provide toward achieving a fully automated process. The application of CV-based PM for assessing construction projects has many potential benefits. It can standardize the process of determining whether an activity is finished; currently, it is up to a site supervisor to manually inspect and determine the completion of an activity. A well-connected and fully automated process can also eliminate redundancy and errors from the process, making it reliable. However, current research lacks focus on determining the feasibility of, and laying out guidelines for, the adoption of CV-based techniques by CMTs into their existing processes. Furthermore, these research efforts have neither aimed at improving existing CPM techniques nor at resolving the problems posed by the traditional CPM process.
The retrieved literature further highlights the potential of CV-based techniques in the construction industry. Research interest in this domain is increasing, and many scholars have explored the real-life implementation of such concepts in the construction environment. The benefits reported by these explorations suggest that CV-based techniques can be adopted in place of the traditional techniques that have been used in the construction industry for decades. Further, researchers could explore the possibility of real-time monitoring and measurement of construction progress to enable CMTs to adopt CV-based CPM over traditional practices.
Finally, heightened research interest in this domain and further investment from the construction industry and academia in exploring the benefits of the CV-based CPM process could establish its viability and acceptance by the construction industry sooner rather than later. Further research should also focus on exploring the readiness and attitude of the construction industry toward adopting such technological solutions. It also needs to explore the potential benefits, contractual allowances, the tradeoff between traditional and information technology-enabled techniques, the anticipated reaction from industry personnel, and the accepted forms in which the construction industry would adopt the automated CV-based CPM technique.
The findings of this research present a holistic overview of the CV-based CPM process to the readership and help them understand the workings of this complete process. Furthermore, the assessment and categorization of the available techniques by the level of automation they provide will help researchers quickly grasp which techniques provide the most automation throughout the process. Moreover, this study highlights the disconnect between the sub-processes of the CV-based CPM process and directs future research toward the importance of seamless integration of all sub-processes in achieving a viable alternative to traditional CPM practices. Lastly, this study also compared the offerings of this advanced technique with the requirements of actual construction setups and the expectations of project management teams, which will help future research endeavors involve and consider the industry perspective during the developmental research phase in this domain.

Author Contributions

Conceptualization, M.S.U.R. and M.T.S.; methodology, M.S.U.R., M.T.S. and F.U.; validation, M.S.U.R., M.T.S. and F.U.; formal analysis, M.S.U.R. and M.T.S.; resources, M.S.U.R. and M.T.S.; data curation, M.S.U.R.; writing—original draft preparation, M.S.U.R.; writing—review and editing, M.S.U.R., M.T.S. and F.U.; visualization, M.S.U.R. and F.U.; supervision, M.T.S. and F.U.; project administration, M.T.S. and F.U.; funding acquisition, M.T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the UAEU Program for Advanced Research (UPAR), fund code ‘31N397’.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available with the first author and can be shared with researchers upon genuine request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

CV: Computer Vision
CPM: Construction Progress Monitoring
DAQ: Data Acquisition
WoS: Web of Science
UAV: Unmanned Aerial Vehicle
SfM: Structure from Motion
CNN: Convolutional Neural Network
SVM: Support Vector Machines
SLAM: Simultaneous Localization and Mapping
CC: Cascading Classifiers
SURF: Speeded-Up Robust Features
LoG: Laplacian of Gaussian
HoG: Histogram of Oriented Gradients
BIMs: Building Information Models
AR: Augmented Reality
VR: Virtual Reality
EVM: Earned Value Management
CMT: Construction Management Teams
DPR: Daily Progress Report

References

  1. Hegazy, T. Computer-Based Construction Project Management. Available online: https://www.pearson.ch/HigherEducation/Pearson/EAN/9781292027128/Computer-Based-Construction-Project-Management-Pearson-New-International-Edition (accessed on 14 October 2021).
  2. Son, H.; Bosché, F.; Kim, C. As-built data acquisition and its use in production monitoring and automated layout of civil infrastructure: A survey. Adv. Eng. Inform. 2015, 29, 172–183. [Google Scholar] [CrossRef]
  3. El-Sabek, L.M.; McCabe, B.Y. Coordination Challenges of Production Planning in the Construction of International Mega-Projects in The Middle East. Int. J. Constr. Educ. Res. 2017, 14, 118–140. [Google Scholar] [CrossRef]
  4. Navon, R.; Sacks, R. Assessing research issues in Automated Project Performance Control (APPC). Autom. Constr. 2007, 16, 474–484. [Google Scholar] [CrossRef]
  5. Wolfe, S., Jr. 2020 Construction Survey: Contractors Waste Time & Get Paid Slowly. Available online: https://www.levelset.com/blog/2020-report-construction-wasted-time-slow-payment/ (accessed on 5 April 2022).
  6. Manfren, M.; Tagliabue, L.C.; Cecconi, F.R.; Ricci, M. Long-Term Techno-Economic Performance Monitoring to Promote Built Environment Decarbonisation and Digital Transformation—A Case Study. Sustainability 2022, 14, 644. [Google Scholar] [CrossRef]
  7. Omar, T.; Nehdi, M.L. Data acquisition technologies for construction progress tracking. Autom. Constr. 2016, 70, 143–155. [Google Scholar] [CrossRef]
  8. El-Omari, S.; Moselhi, O. Data acquisition from construction sites for tracking purposes. Eng. Constr. Arch. Manag. 2009, 16, 490–503. [Google Scholar] [CrossRef]
  9. Cheng, T.; Mantripragada, U.; Teizer, J.; Vela, P.A. Automated Trajectory and Path Planning Analysis Based on Ultra Wideband Data. J. Comput. Civ. Eng. 2012, 26, 151–160. [Google Scholar] [CrossRef]
  10. Bosché, F.; Guillemet, A.; Turkan, Y.; Haas, C.T.; Haas, R. Tracking the Built Status of MEP Works: Assessing the Value of a Scan-vs-BIM System. J. Comput. Civ. Eng. 2014, 28, 05014004. [Google Scholar] [CrossRef] [Green Version]
  11. Ibrahim, Y.M.; Kaka, A.P.; Aouad, G.; Kagioglou, M. As-built Documentation of Construction Sequence by Integrating Virtual Reality with Time-lapse Movies. Arch. Eng. Des. Manag. 2008, 4, 73–84. [Google Scholar] [CrossRef]
  12. Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O’Reilly Media, Inc.: Newton, MA, USA, 2008. [Google Scholar]
  13. Zhang, X.; Bakis, N.; Lukins, T.C.; Ibrahim, Y.M.; Wu, S.; Kagioglou, M.; Aouad, G.; Kaka, A.P.; Trucco, E. Automating progress measurement of construction projects. Autom. Constr. 2009, 18, 294–301. [Google Scholar] [CrossRef]
  14. Fisher, R.B.; Breckon, T.P.; Dawson-Howe, K.; Fitzgibbon, A.; Robertson, C.; Trucco, E.; Williams, C.K.; Williams, I. Dictionary of Computer Vision and Image Processing. In Dictionary of Computer Vision and Image Processing; John Wiley: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
  15. Omar, H.; Mahdjoubi, L.; Kheder, G. Towards an automated photogrammetry-based approach for monitoring and controlling construction site activities. Comput. Ind. 2018, 98, 172–182. [Google Scholar] [CrossRef]
  16. Ekanayake, B.; Wong, J.K.-W.; Fini, A.A.F.; Smith, P. Computer vision-based interior construction progress monitoring: A literature review and future research directions. Autom. Constr. 2021, 127, 103705. [Google Scholar] [CrossRef]
  17. Kopsida, M.; Brilakis, I.; Vela, P. A Review of Automated Construction Progress and Inspection Methods. In Proceedings of the 32nd CIB W78 Conference on Construction IT, Tokyo, Japan, 27–29 January 2015; pp. 421–431. [Google Scholar]
  18. Fard, M.G.; Pena-Mora, F. Application of visualization techniques for construction progress monitoring. In Proceedings of the International Workshop on Computing in Civil Engineering 2007, Pittsburgh, PA, USA, 24–27 July 2007. [Google Scholar]
  19. Mahami, H.; Nasirzadeh, F.; Ahmadabadian, A.H.; Nahavandi, S. Automated Progress Controlling and Monitoring Using Daily Site Images and Building Information Modelling. Buildings 2019, 9, 70. [Google Scholar] [CrossRef] [Green Version]
  20. Hamledari, H.; Sajedi, S.; McCabe, B.; Fischer, M. Automation of Inspection Mission Planning Using 4D BIMs and in Support of Unmanned Aerial Vehicle–Based Data Collection. J. Constr. Eng. Manag. 2021, 147, 04020179. [Google Scholar] [CrossRef]
  21. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Monitoring of Construction Performance Using Daily Progress Photograph Logs and 4d As-Planned Models. In Proceedings of the 2009 ASCE International Workshop on Computing in Civil Engineering, Austin, TX, USA, 24–27 June 2009; Volume 346, pp. 53–63. [Google Scholar] [CrossRef]
  22. El-Omari, S.; Moselhi, O. Integrating 3D laser scanning and photogrammetry for progress measurement of construction work. Autom. Constr. 2008, 18, 1–9. [Google Scholar] [CrossRef]
  23. Chaiyasarn, K.; Kim, T.-K.; Viola, F.; Cipolla, R.; Soga, K. Distortion-Free Image Mosaicing for Tunnel Inspection Based on Robust Cylindrical Surface Estimation through Structure from Motion. J. Comput. Civ. Eng. 2016, 30, 04015045. [Google Scholar] [CrossRef]
  24. Wang, Q.; Kim, M. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319. [Google Scholar] [CrossRef]
  25. Turkan, Y.; Bosché, F.; Haas, C.T.; Haas, R. Toward Automated Earned Value Tracking Using 3D Imaging Tools. J. Constr. Eng. Manag. 2013, 139, 423–433. [Google Scholar] [CrossRef] [Green Version]
  26. Han, K.K.; Golparvar-Fard, M. Potential of big visual data and building information modeling for construction performance analytics: An exploratory study. Autom. Constr. 2017, 73, 184–198. [Google Scholar] [CrossRef] [Green Version]
  27. Teizer, J. Status quo and open challenges in vision-based sensing and tracking of temporary resources on infrastructure construction sites. Adv. Eng. Inform. 2015, 29, 225–238. [Google Scholar] [CrossRef]
  28. Álvares, J.S.; Costa, D.B. Literature Review on Visual Construction Progress Monitoring Using Unmanned Aerial Vehicles. In Proceedings of the 26th Annual Conference of the International Group for Lean Construction: Evolving Lean Construction Towards Mature Production Management Across Cultures and Frontiers, Chennai, India, 18–22 July 2018. [Google Scholar] [CrossRef] [Green Version]
  29. Dimitrov, A.; Golparvar-Fard, M. Vision-based material recognition for automated monitoring of construction progress and generating building information modeling from unordered site image collections. Adv. Eng. Inform. 2014, 28, 37–49. [Google Scholar] [CrossRef]
  30. Seong, H.; Choi, H.; Cho, H.; Lee, S.; Son, H.; Kim, C. Vision-Based Safety Vest Detection in a Construction Scene. In Proceedings of the 34th International Symposium on Automation and Robotics in Construction (ISARC 2017), Taipei, Taiwan, 28 June–1 July 2017. [Google Scholar]
  31. Braun, A.; Tuttas, S.; Borrmann, A.; Stilla, U. Improving progress monitoring by fusing point clouds, semantic data and computer vision. Autom. Constr. 2020, 116, 103210. [Google Scholar] [CrossRef]
  32. Wang, Z.; Zhang, Q.; Yang, B.; Wu, T.; Lei, K.; Zhang, B.; Fang, T. Vision-Based Framework for Automatic Progress Monitoring of Precast Walls by Using Surveillance Videos during the Construction Phase. J. Comput. Civ. Eng. 2021, 35, 04020056. [Google Scholar] [CrossRef]
  33. Borrmann, A.; Stilla, U. Automated Progress Monitoring Based on Photogrammetric Point Clouds and Precedence Relationship Graphs. In Proceedings of the 32nd ISARC, Oulu, Finland, 15–18 June 2015; pp. 1–7. [Google Scholar] [CrossRef] [Green Version]
  34. Kim, Y.; Nguyen, C.H.P.; Choi, Y. Automatic pipe and elbow recognition from three-dimensional point cloud model of industrial plant piping system using convolutional neural network-based primitive classification. Autom. Constr. 2020, 116, 103236. [Google Scholar] [CrossRef]
  35. Chen, J.; Fang, Y.; Cho, Y.K. Unsupervised Recognition of Volumetric Structural Components from Building Point Clouds. In Proceedings of the ASCE International Workshop on Computing in Civil Engineering 2017, Seattle, WA, USA, 25–27 June 2017. [Google Scholar] [CrossRef]
  36. Skibniewski, M.J. Construction Project Monitoring with Site Photographs and 4D Project Models. Organ. Technol. Manag. Constr. Int. J. 2014, 6, 1106–1114. [Google Scholar] [CrossRef]
  37. Roh, S.; Aziz, Z.; Peña-Mora, F. An object-based 3D walk-through model for interior construction progress monitoring. Autom. Constr. 2011, 20, 66–75. [Google Scholar] [CrossRef]
  38. Golparvar-Fard, M.; Bohn, J.; Teizer, J.; Savarese, S.; Peña-Mora, F. Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques. Autom. Constr. 2011, 20, 1143–1155. [Google Scholar] [CrossRef]
  39. Hui, L.; Brilakis, I. Real-Time Brick Counting for Construction Progress Monitoring. In Proceedings of the 2013 ASCE International Workshop on Computing in Civil Engineering, Los Angeles, CA, USA, 23–25 June 2013; pp. 818–824. [Google Scholar] [CrossRef]
  40. Hui, L.; Park, M.-W.; Brilakis, I. Automated Brick Counting for Façade Construction Progress Estimation. J. Comput. Civ. Eng. 2015, 29, 04014091. [Google Scholar] [CrossRef]
  41. Kim, C.; Kim, B.; Kim, H. 4D CAD model updating using image processing-based construction progress monitoring. Autom. Constr. 2013, 35, 44–52. [Google Scholar] [CrossRef]
  42. Lin, Z.; Petzold, F.; Ma, Z. A Real-Time 4D Augmented Reality System for Modular Construction Progress Monitoring. In Proceedings of the 36th International Symposium on Automation and Robotics in Construction, ISARC 2019, Banff Alberta, AB, Canada, 21–24 May 2019; pp. 743–748. [Google Scholar] [CrossRef]
  43. Ullah, F. A Beginner’s Guide to Developing Review-Based Conceptual Frameworks in the Built Environment. Architecture 2021, 1, 5–24. [Google Scholar] [CrossRef]
  44. Ullah, F.; Qayyum, S.; Thaheem, M.J.; Al-Turjman, F.; Sepasgozar, S.M. Risk management in sustainable smart cities governance: A TOE framework. Technol. Forecast. Soc. Chang. 2021, 167, 120743. [Google Scholar] [CrossRef]
  45. Haddaway, N.R.; Collins, A.; Coughlin, D.; Kirk, S. The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching. PLoS ONE 2015, 10, e0138237. [Google Scholar] [CrossRef] [Green Version]
  46. Salameh, J.-P.; Bossuyt, P.M.; McGrath, T.A.; Thombs, B.D.; Hyde, C.J.; Macaskill, P.; Deeks, J.J.; Leeflang, M.; Korevaar, D.A.; Whiting, P.; et al. Preferred reporting items for systematic review and meta-analysis of diagnostic test accuracy studies (PRISMA-DTA): Explanation, elaboration, and checklist. BMJ 2020, 370, m2632. [Google Scholar] [CrossRef]
  47. Kim, C.; Son, H.; Kim, C. Automated construction progress measurement using a 4D building information model and 3D data. Autom. Constr. 2013, 31, 75–82. [Google Scholar] [CrossRef]
  48. Hwang, B.-G.; Zhao, X.; Ng, S.Y. Identifying the critical factors affecting schedule performance of public housing projects. Habitat Int. 2013, 38, 214–221. [Google Scholar] [CrossRef]
  49. Yang, J.; Park, M.-W.; Vela, P.A.; Golparvar-Fard, M. Construction performance monitoring via still images, time-lapse photos, and video streams: Now, tomorrow, and the future. Adv. Eng. Inform. 2015, 29, 211–224. [Google Scholar] [CrossRef]
  50. Zhang, C.; Arditi, D. Advanced Progress Control of Infrastructure Construction Projects Using Terrestrial Laser Scanning Technology. Infrastructures 2020, 5, 83. [Google Scholar] [CrossRef]
  51. Bohn, J.S.; Teizer, J. Benefits and Barriers of Construction Project Monitoring Using High-Resolution Automated Cameras. J. Constr. Eng. Manag. 2010, 136, 632–640. [Google Scholar] [CrossRef]
  52. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Integrated Sequential As-Built and As-Planned Representation with D4AR Tools in Support of Decision-Making Tasks in the AEC/FM Industry. J. Constr. Eng. Manag. 2011, 137, 1099–1116. [Google Scholar] [CrossRef]
  53. Elazouni, A.; Salem, O.A. Progress monitoring of construction projects using pattern recognition techniques. Constr. Manag. Econ. 2011, 29, 355–370. [Google Scholar] [CrossRef]
  54. Lukins, T.C.; Trucco, E. Towards Automated Visual Assessment of Progress in Construction Projects. In Proceedings of the British Machine Vision Conference, Warwick, UK, 10–13 September 2007. [Google Scholar] [CrossRef] [Green Version]
  55. Rebolj, D.; Babič, N.; Magdič, A.; Podbreznik, P.; Pšunder, M. Automated construction activity monitoring system. Adv. Eng. Inform. 2008, 22, 493–503. [Google Scholar] [CrossRef]
  56. Kim, H.; Kano, N. Comparison of construction photograph and VR image in construction progress. Autom. Constr. 2008, 17, 137–143. [Google Scholar] [CrossRef]
  57. Taj, G.; Anand, S.; Haneefi, A.; Kanishka, R.P.; Mythra, D. Monitoring of Historical Structures using Drones. IOP Conf. Ser. Mater. Sci. Eng. 2020, 955, 012008. [Google Scholar] [CrossRef]
  58. Ibrahim, A.; Golparvar-Fard, M.; El-Rayes, K. Metrics and methods for evaluating model-driven reality capture plans. Comput. Civ. Infrastruct. Eng. 2021, 37, 55–72. [Google Scholar] [CrossRef]
  59. Wu, Y.; Wang, M.; Liu, X.; Wang, Z.; Ma, T.; Xie, Y.; Li, X.; Wang, X. Construction of Stretching-Bending Sequential Pattern to Recognize Work Cycles for Earthmoving Excavator from Long Video Sequences. Sensors 2021, 21, 3427. [Google Scholar] [CrossRef]
  60. Shang, Z.; Shen, Z. Real-Time 3D Reconstruction on Construction Site Using Visual SLAM and UAV. arXiv 2015, 151, 10–17. [Google Scholar] [CrossRef] [Green Version]
  61. Shojaei, A.; Moud, H.I.; Flood, I. Proof of Concept for the Use of Small Unmanned Surface Vehicle in Built Environment Management. In Proceedings of the Construction Research Congress 2018: Construction Information Technology—Selected Papers from the Construction Research Congress, New Orleans, LA, USA; 2018; pp. 116–126. [Google Scholar] [CrossRef]
  62. Mahami, H.; Nasirzadeh, F.; Ahmadabadian, A.H.; Esmaeili, F.; Nahavandi, S. Imaging network design to improve the automated construction progress monitoring process. Constr. Innov. 2019, 19, 386–404. [Google Scholar] [CrossRef]
  63. Han, K.; Golparvar-Fard, M. Crowdsourcing BIM-guided collection of construction material library from site photologs. Vis. Eng. 2017, 5, 14. [Google Scholar] [CrossRef] [Green Version]
  64. Kielhauser, C.; Manzano, R.R.; Hoffman, J.J.; Adey, B.T. Automated Construction Progress and Quality Monitoring for Commercial Buildings with Unmanned Aerial Systems: An Application Study from Switzerland. Infrastructures 2020, 5, 98. [Google Scholar] [CrossRef]
  65. Braun, A.; Borrmann, A. Combining inverse photogrammetry and BIM for automated labeling of construction site images for machine learning. Autom. Constr. 2019, 106, 102879. [Google Scholar] [CrossRef]
  66. Jeon, S.; Hwang, J.; Kim, G.J.; Billinghurst, M. Interaction Techniques in Large Display Environments Using Hand-Held Devices. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Limassol, Cyprus, 1–3 November 2006. [Google Scholar] [CrossRef]
  67. Son, H.; Kim, C.; Kim, C. Automated Color Model–Based Concrete Detection in Construction-Site Images by Using Machine Learning Algorithms. J. Comput. Civ. Eng. 2012, 26, 421–433. [Google Scholar] [CrossRef]
  68. Zhu, Z.; Ren, X.; Chen, Z. Integrated detection and tracking of workforce and equipment from construction jobsite videos. Autom. Constr. 2017, 81, 161–171. [Google Scholar] [CrossRef]
  69. Vick, S.; Brilakis, I. Road Design Layer Detection in Point Cloud Data for Construction Progress Monitoring. J. Comput. Civ. Eng. 2018, 32, 04018029. [Google Scholar] [CrossRef]
  70. Kim, P.; Chen, J.; Kim, J.; Cho, Y.K. SLAM-driven intelligent autonomous mobile robot navigation for construction applications. In Workshop of the European Group for Intelligent Computing in Engineering; Springer: Cham, Switzerland, 2018; pp. 254–269. [Google Scholar] [CrossRef]
  71. Gai, M.; Cho, Y.K.; Xu, Q. Target-Free Automatic Point Clouds Registration Using 2D Images. In Proceedings of the 2013 ASCE International Workshop on Computing in Civil Engineering, Los Angeles, CA, USA, 23–25 June 2013; pp. 865–872. [Google Scholar] [CrossRef]
  72. Son, H.; Kim, C. 3D structural component recognition and modeling method using color and 3D data for construction progress monitoring. Autom. Constr. 2010, 19, 844–854. [Google Scholar] [CrossRef]
  73. Golparvar-Fard, M.; Pena-Mora, F.; Savarese, S. Monitoring changes of 3D building elements from unordered photo collections. In Proceedings of the IEEE International Conference on Computer Vision, Washington, DC, USA, 6–13 November 2011; pp. 249–256. [Google Scholar] [CrossRef] [Green Version]
  74. Masood, M.K.; Aikala, A.; Seppänen, O.; Singh, V. Multi-Building Extraction and Alignment for As-Built Point Clouds: A Case Study With Crane Cameras. Front. Built Environ. 2020, 6, 581295. [Google Scholar] [CrossRef]
  75. Fini, A.A.F.; Maghrebi, M.; Forsythe, P.J.; Waller, T.S. Using existing site surveillance cameras to automatically measure the installation speed in prefabricated timber construction. Eng. Constr. Arch. Manag. 2021, 29, 573–600. [Google Scholar] [CrossRef]
  76. Braun, A.; Tuttas, S.; Stilla, U.; Borrmann, A. Incorporating Knowledge on Construction Methods into Automated Progress Monitoring Techniques. In Proceedings of the 23rd International Workshop of the European Group for Intelligent Computing in Engineering, Kraków, Poland, 29 June–1 July 2016. [Google Scholar]
  77. Karsch, K.; Golparvar-Fard, M.; Forsyth, D. ConstructAide: Analyzing and Visualizing Construction Sites through Photographs and Building Models. ACM Trans. Graph. 2014, 33, 176. [Google Scholar] [CrossRef]
  78. Han, K.K.; Golparvar-Fard, M. Appearance-based material classification for monitoring of operation-level construction progress using 4D BIM and site photologs. Autom. Constr. 2015, 53, 44–57. [Google Scholar] [CrossRef]
  79. Bunrit, S.; Kerdprasop, N.; Kerdprasop, K. Evaluating on the Transfer Learning of CNN Architectures to a Construction Material Image Classification Task. Int. J. Mach. Learn. Comput. 2019, 9, 201–207. [Google Scholar] [CrossRef] [Green Version]
  80. Chen, J.; Kira, Z.; Cho, Y.K. Deep Learning Approach to Point Cloud Scene Understanding for Automated Scan to 3D Reconstruction. J. Comput. Civ. Eng. 2019, 33, 04019027. [Google Scholar] [CrossRef]
  81. Memarzadeh, M.; Heydarian, A.; Golparvar-Fard, M.; Niebles, J.C. Real-Time and Automated Recognition and 2D Tracking of Construction Workers and Equipment from Site Video Streams. In Proceedings of the ASCE International Conference on Computing in Civil Engineering, Atlanta, GA, USA, 17–19 June 2012; pp. 429–436. [Google Scholar] [CrossRef]
  82. Kropp, C.; Koch, C.; König, M. Drywall State Detection in Image Data for Automatic Indoor Progress Monitoring. In Proceedings of the 2014 International Conference on Computing in Civil and Building Engineering, Orlando, FL, USA, 23–25 June 2014; pp. 347–354. [Google Scholar] [CrossRef] [Green Version]
  83. Azar, E.R.; McCabe, B. Automated Visual Recognition of Dump Trucks in Construction Videos. J. Comput. Civ. Eng. 2012, 26, 769–781. [Google Scholar] [CrossRef]
  84. Bosché, F. Plane-based registration of construction laser scans with 3D/4D building models. Adv. Eng. Inform. 2012, 26, 90–102. [Google Scholar] [CrossRef]
  85. Bueno, M.; Bosché, F.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. 4-Plane congruent sets for automatic registration of as-is 3D point clouds with 3D BIM models. Autom. Constr. 2018, 89, 120–134. [Google Scholar] [CrossRef]
  86. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Automated Progress Monitoring Using Unordered Daily Construction Photographs and IFC-Based Building Information Models. J. Comput. Civ. Eng. 2015, 29, 04014025. [Google Scholar] [CrossRef]
  87. Shahi, A.; Safa, M.; Haas, C.T.; West, J.S. Data Fusion Process Management for Automated Construction Progress Estimation. J. Comput. Civ. Eng. 2015, 29, 04014098. [Google Scholar] [CrossRef]
  88. Atkinson, R. Project management: Cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria. Int. J. Proj. Manag. 1999, 17, 337–342. [Google Scholar] [CrossRef]
  89. Hwang, N.; Son, H.; Kim, C. Is Color an Intrinsic Property of Construction Object’s Representation? Evaluating Color-Based Models to Detect Objects by Using Data Mining Techniques. In Proceedings of the 29th International Symposium of Automation and Robotics in Construction, Eindhoven, The Netherlands, 26–29 June 2012. [Google Scholar] [CrossRef] [Green Version]
  90. Hamledari, H.; McCabe, B. Automated Visual Recognition of Indoor Project-Related Objects: Challenges and Solutions. In Proceedings of the 2016 Construction Research Congress, San Juan, Puerto Rico, 31 May–2 June 2016; pp. 2573–2582. [Google Scholar] [CrossRef]
  91. Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U. Evaluation of Acquisition Strategies for Image-Based Construction Site Monitoring. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS-2016), Prague, Czech Republic, 12–19 July 2016; Copernicus Publications on behalf of ISPRS: Prague, Czech Republic, 2016. [Google Scholar] [CrossRef]
  92. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  93. de Melo, R.R.S.; Costa, D.B.; Álvares, J.S.; Irizarry, J. Applicability of unmanned aerial system (UAS) for safety inspection on construction sites. Saf. Sci. 2017, 98, 174–185. [Google Scholar] [CrossRef]
  94. Gheisari, M.; Esmaeili, B. Unmanned Aerial Systems (UAS) for Construction Safety Applications. Construction Research Congress 2016: Old and New Construction Technologies Converge in Historic San Juan. In Proceedings of the 2016 Construction Research Congress, CRC, San Juan, Puerto Rico, 31 May–2 June 2016; pp. 2642–2650. [Google Scholar] [CrossRef]
  95. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  96. Han, K.; Degol, J.; Golparvar-Fard, M. Geometry- and Appearance-Based Reasoning of Construction Progress Monitoring. J. Constr. Eng. Manag. 2018, 144, 04017110. [Google Scholar] [CrossRef] [Green Version]
  97. McCabe, B.Y.; Hamledari, H.; Shahi, A.; Zangeneh, P.; Azar, E.R. Roles, Benefits, and Challenges of Using UAVs for Indoor Smart Construction Applications. In Proceedings of the Congress on Computing in Civil Engineering, Seattle, WA, USA, 25–27 June 2017. [Google Scholar]
  98. Mostafa, K.; Hegazy, T. Review of image-based analysis and applications in construction. Autom. Constr. 2020, 122, 103516. [Google Scholar] [CrossRef]
  99. Liu, C.; Tang, C.-S.; Shi, B.; Suo, W.-B. Automatic quantification of crack patterns by image processing. Comput. Geosci. 2013, 57, 77–80. [Google Scholar] [CrossRef]
  100. Messinger, M.; Silman, M. Unmanned aerial vehicles for the assessment and monitoring of environmental contamination: An example from coal ash spills. Environ. Pollut. 2016, 218, 889–894. [Google Scholar] [CrossRef] [PubMed]
  101. Shin, H.-C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef] [Green Version]
  102. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar] [CrossRef]
  103. Choi, S.; Myeong, W.; Jeong, Y.; Myung, H.; Choi, S.; Myeong, W.; Jeong, Y.; Myung, H. Vision-Based Hybrid 6-DOF Displacement Estimation for Precast Concrete Member Assembly. Smart Struct. Syst. 2017, 20, 397. [Google Scholar] [CrossRef]
  104. Fang, W.; Zhong, B.; Zhao, N.; Love, P.E.; Luo, H.; Xue, J.; Xu, S. A deep learning-based approach for mitigating falls from height with computer vision: Convolutional neural network. Adv. Eng. Inform. 2019, 39, 170–177. [Google Scholar] [CrossRef]
  105. Luo, X.; Li, H.; Cao, D.; Dai, F.; Seo, J.; Lee, S. Recognizing Diverse Construction Activities in Site Images via Relevance Networks of Construction-Related Objects Detected by Convolutional Neural Networks. J. Comput. Civ. Eng. 2018, 32, 04018012. [Google Scholar] [CrossRef]
  106. Park, M.-W.; Makhmalbaf, A.; Brilakis, I. Comparative study of vision tracking methods for tracking of construction site resources. Autom. Constr. 2011, 20, 905–915. [Google Scholar] [CrossRef]
  107. Bügler, M.; Borrmann, A.; Ogunmakin, G.; Vela, P.A.; Teizer, J. Fusion of Photogrammetry and Video Analysis for Productivity Assessment of Earthwork Processes. Comput. Civ. Infrastruct. Eng. 2016, 32, 107–123. [Google Scholar] [CrossRef]
  108. Brilakis, I.; Soibelman, L.; Shinagawa, Y. Material-Based Construction Site Image Retrieval. J. Comput. Civ. Eng. 2005, 19, 341–355. [Google Scholar] [CrossRef]
  109. Harichandran, A.; Raphael, B.; Varghese, B.R.A.K. Inferring Construction Activities from Structural Responses Using Support Vector Machines. In Proceedings of the International Symposium on Automation and Robotics in Construction, Berlin, Germany, 20–25 July 2018; pp. 1–8. [Google Scholar]
  110. Caputo, B.; Hayman, E.; Fritz, M.; Eklundh, J.-O. Classifying materials in the real world. Image Vis. Comput. 2010, 28, 150–163. [Google Scholar] [CrossRef] [Green Version]
  111. Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef] [Green Version]
  112. Peker, M.; Altun, H.; Karakaya, F. Hardware emulation of HOG and AMDF based scale and rotation invariant robust shape detection. In Proceedings of the International Conference on Engineering and Technology, ICET 2012–Conference Booklet, Caire, Egypt, 10–11 October 2012. [Google Scholar] [CrossRef]
  113. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 886–893.
  114. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  115. Azhar, S. Building Information Modeling (BIM): Trends, Benefits, Risks, and Challenges for the AEC Industry. Leadersh. Manag. Eng. 2011, 11, 241–252.
  116. Rehman, M.S.U.; Thaheem, M.J.; Nasir, A.R.; Khan, K.I.A. Project schedule risk management through building information modelling. Int. J. Constr. Manag. 2020, 22, 1489–1499.
  117. Kropp, C.; Koch, C.; König, M. Interior construction state recognition with 4D BIM registered image sequences. Autom. Constr. 2018, 86, 11–32.
  118. Asadi, K.; Ramshankar, H.; Noghabaei, M.; Han, K. Real-Time Image Localization and Registration with BIM Using Perspective Alignment for Indoor Monitoring of Construction. J. Comput. Civ. Eng. 2019, 33, 04019031.
  119. Wang, Q.; Kim, M.-K.; Cheng, J.C.; Sohn, H. Automated quality assessment of precast concrete elements with geometry irregularities using terrestrial laser scanning. Autom. Constr. 2016, 68, 170–182.
  120. Kim, C.; Son, H.; Kim, C. Fully automated registration of 3D data to a 3D CAD model for project progress monitoring. Autom. Constr. 2013, 35, 587–594.
  121. Deng, H.; Hong, H.; Luo, D.; Deng, Y.; Su, C. Automatic Indoor Construction Process Monitoring for Tiles Based on BIM and Computer Vision. J. Constr. Eng. Manag. 2020, 146, 04019095.
  122. Zhang, C.; Arditi, D. Automated progress control using laser scanning technology. Autom. Constr. 2013, 36, 108–116.
  123. Xu, Y.; Shen, X.; Lim, S. CorDet: Corner-Aware 3D Object Detection Networks for Automated Scan-to-BIM. J. Comput. Civ. Eng. 2021, 35, 04021002.
  124. Wang, X.; Dunston, P.S. Design, Strategies, and Issues towards an Augmented Reality-Based Construction Training Platform. Electron. J. Inf. Technol. Constr. 2007, 12, 363–380.
  125. Casini, M. Extended Reality for Smart Building Operation and Maintenance: A Review. Energies 2022, 15, 3785.
  126. Kim, E.; Wells, W.G.; Duffey, M.R. A model for effective implementation of Earned Value Management methodology. Int. J. Proj. Manag. 2003, 21, 375–382.
  127. Bannerman, P.L. Defining Project Success: A Multilevel Framework. In Proceedings of the Project Management, Warsaw, Poland, 13 June 2008.
  128. Hegazy, T.; Abdel-Monem, M. Email-based system for documenting construction as-built details. Autom. Constr. 2012, 24, 130–137.
  129. Gheisari, M.; Esmaeili, B. Applications and requirements of unmanned aerial systems (UASs) for construction safety. Saf. Sci. 2019, 118, 230–240.
  130. Liu, D.; Li, X.; Chen, J.; Jin, R. Real-Time Optimization of Precast Concrete Component Transportation and Storage. Adv. Civ. Eng. 2020, 2020, 1–18.
  131. Li, N.; Calis, G.; Becerik-Gerber, B. Measuring and monitoring occupancy with an RFID based system for demand-driven HVAC operations. Autom. Constr. 2012, 24, 89–99.
  132. Gang, L.; Peng, Z.; Shu-Hong, S. Research on Real-time Control of Construction Progress. IOP Conf. Ser. Earth Environ. Sci. 2019, 376, 012010.
  133. Kastor, A.; Sirakoulis, K. The effectiveness of resource levelling tools for Resource Constraint Project Scheduling Problem. Int. J. Proj. Manag. 2009, 27, 493–500.
  134. Gharaibeh, H. Evaluating Project Management Software Packages Using a Scoring Model—A Comparison between MS Project and Primavera. J. Softw. Eng. Appl. 2014, 7, 541–554.
  135. Braun, A.; Tuttas, S.; Stilla, U.; Borrmann, A. BIM-Based Progress Monitoring. In Building Information Modeling; Springer International Publishing: Cham, Switzerland, 2018; pp. 463–476.
  136. Tserng, H.-P.; Ho, S.-P.; Jan, S.-H. Developing BIM-Assisted As-Built Schedule Management System for General Contractors. J. Civ. Eng. Manag. 2014, 20, 47–58.
  137. Getuli, V.; Ventura, S.M.; Capone, P.; Ciribini, A.L. A BIM-based Construction Supply Chain Framework for Monitoring Progress and Coordination of Site Activities. Procedia Eng. 2016, 164, 542–549.
  138. Salehi, S.A.; Yitmen, I. Modeling and analysis of the impact of BIM-based field data capturing technologies on automated construction progress monitoring. Int. J. Civ. Eng. 2018, 16, 1669–1685.
  139. Vidalakis, C.; Abanda, F.H.; Oti, A.H. BIM adoption and implementation: Focusing on SMEs. Constr. Innov. 2019, 20, 128–147.
  140. Assaad, R.; El-Adaway, I.H.; El Hakea, A.H.; Parker, M.J.; Henderson, T.I.; Salvo, C.R.; Ahmed, M.O. Contractual Perspective for BIM Utilization in US Construction Projects. J. Constr. Eng. Manag. 2020, 146, 04020128.
  141. Memon, Z.A.; Majid, M.Z.A.; Mustaffar, M. An Automatic Project Progress Monitoring Model by Integrating AutoCAD and Digital Photos. In Proceedings of the International Conference on Computing in Civil Engineering, Cancun, Mexico, 12–15 July 2005; pp. 1–13.
  142. Abramova, V.; Pires, F.; Bernardino, J. Open Source vs. Proprietary Project Management Tools. Adv. Intell. Syst. Comput. 2016, 444, 331–340.
  143. Delgado, J.M.D.; Oyedele, L.; Demian, P.; Beach, T. A research agenda for augmented and virtual reality in architecture, engineering and construction. Adv. Eng. Inform. 2020, 45, 101122.
Figure 1. PRISMA systematic review flow diagram for the current study.
Figure 2. Summary of the CV-based CPM process.
Table 1. Search strings, restrictions, and results.

| Database | Strings and Refinements | Results |
| --- | --- | --- |
| Scopus | (TITLE-ABS-KEY (“computer vision” AND “construction progress*”) OR TITLE-ABS-KEY (“vision-based” AND “construction progress*”) OR TITLE-ABS-KEY (“real-time” AND “construction progress*”) OR TITLE-ABS-KEY (“automat*” AND “construction progress*”)) | 233 |
|  | AND (LIMIT-TO (SUBJAREA, “ENGI”) OR LIMIT-TO (SUBJAREA, “COMP”)) | 195 |
|  | AND (LIMIT-TO (DOCTYPE, “cp”) OR LIMIT-TO (DOCTYPE, “ar”) OR LIMIT-TO (DOCTYPE, “cr”) OR LIMIT-TO (DOCTYPE, “re”) OR LIMIT-TO (DOCTYPE, “ch”)) | 194 |
|  | AND (LIMIT-TO (LANGUAGE, “English”)) AND (LIMIT-TO (PUBYEAR, “2011–2021”)) | 180 |
| Web of Science | TOPIC: (“computer vision” AND “construction progress*”) OR TOPIC: (“vision-based” AND “construction progress*”) OR TOPIC: (“real-time” AND “construction progress*”) OR TOPIC: (“automat*” AND “construction progress*”) | 121 |
|  | Refined by: RESEARCH AREAS: (ENGINEERING OR COMPUTER SCIENCE) | 103 |
|  | Refined by: DOCUMENT TYPES: (ARTICLE OR PROCEEDINGS PAPER OR REVIEW OR BOOK CHAPTER) | 103 |
|  | Refined by: LANGUAGES: (ENGLISH); PUBLICATION YEARS: 2011–2021 | 102 |
| Total articles |  | 282 |
| Duplicates |  | 84 |
| After abstract screening |  | 183 |
| After full-text screening |  | 180 |
| Total selected articles |  | 180 |
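The retrieval and de-duplication counts in Table 1 can be checked mechanically once the database exports are merged. The snippet below is a minimal sketch, not the authors' actual screening workflow; the file names scopus.csv and wos.csv and their doi/title columns are assumptions for illustration only.

```python
import csv

def load_records(path):
    """Read a bibliographic export (CSV with 'doi' and 'title' columns)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# Hypothetical exports from the two databases used in Table 1 (e.g., 180 + 102 = 282).
records = load_records("scopus.csv") + load_records("wos.csv")

# Remove duplicates by DOI, falling back to a normalized title when the DOI is missing.
seen, unique = set(), []
for rec in records:
    key = (rec.get("doi") or rec.get("title", "")).strip().lower()
    if key and key not in seen:
        seen.add(key)
        unique.append(rec)

print(f"{len(records)} retrieved, {len(records) - len(unique)} duplicates, "
      f"{len(unique)} records forwarded to abstract screening")
```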
Table 2. Summary of automated CV-based CPM process, its sub-processes, and techniques with associated advantages and limitations.
Sub-process: Data acquisition (DAQ)

Technique: Unmanned aerial vehicles (UAVs)
Advantages:
  • Help automate progress monitoring
  • Carry laser scanners, digital cameras, and a variety of other sensors onboard
  • Provide visual and detailed progress information
  • Provide better area coverage
  • Provide views from human-inaccessible angles
Limitations:
  • Require proper operation
  • Pose a potential safety hazard
  • Cause distraction on site
  • Require accurate path planning
  • Require obstruction-avoidance planning
  • Rotational motion and sudden angular movements cause motion blur
  • Affected by wind speeds and other environmental anomalies
References: [20,31,57,58,59,60,61,62,63,64,65]

Technique: Handheld devices
Advantages:
  • Widely available in the form of smartphones, tablets, digital cameras, etc.
  • Require no prior planning or preparation
  • Provide close shots of the object under consideration
  • Avoid obstructions and hindrances to a large extent
Limitations:
  • Views, angles, and coverage depend on human accessibility at the worksite
  • Require a large number of manually taken photographs
  • Visual data must cover every nook and cranny of the construction feature under observation
References: [19,33,37,39,40,66,67,68,69]

Technique: Fixed on mounts
Advantages:
  • Provide automated data collection from a constant elevation or view
  • Least affected by varying weather conditions
  • Best suited for long-term data acquisition from the construction environment
  • A reliable source of high-quality visual progress information
  • Enable real-time data acquisition and PCD extraction
Limitations:
  • Limited to a specific view or orientation
  • Minor maintenance requires significant effort (e.g., crane-mounted cameras)
  • Incomplete coverage of the construction site
  • Require a higher number of cameras for efficient data collection
References: [33,41,70,71,72,73,74]

Technique: Surveillance cameras
Advantages:
  • Allow for real-time productivity computation
  • Minimize the need for human intervention
  • A good-quality DVR system enables input from multiple cameras
Limitations:
  • Entail significant storage requirements
  • Varying weather conditions may affect data quality
  • Not suitable for smaller features that require a closer view
References: [32,59,75]
Sub-process: Information retrieval

Technique: SfM
Advantages:
  • Easy to use in a construction environment; requires only a single camera
  • Cost-effective in comparison to lidar
  • Automatically estimates the camera positions between images
Limitations:
  • Less precise than lidar
  • Takes a long time to process larger vision datasets
  • A minimum 60% image overlap is recommended to obtain high-quality results
References: [19,31,33,62,63,73,76,77,78]

Technique: CNN
Advantages:
  • Integrates both classification and detection into its architecture
  • Detects multiple construction features from one image
  • Fast processing enables real-time monitoring applications
Limitations:
  • The training process takes significant time
  • Requires higher computing power than ordinary desktop PCs
  • Does not encode the position and orientation of construction features
References: [28,32,59,65,79,80]

Technique: SVM
Advantages:
  • One of the most powerful classification techniques
  • Can be applied to multi-class classification scenarios
Limitations:
  • Not suitable for larger vision datasets
  • Does not perform well on noisy datasets
References: [29,47,67,68,73,81,82]

Technique: SLAM
Advantages:
  • Enables the reconstruction of a 3D map of a construction scene in real time
  • Does not require GPS for localization
Limitations:
  • Computational complexity grows with larger datasets
  • The heavy computational workload of image processing requires significant time and memory
References: [60,70]

Technique: CC
Advantages:
  • Very fast at computing construction features due to the use of integral images
Limitations:
  • Requires supervised training with sets of positive and negative examples
References: [37,59]

Technique: SURF
Advantages:
  • A fast and robust algorithm that enables real-time applications such as tracking and object recognition
Limitations:
  • Requires clear, noise-free input; does not perform well in low illumination and dark scenes
References: [71,74]

Technique: LoG
Advantages:
  • Useful for detecting edges of various construction features, e.g., concrete structural elements and precast walls
  • Robust to dynamic changes of illumination, viewpoint, camera resolution, and scale
Limitations:
  • Requires vision datasets with uniform regions
  • May not return precise results in real construction environments with varied obstructions and lighting conditions
References: [39,40]

Technique: HoG
Advantages:
  • Outperforms plain edge descriptors because it uses both the magnitude and the angle of the gradient
Limitations:
  • Very sensitive to image rotation and requires carefully prepared input images
  • Higher computation times
References: [81,83]
Sub-process: Progress estimation

Technique: BIM registration
Advantages:
  • Facilitates as-built and as-planned data comparison
  • Enables autonomous data collection and navigation through the construction site by providing a detailed reference model
Limitations:
  • Generally does not work well with partially occluded patches in a 3D point cloud
  • Registration of multiple point clouds still poses technical challenges
References: [33,74,84,85]

Technique: Object recognition/matching
Advantages:
  • Enables feature recognition and matching from object-based models to differentiate between construction features, e.g., concrete, bricks, doors, and windows
Limitations:
  • Various combinations of algorithms are used for recognition, matching, and tracking; no universal method addresses the full variety of construction features at once
References: [32,59,73,79,80,86]
Sub-process: Output visualization

Technique: Color labels
Advantages:
  • Very simple; objects can be labeled easily
  • Volumetric bounding boxes can efficiently depict completed work
Limitations:
  • Currently used labels locate objects only coarsely and lack the precision required for construction applications
References: [31,39,63,65,68,73,78,82]

Technique: AR and VR
Advantages:
  • Enable visual assessment of physical construction progress on the work site
Limitations:
  • Require complex processing for AR registration and lack a real-time visualization function
References: [37,42]

Technique: EVM
Advantages:
  • Depicts actual cost and time performance
  • Enables accurate forecasting
Limitations:
  • Integration of EVM with CV-based CPM is the least explored area in the existing literature (a worked EVM calculation sketch follows this table)
References: [16,25,87]
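Since EVM integration is noted above as the least explored output, a worked illustration may be useful. The following Python sketch applies the standard EVM formulas (PV, EV, SV, CV, SPI, CPI, EAC) to hypothetical figures, with the percent complete assumed to come from a CV-derived as-built vs. as-planned comparison; none of the numbers are from the reviewed studies.

```python
# Minimal EVM sketch: in a CV-based CPM pipeline, the measured percent complete
# would come from comparing the as-built reconstruction with the as-planned BIM.
# All numbers below are hypothetical.

budget_at_completion = 1_000_000   # BAC, total budgeted cost
planned_percent = 0.50             # planned progress at the status date
cv_measured_percent = 0.42         # progress estimated from the vision pipeline
actual_cost = 460_000              # AC, cost actually incurred to date

planned_value = planned_percent * budget_at_completion     # PV = 500,000
earned_value = cv_measured_percent * budget_at_completion  # EV = 420,000

schedule_variance = earned_value - planned_value           # SV = -80,000 (behind schedule)
cost_variance = earned_value - actual_cost                 # cost variance = -40,000 (over budget)
spi = earned_value / planned_value                         # SPI = 0.84
cpi = earned_value / actual_cost                           # CPI ≈ 0.91
estimate_at_completion = budget_at_completion / cpi        # EAC ≈ 1,095,238

print(f"SPI={spi:.2f}, CPI={cpi:.2f}, EAC={estimate_at_completion:,.0f}")
```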
Table 3. Comparison of data acquisition techniques. References are grouped by the reported level of automation.

| Technique | Purpose | Manual | Semi-Automated | Fully Automated |
| --- | --- | --- | --- | --- |
| Unmanned aerial vehicles (UAVs) | Provide efficient, accurate, and quick access to vision datasets from human-inaccessible places. | [31,60,61,62,63] | [57,64,65] | [20,58,69] |
| Handheld devices | Provide a large vision dataset without the need for technical complexity, designated equipment, or multiple visual sensors. | [19,33,37,39,40,67,68,77,78,82] | - | - |
| Fixed on mounts | Provide an accurate and effective vision dataset from short- or long-term observation of a fixed view. | [71,72,73] | [33,41,70] | [74] |
| Surveillance cameras | Provide real-time vision datasets in the form of videos of single or multiple views from the construction environment (a minimal capture sketch follows this table). | - | - | [32,59,75] |
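As an illustration of the fixed-mount and surveillance acquisition modes compared above, the following sketch samples frames from a camera stream at a fixed interval using OpenCV. The stream URL, output folder, and sampling interval are assumptions, not a setup used in the cited works.

```python
import time
from pathlib import Path

import cv2  # OpenCV

STREAM_URL = "rtsp://example-site-camera/stream1"  # hypothetical site camera
OUT_DIR = Path("site_frames")
INTERVAL_S = 300  # capture one frame every 5 minutes

def capture_frames():
    """Periodically grab and store frames for later information retrieval."""
    OUT_DIR.mkdir(exist_ok=True)
    cap = cv2.VideoCapture(STREAM_URL)
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                out_path = OUT_DIR / f"frame_{int(time.time())}.jpg"
                cv2.imwrite(str(out_path), frame)
            time.sleep(INTERVAL_S)
    finally:
        cap.release()

if __name__ == "__main__":
    capture_frames()
```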
Table 4. Comparison of information retrieval techniques. References are grouped by the reported level of automation.

| Technique | Purpose | Manual | Semi-Automated | Fully Automated |
| --- | --- | --- | --- | --- |
| SfM | Reconstructs a 3D model by extracting information from overlapping 2D images. | [19,31,33,62,63,73,76,78] | [77] | - |
| CNN | A deep neural network-based technique for analyzing visual imagery. | - | [59,80] | [28,32,65,79] |
| SVM | A supervised technique used for classification, regression, and edge detection. | - | [29,47,67,68,73,82] | [81] |
| SLAM | A technique used for localization and environment mapping. | [60,70] | - | - |
| CC | A training-dependent classifier that detects the object in question in an image. | [37,59] | - | - |
| SURF | A local feature detector and descriptor used for object recognition tasks. | - | [71,74] | - |
| LoG | A kernel-based technique used to detect edges. | [39,40] | - | - |
| HoG | A feature descriptor used for object detection (a HoG + SVM detection sketch follows this table). | - | [83] | [81] |
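To ground the HoG and SVM rows, the sketch below uses OpenCV's pre-trained pedestrian detector, which combines HoG features with a linear SVM, to mark worker-sized detections on a site photo. It is a generic illustration rather than a construction-specific model from the cited studies; the image path is hypothetical.

```python
import cv2

# HoG features with a pre-trained linear SVM (OpenCV's default pedestrian
# detector), standing in for the HoG/SVM retrieval techniques in Table 4.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("site_photo.jpg")  # hypothetical site image
assert image is not None, "site_photo.jpg not found"

boxes, _weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("site_photo_detections.jpg", image)
print(f"{len(boxes)} worker-sized detections")
```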
Table 5. Comparison of progress estimation techniques. References are grouped by the reported level of automation.

| Technique | Purpose | Manual | Semi-Automated | Fully Automated |
| --- | --- | --- | --- | --- |
| BIM registration | Superimposes the as-built dataset onto the as-planned dataset to measure progress status (a registration sketch follows this table). | [33] | [74,84] | [85] |
| Object recognition/matching | Identifies, recognizes, or matches various construction features from overlaid models. | - | [32,59,73,80,86] | [79] |
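The as-built-to-as-planned registration summarized above can be prototyped with open tooling. The sketch below assumes an as-built point cloud (as_built.ply) from photogrammetry or scanning and a cloud sampled from the as-planned BIM geometry (as_planned.ply), refines their alignment with point-to-point ICP in Open3D, and reports the fitness as a crude overlap indicator; it is an illustration, not the registration method of any cited study.

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: an as-built cloud from SfM/laser scanning and an
# as-planned cloud sampled from the BIM geometry.
as_built = o3d.io.read_point_cloud("as_built.ply")
as_planned = o3d.io.read_point_cloud("as_planned.ply")

# Coarse alignment is assumed to be done already (e.g., via known control
# points); here we only refine it with point-to-point ICP.
threshold = 0.05  # 5 cm maximum correspondence distance
result = o3d.pipelines.registration.registration_icp(
    as_built, as_planned, threshold, np.identity(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("Fitness (share of as-built points matched):", result.fitness)
print("Inlier RMSE:", result.inlier_rmse)
print("Refined transformation:\n", result.transformation)
```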
Table 6. Comparison of output visualization techniques. References are grouped by the reported level of automation.

| Technique | Purpose | Manual | Semi-Automated | Fully Automated |
| --- | --- | --- | --- | --- |
| Color labels | Indicators of varying sizes and shapes that show the outcome of image processing (a labeling sketch follows this table). | - | [31,65,78,82] | [39,63,68,73] |
| AR and VR | Overlay the information retrieved from the as-built vs. as-planned comparison to depict progress status in a virtual environment. | - | [37,42] | - |
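To make the color-label output concrete, the sketch below paints hypothetical element regions on a site photo with traffic-light colors according to an assumed progress status; the regions, statuses, and image path are illustrative only and are not taken from the cited studies.

```python
import cv2

# Hypothetical element regions (x, y, w, h) and their progress status,
# e.g., produced by an earlier detection/BIM-comparison step.
elements = {
    "column_C12": ((120, 80, 60, 300), "complete"),
    "wall_W04":   ((300, 150, 220, 230), "in_progress"),
    "slab_S02":   ((50, 420, 500, 90), "behind_schedule"),
}
colors = {  # BGR traffic-light scheme
    "complete": (0, 200, 0),
    "in_progress": (0, 200, 255),
    "behind_schedule": (0, 0, 255),
}

image = cv2.imread("site_photo.jpg")  # hypothetical input image
assert image is not None, "site_photo.jpg not found"

for name, ((x, y, w, h), status) in elements.items():
    cv2.rectangle(image, (x, y), (x + w, y + h), colors[status], 2)
    cv2.putText(image, f"{name}: {status}", (x, y - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, colors[status], 1)

cv2.imwrite("progress_labels.jpg", image)
```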
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
