# **Towards Smart Tech 4.0 in the Built Environment: Applications of Disruptive Digital Technologies in Smart Cities, Construction, and Real Estate**

Edited by Fahim Ullah

Printed Edition of the Special Issue Published in *Buildings*

www.mdpi.com/journal/buildings


## **Towards Smart Tech 4.0 in the Built Environment: Applications of Disruptive Digital Technologies in Smart Cities, Construction, and Real Estate**

Editor

**Fahim Ullah**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor*: Fahim Ullah, University of Southern Queensland, Australia

*Editorial Office*: MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Buildings* (ISSN 2075-5309) (available at: https://www.mdpi.com/journal/buildings/special_issues/smartech).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-7354-0 (Hbk) ISBN 978-3-0365-7355-7 (PDF)**

Cover image courtesy of Fahim Ullah

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


Key Adoption Factors for Collaborative Technologies and Barriers to Information Management in Construction Supply Chains: A System Dynamics Approach Reprinted from: *Buildings* **2022**, *12*, 766, doi:10.3390/buildings12060766 .............. **169**


## **About the Editor**

### **Fahim Ullah**

Dr. Fahim Ullah is a leading scholar and educator in Construction Project Management, serving as Program Director (Construction) and Senior Lecturer at the University of Southern Queensland (UniSQ). He previously lectured at UniSQ and holds a Doctor of Philosophy from the School of Built Environment at the University of New South Wales (UNSW) in Sydney, Australia. Throughout his academic journey, he has also served as a Casual Lecturer at UNSW, Lead Lecturer at the University of Sydney, and Lecturer at the National University of Sciences and Technology (NUST) in Pakistan, teaching Construction and Project Management. Before entering academia, Fahim gained substantial practical experience as Assistant Manager (Planning) at Bin Nadeem Associates (BNA) and as Planning Engineer at Malik Abdul Hanan and Sons in Pakistan.

Fahim's research interests span Construction Management, Project Management, Smart Cities, Digital Technologies, and Disruptive Innovation, and his contributions to these fields have been recognized through various research grants and awards for exceptional papers. To date, he has published over 80 high-quality research articles on subjects such as construction projects, smart cities, real estate, and property management. He has served as an Editorial Board Member for multiple Q1 journals and is currently editing several Special Issues in Q1 journals on digital disruptions and Industry 5.0 technologies in construction and the wider built environment.

## **Preface to "Towards Smart Tech 4.0 in the Built Environment: Applications of Disruptive Digital Technologies in Smart Cities, Construction, and Real Estate"**

With the recent wave of digital disruption and associated transformations, all fields of science, including the built environment, have been affected. Various smart technologies, in line with Industry 4.0 (SmartTech 4.0), have been introduced to facilitate the digital transformation of the built environment. Consequently, affiliated fields of the built environment, including construction, architecture, urban and regional planning, cities, and the property and real estate industries, have seen the introduction and adoption of many disruptive technologies to keep up with modern smart initiatives and SmartTech 4.0 requirements.

To assist professionals and practitioners in the built environment domains, including construction, architecture, urban and regional planning, cities, and the property and real estate industries, this book presents recent studies on advancements in digital technologies and SmartTech 4.0 in the built environment space. It is expected that academics, construction managers, civil engineers, project managers, city and urban planners, real estate and property managers, architects, IT managers, data scientists, software developers, computer systems analysts, web developers, governance management specialists, and others will equally benefit from the state-of-the-art chapters presented in this book targeting novel, nascent, and emerging technologies in the built environment.

The editor expresses gratitude to MDPI Publishing and the editorial staff for their excellent work, as well as to the reviewers for their dedication to peer review. The University of Southern Queensland (UniSQ) is also acknowledged for providing an enabling research environment. Lastly, the editor extends thanks to the 67 authors who contributed to this book.

> **Fahim Ullah** *Editor*

#### *Editorial*

## **Smart Tech 4.0 in the Built Environment: Applications of Disruptive Digital Technologies in Smart Cities, Construction, and Real Estate**

**Fahim Ullah**

School of Surveying and Built Environment, University of Southern Queensland, Springfield Central, QLD 4300, Australia; fahim.ullah@usq.edu.au

Since the beginning of industrialization, there have been several paradigm shifts initiated through technological revolutions, inventions, and leaps, later named the industrial revolutions. The first industrial revolution focused on mechanization, followed by advancements in electrical energy and its usage (second industrial revolution) and digitalization (third industrial revolution) [1]. The fourth industrial revolution (Industry 4.0) is based on digital transformation and aims to revolutionize industries' manufacturing, distribution, and process improvement [2]. Recently, the focus has been shifting to Industry 5.0, which aims to complement the existing Industry 4.0 approach by putting research and innovation at the service of the transition to a more human-centric, sustainable, and resilient industry. However, fields such as the built environment are lagging behind the technology curve and are yet to realize the true potential of Industry 4.0.

The traditional built environment needs to transform into a smart built environment. Accordingly, it needs a technological transformation aligned with Industry 4.0 requirements. Disruptive digital technologies must be adopted in the built environment and its associated fields, such as construction, cities, real estate, architecture, and urban planning, to achieve this goal of technological transformation. In line with the United Nations Sustainable Development Goals, integrated smart cities, construction, and real estate goals can then be achieved to promote sustainability in the built environment. Such technologies in line with Industry 4.0 requirements (hereafter referred to as Smart Tech 4.0) have been proposed in various built environment fields. To date, more than 20 such technologies have been identified. Some examples include, but are not limited to: Unmanned Aerial Vehicles (UAVs), Artificial Intelligence (AI), Big Data, Internet of Things (IoT), Clouds, 3D Scanning, Laser Scanning, 3D Printing, Wearable Technologies, Wireless Technologies, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), Robotics, Blockchains, Software as a Service (SaaS), Digital Twins, Building Information Modeling (BIM), Machine Learning (ML), Ubiquitous Computing, Mobile Computing, Renewable Energy, Autonomous Vehicles, and 5G Communications [3]. Though huge potential for the adoption and application of such technologies exists in the built environment, research on the adoption and implementation of these smart technologies is currently limited.
To address this gap, this Special Issue (SI) sought contributions from academic researchers and industry professionals, including construction managers, city and urban planners, project managers, civil engineers, IT managers, real estate and property managers, web developers, software developers, architects, governance management specialists, data scientists, computer systems analysts, and others, to submit their scholarly works.

The scholarly contributions in this Special Issue present some important technologies, aspects, and applications of Smart Tech 4.0 in the built environment. In the papers submitted to this SI, researchers have developed a wide range of novel ideas, frameworks, approaches, models, and prototypes that provide valuable resources, guidelines, implications, and adoption approaches for built-environment practitioners, managers, policy-

**Citation:** Ullah, F. Smart Tech 4.0 in the Built Environment: Applications of Disruptive Digital Technologies in Smart Cities, Construction, and Real Estate. *Buildings* **2022**, *12*, 1516. https://doi.org/10.3390/ buildings12101516

Received: 29 August 2022 Accepted: 19 September 2022 Published: 23 September 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

and decision-makers. Overall, this Special Issue compiles diverse but relevant, thought-provoking ideas of Smart Tech 4.0 applications in the built environment. The details of the papers published in this SI are as follows:

The only review paper published in this SI by Gao et al. [4] presented a state-of-the-art review of the applications of wearable devices in construction safety. The authors conducted an extensive bibliometric analysis of the 169 articles published on the topic between 2005 and 2021 using CiteSpace® software. The authors identified 10 research clusters as extremely important for developing construction safety wearables and recommended developing a dynamic platform to integrate such technologies in the future.

Among the technical articles, Munawar et al. [5] investigated infrastructural damage and corrosion in civil engineering structures using UAVs. The authors applied ML to 600 images collected from two case study projects in Victoria, Australia. The authors developed a modified convolutional neural network (CNN)-based architecture utilizing 16 convolution layers and a cycle generative adversarial network (CycleGAN) and achieved more than 98% accuracy in identifying damaged components of the case study projects.

Olowa et al. [6] presented the idea of a web platform, "BIM-enabled Learning Environment (BLE)," to facilitate BIM-enabled education and training. The authors used the Adaptive Structuration Theory (AST) perspective to interpret their findings and suggested integrating information technology applications, including virtual learning environments, collaboration platforms, and BIM applications, to enable BLE.

Baraibar et al. [7] investigated the challenges of implementing BIM for executing underground works. Using an example of the Arnotegi tunnels of the Bilbao Metropolitan Southern Bypass, the authors developed a contractual framework to enable collaboration among stakeholders using BIM. The framework also encouraged advanced uses for integrating the information in the common data environment. Overall, the approach enables improvements in quality decisions and presents the information in an easily interpretable and transparent manner to enhance participants' perceptions.

Pérez et al. [8] optimized fieldwork for digitalizing urban settings using examples of two university campuses in northern Spain. The authors designed solutions generated through 3D models using the Design Science Research (DSR) approach. They used LiDAR ALS point clouds to digitalize the case studies and complemented them with TLS capture and UAV-assisted photogrammetric techniques. The results showed potential for reducing on-site working time.

Ramos-Hurtado et al. [9] proposed deploying an AR tool for construction safety inspections. The authors analyzed building projects' traditional safety inspection processes and associated key performance indicators (KPIs). Accordingly, the authors proposed an AR-based digitalized collective protective equipment (CPE) inspection process followed by methodological recommendation and evaluation of applications.

Aslam et al. [10] identified and ranked landfill sites for municipal solid waste (MSW) using an integrated remote sensing and GIS approach. Using a case study of Faisalabad, Pakistan, the authors developed a framework for identifying, ranking, and selecting MSW disposal sites in nine municipalities of the case study area. They used multiple datasets, including the normalized difference vegetation, water, and built-up indices (NDVI, NDWI, and NDBI), and some physical factors, including roads, water bodies, and the population.

Jahan et al. [11] addressed the integration complexity related to profitability-influencing risk factors (PIRFs) for construction projects using a system dynamics (SD) approach. Using the systems thinking (ST)-based approach to developing the causal loop diagram (CLD), the authors quantified the PIRFs. Field surveys and expert opinions were utilized in the process, and recommendations were presented for field professionals to identify PIRFs, diagnose issues and integrate the impacts of such factors on decision making.

Rajput et al. [12] investigated the impacts of political, social safety, and legal risks, and the host country's attitude towards foreigners, on project performance using a case study of the China Pakistan Economic Corridor (CPEC). A Likert scale was used in the quantitative analyses of data collected through questionnaire surveys, which were empirically investigated using partial least squares structural equation modeling (PLS-SEM). The authors reported a negative relationship between political, social safety, and legal risks and project performance. Social risks increase when there is a negative attitude towards foreigners.

Amin et al. [13] examined the key factors for adopting collaborative technologies and the barriers to information management in construction supply chains of developing countries using an SD approach. The authors identified 60 factors and highlighted three main barriers (top management support, complexity, and trust and cooperation) that impede the adoption of collaborative technologies in construction supply chains.

Jo and Lee [14] focused on the analytical methods for characterizing smart cities' ecological and industrial perspectives. The authors developed a smart SPIN (Spectrum, Penetration, Impact, and Network) model and analyzed the Korean smart city industry's ecology. The results highlighted the interactions of the Korean industrial ecosystem of smart cities with existing industries and uplifting its intelligence and smartness levels.

Shahzad et al. [15] examined the plastic deformation and seismic structural response of a mega-sub controlled structural system (MSCSS). The authors used nonlinear finite element and dynamic analyses to examine the structural behavior using the MSCSS configuration, which improved the mean equivalent plastic strain of columns and beams by 51% and 80%, respectively, and reduced the maximum equivalent plastic strain by 44%. In addition, using varying configurations, the authors achieved a maximum coefficient of variation (COV) of 16% and 32% in the acceleration and displacement responses, respectively.

Ghufran et al. [16] determined the enablers of the circular economy (CE) in the construction industry for its sustainable development. The authors identified 10 key enablers of the CE in the construction industry, established the causality among the factors using interviews, developed a CLD, and simulated it using SD. The authors concluded that policy support and organizational incentives for adopting the CE are critical for its implementation to achieve sustainable development in the construction industry.

Gharaibeh et al. [17] explored the barriers affecting BIM implementation in the Swedish construction industry. The authors extracted 34 barriers through an extensive literature review and then examined the barriers to BIM implementation in the Swedish wood construction industry. They developed a conceptual framework and recommended solutions to overcome the identified barriers.

Calvetti et al. [18] deployed a laboratory circuit-based simulation to enhance electronic process monitoring (EPM) of construction tasks. The simulation was based on 10 common construction activities that involved wearable devices and used ML (accuracy between 92 and 96%) and multivariate statistical analysis (47–76%). The results showed accurate detection of hand motions in manual methods using wearables. More analysis points were recommended for free hand movements, walks, and operation activities.

The final paper in this SI is that of Huang et al. [19]. The authors presented a multicriteria digital evaluation approach to facilitate accessible journeys and move towards inclusive walkable communities in smart cities. They introduced a proximity modeling web application, applied to two case studies, that simulates pedestrian catchments for user-specified destinations based on a crowd-sourced road network and open topographic data. The model considers urban topography, travel speed, time, and visualization modes to accommodate various simulation needs for different urban scenarios, and thus holds promise for urban planners, designers, managers, and health planners.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Ran Gao 1, Bowen Mu 1, Sainan Lyu 2, Hao Wang 1,\* and Chengdong Yi 1**


**\*** Correspondence: hao.wang@cufe.edu.cn; Tel.: +86-135-8185-9827

**Abstract:** Wearable devices, as an emerging technology for collecting safety data on construction sites, are gaining increasing attention from researchers and practitioners. Given the rapid development of wearable devices research and the high application prospects of wearable devices in construction safety, a state-of-the-art review of research and implementations in this field is needed. The aim of this study is to provide an objective and extensive bibliometric analysis of the published articles on wearable applications in construction safety for the period 2005–2021. *CiteSpace* software was used to conduct co-citation analysis, co-occurrence analysis, and cluster identification on 169 identified articles. The results show that 10 research clusters (e.g., attentional failure, brain-computer interface) were extremely important in the development of wearable devices for construction safety. The results highlight the evolution of wearable devices in construction-safety-related research, revealing the underlying structure of this cross-cutting research area. The analysis also summarizes the status quo of wearable devices in the construction safety field and provides a dynamic platform for integrating future applications.

**Keywords:** wearable device; bibliometric analysis; construction safety; *CiteSpace*

### **1. Introduction**

The construction industry has long been regarded as one of the most dangerous industries worldwide [1,2]. It employs approximately 7% of the global workforce but contributes to 30–40% of total fatalities [3]. Among all causes of construction accidents, unsafe behaviors of construction workers are the primary and immediate causes. For instance, one study reported that 88% of accidents are related to unsafe behaviors [4]. To reduce unsafe behaviors and improve the safety performance of construction workers, various measures have been proposed, such as establishing multiple training programs, applying academic knowledge to work sites, and exploring new technologies [5,6]. Wearable devices, which offer a promising solution for construction safety management and risk identification, are increasingly adopted on construction sites [7]. Due to the dynamic and transient nature of construction [8], traditional manual collection of construction safety data is time-intensive [9] and needs to be automated by an effective tool that provides timely information for safety managers to take positive actions. As an emerging technology, wearable devices can potentially realize real-time and accurate safety monitoring [10]. They are products controlled by electronic components and software that can be incorporated into clothing or worn on the body like accessories. Wearable devices collect information through tiny, easily worn sensors [11]. Such non-invasive devices avoid the obvious problems of large and complex physical examination devices [12,13], and provide real-time information interaction with the wearer [7]. Timely monitoring and feedback ensure the effectiveness of the information provision. Automated safety monitoring systems based on wearable devices are another promising avenue of research. The data of construction sites collected through

**Citation:** Gao, R.; Mu, B.; Lyu, S.; Wang, H.; Yi, C. Review of the Application of Wearable Devices in Construction Safety: A Bibliometric Analysis from 2005 to 2021. *Buildings* **2022**, *12*, 344. https://doi.org/ 10.3390/buildings12030344

Academic Editors: Fahim Ullah and Audrius Banaitis

Received: 27 January 2022 Accepted: 9 March 2022 Published: 11 March 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

wearable devices have been evaluated by researchers and practitioners, providing early warnings of the safety risks in construction environments [14–16], physiological signals from construction workers [6,17,18] and automatic recognition of workers' actions [11,19]. Wearable devices have also assisted safety training [20] and accident prevention [21]. For example, the biomechanical gait-stability parameters can prevent falling and colliding accidents, which are common occurrences on construction sites [21,22].

To reveal the possible connections among the studies on wearable-device applications in construction safety, some studies have reviewed past developments and proposed new research trends in this area. For example, Wang et al. (2015) reviewed the available techniques for the risk assessment of work-related musculoskeletal disorders, and summarized the advantages and limitations of wearable-device systems in this theme [23]. Awolusi et al. (2018) reviewed the application of wearable technologies in construction-safety monitoring, and analyzed the relevant safety performance metrics [24]. Ahn et al. (2019) recently reviewed and identified general wearable-sensing technology applications in construction safety and health, and indicated the challenges and future research opportunities for advancing this field [6]. However, these reviews are often qualitative or based on manual processes and researchers' subjective judgement. Such methods may overlook some articles available for review and thus be vulnerable to biases.

This study aims to conduct a comprehensive and objective bibliometric analysis of the research on the application of wearable devices in construction safety from 2005 to 2021 with the help of *CiteSpace* software. This research clusters the applications of wearable devices in construction safety, reviews the whole development framework, and suggests future research trends. Based on the bibliometrics, the study quantitatively summarizes the status quo and establishes the important issues concerning the new technologies of wearable devices in construction safety. The research hotspots are illustrated on visual maps. This study extends traditional literature review methods to carry out a bibliometric analysis to delineate the intellectual structure and quantitatively summarizes the related knowledge in graphical form.

#### **2. Research Method**

#### *2.1. Data Collection*

The data collection consisted of two stages. In Stage 1, a comprehensive search was carried out in an online academic database. This study used the *Scopus* database for the literature search, as it provides comprehensive coverage of the sciences, social sciences, arts, and humanities across journals, books, and conference proceedings, and is sufficiently large for most bibliometric analyses. Articles containing the specific terms in the 'title/abstract/keyword' fields were first retrieved. Five experts were interviewed to provide the key search words for the research topic. Based on the analysis of the interviews, the following search string was used in the 'title/abstract/keyword' fields: ("Wearable devices" OR "Wearable systems" OR "Wearable technology" OR "Wearable sensor") AND ("construction safety"). The search was further refined by limiting the time span to 2005–2021 and the document type to 'article, review and conference paper'. At the same time, the same search string was used in the Web of Science database to find articles not included in the *Scopus* database. A total of 239 articles were identified in this stage.
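The boolean query above can be mirrored locally when post-filtering exported records. As a minimal sketch (the record fields below are a hypothetical stand-in, not the actual *Scopus* export schema):

```python
# Hedged sketch: a local re-implementation of the Stage 1 boolean query.
# The record structure below is illustrative, not the real export schema.
WEARABLE_TERMS = ("wearable devices", "wearable systems",
                  "wearable technology", "wearable sensor")

def matches_query(record):
    """True if title/abstract/keywords contain any wearable term AND
    the phrase 'construction safety', within the 2005-2021 span."""
    text = " ".join([record.get("title", ""),
                     record.get("abstract", ""),
                     " ".join(record.get("keywords", []))]).lower()
    in_span = 2005 <= record.get("year", 0) <= 2021
    return (in_span and "construction safety" in text
            and any(term in text for term in WEARABLE_TERMS))

records = [
    {"title": "A wearable sensor for construction safety", "year": 2019,
     "abstract": "", "keywords": []},
    {"title": "Wearable devices in healthcare", "year": 2019,
     "abstract": "", "keywords": []},
]
hits = [r for r in records if matches_query(r)]
```

Only the first record satisfies both sides of the AND, matching how the string narrows a broad wearable-device literature to construction safety.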

Given that the search results of Stage 1 may include irrelevant papers that contain the search keywords but do not actually focus on wearable devices in construction safety, Stage 2 was conducted to eliminate the irrelevant literature. To ensure the accuracy and relevance of the data, the titles and abstracts of all 239 articles retrieved in Stage 1 were carefully scrutinized by the authors. During this process, two authors examined these articles independently, and their screening results were compared and consolidated. After removing 70 irrelevant articles, 169 papers were retained as the basis of the bibliometric analysis. The bibliographic records of the 169 identified articles were downloaded from the database, including the article title, article type, a list of authors, a set of keywords, the abstract, the journal name, publication year, volume, issue number, number of citations, and a list of the cited references. These bibliographic records were further standardized (e.g., correcting the different spellings of authors, journals, or keywords) through manual checking for bibliometric analysis.
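The standardization step described above amounts to collapsing spelling variants to one canonical form before counting. A minimal sketch, with an illustrative variant map (the entries are not taken from the study's data):

```python
# Hedged sketch of bibliographic-record standardization: map spelling
# variants of a journal (or author) name to a single canonical form.
# The variant map is illustrative only.
CANONICAL = {
    "autom constr": "Automation in Construction",
    "automation in construction": "Automation in Construction",
    "j constr eng manage": "Journal of Construction Engineering and Management",
}

def standardize(name):
    # Normalize case, strip punctuation and extra whitespace, then map;
    # unknown names pass through unchanged.
    key = " ".join(name.lower().replace(".", " ").split())
    return CANONICAL.get(key, name.strip())
```

Without this pass, "Autom. Constr." and "Automation in Construction" would be counted as two different sources and distort the co-citation and frequency statistics.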

#### *2.2. Bibliometric Analysis*

The term 'bibliometrics' was first introduced as "the application of mathematical and statistical methods to books and other means of communication" by Pritchard (p. 349) [25]. Bibliometric analysis is a quantitative statistical analysis of the literature [26], which has been widely used for identifying the relationships among authors, institutions, research directions, and other variables [27–29]. A bibliometric analysis can be realized through visualization tools to identify emerging trends and knowledge structures in a specific research field. The results are often presented intuitively in visual maps. The present study focuses on author co-citation analysis, keyword co-occurrence networks, and cluster identification. These techniques are advantageous over the conventional manual review method.

Co-citation analysis measures the semantic similarity among documents, authors, or journals by computing their co-citation relationships. A co-citation relationship defines the frequency at which two items are cited together [30]. Co-citation analysis assumes that when two items are commonly cited together, their contents are relevant to each other. Co-citation analysis can be performed on documents, authors, or journals, connecting the cited documents, authors, or journals that researchers consider valuable and interesting. The present study conducts an author co-citation analysis, which identifies the relationships among authors whose publications are cited in the same literature. More specifically, an author co-citation analysis identifies and visualizes the knowledge structure of a specialist research area by counting the co-citation frequency of two authors' publications among the reference lists of the cited literature [31].
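The counting underlying author co-citation is simple: each citing paper contributes one co-citation for every unordered pair of authors appearing together in its reference list. A minimal sketch with illustrative reference lists (the author names are placeholders, not data from the review):

```python
from collections import Counter
from itertools import combinations

# Hedged sketch: author co-citation counts. Each citing paper's
# reference list yields one co-citation per unordered author pair.
# The reference lists below are illustrative placeholders.
reference_lists = [
    ["Cheng", "Li", "Teizer"],
    ["Cheng", "Teizer"],
    ["Li", "Park"],
]

cocite = Counter()
for refs in reference_lists:
    # sorted(set(...)) makes pairs unordered and ignores duplicate refs
    for a, b in combinations(sorted(set(refs)), 2):
        cocite[(a, b)] += 1
```

Here ("Cheng", "Teizer") reaches a count of 2 because both authors are cited together by two papers; high pair counts are what the co-citation network visualizes as strong links.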

Keywords represent the core content of a research article. A keyword co-occurrence network constructs and maps the knowledge domain of a particular area over a specific time span. This method acknowledges that when keywords frequently co-occur in publications, their underlying ideas are closely associated [32]. A keyword co-occurrence network constructs a similarity measure from the literature contents themselves, rather than linking the literature indirectly through citations. In the present analysis, the bibliometric results for wearable devices in construction safety are presented as a keyword network, which identifies the keywords that co-occur in at least two different articles in a given time span. High-frequency keywords are recognized as indicators of research hotspots or directions over a specified period.
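The "co-occur in at least two different articles" threshold can be sketched directly: count keyword pairs per article, then keep only pairs crossing the threshold as network edges. The keyword sets below are illustrative, not drawn from the 169 articles:

```python
from collections import Counter
from itertools import combinations

# Hedged sketch: keyword co-occurrence counts with the >= 2 articles
# threshold described in the text. Keyword sets are illustrative.
article_keywords = [
    {"wearable device", "construction safety", "sensor"},
    {"wearable device", "construction safety"},
    {"sensor", "fatigue"},
]

pair_counts = Counter()
for kws in article_keywords:
    for pair in combinations(sorted(kws), 2):
        pair_counts[pair] += 1

# An edge enters the network only if the pair co-occurs in >= 2 articles.
edges = {pair: n for pair, n in pair_counts.items() if n >= 2}
```

Only ("construction safety", "wearable device") survives the cut here; singleton co-occurrences such as ("fatigue", "sensor") are filtered out as noise.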

Cluster analysis is commonly applied in knowledge discovery, which identifies the profound themes hidden in the textual data [33]. Cluster analysis categorizes a mass of data into different units with common relevancy of terms, which identifies the research topics and their interrelation within a research domain. In cluster analysis, the homogeneity or consistency of clusters is evaluated from the mean silhouette of the network [34]. When the silhouette value is 1, the clusters in the network are completely separated. Research trends can also be effectively analyzed by cluster analysis [35].
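The silhouette measure used above to judge cluster homogeneity has a compact definition: for each item, a is its mean distance to its own cluster, b its mean distance to the nearest other cluster, and s = (b - a) / max(a, b). A hedged pure-Python sketch on illustrative 1-D data (not the actual bibliometric network):

```python
# Hedged sketch: mean silhouette over a clustering of 1-D points.
# Values near 1 indicate well-separated clusters; the data are
# illustrative, not the study's network.
def mean_silhouette(clusters):
    scores = []
    for ci, cluster in enumerate(clusters):
        for i, x in enumerate(cluster):
            # a: mean distance to the other members of x's own cluster
            same = [y for j, y in enumerate(cluster) if j != i]
            a = sum(abs(x - y) for y in same) / len(same) if same else 0.0
            # b: mean distance to the nearest other cluster
            b = min(
                sum(abs(x - y) for y in other) / len(other)
                for cj, other in enumerate(clusters) if cj != ci
            )
            scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)
```

Two tight, distant clusters score near 1 (the "completely separated" case mentioned above), while overlapping clusters pull the mean silhouette down.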

At present, there are many widely used bibliometric tools, such as *CiteSpace*, *VOSViewer*, and *HistCite*. In terms of cluster analysis, *VOSViewer* does not have as many algorithms as *CiteSpace* to extract cluster labels. *HistCite* is relatively simple to operate, but its graphical presentation is not as rich as *CiteSpace*'s. *CiteSpace* software has all the functions mentioned above, as well as time-slicing technology, which supports a more intuitive presentation of time series in network analysis for systematic review [36]. *CiteSpace* is a Java application for structural and temporal analyses of various networks derived from the academic literature and has been optimized several times in recent years to improve its functionality and practicability [27]. It supports networks with hybrid node types (e.g., institutions and countries) and hybrid link types (e.g., co-citations and co-occurrence) [29,37]. *CiteSpace* can also detect trends and citation bursts of academic papers by calculating publication indicators [34].

Therefore, version 5.8R3 of the *CiteSpace* software was chosen as the bibliometric tool to conduct a comprehensive analysis.

#### **3. Results**

#### *3.1. Overview of Research*

Tables 1 and 2 illustrate the trends of the identified articles on wearable devices in construction safety by country, year, and journal/conference. The data in Table 1 are derived from the article list filtering function of the *Scopus* database. In terms of geographic distribution, the United States contributes the most to the literature on wearable devices in construction safety (N = 88, P = 42.1%), far more than the second-ranked Hong Kong (N = 31, P = 14.8%) and the third-ranked Mainland China (N = 23, P = 11.0%) (see Table 1).

**Table 1.** Main research origin of papers published.


Countries or regions with four or more papers are counted in the table.

**Table 2.** The publishing year of journals/conference proceedings contributing to the area of wearable devices in construction safety.


Journals and conference proceedings with fewer than four papers were classified as Others.

As shown in Table 2, *Automation in Construction* and *Journal of Construction Engineering and Management* have published the most articles, at 36 (21.3%) and 18 (10.7%) of the 169 identified articles, respectively. The papers published in *Automation in Construction* significantly outnumber those in other journals. The total number of publications on this topic has increased year by year. Until 2016, fewer than 10 articles were published per year, but from 2017 to 2021 the annual output roughly tripled, e.g., in 2021 (N = 31). This indicates that the application of wearable devices to construction safety has attracted increasing interest from researchers and practitioners.

Table 3 lists the top 10 cited papers on wearable devices in construction safety. Six papers were published in *Automation in Construction* and two in *Applied Ergonomics*. The remaining two were published in *Journal of Construction Engineering and Management* and *Journal of Computing in Civil Engineering*. In terms of research content, many of them focus on construction workers' posture and activity, including three concerning work-related musculoskeletal disorders (WMSDs) [23,38,39] and three concentrating on ergonomic analysis, fall detection, and activity recognition [19,40,41]. In these studies, inertial measurement units (IMUs), accelerometers, gyroscopes, and linear accelerometers were the most commonly used wearable sensors. The remaining papers focus on workers' fatigue or stress levels, collecting physiological data from construction workers via heart rate, body surface temperature, and EEG [18,42].


**Table 3.** Top 10 cited articles on wearable devices in construction safety.

#### *3.2. Co-Authorship Analysis*

Co-authorship analysis can identify the main researchers and research communities in this field. Figure 1 depicts the co-authorship network generated from the literature data and visualized by *CiteSpace*. The 212 nodes represent the authors in the cited literature, and the 340 links represent their co-authorship relationships. The colors of the links represent different ranges of years (gray, blue, green, yellow, orange, and red), ranging from light to dark and corresponding to the years 2005 to 2021, as shown in Figure 1. A high 'count' parameter indicates a strong influence of an author in the field. As shown in Figure 1, the larger the 'count' parameter, the larger the author's name label, e.g., *Heng Li* (Hong Kong, count = 19), *Houtan Jebelli* (USA, count = 15), *SangHyun Lee* (USA, count = 13), *Antwi-Afari Maxwell Fordjour* (United Kingdom, count = 10), *Changbum Ryan Ahn* (USA, count = 9), *Jiayu Chen* (Hong Kong, count = 8), *Chukwuma Nnaji* (USA, count = 8), *Kanghyeok Yang* (South Korea, count = 7), *Byungjoo Choi* (South Korea, count = 7), and *Ibukun Gabriel Awolusi* (USA, count = 7).

**Figure 1.** Co-authorship network of wearable devices in construction safety.

When the links form a closed loop, the linked authors share a strong interaction relationship, such as the loop of *SangHyun Lee*, *Houtan Jebelli*, *Yizhi Liu*, and *Mahmoud Habibnezhad*. In addition, multiple research communities can be identified through these closed loops, and productive authors can be found within them. For example, *Heng Li* and *Antwi-Afari Maxwell Fordjour* are the two crucial authors of a research community including *Waleed Umer*, *Shahnawaz Anwer*, *Arnold Wong*, etc., and *Jiayu Chen* is the crucial author of a research community consisting of *Di Wang*, *Dong Zhao*, *Fei Dai*, etc.

In graph theory, a node with high betweenness centrality usually occupies a crucial position in the network. The top five authors by this measure were *Heng Li* (centrality = 0.05), *JoonOh Seo* (centrality = 0.03), *Jiayu Chen* (centrality = 0.02), *Cenfei Sun* (centrality = 0.02), and *SangHyun Lee* (centrality = 0.01). An author with a high count and a high betweenness centrality in Figure 1 is most likely to lead the research field of wearable devices in construction safety. Combining these metrics of the main researchers with the links between research communities in Figure 1, we can further explore the research direction of specific communities and find the most influential articles based on node information.
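For readers unfamiliar with the metric, betweenness centrality counts the shortest paths that pass through a node. The following sketch implements Brandes' algorithm for unweighted graphs on an invented toy co-authorship graph (not data from this study), where "B" bridges three otherwise unconnected authors.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm: unnormalized betweenness for an undirected graph."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                                   # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                   # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected graph: each pair of endpoints is counted twice.
    return {v: c / 2 for v, c in bc.items()}

# Toy graph: "B" lies on every shortest path between "A", "C", and "D".
g = {"A": ["B"], "C": ["B"], "B": ["A", "C", "D"], "D": ["B"]}
bc_scores = betweenness(g)
```

In tools such as *CiteSpace* the raw scores are normalized to [0, 1], which is why the centrality values reported above are small fractions rather than path counts.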

#### *3.3. Keyword Co-Occurrence Network*

Figure 2 shows an overview of the keyword co-occurrence network with 379 nodes generated from the dataset. Each node represents one keyword term specified in the articles. Table 4 lists the top 28 terms (frequency > 10), with a total of 752 co-occurrences accounting for 47.9% of all keyword frequencies.

**Figure 2.** Keyword co-occurrence network of wearable devices in construction safety.

**Table 4.** Occurrence frequencies of specified keywords in the literature of wearable devices in construction safety.


According to Figure 2 and Table 4, occupational risk is the most frequent keyword, appearing 77 times, revealing that most studies are inspired by the occupational injuries suffered by workers in the construction industry. The second most frequent keyword is wearable technology (73 times), showing that wearable technology is the main research focus in this field. Construction workers, construction industry, wearable sensor, and construction safety are also mentioned frequently, at 66, 54, 47, and 40 times, respectively. These terms constitute the background and objectives of the construction safety research domain. Most of the remaining keywords appear fewer than 40 times. Some of these lower-frequency keywords refer to specific wearable technologies, such as electroencephalography (16 times), inertial measurement unit (13 times), and heart (generally referring to heart rate, 12 times). Others describe methods of analyzing data collected by wearable devices, e.g., ergonomics (19 times), machine learning (15 times), and physiological model (11 times). In addition, some keywords have high betweenness centrality, such as construction worker (centrality = 0.17), risk assessment (centrality = 0.17), accident prevention (centrality = 0.15), construction worker (centrality = 0.14), and health (centrality = 0.13). These keywords constitute different research topics and are interrelated. Their co-occurrences reflect the major research interests of wearable devices in construction safety.

#### *3.4. Cluster Identification*

Knowledge domains can be identified and presented as clusters by the bibliometric review method based on information from the relevant articles. *CiteSpace* extracts terms from the titles, keywords, or abstracts of the literature as the text resource, and the calculation is then carried out after setting parameters such as node type and selection criteria. *CiteSpace* provides three assessment measures for extracting cluster labels from the titles or abstracts of cited references: *Latent Semantic Indexing* (LSI), *Log-Likelihood Ratio* (LLR), and *Mutual Information* (MI) [44]. In this paper, LLR, recommended by the software author, was chosen as the algorithm; it calculates a p-value based on the likelihood ratio, or compares the statistic with a critical value, to decide whether to reject the null model, thereby obtaining the cluster label with optimal confidence. Figure 3 illustrates a cluster view of the knowledge domains of wearable devices in construction safety, generated by the LLR algorithm. The modularity score of the network is 0.6857. As this score lies between 0.4 and 0.8, the clustering is deemed acceptable. The weighted mean silhouette metric measures the average homogeneity of a cluster [45]. When cluster sizes are similar, a higher weighted mean silhouette indicates better consistency of the clusters [46]. Therefore, the weighted mean silhouette score of 0.8447 indicates that the consistency of cluster members is sufficient. The cluster IDs range from 0 (largest) to 9 (smallest). The size and quality of each cluster are determined by the number of papers assigned to the cluster and by its silhouette value, respectively. In Table 5, the mean silhouette of each cluster exceeds 0.6, confirming an acceptable level of clustering validity. The hybrid node network is composed of 379 nodes and 1430 links.
The 10 major knowledge clusters are attentional failure (#0), brain-computer interface (#1), activity tracking (#2), industrial work safety (#3), corporate clothing (#4), construction site (#5), accelerometer-based activity recognition (#6), intelligent monitoring (#7), building site (#8), and wearable wireless identification (#9). The next section will discuss these clusters in detail.
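The LLR labeling step can be illustrated with a small sketch of Dunning's log-likelihood ratio statistic (G²) for a 2×2 term-by-cluster contingency table. The counts below are invented, and this is a simplified stand-in for *CiteSpace*'s actual implementation: a candidate label term scores high when it is concentrated in one cluster, and near zero when it is spread in proportion to cluster size.

```python
from math import log

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio (G^2) for a 2x2 term/cluster table.

    k11: occurrences of the term inside the cluster
    k12: occurrences of other terms inside the cluster
    k21: occurrences of the term outside the cluster
    k22: occurrences of other terms outside the cluster
    """
    def entropy(*counts):
        total = sum(counts)
        return sum(k * log(k / total) for k in counts if k > 0)
    return 2 * (entropy(k11, k12, k21, k22)
                - entropy(k11 + k12, k21 + k22)    # row totals
                - entropy(k11 + k21, k12 + k22))   # column totals

# A term concentrated in one cluster scores high...
concentrated = llr(30, 70, 5, 895)
# ...while a term spread exactly in proportion to cluster size scores ~0.
spread = llr(10, 90, 90, 810)
```

Among all candidate terms for a cluster, the one with the highest G² (equivalently, the smallest p-value against the independence null model) is chosen as the label, which is what "optimal confidence" means above.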




**Table 5.** *Cont.*

**Figure 3.** Cluster view of knowledge domains for wearable devices in construction safety.

Figure 4 shows the timeline view of the network. Each horizontal line represents one cluster, and the size of each ring represents the centrality of a node. The curved lines represent the relationships between the clusters and the authors. Unlike the cluster view in Figure 3, the timeline view in Figure 4 shows the temporal evolution patterns of the 10 clusters. Specifically, Figure 4 reveals that keywords in cluster #1 (brain-computer interface) and cluster #7 (intelligent monitoring) cover the longest time range, with relevant keywords appearing from 2005 to 2021. In addition, cluster #0 (attentional failure), cluster #3 (industrial work safety), cluster #5 (construction site), cluster #6 (accelerometer-based activity recognition), and cluster #9 (wearable wireless identification) all emerged after 2010. Cluster #4 (corporate clothing) and cluster #5 (construction site) contain some central keywords, making their tree-ring circles in the figure larger. Moreover, red links are mostly distributed in clusters #0–#3, indicating that the research hotspots of the last five years are concentrated in these clusters.

**Figure 4.** Timeline view of the co-occurrence network cluster of keywords.

#### **4. Discussion**

Research on wearable devices in construction safety has focused on technologies and applications. Three clusters—brain-computer interface (#1), accelerometer-based activity recognition (#6), and wearable wireless identification (#9)—fall into the technology category, which encompasses the basic functions of wearable devices and sensors. Most of the labels in this category have obvious technical attributes. Accelerometer-based activity recognition, for example, is commonly employed to collect worker activity data (e.g., body posture, acceleration, and walking steps) in construction safety. The remaining clusters—attentional failure (#0), activity tracking (#2), corporate clothing (#4), and intelligent monitoring (#7)—are categorized as applications. Most of the works in these clusters employ existing wearable technologies in novel assessment systems for construction risk (e.g., worker stress, worker falls and collision damage, and other relevant occupational disease risks). In addition, cluster #3 (industrial work safety), cluster #5 (construction site), and cluster #8 (building site) illustrate the application scenarios of wearable technology. The clustering results effectively identify the emerging research hotspots in this domain.

#### *4.1. Cluster #0 (Attentional Failure)*

The most significant cluster is cluster #0 (attentional failure). The construction industry is labor-intensive and necessarily involves repetitive manual labor [81]. Highly physically demanding activities increase the risk of physical fatigue [43], which increases the likelihood of attentional failure and tends to have adverse consequences for construction workers [82]. The most common construction accidents are usually related to equipment operation, and attentional failure is the leading cause of equipment operator error [83,84]. Using eye-tracking technology, workers' attention allocation, mental fatigue, and hazard detection abilities can be evaluated [84,85]. Eye-tracking devices can also measure metrics of visual search patterns (e.g., fixation count, fixation rate, fixation spatial density, and fixation time [86]) to determine workers' hazard perception in empirical investigations [87]. Meanwhile, based on computer vision technology, data from eye-tracking devices can be uploaded to a 3D point cloud to build a training environment, which allows further analysis of workers' attention distribution [20]. In addition, Jebelli et al. (2019) reported that the physical state of workers is measurable [17]. As fatigue is mainly related to work intensity, it can also be measured in terms of physical demands [88]. In recent years, obtaining workers' physical demand levels through physiological signals has become a common research path. Jebelli et al. (2019) revealed that the physical demand levels and stress states of workers are important considerations in a construction environment, and that physical demand on the construction site can be detected by wearable devices [17]. Aryal et al. (2017) monitored physical fatigue by wearable devices and subjectively collected the fatigue level using Borg's Rate of Perceived Exertion scale [42]. Li and Gerber (2012) non-intrusively evaluated the physiological load of construction workers using wearable sensors and found that heart rate was sensitive to rest breaks during the construction test [89]. Gatti et al. (2012) related the physical strain measured by wearable devices to construction productivity and identified heart rate as a significant predictor with a strong parabolic relationship to productivity [90].

#### *4.2. Cluster #1 (Brain-Computer Interface)*

The second most significant cluster is cluster #1 (brain-computer interface). This cluster label refers to the exchange of information between the brain and a device, and the main way to achieve this in the construction safety research community is through wearable electroencephalography (EEG) devices. At the application stage, it has proved feasible to identify workers' stress status from brain waves. For example, wearable EEG devices can assess the mental workload, attention, and vigilance of workers [78]. EEG captures the electrical activity of firing neurons in the brain [91], and hence the mental statuses (e.g., emotional states) of construction workers [18]. This widely used technique assesses individuals' stress by analyzing their brain waves [18]. The attention levels of construction workers can also be effectively monitored by wearable EEG systems [92]. EEG rapidly indicates any changes in workers' mental statuses. However, acquiring high-quality EEG signals is more challenging than collecting other physiological indicators, because the signals are disturbed by involuntary actions such as eye blinking. Previous studies have also shown that displaying images of construction hazards in a laboratory environment can lead to information distortion, as these images do not affect the pupil or brain as much as real-life scenes do [93]. Therefore, the hazard recognition process can be approximated by simulating construction hazard sites with virtual reality (VR) technology and collecting data through wearable EEG in the VR environment [94]. Jebelli et al. (2019) found that stress is less accurately recognized by EEG than by physiological signals collected by a wristband-type sensor [67]. Additionally, wristband devices can measure workers' physical demands. Wearable devices equipped with photoplethysmography sensors can monitor a worker's heart rate [95].
Besides, human-robot collaboration can be achieved through a brain-computer interface (BCI) [96]. Liu et al. proposed a BCI-based system that can control collaborative construction robots with 90% accuracy using EEG signals [56]. This technology has the potential to improve productivity and help workers avoid hazardous working conditions.

#### *4.3. Cluster #2 (Activity Tracking) and Cluster #6 (Accelerometer-Based Activity Recognition)*

Cluster #2 (activity tracking) and cluster #6 (accelerometer-based activity recognition) represent a similar research topic. Construction workers repeat movements such as lifting, squatting, walking, and even turning screws and swinging tools many times. Therefore, recognizing workers' movements or behavior patterns is the first step in detecting abnormal situations. Koskimaki et al. (2009) identified these movements with an accelerometer and a gyroscope (angular speed) with 88.2% accuracy. Building on the identification of worker postures and activities, many researchers have studied work-related musculoskeletal disorders (WMSDs) in recent years [19,23,38,39]. According to relevant studies, falling from height is among the most common accidents in the construction industry [97] and is strongly associated with loss of balance [21]. Previous empirical research on falling-risk assessment has shown that wearable inertial measurement units (WIMUs) effectively gather data on workers' body responses (such as balance and gait) [12,21,98]. For example, Umer et al. (2018) detected task-induced changes in the static balance of construction workers equipped with WIMUs [99]. In addition, systems composed of multiple sensors, such as the multi-parameter monitoring wearable sensor (MPMWS), are widely used in the analysis of workers' trunk posture [100]. However, these devices need to be placed at multiple locations on the worker's body, which causes mobility inconvenience. It is worth noting that some researchers have devoted themselves to developing less invasive wearable measurement devices in recent years; for example, a wearable insole system identifies falling risk with higher accuracy than previous wearable inertial devices [48,101]. The wearable insole pressure system provides more substantial safety gait metrics than the WIMU system and extends the current wearable technologies for construction safety [21,48]. Under laboratory conditions, the built-in sensors of smartphones have been proven to recognize workers' postures effectively [16,19,102]. According to previous studies, accelerometers are usually placed at the waist or back [38,103,104]. By contrast, wristband-type activity trackers offer higher flexibility and lower hardware costs [11]. Therefore, future research is likely to focus on the portability and accuracy of wearable devices.

#### *4.4. Cluster #5 (Construction Site)*

It is worth noting that cluster #5 (construction site) has two alternative labels ("wearable biosensor" and "physical demand"). Most of these studies are based on wearable sensors that measure workers' physiological states. The measurement and collection of safety data are essential for safety monitoring in the construction industry. As shown in Figure 4, there are three large tree-ring circles in the timeline of cluster #5 (construction site), indicating that keywords in this cluster were widely cited by articles of the construction safety research community. Wearable technologies applied in other sectors can monitor and measure a wide variety of safety performance metrics within this industry [24]. In addition to the EEG devices mentioned in cluster #1, Guo et al. (2017) found that workers' physical data (heart rate, skin temperature, calorie consumption, etc.) could indirectly measure their psychological status [76]. Pillsbury et al. also effectively assessed the physical and health status of workers by measuring heart rate, respiration rate, and core temperature through physiological status monitors [61]. Furthermore, upper body posture angle, traveling speed, and acceleration have been added to the set of physiological metrics [105]. These case studies demonstrate the practical effectiveness of safety monitoring based on the various physiological indicators collected by wearable biosensors.

#### *4.5. Relationships between Clusters*

The remaining clusters represent specific techniques and knowledge domains in construction safety research. For example, cluster #4 (corporate clothing) illustrates the application potential of textile technology in wearable devices; cluster #7 (intelligent monitoring) summarizes the prospects of intelligent and automatic monitoring for construction safety; and cluster #3 (industrial work safety), cluster #5 (construction site), and cluster #8 (building site) echo the application scenarios of wearable technology in this review. From the above discussion, these cluster labels represent their respective knowledge domains well. In addition, different research directions may use the same wearable devices, which means that a shared database for the construction safety field could potentially be established. At the same time, further development of wearable technology will continue to open up new application scenarios for this field.

#### **5. Conclusions**

This paper provides an objective and accurate bibliometric analysis of wearable applications in the field of construction safety. The analysis was based on selected papers published between 2005 and 2021. Many key areas were identified by keyword co-occurrence analysis, such as ergonomics, electroencephalography, and inertial measurement unit. Ten knowledge clusters were identified: attentional failure, brain-computer interface, activity tracking, industrial work safety, corporate clothing, construction site, accelerometer-based activity recognition, intelligent monitoring, building site, and wearable wireless identification.

Through this systematic and quantitative bibliometric analysis, we can clearly visualize and explain the knowledge clusters and the research frontier of wearable devices in construction safety. The present work highlights the developments and trends in this research domain and provides a clear perspective based on comprehensive data and statistical analysis. These developments have been summarized by information maps and statistical descriptions. In future work, the performance of wearable devices should be further improved to reduce monitoring bias and to create low-cost systems with potential for commercial promotion. Future construction safety might also employ integrated wearable sensors for multi-parameter monitoring; indeed, designing an integrated multi-functional wearable system is another developmental trend. It is worth noting that some wearable technologies have been available in other industries for years but have only recently been applied to construction safety. Further research could examine whether mature equipment from other industries can be adapted to scenarios in construction safety.

Although the relevant literature has been carefully collected and analyzed, this research has several limitations. Although this paper screened the literature from the *Scopus* and Web of Science databases, a manual review is inevitably subjective. At the same time, owing to the limitations of the software algorithm, the discussion is based on the 10 identified clusters, which may omit some relevant knowledge fields; significant contributions could be overlooked as a result of this incomplete coverage. In addition, some literature might have been missed when searching by keywords. Therefore, the results may not completely cover the entire literature on wearable devices in construction safety. Future studies should address these limitations by utilizing various databases and broadening data sources to collect and review the literature.

**Author Contributions:** Conceptualization, R.G.; methodology, R.G. and S.L.; software, B.M.; validation, H.W., S.L. and C.Y.; writing—original draft preparation, B.M. and R.G.; writing—review and editing, R.G., H.W., S.L. and C.Y.; visualization, B.M.; supervision, R.G.; funding acquisition, R.G., H.W. and C.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Humanities and Social Science Fund of the Ministry of Education of China, grant number 20YJC630023, 19YJCZH154 and 21YJAZH104; National Natural Science Foundation of China, grant number 72074240 and 72174220; Program for Innovation Research in Central University of Finance and Economics; First Class Academic Discipline Construction Project of Central University of Finance and Economics, grant number 20190807-8.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors gratefully acknowledge the School of Management Science and Engineering at Central University of Finance and Economics for providing technical support to conduct this research. The authors also acknowledge the anonymous reviewers for their comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Proposal for the Deployment of an Augmented Reality Tool for Construction Safety Inspection**

**Jorge Ramos-Hurtado 1,2, Felipe Muñoz-La Rivera 1,3,4,\*, Javier Mora-Serrano 1,3, Arnaud Deraemaeker <sup>2</sup> and Ignacio Valero 1,3**


**Abstract:** The construction site is a hazardous place. The dynamic, complex interaction between workers, machinery, and the environment leads to dangerous risks. In response to such risks, the goal is to fulfill the zero accidents philosophy, which requires the development of safety skills among workers and the provision of tools for risk prevention. In pursuit of that vision, this work studies collective protective equipment (CPE). Traditional methodologies propose visual inspections using checklists, the effectiveness of which depends on the quality of the inspection by the safety advisor (SA). This paper analyses the traditional process of safety inspections in building projects: the traditional methods, main pain points, and bottlenecks are identified, along with the key performance indicators (KPIs) needed to complete these processes correctly. Because of this, a methodology that digitises the CPE inspection process is proposed. Augmented reality (AR) is used as a 3D viewer with an intuitive interface for the SA, and, accordingly, functional requirements are detailed and different information layers and user interfaces for AR applications are proposed. In addition, the workflow and KPIs are shown. To demonstrate the feasibility of the proposal, a proof of concept is developed and evaluated. The relevance of this work lies in providing background for the use of AR in safety inspection processes on construction sites and in offering methodological recommendations for the development and evaluation of these applications.

**Keywords:** occupational risk prevention (ORP); collective protection elements (CPE); construction safety inspection; augmented reality; building information modeling (BIM); construction industry

#### **1. Introduction**

A construction site is a hazardous setting that poses numerous risks to workers [1,2], such as the use of heavy machinery, dangerous tools, and large materials. In addition, the interactions of different work teams on a construction site generate a complex and broad diversity of potential scenarios that are challenging to coordinate safely, because each group has its own specific, and often conflicting, objectives [3]. Each professional has very different tasks and goals to achieve, but all of them share the same space, so they affect the work of others, cross safety zones, or break into the wrong spaces [4]. Taken together, these elements make the Architecture, Engineering, Construction, and Operation (AECO) industry one of the most dangerous industries in the world, as indicated by its having one of the highest global accident rates.

**Citation:** Ramos-Hurtado, J.; Muñoz-La Rivera, F.; Mora-Serrano, J.; Deraemaeker, A.; Valero, I. Proposal for the Deployment of an Augmented Reality Tool for Construction Safety Inspection. *Buildings* **2022**, *12*, 500. https:// doi.org/10.3390/buildings12040500

Academic Editor: Fahim Ullah

Received: 23 February 2022 Accepted: 15 April 2022 Published: 17 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Health and safety is therefore a relevant discipline for the construction sector. This discipline is responsible for the design and implementation of occupational risk mitigation and prevention plans to avoid accidents in the workplace and protect the welfare of workers. Different measures can be applied. Proactive construction safety management measures are deployed at different temporal stages and organisational levels. They consider a broad time frame of action, from design onwards: influencing the best decisions, avoiding the transfer of risk to contractors during construction, identifying site risks, and generating design solutions that reduce or avoid hazardous work environments. In addition, the identification and communication of unavoidable risk areas enables the planning and development of accident mitigation plans and prevention measures. In this sense, raising genuine awareness among workers is key to forming a culture of safety. Accordingly, safety plans must be implemented at the construction site. These safety measures are reflected in workers using personal protective equipment (PPE) and engaging in safe behaviours (as a result of training and safety culture), and in collective protection elements (CPE) being correctly placed throughout the construction site. Health and safety on-site is a field with ample room for improvement, and thus there is a central focus on accelerating the improvement of measures and practices in construction enterprises in order to protect workers from potential injuries [5].

CPE are all those protection elements that are situated in risk areas and protect groups of workers. CPE are considered 'of passive operation' because they do not require the user's action (unlike personal protective equipment, for which the worker is responsible for putting on the necessary equipment). Thus, the layout of CPE on the worksite involves certain challenges, primarily associated with the changing worksite environment. During construction, the configuration of the worksite changes every day; therefore, the collective protection measures must adapt to these new configurations and the different tasks to be performed. To meet such challenges, safety managers design the protective measures associated with each task and place them at different locations on the site. On top of this, it is also common to find unprotected areas, either because protections have been wrongly removed (through ignorance or carelessness) or because they have not been put in place in time. Therefore, regular inspection of the presence of CPE is crucial. Identifying and correcting these faults in time ensures the protection of workers and provides rapid continuity of work. These inspections are traditionally carried out using paper checklists or applications (apps), whereby inspectors walk around the worksite and visually check whether the protective measures are in place. While it is possible to record items and verify their presence/absence, the effectiveness of the inspection is limited by the experience of the professional in charge and their ability to recognise CPE in the workplace.

Emerging technologies, particularly digitalisation, represent an opportunity to invest resources and effort in improving health and safety in construction [6]. The industry is searching for radical new solutions, based on other ongoing paradigm shifts, to enhance performance in Occupational Risk Prevention (ORP). On the one hand, building information modelling (BIM) seeks to integrate people, processes, and technologies, incorporating safety aspects into project models [7]. On the other hand, lean construction (LC) applies lean management's waste-free production principles to the continuous improvement of safety management processes. In addition, Design through Prevention (DtP) incorporates safety considerations from the early stages of the project and promotes monitoring and control throughout its life cycle [8]. Digital technologies and computation form the basis for increasing the efficiency of information exchange processes and their development in a complex, changing environment such as the modern construction site [9,10].

Augmented reality (AR) technologies for on-site digital model visualisation were analysed in [11], which identified a trend of applying sophisticated immersive AR solutions to create interfaces that manage complex workplace situations, build up risk-preventive knowledge, and support training. Research projects such as this show that AR enhances a person's perception by superimposing virtual models onto the real environment [12,13].

Augmented reality linked to BIM models is being tested in the construction sector for different purposes. Through AR, it is possible to compare BIM models with real elements on the worksite, superimposing virtual elements onto real ones, verifying dimensions, and validating technical characteristics. In addition, AR facilitates the tracking of machinery and cranes on the job site, confirming the distribution of materials and interaction zones, and allows virtual signage to be placed on the worksite to improve movement around the site and display information on elements of interest. This extra information, accessible in real time, is an asset that can aid tasks such as the inspection of the safety equipment present on-site.

Although AR has enormous potential for construction sites, its adoption is not yet widespread [14]. The industry has not massively adopted it because it is still considered an expensive and immature technology. However, given the rapid development of these technologies and their falling costs, the more relevant challenges are developing capabilities among the sector's professionals, establishing protocols, and standardising the development of these applications. Along with this, those who promote AR and similar technologies must be explicit about, and demonstrate, the benefits of their use for specific construction tasks [15,16]. The applications of AR in construction need to be diversified and integrated with BIM. Although several authors mention the potential of this technology for health and safety in construction, there is little evidence of safety applications (no specific applications have been identified for CPE management). Moreover, no protocols standardise the development and use of augmented reality in safety management (project review, on-site inspection, monitoring and control of protective elements, management of preventive measures). Workflows are needed to guide professionals in developing applications so as to better integrate AR into their processes, and standardisation is necessary to move towards automating the development of these applications [17–19].

Given the relevance of the inspection processes of safety elements at the construction site, this work analyses the traditional inspection processes of CPE at the worksite and studies the potential of AR to evaluate the presence of these elements, providing recommendations for the development of AR applications for these purposes, together with indicators to assess their performance. Thus, this work does the following:


The relevance of this work lies in providing background for the use of AR in safety inspection processes on construction sites and in providing methodological recommendations for the development and evaluation of these applications.

#### **2. Research Methodology**

For the development of this research, the design science research methodology (DSRM) is used as a base to describe the process of justification, development, and testing [20,21]. The phases of this methodology are presented in Figure 1.

A background review is carried out in the first stage, based on SCOPUS and the Web of Science libraries. Background information is collected in order to study the methods of inspection of safety elements at the construction site, to understand the relevance of their proper placement, and to determine how the use of augmented reality (AR) technology can help the safety advisor to identify and manage this equipment in such a way as to increase the reliability of the presence and effectiveness of the collective protective equipment (CPE). Based on this background, the research team outlines a traditional inspection workflow for building construction safety elements.

**Figure 1.** Research methodology to study the feasibility of AR devices for safety inspections.

Next, the following actions are carried out to analyse the proposed flow:


In the second stage, the objective of developing an AR-based solution is defined: building an AR tool that improves the inspection of the safety equipment on a construction site, increasing the reliability of the presence and effectiveness of the CPE. To this end, the application functionalities and key performance indicators are defined.

In the third stage, the AR application is designed, implemented, and deployed according to the following steps: (a) development of the 3D model of the work site and the CPE data in a BIM platform; (b) definition of user interfaces and their information management processes for the AR application; and (c) AR application development. In the fourth stage, to test the AR application's functionalities, a proof of concept in a real environment is performed. Finally, in stage five, an analysis of the application's effectiveness is carried out based on defined quantitative and qualitative KPIs, according to the solution's requirements.

**Table 1.** Expert panel characteristics.


#### **3. Background**

#### *3.1. Occupational Health and Safety in the Construction Sector*

The most significant peculiarity of the construction industry, compared with many others (particularly those operating under the banner of Industry 4.0, such as the automotive or aeronautical industries), is its workplace during the execution phase. Workers perform tasks that change a site (the ground, the structure, the environment) in order to build a new space (ultimately, a place to live, move, or work) [22]. The construction site is therefore in a constant state of change due to the continuous tasks performed by the numerous stakeholders involved [6]. The final product is always different because it is linked to its setting, so its development is subject to several uncertainties that can cause sequential delays in the project. Since a construction site is a complex place to work, with many factors involved, ensuring safety is also very difficult, as it is not easy to standardise; nevertheless, the trend in the majority of companies is to develop a series of measures and improvements to protect their workers [6,23].

Continuous improvement processes to enhance safety are needed while accidents and fatalities continue to occur [24]. It is well known that the construction industry is one of the most hazardous industries in which to work; however, the vast majority of the risks on-site can be avoided by implementing the proper health and safety prevention techniques [25]. Many studies have produced evidence that there is much room for improvement in this regard. According to the current data on the accidents in the industry [26], the main causes of accidents are as follows:


Health and safety regulation in construction is broadly similar across most countries of the European Union. Safety advisors are responsible for coordinating the tasks performed on-site, and they make sure all work complies with the proper safety measures.

#### *3.2. New Trends in the Construction Industry and Hope for Better Safety*

In recent years, the construction industry has undergone several changes, primarily as a consequence of the inevitable rise of technology, which has led to the modernisation of the industry. Five major changes are foreseen in the concept of Construction 4.0, defined as finding coherent complementarity among the main emerging technological approaches in the industry to improve real-time decision making [11]. They are the following:


#### *3.3. Augmented Reality for Construction Safety*

The inclusion of the BIM methodology in the development of construction projects has meant an important advance for improving process efficiency [34]. The visualisation of 3D models provides important support for resolving conflicts and doubts at the time of construction [35]. The automatic connection of 3D models with budgets and schedules facilitates on-site management, among other multiple benefits. The link between BIM and visualisation technologies such as augmented reality seems natural [36,37].

Several authors have discussed the benefits and potential of augmented reality for the construction site. Its main value is visualising graphics and data on the real construction site [38,39]. During construction, visualising the components to be built and the construction stages for a specific site allows doubts about the construction process to be clarified with the work team, improving build quality and avoiding rework. In addition, the data associated with the 3D models (linked to schedule and costs in 4D and 5D models) clarify technical specifications, details of specific materials and equipment, and other information relevant to the construction process [40,41].

Among its main applications, augmented reality is used in the installation of mechanical, electrical, and plumbing (MEP) systems. Many systems, mainly in building projects (hospital projects, for example), generate problems during installation. Augmented reality helps coordinate and situate the systems in the specific locations determined in the design phase: the BIM MEP models are virtually superimposed through AR devices so that each team places the systems in the corresponding location. Rework, network malfunctions, and unexpected adjustments are thereby reduced [38,42–44]. In addition, augmented reality visualisation of building systems and equipment allows access to operational and maintenance information, facilitating facility management processes [45,46].

Several authors indicate the potential of augmented reality for health and safety management in construction. AR improves workers' ability to identify and recognise hazards [47], thanks to the possibility of monitoring the project's status and working conditions through AR devices [48]. It also allows workers to interact in the real world with a virtual layer of information, which has the potential to alert them to hazards or potential accidents [48]. Access to virtual information about the devices the worker interacts with on the job site provides advantages when making interaction decisions (e.g., in installations with electrical hazards) [45,49,50]. Thus, there are applications to manage the safety associated with handling facilities, fire protection, crane movement, and work machinery, among others [18,31,51,52]. For the management of collective protective equipment (CPE) on the construction site, no specific work on augmented reality applications indicating procedures and methods of use has been found. There is, however, a parallel with the use of AR in the management of MEP systems, although the management of CPE is more complex: the temporary nature of the on-site placement of the CPE, the continuous verification of their absence/presence, their relevance for the continuity of the construction processes, and the safety of the workers must all be considered.

Although several authors discuss the potential of augmented reality in safety, and examples of safety use have been identified, there is a lack of standardisation in the use of augmented reality tools: taxonomies are lacking for their standardised use in safety review, in support of prevention plans, and in the identification, monitoring, and control of preventive measures. Standardised workflows that move towards automating the generation of these applications, and that can easily integrate safety monitoring and control data based on BIM models, would enable the widespread adoption of these tools [14–17].

#### **4. Safety Inspections Analysis**

#### *4.1. Safety Inspection Workflow*

Considering that each country has specific regulations regarding construction safety measures and their associated protocols, this research aims to focus on common on-site inspection processes, prioritising generic roles and activities that can be personalised according to the specific measures of each region and company [53,54].

The safety advisor is responsible for the integral safety management on the construction site [55]. Their tasks as a safety advisor can be summarised as follows: to advise and assist the Quality, Health, Safety, and Environment (QHSE) Manager in formulating and implementing the QHSE policy for the different work teams; to develop a safety culture in collaboration with the QHSE manager and to enhance continuous safety awareness in the management and the on-site employees; and to centralise, structure, and organise security priorities [56,57]. More specifically, by following the relevant legislation, the safety advisor must (a) analyse the risks, promote improvements, outline procedures and instructions, inspect the works on-site, formulate advice on safety aspects and installations, and coordinate with subcontractors; (b) advise and support site supervisors in the development of QHSE plans and projects; (c) carry out periodic site visits and inspections; (d) organise the welcome information and training for new employees; and (e) assist the QHSE manager in the running of the consultative bodies within the organisation [58].

The workflow of the safety advisor's inspections of the safety equipment can be described according to three main stages [57,59,60]:


On a construction project, the safety advisor (SA) meets with the project management team to discuss and plan the work for the coming days or weeks. Weekly on-site meetings with the project management team are necessary to discuss planning, progress, delays, and changes. During the on-site inspections, the SA must ensure that all the safety measures are being deployed as planned by checking whether the workers are performing their tasks in compliance with the safety measures. When the safety measures are not being followed, the job has to be stopped. In such a case, the safety advisor must apply corrective actions by modifying the protective equipment or ensuring that the workers perform their tasks appropriately. For CPE, a purely visual inspection is usually sufficient. During the meetings before the inspection, the project's phase and the required safety elements must be foreseen. Using their knowledge and experience, the safety advisor must be aware of whether any CPE needs to be replaced or substituted. In the case of an incident or observation, such as a missing CPE, a worker's misbehaviour, risks that need to be addressed, hazardous products, etc., they must issue a report [61,62]. Figure 2 shows the safety inspection workflow.

**Figure 2.** Safety inspections workflow.
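The corrective-action logic of this workflow can be condensed into a small decision sketch. The action strings below are hypothetical placeholders for the SA's actual procedures, chosen only to mirror the steps described above.

```python
def inspection_action(cpe_present: bool, workers_compliant: bool) -> list:
    """Decide the safety advisor's next steps for one checkpoint.

    Hypothetical decision logic distilled from the workflow description:
    non-compliance stops the job; a missing CPE triggers replacement
    and an incident report.
    """
    actions = []
    if not workers_compliant:
        actions.append("stop job")
        actions.append("apply corrective action")
    if not cpe_present:
        actions.append("replace/substitute CPE")
        actions.append("issue incident report")
    if not actions:
        actions.append("continue inspection")
    return actions

assert inspection_action(True, True) == ["continue inspection"]
assert "issue incident report" in inspection_action(False, True)
```

The point of the sketch is that each checkpoint yields either "continue" or an explicit corrective trail, which is exactly the record the app later stores per observation.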

All incidents can be classified into three different categories, each of which requires different procedures [63–65]:


#### *4.2. Safety Inspection Workflow Analysis*

The details of a construction site were studied to validate the application of the general procedure to real situations. This site corresponds to the construction of a residential building in Belgium. Three of the members of the expert panel, two of them safety advisors, visited the site to review the on-site real-time use of the app to support the proposed inspection flow. Figure 3 shows an example of the incident report made at the construction site, while Figure 4 shows some of the observations made during the inspection of the project.


**Figure 3.** Example of the report for an observation used in an inspection.

**Figure 4.** Observations collected during the inspection at the project: (**a**) a barrier is missing on one of the floors of the building; (**b**) an unsafe opening on the staircase shaft; (**c**) missing barriers on the central core of the building; and (**d**) an opening not covered with barriers. (Own source).


In order to quantify (QT) and qualify (QL) the performance of the traditional inspection, several key performance indicators (KPIs) are set. Table 2 shows the assessment of the traditional inspection method according to what was observed in the company visited.


**Table 2.** Key performance indicators for the current safety equipment inspection.

Once the workflow is analysed during a safety inspection, the bottlenecks (BN) and pain points (PP) should be analysed too. Table 3 shows the main pain points and bottlenecks collected from the safety equipment inspections.



#### **5. Conceptual Design of the Proposed Inspection**

#### *5.1. Overview of Safety Inspections*

Depending on the kind of information used, different levels of data assistance can be defined for the SA. The first level is when the SA does not rely on any information, i.e., a purely visual inspection based on the inspector's knowledge, which has been the most traditional and common level over centuries of construction. The second level is when a previously created document, either on paper or in digital form, is available to the SA. The third level is when the inspection can draw on digitally created data linked to the site, including item descriptions, locations, maintenance notes, etc. Lastly, the level proposed in this work consists of an enriched layer combining previously loaded 3D models with data collected on-site using AR technology. This new layer of information can therefore be described as an augmented vision of the setting. This analysis introduces the concept of different levels of augmented inspection. Figure 5 schematically shows the identified levels, and Table 4 describes each layer.

**Figure 5.** Levels of the augmented inspection.


**Table 4.** Description of the four layers of the augmented inspection.
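The four levels of data assistance can be summarised as an enumeration. This is a minimal sketch; the level names are ours, paraphrasing the descriptions above.

```python
from enum import IntEnum

class InspectionLevel(IntEnum):
    """Four levels of data assistance for the safety advisor (SA)."""
    VISUAL = 1        # purely visual, inspector's knowledge only
    DOCUMENTS = 2     # paper or digital documents prepared beforehand
    LINKED_DATA = 3   # digital data linked to the site (items, locations, notes)
    AUGMENTED = 4     # AR layer: 3D models plus on-site data superimposed

def is_augmented(level: InspectionLevel) -> bool:
    """Only the fourth level qualifies as an 'augmented vision' inspection."""
    return level is InspectionLevel.AUGMENTED

assert is_augmented(InspectionLevel.AUGMENTED)
assert not is_augmented(InspectionLevel.DOCUMENTS)
```

Treating the levels as an ordered type makes it easy to state requirements such as "the proposed app operates at level 4, but degrades gracefully to level 3 if tracking is lost".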

#### *5.2. Functionality Requirements of the App*

Since this research aims to study the implementation and development of the proposal, it is necessary to develop a prototype that demonstrates the capabilities of AR in the inspection workflow. The app's functionality is designed to mitigate the main pain points identified. Some of these points and bottlenecks are related to safety management rather than the visualisation of the elements. The app interface primarily addresses the visualisation aspects and adapts the app's data management to a workflow to facilitate communication and data processing. Table 5 lists the difficulties in the inspection and the functions that have been developed to overcome these issues.

An important aspect of the app's functionality is to capture the evolution of the CPE during the execution phase: the CPE models must be updated in line with the constant changes in the equipment. For example, Alcalde [66] proposed implementing the equipment using a Revit BIM model that can easily be studied in real time with Autodesk Navisworks®. Several web pages, such as BIM Object and BIM and Co., include complete models that can be used directly, without the need to model each construction site again. For regulatory purposes, the CPE models should be designed according to certain standards (without prejudice to the SA using generic models for inspection only, on the understanding that the actual elements must comply with the defined specifications, information which could also be placed in the application).

**Table 5.** App functionalities that overcome the main pain points of the current inspection.


Therefore, considering the timeline of the CPE throughout the project phases is key to ensuring much more realistic supervision. Tarancon [67] analysed the time dimension in BIM to provide a timeline for each of these models, stating that the planning of activities undergoes continuous change and must be tracked constantly. Unexpected events always occur in construction, and managing them is key to avoiding risks. In summary, it is necessary to integrate CPE as BIM elements of the construction process to improve safety equipment planning. This means that the CPE 3D models can be transferred to the AR experience without having to recreate them; the applicability of the concept is thus eased on the assumption that the CPE are included in the BIM libraries. The timeline of these models is an important input for safety equipment planning and is used in the AR experience to indicate the evolution of the models across the planning phases.
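The 4D idea of filtering CPE by project phase can be sketched as follows. The phase fields are simplified assumptions; real BIM objects carry richer scheduling data.

```python
from dataclasses import dataclass

@dataclass
class CPEModel:
    name: str
    phase_start: int   # first construction phase where the element is required
    phase_end: int     # last phase where it must remain in place

def required_cpe(models, phase):
    """Return the CPE that must be present on-site during the given phase
    (a minimal 4D filter over the CPE library)."""
    return [m.name for m in models if m.phase_start <= phase <= m.phase_end]

library = [
    CPEModel("slab-edge barrier L1", 2, 4),
    CPEModel("stair railing", 1, 5),
]
assert required_cpe(library, 1) == ["stair railing"]
assert required_cpe(library, 3) == ["slab-edge barrier L1", "stair railing"]
```

In the AR experience, this filter is what decides which virtual CPE are drawn for the phase selected in the second UI screen.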

#### *5.3. Key Performance Indicators (KPIs)*

Jetter et al. [26] indicate the most common KPIs used to evaluate AR applications. Accordingly, Table 6 shows the KPIs considered for the evaluation of the proposed application.


**Table 6.** Definitions of KPIs used to test the performance and characteristics of the proposed application.

#### *5.4. User Interfaces (UI) Design*

Once the aspects to be considered are determined, the functionalities can be created for the proposed application (see Table 7). The details of their development are explained in Section 6. It is essential to categorise all these functionalities and to determine how the user interfaces (UI) are organised and presented, in order to understand clearly how the app is organised and how the user experiences the various functionalities on each UI screen.


**Table 7.** Table of the UI screen's contents and functionalities.

Figure 6 shows the UI screens of the proposed application. The first UI (Figure 6a) takes the inputs mentioned in Table 6. A dropdown menu is provided for all the options, avoiding the need to type information manually, so as to speed up the procedure and improve the user experience (UX). The date field is filled in automatically with the inspection date. The second UI screen (Figure 6b) takes the data on the project phase and the zone to be inspected; only the elements matching those data are shown in the app.

In the third UI screen (Figure 6c), the 3D BIM model is placed. There are several techniques for placing the model: Damià [68] placed it using GPS coordinates, whereas Ramos-Hurtado [69] placed it by scanning the horizontal floor and using two reference points that define the location and orientation. In ARCore, the default option is to scan the horizontal plane, tap one reference point, and orientate the model manually. As mentioned, the precision of the model placement depends on the technique used and the device specifications. Also within the third UI screen (Figure 6d), the checklist of the CPE can start: a panel appears with a list of the items to be inspected. If an incident regarding safety aspects occurs, the UI provides an option to report it. In the report panel (Figure 6e), the fields used in the safety reports are filled in: observation type, attached image, comments, status, and action. Once the inspection of the elements is completed, a button can be pressed to finish the inspection, and all the data are transferred to the cloud database.
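The report fields and the "finish inspection" transfer described above can be sketched as a small data model. The cloud database is mocked as a plain list; names and fields follow the UI description but are otherwise our own.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObservationReport:
    """Fields of the report panel in the third UI screen."""
    observation_type: str
    image: Optional[str]  # path to an attached photo, if any
    comments: str
    status: str           # e.g. "open"
    action: str           # corrective action requested

@dataclass
class InspectionSession:
    reports: list = field(default_factory=list)

    def add_report(self, report):
        self.reports.append(report)

    def finish(self, cloud):
        """'Finish inspection' button: transfer all data to the cloud store."""
        cloud.extend(self.reports)
        self.reports.clear()

cloud_db = []
session = InspectionSession()
session.add_report(ObservationReport(
    "missing CPE", None, "barrier absent on level 1", "open", "install barrier"))
session.finish(cloud_db)
assert len(cloud_db) == 1 and session.reports == []
```

Keeping reports local until the inspection is finished mirrors the UI flow, where the upload happens only when the closing button is pressed.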

Once the application's design is completed, the specific workflow of the app to be used by the SA is defined (see Figure 7). In the following section, the application's workflow and how it is integrated into the inspection is described.




**Figure 6.** UI screen of the proposed application: (**a**) first UI screen; (**b**) second UI screen; (**c**) third UI screen—3D BIM model is placed; (**d**) third UI screen—3D checklist of the CPE starts; and (**e**) third UI screen—report panel.

**Figure 7.** Flowchart of the safety inspection with the proposed application.

#### **6. Development of the Application**

Using an illustrative example, this section outlines a step-by-step development procedure that satisfies the detailed requirements and conceptual design described in the previous sections. The procedure can be used to generate many other contents with different scenarios and elements: the established algorithms and procedures enable the development of a similar application for other BIM models, accounting for changes in the safety elements and the specific characteristics of each project. Figure 8 shows the workflow to create the contents for the augmented reality application. For each deployment phase, the required software and devices are indicated and described (see Table 8).

**Figure 8.** Workflow to create the contents of the proposed app.



*6.1. Creation, Import, and Configuration of Models*

The specific implementation of the application is carried out to inspect the safety elements in renovation work on a civil engineering laboratory building at the Université Libre de Bruxelles (ULB), located in building C at the Solbosch Campus, Brussels. The correct planning and presence of the safety elements should consider the operations performed at the laboratory and the protection of the testing machinery located there. Figure 9a shows the location of the laboratory at the university campus.


**Figure 9.** Location of the ULB lab (red) in relation to building C, and location of the construction simulation (blue). (**a**) Building to be modelled in the AR application (retrieved from Google Maps); and (**b**) AutoCAD 2D drawing of the building.

The first phase is devoted to creating the BIM model of the construction site, since a 3D model must be mixed with the real environment in the AR app. Given that no BIM model of the ULB lab exists, it is created from the technical documentation of the project and the AutoCAD 2D drawings (Figure 9b). The BIM software used to model the lab and the construction simulation is Autodesk Revit. The 3D model includes the building and its components, drawing on building information from the model's database and on-site photos. The structural elements (columns, walls, and floor) are modelled, while non-essential elements are omitted to avoid a model that is heavy in terms of memory size.

The 3D model of the construction built in Revit is detailed in Figure 10. Once the model is completed, it can almost immediately be exported to the Unity 3D development framework. This 3D model is transferred by using the FBX 3D data interchange format, which is the most appropriate for Unity.

One of the goals of the AR app is to provide an easy, direct visual comparison between the virtual CPE models and the real CPE. These CPE models can be created in Revit (or any other BIM environment), just as the building was, and then exported to the Unity development environment; alternatively, they can be taken from libraries directly in Unity. In this project, the CPE are introduced from the Unity Asset Store, where many 3D safety equipment models are available (Figure 11a). The CPE models are uploaded to the Unity project and placed in the corresponding zones of the building. For example, the barriers of the construction simulation are placed on the slab edges of level 1 and level 0, covering the construction area, and some railings have been added to the stairs. The enrichment of the 3D model with this safety equipment is shown in Figure 11b.

**Figure 10.** 3D model in Revit of the ULB lab and the exporting process to the Unity 3D package.


**Figure 11.** (**a**) CPE package included in the Asset Store; and (**b**) integration of the CPE model into the lab building model.

To set the phase where this equipment belongs, a script called Object\_Information (Algorithm 1) is created with two parameters to define the phase and checking state for each CPE, which helps to validate the CPE status at execution time.


This code is easily added to each element in the inspector window in Unity3D, where its properties can be accessed. Once all the 3D models are configured in Unity, the AR environment should be set up to manage the placement and visualisation of the elements and to register the UI information to be managed by the SA.
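As a language-agnostic sketch of Algorithm 1 (the actual Object_Information script is a Unity C# component attached to each CPE game object), the script can be thought of as carrying two parameters per element: its construction phase and its checking state.

```python
class ObjectInformation:
    """Sketch of the Object_Information component: two parameters per CPE,
    the phase the element belongs to and its checking state, used to
    validate the CPE status at execution time."""

    def __init__(self, phase: int, checked: bool = False):
        self.phase = phase      # construction phase the element belongs to
        self.checked = checked  # whether the SA has verified it

    def visible_in(self, current_phase: int) -> bool:
        """Show the virtual element only in its own phase."""
        return self.phase == current_phase

barrier = ObjectInformation(phase=2)
assert barrier.visible_in(2) and not barrier.visible_in(3)
barrier.checked = True   # SA ticks the element off during inspection
assert barrier.checked
```

In Unity, these two fields would be serialised so they appear in the inspector window and can be set per element, as described above.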

#### *6.2. Development of the Augmented Reality App*

This section describes the components needed to create the Unity app and the process to develop the code and algorithms to cover all the functionalities already defined by means of the different levels of the user interface.

The core elements needed to develop an AR app for Android can be included in a Unity project by downloading and installing the ARCore SDK [70]. Once installed, the Unity project window contains several folders with the AR components and examples of complete AR applications, which makes it easier to become acquainted with the working environment. These examples include several assets providing practical functions for developing an AR app, which can be reused when building a specific app.

For the purposes of this project, one of these examples is used to establish the essential functions, such as detecting planes in the real environment and placing 3D objects on the detected planes. These functions are provided by several ready-to-use scripts. The example chosen for the application is the Object Manipulation scene. Each of its components is briefly described in Table 9.


**Table 9.** Components of the Object Manipulation.

To place a 3D object in a plane detected by the device, it is necessary to select a reference point or point of origin to anchor the BIM 3D model. This point of origin would need to be easily visible in the real environment to detect the planes accurately around that point. This procedure can be completed in Revit by modifying the Project Base Point, which maintains the origin in the FBX file. Another option is to change it in Unity3D. This is done by creating an empty game object in the hierarchy. Then, in the inspector window, the coordinates are adjusted to the origin of the coordinates system. Once the game object is set, the BIM 3D model is moved to place the new origin desired at the origin of coordinates. Then, the BIM 3D model file is dragged into the empty game object to make it a child of the element. The result is that the BIM 3D model is set on the origin of coordinates at the desired point.
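Numerically, this re-origin step amounts to subtracting the chosen base point from every model coordinate, which is exactly what parenting the model to an empty object at the origin achieves. A minimal sketch:

```python
def rebase(vertices, base_point):
    """Shift model vertices so that base_point becomes the new origin,
    mirroring the empty-parent-object trick described above."""
    bx, by, bz = base_point
    return [(x - bx, y - by, z - bz) for (x, y, z) in vertices]

# Two vertices of a model whose desired anchor sits at (10, 0, 5)
model = [(10.0, 0.0, 5.0), (12.0, 0.0, 5.0)]
assert rebase(model, (10.0, 0.0, 5.0)) == [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
```

After rebasing, anchoring the model at the tapped point places the chosen reference corner of the building exactly where the user tapped.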

It is important to note that changing the base point of the project in Revit (or in any BIM environment) changes the georeferencing of the project. This could cause problems in project management if such information is associated with elements of the model. On the other hand, modifying the base point of the model in the augmented reality development engine (in this case, Unity3D) does not generate these problems (as long as the BIM and AR environments are not interconnected). In this project, the management of the models is based on text information (for example, the level where the CPE are located); therefore, the change in the base point does not affect the model.

The next step is to place the 3D model in the AR environment. ARCore's default option is implemented by a short piece of code that places the BIM 3D model at a point previously selected in the real environment.

The procedure works as follows: first, the coordinates of the point selected on the virtual plane are read from the real environment; second, it is checked whether the point lies in front of the plane or behind it. If the point is behind the plane, a debug message is shown, and the model is reversed. If the point is in front of the plane, as would be expected, this code (Algorithm 2) changes the model's origin coordinates to the coordinates previously read from the tap input.
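The placement logic just described can be sketched as follows (a hedged, language-agnostic illustration in Python, not the authors' Unity C#; the plane is represented here by a point and a normal, and the front/back test uses the sign of a dot product):

```python
# Sketch of the tap-to-place logic: read the tapped point, check which side
# of the detected plane it lies on, then move the model origin to it.

def signed_distance(plane_point, plane_normal, point):
    """Positive if `point` lies in front of the plane (along the normal)."""
    return sum(n * (p - q) for n, p, q in zip(plane_normal, point, plane_point))

def place_model(plane_point, plane_normal, tap_point):
    """Return the new model origin; warn and reverse if the tap is behind the plane."""
    if signed_distance(plane_point, plane_normal, tap_point) < 0:
        # Mirrors the debug message mentioned in the text.
        print("debug: hit at the back of the plane; model reversed")
    return tap_point  # the model's origin is moved to the tapped coordinates

# Floor plane through the origin with an upward normal; tap at (1.5, 0.0, 2.0):
origin = place_model((0, 0, 0), (0, 1, 0), (1.5, 0.0, 2.0))
```

In ARCore itself this side test is handled by the hit-test result of the detected plane; the sketch only makes the underlying geometry explicit.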


The different application UIs are created once these previous steps are completed. The following sections describe the configuration and organisation of the UIs inside Unity, the creation of each UI, and the code used to configure the different functionalities. When creating a UI in Unity, there are two options: creating a separate Unity scene for each UI, or creating all the UIs inside a single scene. For the demonstration purposes of this work, the second option is selected.

#### 6.2.1. First User Interface Level: Input of Basic Information

This UI design consists of the safety advisor's name and the construction project name as inputs. In Figure 12, a view of the dropdown menu and the options that can be added in the inspector window can be observed: first are the two dropdown menu objects with both fields, and second is the configuration setup. As much as possible, the app is provided with default values or information before the inspection in order to save time at the test site.

**Figure 12.** (**a**,**b**) Overview of the dropdown menus in the first UI; and (**c**) configuration of the options in the inspector window for the inspector's name.

Once the dropdown menus are set, a button that introduces the second UI is created. The functions that disable the first UI and enable the second are managed via a specific script. This can also be done using the OnClick function located in the inspector window, given that the different UIs are organised in packages: this second option allows one to select which objects are set as active or inactive when the button is clicked. Once all the functions of the first UI are created, the second UI can be developed with similar UI elements.
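The UI switching behaviour can be modelled as a minimal state machine (an illustrative Python sketch with assumed UI names, not the authors' script; in Unity the equivalent is an OnClick handler calling `SetActive` on the UI packages):

```python
# Each "next" button deactivates the current UI package and activates the
# following one, as the OnClick/SetActive pair does in Unity.

class UIManager:
    def __init__(self, ui_names):
        # Only the first UI starts active.
        self.order = list(ui_names)
        self.active = {name: (i == 0) for i, name in enumerate(self.order)}

    def next_ui(self, current):
        """Deactivate `current` and activate the UI that follows it."""
        idx = self.order.index(current)
        self.active[current] = False
        self.active[self.order[idx + 1]] = True

# Hypothetical names for the prototype's first three UIs:
app = UIManager(["UI1_BasicInfo", "UI2_InspectionData", "UI3_Placement"])
app.next_ui("UI1_BasicInfo")  # UI1 switched off, UI2 switched on
```

Keeping all UIs in one scene, as chosen for this prototype, makes this toggle-based navigation straightforward, since every UI package is always loaded and only its active flag changes.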

#### 6.2.2. Second UI Level: Input of Inspection Data

The second UI is designed to input the data regarding the inspection: the project phase (a variable linked to all the objects belonging to that phase), the location of the inspected area, and the safety equipment related to that location. These elements are created in the same way as in the first UI; their distribution is shown in Figure 13.

**Figure 13.** Overview of the dropdown menus in the second UI. (**a**) Project Phase menu; (**b**) Inspection Location menu.

For this prototype, there are three project phase options and two options for the inspection location. In terms of the app's development, having different scenes depending on the project phase or location is important to avoid a heavy 3D model, which can be difficult to manage on many current devices. For this particular demo, due to the small size of the construction simulation, the inspection can be completed in a single session. In this UI, the next button has the same code and configuration as the equivalent one in the previous UI, the only difference being that this time the second UI is deactivated and the third UI is activated. This button also activates the game objects that run the AR functions.

#### 6.2.3. Third UI Level: Placement of BIM Model

The third UI is designed to handle the 3D model (see Figure 14). The following steps are provided to the user to correctly locate the virtual model in its real place: (1) scanning the floor plane; (2) tapping the model point of origin in the real environment; and (3) rotating the model to fit with the right position.

**Figure 14.** Third UI of the application with its elements.

The AR functionalities are activated with the next button of the second UI, which makes it possible to place the model at the beginning of its appearance. Furthermore, to help the user, a hand animation GIF is provided to let the operator know that it can start scanning the environment. Afterwards, the app starts scanning the environment internally and creates a mesh of the detected planes. Once the desired plane is created, the user can tap the point of origin to place the model. Finally, once the object is placed, the user can rotate the model to its desired angle.

It is important to note that the accuracy of the meshing performed by the application depends on the characteristics of the device being used. In the case of tablets, the accuracy is low; however, for this application, high accuracy is not required. The user must try to correctly select the base point defined for the model, but millimetre-level accuracy is not needed, since the objective is to position the CPE. The correct placement (the joining of elements at specific points, for example) will depend on the SA's review. Thus, for these functionalities, the model's georeferencing and the scanning performed by the tablet do not represent a problem, since the model is positioned relative to the point defined by the user, not to a georeferenced point. If georeferencing were necessary (for other types of functions), aspects such as the accuracy of the equipment used and the quality of the GPS signal inside the building would have to be considered.

This UI includes a button to start the inspection, which activates the following fourth UI: the checklist of the CPE. This same button deactivates the current UI and AR features.

#### 6.2.4. Fourth UI Level: Initiate Checklist

This UI belongs to the inspection of the elements with a checklist. The Unity objects for this UI are as follows:


With these (Unity3D + AR) technologies, the most efficient way to introduce the CPE's Unity objects into the scroll view is to prepare and run a script that includes all the CPE and reads all their parameters. These inputs can be automated at a higher level by creating an importing script that reads the data directly from the BIM model. In this case, since the construction simulation is small, these inputs were introduced manually in Unity. The contents were included as toggle objects in the inspector window of the scroll view object (see Figure 15).
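The scripted automation suggested above could look like the following sketch, which generates one checklist toggle per CPE record matching the inspection's phase and location (field names and sample records are illustrative assumptions, not the authors' data model):

```python
# Generating the scroll view's checklist entries from CPE records instead of
# typing them manually in the Unity inspector.

def build_checklist(cpe_records, phase, location):
    """Return one (label, checked) toggle per CPE matching the inspection scope."""
    return [
        (f"{r['name']} ({r['type']})", False)  # all toggles start unchecked
        for r in cpe_records
        if r["phase"] == phase and r["location"] == location
    ]

# Hypothetical CPE records, as they might be exported from the BIM model:
cpe = [
    {"name": "Guardrail A1", "type": "edge protection", "phase": 1, "location": "Level 1"},
    {"name": "Safety net B2", "type": "fall arrest", "phase": 2, "location": "Level 1"},
]
toggles = build_checklist(cpe, phase=1, location="Level 1")
```

At a higher level of automation, `cpe` would be read directly from the BIM model's exported data rather than declared in code.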

**Figure 15.** Fourth UI of the application with the hierarchy and scene windows.

#### 6.2.5. Fifth UI Level: Reporting

Once the inspection is completed, the checklist and any additional information should be packaged as a report. The fifth UI contains all the elements needed to report potential incidents. The report includes five types of inputs: observation type, incident image, comments, status, and action. The objects created for this UI are as follows:


Since reporting can change greatly depending on the quality protocols of each company, this work has focused more on the available tools and procedures than on a complete, company-specific report with all the fields.
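As an illustration of how the five report inputs could be packaged for a cloud database, the sketch below serialises one incident record to JSON (the field names and values are assumptions for illustration, not the authors' schema):

```python
import json
from datetime import date

def build_report(observation_type, image_path, comments, status, action):
    """Package the five report inputs described above into a JSON payload."""
    return json.dumps({
        "observation_type": observation_type,
        "incident_image": image_path,  # stored as a path or URL reference
        "comments": comments,
        "status": status,
        "action": action,
        "date": date.today().isoformat(),
    })

payload = build_report("missing CPE", "img/incident_01.jpg",
                       "Guardrail absent on level 1", "open", "replace guardrail")
```

A payload of this shape could then be posted to whichever reporting platform the company already uses, which is the integration route the text favours over building a full report UI.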

#### 6.2.6. Launching the App

As it is still in the development phase, this app is not yet available to the public, so the developer mode in Android is required, and the connection to the smartphone or tablet is established via USB (see Figure 16).

**Figure 16.** Steps to enable the developing mode in Android settings (according to Samsung.com). (**a**) Setting smartphone options; (**b**) Software information option; (**c**) Build number option; (**d**) Developer options; (**e**) Developer options configuration.

In the project settings of the Unity environment, only two fields are required to define the app: the company name and the product name, the latter being the name that appears under the device app icon. Once this is completed, to launch the app, the models and code are compiled via the File/Build and Run tab. The resulting files are transferred to the device, and the APK is saved to the desired folder.

#### *6.3. Final Application Developed*

This section describes the real-time validation of the app's functionalities, carried out to obtain the final specifications, key performance indicators, and possible improvements.

In the testing of the first UI (Figure 17), the inputs are reasonably sized, and the required steps are easy to follow, which makes for a pleasing user experience (UX). When testing the dropdown menus, the ease of selecting these inputs rather than typing them is evident.

In the second UI (Figure 18), the user experience (UX) is relatively similar to the first UI, as the input type is similar.

The third UI requires more care because it validates the AR features and is much more dependent on the kind of device used (in this case, the device used is shown in Table 7).

The detection of the planes was relatively rapid and straightforward. Figure 19 shows the scan of the planes along with the three-step guide to mix the real scene with the 3D model.

The handling of the model (see Figure 20) is easy and intuitive. The default origin is located at a corner of one of the columns. The model can be rotated with the aid of a circle illustration, which allows the user to tap and drag it to the desired position, similar to the UI of CAD software. This UI includes a button that initialises the following (fourth) UI to start the site inspection. Two additional controls (back and replace) can be used to undo actions if the model's placement is incorrect.

**Figure 17.** First UI of the app tested in the lab.

**Figure 18.** Second UI of the app tested in the lab.

**Figure 19.** Scan of the real environment and creation of the planes in the third UI of the app tested in the lab.

**Figure 20.** Placement and rotation of the 3D BIM model in the real lab environment in the third UI of the app.

The CPE item list makes inspection easy, as expected (see Figure 21). The UI elements, such as the scroll bars and the values corresponding to each phase of the inspection, are key to finding the optimal balance: all the information necessary to verify each element is present, yet there are not too many elements, which avoids wasting time looking for each box to tick in the app. To access the information regarding a CPE, a tap on the element name triggers a panel with the desired information previously introduced in the BIM software.

**Figure 21.** Fourth UI of the app tested in the lab.

According to this preliminary lab test, it is important to consider the following key points when developing an AR app for this kind of purpose:


### **7. Proof-of-Concept Assessment**

As a result of analysing the inspection methods and applying the methodology and recommendations proposed in this research, an application has been developed. This app is a proof of concept to assess the feasibility of the proposed method, the practicality of the recommendations provided, and the functionality of the incorporated tools. The app must respond to the CPE inspection requirements obtained in the first stage of the research. Three types of assessment were performed: an assessment of the user experience, an assessment of KPIs for the application's performance (qualitative and quantitative), and a comparison with the traditional inspection method. These assessments are then aligned with the relevant aspects of occupational health and safety, according to ISO 45001 [71].

#### *7.1. User Experience (UX) Internal Assessment*

The user experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviours, and accomplishments that occur before, during, and after use of the app. In this case, the application tested in the lab has been internally evaluated, with a focus on the perception of the functionalities and the AR environment. Other possible requirements in terms of UX have been identified and proposed as part of a continuous improvement cycle. These aspects are classified as follows.

#### 7.1.1. Input of the Data Regarding the Inspection

The input information introduced in the first UI is the name of the safety inspector (Figure 22a) and the name of the construction project (Figure 22b). These data are submitted by typing each letter in the input box. To save time during the inspection itself, the typed input was substituted with a dropdown menu of inspectors' names (to be introduced beforehand, when the inspection is planned). In this way, the application allows the project's name and members to be entered and linked as appropriate. Similarly, other relevant information (internal company codes, for example) can be incorporated into the user interface, either manually typed (which may not be efficient for a mobile application, where it is important that information is entered quickly) or via a dropdown menu.

**Figure 22.** Assessment of input of the data regarding the inspection. (**a**) Name of the safety inspector; (**b**) name of the construction project; (**c**) Project phase; and (**d**) location to be inspected.

Equivalently, the inputs of the project phase (Figure 22c) and the location to be inspected (Figure 22d) are indicated by a dropdown menu in the second UI, having been automatically introduced from the BIM model. The project phase is presented with a specific date linked to all the CPE elements that should appear on the inspection corresponding to that date, according to the construction plan. This information should be constantly linked and updated in case of unexpected delays in the construction processes. The location of the inspection project section is classified according to a specific building, specific area of the project, floor level, etc. The location where the inspection was performed is relevant for data management. The connection between the data and 3D models in the BIM methodology is key to project management.

#### 7.1.2. Placement of the 3D Model

The placement of the construction model and its safety equipment is performed by detecting the floor plane (Figure 23a), selecting the point of origin, and then manipulating the rotation of the 3D model (Figure 23b). This is the most basic procedure; more complex techniques can be developed with higher precision. A more straightforward option is to detect the floor plane and then select two corresponding points on this plane and on the model, from which both the location and the rotation can be set.
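The two-point alternative just mentioned can be sketched as follows: from one pair of points in the real environment and the matching pair on the model (all on the floor plane), the translation and the rotation about the vertical axis can both be recovered. This is an illustrative Python sketch, working in 2D plan coordinates, not the authors' implementation:

```python
import math

def two_point_placement(real_a, real_b, model_a, model_b):
    """Return (translation, yaw in degrees) aligning model_a->model_b with real_a->real_b.

    The rotation is taken about model_a; the translation then moves
    model_a onto real_a. Points are (x, y) plan coordinates on the floor plane.
    """
    yaw_real = math.atan2(real_b[1] - real_a[1], real_b[0] - real_a[0])
    yaw_model = math.atan2(model_b[1] - model_a[1], model_b[0] - model_a[0])
    yaw = math.degrees(yaw_real - yaw_model)
    translation = (real_a[0] - model_a[0], real_a[1] - model_a[1])
    return translation, yaw

# Model axis along +x, real axis along +y: a 90 degree rotation is recovered.
t, yaw = two_point_placement((0, 0), (0, 2), (0, 0), (2, 0))
```

Selecting two points instead of one removes the manual rotation step from the third UI, at the cost of requiring the user to identify two recognisable reference points in the scene.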


**Figure 23.** Assessment of input of the data regarding the inspection. (**a**) Plane detection; and (**b**) 3D model positioning.

Although an important aspect is the precision with which the virtual model is superimposed on the real environment in the application, high accuracy is not required for CPE (unlike AR for surgery, for example, where precision of less than a millimetre is required). What is relevant is the correct detection of the environments to superimpose the model on the right location coordinates (x-y-z) without inclinations, and that the virtual models of the structure coincide with those of the environment (or where its construction is planned, as appropriate) so that the associated CPE are correctly placed.

#### 7.1.3. Inspection of the Safety Equipment

The list of elements included in the fourth UI, to be inspected according to the phase and location previously provided, is shown as expected (Figure 24). All the contents can be displayed either automatically or manually by clicking the element, with the desired information regarding the state of maintenance, the dates of its presence, and map locations introduced automatically from the BIM model.

**Figure 24.** List of elements according to the phase and location.

The application developed demonstrates the concept of itemising the safety equipment in order to visualise it in AR. It is important to highlight the potential of combining the information visualised as text data with the connection to the 3D models. Given the link with the BIM models, this connection is direct; therefore, other elements can be incorporated as data or as 3D models.

#### 7.1.4. Reporting Incidents

The fifth UI is used to report any incidents witnessed during the inspection. The report inputs are designed to be directly connected to the cloud database so that they can be sent in real time to the project management team. The reporting is accessed mid-inspection and, once it is saved, the SA can resume their work. This UI has not been fully developed, however, given that multiple platforms with such functionality are already available. The potential connection with platforms of this type can increase the value of augmented reality applications, since such platforms are generally based on photographs and data management, not on a direct relationship with BIM models in AR environments.

#### 7.1.5. Augmented Inspection General Assessment

The concept of augmented inspection refers to the digital models of the CPE to be inspected. With the proposed solution, the SA performs a direct visual comparison between the real CPE and the virtual model, which can be made more precise in combination with the other layers seen in the AR app and with other tools, such as digital rulers or computer vision algorithms that can detect differences automatically. This visual comparison can greatly improve the SA's work and help to avoid mistakes in CPE identification.

#### *7.2. Application Performance—Key Performance Indicators*

The key performance indicators (KPIs) selected to monitor the application's performance, both quantitative and qualitative, are described in Table 10. Different aspects have been evaluated, with the evaluation limited to the proof of concept. Nevertheless, these provide a notion of the performance of such applications (Description, Table 10) and the values obtained in this proof of concept (Value, Table 10).

Table 11 includes other KPIs and compares the proposed inspection with the traditional one, with a more forward-looking view. The comparison between the current inspection and the AR inspection considers three indicators: time consumed, robustness, and incident responsiveness. This comparison is based on the workflows of the current methods identified in the first stage of this research.

As can be observed, the KPIs of robustness and incident responsiveness improve when the proposed AR inspection is compared to the traditional inspection. The robustness is enhanced by the extra aid that AR provides in identifying the CPE. Moreover, the improved incident responsiveness is a result of the interconnection with the cloud database, which permits the sharing of incident reports in real time with the project management team. Safety inspection processes are the responsibility of a safety advisor, who must determine the level of risk and identify any missing CPE. In addition, the time that this professional takes to perform the inspection is relevant, since a delay could result in an accident.

**Table 10.** Quantitative and qualitative KPIs of the application.


**Table 11.** KPI comparison between the current inspection and that proposed with AR.


#### **8. Discussion**

#### *8.1. Improvement of Construction Health and Safety Management Processes*

Employing effective health and safety protocols in the construction industry is crucial because of the high rate of accidents in the sector. However, health and safety in construction is not always given the priority it deserves. Indeed, it is typical for construction sites to prioritise time and the rapid progress of work over safety. In other words, while maintaining quality and scope, the aim is to advance as quickly as possible (especially in a sector where initial schedules frequently have to be extended). In this rush to meet deadlines, safety is often seen as a consideration that slows down the project's progress since it is not part of the construction itself.

Therefore, in order to achieve a balance between time-sensitive goals and safety concerns, it is important for the construction sector to improve its management systems and its implementation of health and safety plans. Thus, on the one hand, awareness policies and creating a safety culture can help to encourage workers to behave safely and to care about occupational safety. However, on the other hand, greater efficiency in the placement and inspection of CPE enables these elements to be seen as part of the work, directly associated and linked to the work plans, and not as extraneous elements that delay the start or continuation of a work unit because they are not in place or are poorly placed.

#### *8.2. Safety Inspection Opportunities with Augmented Reality*

The traditional inspection method has some shortcomings that do not support the vision of health and safety integration. While information management technologies have been incorporated, the experience of the safety officer still plays an important role in the efficiency and effectiveness of inspections. There are risks associated with the SA's criteria, time availability, and the multiple tasks to be reviewed in a day. It should be considered that inspectors must constantly review the entire building; consequently, constant inspection is necessary, and reducing both uncertainty and the time between inspections is relevant. In this sense, as shown in this article, augmented reality provides great benefits for construction site inspection, both for the construction process itself and for safety applications.

Chen et al. [18] use augmented reality to improve fire equipment inspection processes, given the drawbacks of paper-based tracking, and address the use of non-experts in the inspection process. Hasan et al. [31] use AR in a digital twin application, carried out at the prototype and laboratory level, raising the need for full-scale work. Chung et al. [45] use AR in a facility management system for buildings to display relevant information about real objects, suggesting the need to incorporate more complex objects to identify and to massify its use. With respect to these recent works, this research advances the use of augmented reality for inspection processes of collective protection elements. It provides a procedure and a tool that non-expert users (with partial knowledge of these environments and a low level of programming) can develop, opening the way to its massification in real work environments.

Several reasons have been identified that limit the use of AR in construction safety: it is considered an immature technology; workers lack the skills for its use and development; there is a lack of implementation protocols; and there is a lack of standardisation in development processes. Dávila et al. [14] suggest the need to study the adoption of augmented reality technologies, along with the factors that contribute to their massification, from a practical perspective (beyond the simple description of uses). More recently, Dávila et al. [15] suggest that among these factors, one of the most critical limits on adoption is that they are considered expensive and immature technologies. Added to this is the complex context and dynamics of the construction sector, where resistance to change is high. Chi et al. [16] suggest the need to advance research on the application of AR in the field, the connection with data of interest, and the effectiveness of deploying AR devices on the construction site. Specifically, in the area of construction safety, Li et al. [17] raise the need to combine and interconnect information from AR devices with other tools, to automate evaluation methods and processes, and to increase the amount of testing of functionalities in order to evaluate the effectiveness of these tools under different parameters. Nnaji et al. [47] consolidate the limitations of and barriers to adopting technologies for building safety management: costs, training of workers, technical support, and integration into work systems are the main issues. In addition, the small number of safety use cases fails to demonstrate the benefits of its use for different specific tasks in construction. There are also no protocols that standardise the development and use of augmented reality in safety management [19].
Given this potential and the scarcity of such applications to construction safety in the literature, this research is relevant because it enters a new phase of the project. Progress has been made in providing background information for the use of AR in safety inspection processes on construction sites and in offering methodological recommendations for the development and evaluation of these applications. The evaluation performed in the proof of concept demonstrates the procedure's reliability and encourages potential developers who replicate the designs to evaluate the performance of their applications. Thus, this research contributes to the state of the art in the following aspects:


#### *8.3. Potentials and Challenges of BIM and AR*

Considering the perspectives and challenges of the construction sector, BIM has given an important boost to the industry. The collaboration and integration of people, processes, and models help to promote the interconnection of management and the project in general. Thus, the integration of safety elements with BIM models makes it more natural to see that a work package, the construction of a beam, for example, considers materials and construction processes and safety elements. Presently, the use of BIM models is widespread in offices, but the deployment of BIM information has not yet become widespread in construction sites (including the specific construction site that we studied).

Augmented reality gives continuity to the BIM methodology at the job site, deploying the BIM models and safety elements in situ. Thus, in general, BIM+AR contributes to the solution of typical construction site problems. The extension of BIM into the construction stage, and indeed (virtually) onto the construction site itself, contributes to the resolution of doubts in the field, the management of construction processes, and information on materials and technical specifications, among other aspects. In the domain of safety, SAs can now use the full potential of BIM, reducing the inspector-experience variable, which is replaced by the full deployment of BIM information (data and 3D models) on mobile devices running augmented reality applications.

Some challenges are important to highlight for the mass deployment of BIM+AR on construction sites. BIM models are characterised by large file sizes and have large amounts of project data linked to them; the integration of such robust 3D models is restricted by the capability of AR viewing devices (glasses or tablets). In addition, moving towards open data formats is important for deployment on multiple devices, regardless of software or hardware brands. Furthermore, the professionals in charge of developing and deploying these tools must be trained, which represents a major effort, even more so when BIM alone is not yet fully implemented at construction sites. Finally, resistance to change in the sector is high: incorporating new technologies at the construction site is not easy due to the site's characteristics (outdoors, exposed to the weather, robust materials, dirt, and large machinery, among others) and its workers.

#### **9. Conclusions**

Based on the information obtained in the research project regarding health and safety in construction, the implementation of AR in the Architecture, Engineering, Construction, and Operation sector (specifically in the occupational health and safety field), the understanding of the different platforms, and the knowledge acquired during the development of this research, the following conclusions have been reached.

We have been able to identify the main pain points and bottlenecks of safety inspection: the possible robustness deficit in visual inspections; the dependence on the safety advisor's experience and knowledge; the lack of control over subcontractors' knowledge; possible bad practices among workers; the lack of coordination between reporting and corrective measures; and the lack of rapid responsiveness between the safety advisor and the project management team. All these problems are much more easily solved using digital means, which provide extra aid during the CPE inspection, which in turn is crucial to ensuring the safety of workers. In pursuit of the zero-accidents goal, the implementation of an AR technological tool has been studied to augment inspection capabilities, thus achieving a more robust procedure.

Different layers of information, detailed functional requirements of the CPE, proposed user interfaces, and a workflow to perform the inspections were proposed. In addition, key performance indicators to evaluate these applications were created. A workflow suited to the AR inspection was developed; this method also improves one of the current methodology's bottlenecks: the timing between reporting and taking corrective measures. Furthermore, this workflow preserves the main tasks of the SA during the safety inspections. The design of the application addresses the pain points of the current methodology, and the basis and roadmap for developing an AR application that could be implemented in a construction company are set.

For the development of the app, several platforms are available on the market. For this project, the Unity framework with the ARCore plug-in has been used, due to the following advantages: ease of use for non-advanced programmers and a strong UX that makes the platform's functionality easier to understand and eases the transfer of BIM models. ARCore provides several advantages for AR applications, such as predefined functionalities, which help in the development of AR experiences and have been used in this project.

AR has potential in the occupational risk prevention (ORP) field, as it can ease the safety advisor's inspection with the main objective of verifying the safety equipment's presence and maintenance, thus serving the industry's goal of reaching zero accidents on construction sites. An application such as that proposed in this research offers a digitalised solution that helps safety inspectors guarantee the presence of CPE and report incidents within the same application, reducing the need to manipulate several things at the same time. A single screen holds the real elements, the virtual ones, and the items to be checked in the CPE checklist UI. This increases the perception of the environment, which is crucial for detecting the safety equipment and inserting the collected data into a database with information on the building site. To summarise, the main points of improvement from the traditional inspection to the proposed AR inspection are as follows:


For the development of this application, approximately 400 h were invested. However, with the knowledge acquired in this project, creating the same prototype again would take approximately three working days, i.e., 24 h. Developing a full application for use on a real construction site, which would entail including all the digital (BIM) models already created along with the database of available CPE elements, would take approximately one month, i.e., 160 h. Moreover, these hours could be greatly reduced depending on the programming experience of the developer and, of course, on whether that experience is in similar projects.

Future lines of research could involve the following: (1) iterating the development process to build a production-level app, confirming the viability of the app's implementation; (2) validating the application on real construction sites, using safety advisors' assessment feedback to improve the app; (3) studying the automation of the CPE-based 3D model and of the data introduced from the BIM software; and (4) automating the development of AR applications.

**Author Contributions:** This paper represents the results of teamwork. Conceptualisation, J.M.-S. and A.D.; methodology, investigation, resources, and data curation, J.R.-H., J.M.-S. and F.M.-L.R.; writing—original draft preparation, J.R.-H.; writing—review and editing, J.M.-S., F.M.-L.R., A.D. and I.V. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work has been supported by the Ministry of Science, Innovation and Universities of Spain (MICIU) through the BIMIoTICa project (RTC-2017-6454-7) and by the CONICYT for its economic support to Felipe Muñoz, beneficiary of a pre-doctoral grant (CONICYT—PCHA/International Doctorate/2019-72200306). The authors also acknowledge the financial support from the Spanish Ministry of Economy and Competitiveness, through the "Severo Ochoa Centre of Excellence (2019–2023) under the grant CEX2018-000797-S funded by MCIN/AEI/10.13039/501100011033".

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.



### *Article* **Construction Tasks Electronic Process Monitoring: Laboratory Circuit-Based Simulation Deployment**

**Diego Calvetti 1,\*, Luís Sanhudo 2, Pedro Mêda 1, João Poças Martins 3, Miguel Chichorro Gonçalves 3 and Hipólito Sousa 3**


**Abstract:** The domain of data processing is essential to accelerate the delivery of information based on electronic performance monitoring (EPM). The classification of the activities conducted by craft workers can enhance the mechanisation and productivity of activities. However, research in this field is mainly based on simulations of binary activities (i.e., performing or not performing an action). To enhance EPM research in this field, a dynamic laboratory circuit-based simulation of ten common construction activities was performed. A circuit feasibility case study of EPM using wearable devices was conducted, in which two different data processing approaches were tested: machine learning and multivariate statistical analysis (MSA). Using the acceleration data of both wrists and the dominant leg, the machine-learning approach achieved an accuracy between 92 and 96%, while MSA achieved 47–76%. Additionally, the MSA approach achieved 32–76% accuracy by monitoring only the dominant wrist. The results highlighted that processes conducted with manual tools (e.g., hammering and sawing) have prominent dominant-hand motion characteristics that are accurately detected with one wearable. However, free-hand performing (masonry), walking and do-not-operate-value states (e.g., sitting) require more motion analysis data points, such as from the wrists and legs.

**Keywords:** electronic performance monitoring; wearable devices; process modelling; machine learning; multivariate statistical analysis

### **1. Introduction**

The construction industry (CI) is a significant player in the world economic scenario, as construction-related spending accounts for 13% of the global gross domestic product (GDP) [1]. However, despite its importance, this sector has shown weak productivity growth at a global scale [2], averaging a 1% annual productivity increase since 1997 [1]. Crafts and trade workers comprise 56% of the sector's employment at the European Union level [3]. Innovation is required to mitigate the impact of workforce shrinkage on the industry, boosting labour productivity on site. To this end, there is an increased relevance in monitoring the industry's primary productive workforce, justifying its importance as a research topic, which is aligned with the natural interests of companies [4] and the digitalisation and automation trends of Construction 4.0 [5,6]. Through this monitoring, companies can better evaluate their return on investment [4,7,8], while also providing supervisors with better information to support workforce development, training and deployment [5]. Authors refer to this monitoring and performance measurement as electronic performance monitoring (EPM) [8–10].

As such, the current technological advances enable new, more reliable methods of data collection, allowing for the real-time monitoring of construction activities. These methods are supported by recent innovations in micro and nanotechnology that enable the sustained assessment of each worker's task process. The systematic control of construction operations

**Citation:** Calvetti, D.; Sanhudo, L.; Mêda, P.; Martins, J.P.; Gonçalves, M.C.; Sousa, H. Construction Tasks Electronic Process Monitoring: Laboratory Circuit-Based Simulation Deployment. *Buildings* **2022**, *12*, 1174. https://doi.org/10.3390/ buildings12081174

Academic Editor: Fahim Ullah

Received: 20 June 2022 Accepted: 31 July 2022 Published: 6 August 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

can bring immediate awareness of specific aspects of ongoing activities, enabling better decision making [11] and the assessment of the project's productivity in order to increase its performance [12]. Additionally, on-site labour productivity can be correlated with carbon dioxide (CO2) emissions and the generation of sanitary wastewater [13]. In fact, according to Mojahed and Aghazadeh, in the context of construction engineering, productivity is mainly related to the performance achieved within each work activity [14]. Finally, digital twin approaches focused on managing production on the construction site are vital for construction site monitoring [15,16].

The present research aims to assess this EPM in multiple construction tasks, using wearable devices, machine-learning and multivariate statistical analysis (MSA) data processing tools. The objectives of this work are the following:


#### **2. Background**

Workforce activity classification through wearable devices such as inertial measurement units (IMUs) is performed using different sensor combinations, namely: accelerometer [17–21]; accelerometer plus gyroscope [22–26]; and accelerometer, gyroscope, plus magnetometer [27–29]. Additionally, IMU devices are positioned over multiple body parts simultaneously [17,20,27] or on individual body parts, such as: the spine [30,31]; arm [22–25,28]; arms and waist [21,32]; wrist [18,19]; and wrist and leg [26,33].

Labour process modelling based on workforce motion is more commonly applied in manufacturing work design than in the CI, with few studies targeting a process analysis approach in the CI [21,32]. The approach of modelling and measuring manual work systems paves the way for a comprehensive understanding of construction labour motion productivity. The process flow literature defines five classes of activities that comprise all production tasks [34–36]:


The productive state addresses workforce performance during the development of tasks, which can be Productive (also referred to as Direct or Effective) work, Contributory (also referred to as Support) work or Nonproduction (also referred to as Ineffective) work [37]. This concept was applied in Refs. [21,32,38] to cluster construction activities into Effective–Support–Ineffective work. A motion productivity model establishes nine processes to map craft workforce on-site tasks [5], as presented:


Table 1 presents studies targeting construction task activity or process recognition. The maximum number of activities analysed in a single study totals eight. It is also observed that, on average, six to seven individuals (subjects) perform the activities. Most studies performed laboratory simulations (eleven out of thirteen), while only two were conducted in a more realistic scenario at a training centre. For clarity, a binary analysis can be identified when only one action is evaluated against another action or an idleness state (stopped).


**Table 1.** Studies on activity/process recognition.



Several mathematical and statistical methods can be used to process and analyse the data. In most cases, these methods are used in conjunction with univariate and multivariate analyses [42] and Monte Carlo simulation [38]. Dynamic analysis methods and neural networks are also widely used. There is a trend towards applying artificial intelligence (AI) when faced with the large amounts of data collected by electronic devices, to process such information more quickly and autonomously. Academic studies focusing on the classification of human activities/actions develop algorithms based on machine learning, including deep learning [28,43] and traditional approaches [19,20,44]. Machine learning is a subset of AI and can be seen as an autonomous, self-teaching system for training algorithms to find patterns and subsequently use this knowledge to make predictions about new data [45,46]. The domain of data processing is essential to accelerate the delivery of information based on EPM: the faster and more autonomous the data processing, the more agile the delivery of solutions.

#### **3. Method**

#### *3.1. Research Design*

As highlighted above, most studies on activity and process recognition showcase a binary approach (performing/not performing an activity) [22–25,39,41], with few experiments conducted on site [21,32]. To fill this gap, the present research proposes a laboratory circuit with multiple activities, emulating on-site conditions for testing EPM deployment. A laboratory environment provides a more controlled and labour-saving environment to record and label the actions, as well as test the hardware solutions. Figure 1 presents the data collection and analysis flow chart to clarify the validation approach of the different cases. The main goal is to test and validate the laboratory circuit-based simulation deployment and compare and evaluate two data analysis approaches, assessing their feasibility and performance accuracy.

**Figure 1.** Research flow chart.

#### *3.2. Data Collection*

For efficient performance monitoring of the construction craft workforce, it is essential to assess at least the hand tasks, walking–travelling and idleness. To this end, a circuit concept was established to simulate the interactive work scenario seen in a typical on-site construction project. This circuit purposely avoided a binary analysis, as it is the authors' opinion that such an approach does not properly reflect a construction worker's behaviour. For this reason, basic daily work activities were selected, which are part of the daily routine of workers in different functions. It can be inferred that, given the role of a specific worker, his/her actions can be mapped in advance, which would facilitate activity classification. Additionally, according to Adrian (2004), at least 50% of the workforce time on site is spent on non-productive activities (e.g., walking, drinking water, talking to co-workers) [37]. A total of six volunteers were equipped with three devices, one on each wrist and one on the dominant leg's ankle. The inertial measurement unit (IMU) devices, similar in form to watches, collected 3-axis data at a sampling frequency of 100 Hz with a 1 s epoch output. Each data point is thus represented as a vector containing the timestamp of the reading and nine acceleration values (one for each axis of the three accelerometers).
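The 1 s epoch output from 100 Hz sampling described above can be sketched as a simple aggregation step. This is an illustrative reconstruction, not the authors' pipeline: the `epoch_means` helper and the choice of the mean as the per-epoch statistic are assumptions.

```python
import numpy as np

def epoch_means(samples_100hz: np.ndarray) -> np.ndarray:
    """Collapse raw 100 Hz samples into 1 s epoch vectors.

    samples_100hz: shape (n_seconds * 100, 9) -- the three 3-axis
    accelerometers (both wrists plus the dominant leg's ankle).
    Returns shape (n_seconds, 9): one mean acceleration per axis per second.
    """
    n_seconds = samples_100hz.shape[0] // 100
    trimmed = samples_100hz[: n_seconds * 100]
    return trimmed.reshape(n_seconds, 100, 9).mean(axis=1)

# Five seconds of synthetic raw data -> five 9-value epoch vectors.
raw = np.random.default_rng(0).normal(size=(500, 9))
epochs = epoch_means(raw)
print(epochs.shape)  # (5, 9)
```

In a real deployment, the reading timestamp would be carried alongside each epoch vector, as the paper describes.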

Figure 2 shows the circuit deployed in a 150-square-metre indoor laboratory. The circuit was composed of work areas in which the volunteers interactively performed the ten activities. The volunteers walked from station to station to perform these activities, with the stations marked by letters in the figure. A path sequence, indicated by the numbers, was also set out; however, the volunteers had the option of carrying two or four bricks at once, shortening the travel between actions B (Masonry collection) and C (Masonry deployment).

**Figure 2.** Laboratory circuit with the path and activity sequence, the first set by the numbers and the following by the letters.

#### *3.3. Data Analysis*

Figure 3 presents the deployed method. After each completion of the circuit containing the ten construction activities, the simulation had its actions labelled second by second. These data were then processed and evaluated by two different methods, and a graphical analysis of the accelerations collected during each activity complemented the evaluation. Finally, a qualitative analysis of the two methods was provided. The data were labelled manually using a synchronised video recording of the circuit. The data points of each activity were:
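The second-by-second labelling against a synchronised video can be modelled as an interval lookup: each activity observed in the video defines a start time, and every 1 s epoch inherits the label of the most recent start. A minimal sketch; `label_epochs` and the interval format are hypothetical, not the authors' tooling.

```python
from bisect import bisect_right

def label_epochs(intervals, n_seconds):
    """Assign an activity label to each 1 s epoch.

    intervals: (start_second, label) pairs sorted by start time, read off
    the synchronised video; each label holds until the next interval starts.
    """
    starts = [start for start, _ in intervals]
    labels = []
    for t in range(n_seconds):
        i = bisect_right(starts, t) - 1
        labels.append(intervals[i][1] if i >= 0 else "unlabelled")
    return labels

print(label_epochs([(0, "Walking"), (3, "Hammering")], 5))
# ['Walking', 'Walking', 'Walking', 'Hammering', 'Hammering']
```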


**Figure 3.** Laboratory experiment method.

Next, the activities were clustered into three small mixed groups based on the process characteristics and the number of data points. First, a mixed group was established with the only "free-hand performing" activity (Masonry) plus two "manual tools" processes (Painting and Roughcasting). A second contained just "manual tools" (Hammering, Sawing and Screwing). A third was composed of "do not operate value" (Wearing PPE; Sitting; Standing still) and "walking" (Walking).

Additionally, the formation of the three analysis groups, each with different processes and their respective activities, was based on diversification, to evaluate the variability in classification accuracy across groups of activities with different motion patterns. The group "free-hand performing (Masonry) plus two manual tools (Painting and Roughcasting)" mixed motion types: the masonry action involved multiple two-handed and whole-body motions, while painting and roughcasting involved an almost static body, with a prevalence of high-frequency motions of the dominant hand and only a few supporting motions of the non-dominant hand.

In the group "manual tools (Hammering, Sawing and Screwing)", the actions shared the use of hand tools and the near-absence of leg movement. A predominance of high-frequency dominant-hand motions could be noted, supported only by quasi-static motion of the non-dominant hand. Nevertheless, the three activities demanded distinctly different dominant-hand motions.

Finally, the group "do not operate value (Wearing PPE, Sitting, Standing still) plus walking (Walking)" mixed the walking movement with resting actions and some unusual movement activities, such as putting on protective equipment (e.g., gloves, glasses, helmet).

A total of 155 min of activities was monitored. The process of labelling the actions (second by second) took approximately 26 h. Table 2 shows the number of points (variables) collected by the 3-axis accelerometers positioned in three locations on the volunteers' bodies.

**Table 2.** Data collected.


For data processing, the pre-labelling of activities is necessary. First, the classification algorithms used the labelled data for training. Second, pre-labelling enabled quantifying the accuracy of the analyses. MSA was used to group data according to the characteristics of the variables. Both processes were applied to understand the potential of each method to be used in future applications. In addition, a graphical analysis of the accelerations collected in the 3 axes allowed an assessment of the specific characteristic of each activity, improving the perception of the results obtained by the classification methods.

#### **4. Results and Discussion**

*4.1. Acceleration Data*

This section presents and discusses the activities' acceleration data characteristics and the results of both processing methods applied for classifying the actions (i.e., machine learning and MSA). A cross-analysis focuses on a deep understanding of the tasks and process motion characteristics. Finally, a qualitative analysis of the processing tools is presented based on the findings.

The clustering of activities targeted a diverse set of motion characteristics. In the first group, the Masonry activity presented a mix of motions, from loading to laying the bricks, with a variety of accelerations in the three axes (X, Y, Z) in both the dominant and non-dominant hands and the leg. In contrast, the Painting activity has more significant dominant-hand motions, with marked acceleration in the Z direction and little or no effort in the other hand and the legs. Finally, in Roughcasting, a predominance of dominant-hand movements can be identified, with more significant accelerations in the Z and X directions. Figure 4 presents the acceleration characteristics of the dominant and non-dominant wrists, as well as the dominant leg, for the three activities, making it possible to visualise the distinct motion patterns. It can be inferred that classifying these three activities can produce some misleading results because of the non-linearity of the Masonry activity, which might overlap with some motion characteristics captured in the other activities. This becomes even clearer when the classification is based only on the dominant-wrist motions, as the leg variable that would differentiate it from the others is lost, reducing accuracy.

**Figure 4.** Motion characteristic of Masonry, Painting and Roughcast.

In the second group, a more precise classification/clustering is observed in activities using manual tools. All three activities have similar characteristics: significant dominant hand motions, non-dominant hand support when adjusting the material/element and supporting the body, and virtually no leg motions. The predominance of movements in vectors X and Z stands out in the hammering activity. The predominance of acceleration in the X direction is evident in the Sawing activity. Finally, in the Screwing activity, a linear pattern is seen, with the predominance of vectors Y and Z. This set of activities is presented in Figure 5. It is observed that the accuracy of the classifiers should not be influenced by removing the non-dominant hand and leg data. On the contrary, there is a subtle accuracy increase.

**Figure 5.** Motion characteristic of Hammering, Sawing and Screwing.

Finally, in the last group, except for the Standing still activity, which presents practically no motions with representative accelerations, the other activities have peculiar motions. For example, volunteers move their hands and legs slightly even while sitting. When Wearing PPE, no significant leg motions are detected, but the hands show practically random accelerations. When Walking, the limbs' motions and accelerations have a similar cadence for each individual. Machine learning interprets these acceleration patterns more accurately than the other method used in this study; simple grouping by multivariate analysis only readily separates the Standing still state and cannot distinguish cadences with similar accelerations. The most prominent vectors of each activity are represented in Figure 6 for Wearing PPE, Sitting, Standing still and Walking. When only the dominant-wrist data are considered, there is practically no difference in accuracy, indicating that the detected dominant-wrist motion was distinctive enough to differentiate these actions.

**Figure 6.** Motion characteristic of Wearing PPE, Sitting, Standing still and Walking.

#### *4.2. Machine Learning*

After data collection, all data points must be characterised manually for the machine-learning approach, which requires using the video recordings as a reference. This step identifies and labels the type of process, action or motion at each moment of the analysis. The data are then commonly divided into groups according to the characteristics of the sample. Afterwards, feature extraction is performed to identify the characteristics most useful for classifying each group of actions. Next, the classifiers are selected, and each method's reliability (%) is evaluated on the sample. Finally, a set of algorithms can be calibrated to carry out future autonomous analyses.
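The segmentation and feature-extraction steps described above can be sketched as follows. The paper mentions "artificial features" without listing them, so the mean/standard deviation/RMS set here is an assumption chosen for illustration; `window_features` is a hypothetical helper.

```python
import numpy as np

def window_features(epochs: np.ndarray, window: int) -> np.ndarray:
    """Segment 1 s epoch vectors into fixed time windows and extract
    simple statistical features (mean, standard deviation, RMS) per axis.

    epochs: shape (n_seconds, 9); window: window length in seconds.
    Returns shape (n_windows, 27): 9 axes x 3 features each.
    """
    n_windows = epochs.shape[0] // window
    segments = epochs[: n_windows * window].reshape(n_windows, window, 9)
    mean = segments.mean(axis=1)
    std = segments.std(axis=1)
    rms = np.sqrt((segments ** 2).mean(axis=1))
    return np.concatenate([mean, std, rms], axis=1)

# 120 s of synthetic epochs, segmented into 6 s windows.
feats = window_features(np.random.default_rng(0).normal(size=(120, 9)), 6)
print(feats.shape)  # (20, 27)
```

Each window would carry the (majority) activity label of its epochs for classifier training.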

As previously presented, the ten activities were divided into three groups: Free-hand performing (Masonry) + Manual tools (Painting and Roughcasting); Manual tools (Hammering, Sawing and Screwing); Do not operate value (Wearing PPE, Sitting, Standing still) + Walking (Walking). Several classification conditions were studied, including the ideal time window size (in seconds) to segment the data; the extraction and selection of relevant artificial features; the adjustment of hyperparameters (time windows and parameter grouping masses); and the training and selection of the classifier. As presented in Figure 7, the hyperparameter and classifier selection was based on the best accuracy obtained through a cross-validation approach using two training loops.

**Figure 7.** Cross-validation approach comprising two training loops.
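The two-training-loop cross-validation of Figure 7 corresponds to nested cross-validation: an inner loop tunes hyperparameters, while an outer loop estimates accuracy on data the tuned model has not seen. A sketch using scikit-learn on synthetic stand-in data; the actual features, classifiers and parameter grids of the study differ.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the windowed acceleration features and labels.
X, y = make_classification(n_samples=300, n_features=27, n_informative=10,
                           n_classes=3, random_state=0)

# Inner loop: hyperparameter search; outer loop: unbiased accuracy estimate.
inner = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5,
                               scoring="balanced_accuracy")
print(round(float(outer_scores.mean()), 3))
```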

Finally, the classifiers were tested in a subject-independent (i.e., classifier trained without any test subject data) and -dependent (i.e., classifier trained with a portion of the test subject data) approach for all activities, using the optimal reasonable time window.
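The subject-independent evaluation above corresponds to a leave-one-subject-out split: the classifier is trained on five volunteers and tested on the sixth. A sketch with scikit-learn's `LeaveOneGroupOut` on synthetic data; the estimator and data here are illustrative stand-ins, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 27))           # stand-in feature windows
y = rng.integers(0, 10, size=600)        # ten activity labels
subjects = np.repeat(np.arange(6), 100)  # six volunteers

# Each fold holds out one whole volunteer, so the classifier never sees
# the test subject's data during training (subject-independent).
scores = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                         X, y, groups=subjects, cv=LeaveOneGroupOut())
print(scores.shape)  # one score per held-out subject -> (6,)
```

The subject-dependent variant would instead mix a portion of every subject's data into each training fold, e.g. via a stratified split ignoring the groups.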

Thirteen different classifiers were evaluated, including both basic models and ensemble methods:


For an analysis of the subject-independent approach, Figures 8–10 show the classifiers' performance (average balanced accuracy) for each group and window combination. Figure 11 presents the average performance of all groups per window, and Figure 12 presents the average performance of all windows per group.

**Figure 8.** Classifiers' performance for Masonry, Painting and Roughcasting.

**Figure 9.** Classifiers' performance for Hammering, Sawing and Screwing.

**Figure 10.** Classifiers' performance for Wearing PPE, Sitting, Standing still and Walking.

**Figure 11.** Classifiers' average performance for all groups per window.

Figure 8 shows the results for the "free-hand performing (Masonry) plus two manual tools (Painting and Roughcasting)" group, where the best performing classifier showcased a 92.71% accuracy (6 s window with the LSVM classifier).

Figure 9 shows the results for the "manual tools (Hammering, Sawing, and Screwing)" group, where the best performance was a 96.07% accuracy for the vote classifier with a 5 s window.

Finally, Figure 10 shows the results for the group "do not operate value (Wearing PPE, Sitting, Standing still) plus walking (Walking)", where the best performance was achieved for a 6 s window with the GrB classifier—94.66% accuracy.

In summary, all three groups presented a similar range of performance, with all best accuracies at above 92%.

For an analysis of a subject-dependent approach, Figure 13 showcases the performance of all classifiers when applied to all ten activities and a 6 s window. This approach essentially indicates whether and how much the classifier performance would benefit from gathering the training data of new subjects (i.e., workers) before starting to predict their activities. To help compare both approaches, Figure 13 also shows the same analysis for a subject-independent approach, enabling a side-by-side comparison.

As such, from Figure 13, it can be concluded that the KNN classifier achieved the best performance of 93.69%, with the AdB classifier a close second at 93.57%. Both of these highest accuracies were achieved with the subject-dependent approach, which reached an average performance of 86.08% across all classifiers. This accuracy is roughly 6% higher than the 80.43% average achieved by the subject-independent approach. In fact, the subject-independent approach with all activities was also far below the accuracies achieved for each group independently, whose highest values were all above 92%, as previously seen in Figures 8–10. Thus, it can be stated that dividing all activities into smaller groups is vital to increase accuracy, while subject dependence can boost the accuracy even further.

**Figure 12.** Classifiers' average performance for all groups per window.

Nevertheless, even without all favourable conditions, the achieved accuracies are encouraging, with the GrB classifier achieving a maximum of 85.54% when facing all activities and a subject-independent approach (Figure 13).

**Figure 13.** All ten activities at once (6 s window).

#### *4.3. Multivariate Statistical Analysis*

The multivariate statistical analysis aims to verify the formation of clusters in the data collected during the experiment. IBM SPSS Statistics (version 25) was the main software tool for the statistical calculations, and Microsoft Excel was used for the graphical formatting of the SPSS output. Analysis by non-hierarchical classification allows the evaluation of the dimensionality of the clusters formed by the subjects [47]. A synthesis of the mathematical results is carried out with respect to the three groups of processes and the ten activities: Free-hand performing (Masonry) + Manual tools (Painting and Roughcasting)—three clusters; Manual tools (Hammering, Sawing and Screwing)—three clusters; Do not operate value (Wearing PPE, Sitting, Standing still) + Walking (Walking)—four clusters.

After applying the non-hierarchical classification according to the number of activities in each group, the results were compared with the labelled data to determine the accuracy of these processes. Initially, to group the activities into clusters, only the absolute values of the accelerations (on their three axes) collected on the wrists and one leg were used as variables (83,772 data points), i.e., nine features per analysis. Next, the same process was performed using only the acceleration values (three axes) of the volunteers' dominant hands; this analysis thus had a third of the number of variables used in the previous case (27,924 data points), i.e., three features.
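The non-hierarchical classification used here is a k-means-style procedure, so the nine-feature versus three-feature comparison can be sketched with scikit-learn's `KMeans` as a stand-in for the SPSS routine. The data below are synthetic placeholders; array names and sizes are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
abs_acc_9 = np.abs(rng.normal(size=(3046, 9)))  # |acceleration|, 3 axes x 3 devices
abs_acc_3 = abs_acc_9[:, :3]                    # dominant-wrist axes only

# One cluster per activity in the group (three activities -> k = 3).
clusters_full = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(abs_acc_9)
clusters_wrist = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(abs_acc_3)
print(len(set(clusters_full)), len(set(clusters_wrist)))  # 3 3
```

Comparing each row's cluster assignment against its pre-assigned label then yields the accuracy figures reported below.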

Table 3 presents the set-up requirements for a non-hierarchical classification of the Free-hand performing (Masonry) + Manual tools (Painting and Roughcasting) processes and tasks, fitting three clusters to the activities. To group the activities into clusters, the absolute values of the accelerations (on their three axes) collected on the wrists and one leg were used, for a total of nine variables (27,414 data points).


**Table 3.** Set-up of three clusters.

Table 4 presents the iteration history. Iterations stopped because the maximum number of iterations was reached. The maximum absolute coordinate change for any centre is 0.836 at the current (tenth) iteration, and the minimum distance between the initial centres is 584.024. Table 5 shows the distances between the final cluster centres, and Table 6 presents the number of cases in each cluster.

**Table 4.** Iteration history.


**Table 5.** Distances between final cluster centres.



In effect, each of the 3046 lines (activities identified each second) was assigned to a cluster. A small extract of this information is shown in Table 7. Moreover, a true-or-false analysis was carried out line by line to identify whether the indicated cluster matched the real label; totalling the correct results gives the accuracy of the analysis. The multivariate analysis of "Free-hand performing (Masonry) + Manual tools (Painting and Roughcasting)" indicated 1855 correct values out of 3046, reaching an accuracy of 60.90%. The same process of clustering only the three axes of the dominant hand (9138 data points) achieved 32.50% accuracy.

Table 8 presents the set-up requirements for a non-hierarchical classification of the Manual tools (Hammering, Sawing and Screwing) process and tasks, creating three clusters of activities. To group the activities into clusters, the absolute values of the accelerations (on their three axes) collected on the wrists and one leg were used, for a total of nine variables (32,796 data points). Table 9 presents the iteration history. Iterations stopped because the maximum number of iterations was reached. The maximum absolute coordinate change for any centre is 0.547 at the current (tenth) iteration, and the minimum distance between the initial centres is 575.296. Table 10 shows the distances between the final cluster centres, and Table 11 presents the number of cases in each cluster.
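The line-by-line true-or-false verification can be sketched as follows. The paper compares each row's cluster against its label directly; the majority-label mapping used here to match cluster IDs to activity names is an assumption for illustration, and `cluster_accuracy` is a hypothetical helper.

```python
import numpy as np

def cluster_accuracy(clusters, labels):
    """Map each cluster to its majority activity label, then return the
    fraction of rows whose cluster matches their real label."""
    clusters, labels = np.asarray(clusters), np.asarray(labels)
    correct = 0
    for c in np.unique(clusters):
        _, counts = np.unique(labels[clusters == c], return_counts=True)
        correct += counts.max()  # rows agreeing with the cluster's majority label
    return correct / len(labels)

acc = cluster_accuracy([0, 0, 1, 1, 2, 2],
                       ["Masonry", "Masonry", "Painting",
                        "Roughcast", "Roughcast", "Roughcast"])
print(round(acc, 4))  # 0.8333
```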

**Table 7.** Verification process.



**Table 8.** Set-up of three clusters.

| | | |
|---|---|---|
| Output Created | | |
| Resources | Processor Time | 00:00:00.44 |
| | Elapsed Time | 00:00:00.00 |
| | Workspace Required | 1944 bytes |
| Variables Created or Modified | QCL\_1 | Cluster Number of Case |
| | QCL\_2 | Distance of Case from its Classification Cluster Centre |


**Table 9.** Iteration history.


**Table 10.** Distances between final cluster centres.


**Table 11.** Number of cases.


Each of the 3644 lines (activities identified each second) was assigned to a cluster. A small extract of this information is shown in Table 12. Again, a true-or-false analysis was carried out to evaluate the labelling accuracy. The multivariate analysis of the Manual tools (Hammering, Sawing and Screwing) process and tasks indicated 2772 correct values out of 3644, reaching an accuracy of 76.07%. Using just the three axes of the dominant hand (10,932 data points), the analysis achieved a slightly higher accuracy of 76.23%.

Table 13 presents the set-up requirements for a non-hierarchical classification for the group containing the activities of Wearing PPE, Sitting, Standing still and Walking. To obtain the four clusters of activities, data of the wrists and one leg were used as nine variables (23,562 data points).


**Table 12.** Validation process.

**Table 13.** Set-up of four clusters.


Table 14 presents the iteration history; iterations stopped because the maximum number of iterations was reached. The minimum distance found between the initial centres is 531.495, and after ten iterations the maximum absolute coordinate change for any centre is 3.637. Table 15 presents the distances between the final cluster centres. Table 16 shows the number of cases in each cluster.

**Table 14.** Iteration history.



**Table 15.** Distances between final cluster centres.

**Table 16.** Number of cases.


A small extract of the clustering information for the 2618 processed lines (activities identified each second) is shown in Table 17. The multivariate analysis of the activities of Wearing PPE, Sitting, Standing still and Walking indicated 1249 correct values out of 2618, reaching an accuracy of 47.71%. Finally, an accuracy of 46.60% was achieved by clustering the data from the three axes of the dominant hand (7854 data points).

**Table 17.** Validation process.


Tables 18 and 19 present a summary of all the analyses. The Manual tools process with three activities (Hammering, Sawing and Screwing) achieved the highest accuracy in both situations—Wrists and Leg (76.07%) and Wrist-dominant (76.23%)—and was the only case in which the highest accuracy came from clustering only the Wrist-dominant data. The group of two processes and three activities, "Free-hand performing (Masonry) + Manual tools (Painting, Roughcasting)", achieved accuracies of 60.90% (Wrists and Leg) and 32.50% (Wrist-dominant); here, using only the wrist-dominant data almost halved the accuracy. Finally, the lowest accuracies were identified in the group of two processes and four activities—"Do not operate value (Wearing PPE, Sitting, Standing still) + Walking (Walking)"—47.71% and 46.60%, respectively, for the Wrists and Leg and Wrist-dominant data; in this case, only a slight difference was noted.

In the "Free-hand performing (Masonry) + Manual tools (Painting and Roughcasting)" group, a large increase in the maximum absolute coordinate change for any centre was observed when only the dominant-wrist data were considered. Additionally, the distances between the final cluster centres decreased significantly. This was not the case for the other two groups, which maintained similar values in both scenarios. A summary analysis is presented in the next section.


**Table 18.** Classification results summary.

**Table 19.** Clustering results summary.


*4.4. Classification and Clustering Cross-Analysis*

Figure 14 illustrates the accuracy achieved by the machine-learning process and the multivariate statistical analysis. The classifications performed through machine learning reached high precision in all three cases, whereas the multivariate statistical analysis showed moderate accuracy. An analysis is then developed to compare the classification accuracy obtained with the three collection points against that obtained with the wrist-dominant data alone. The objective is to explain why the first group loses approximately 50% of its accuracy, while in another case (Manual tools) there is a subtle increase in accuracy, and in the last group the difference is less than 1%. Finally, based on the results and the analyses, an overall evaluation of the machine-learning methods and the multivariate statistical analysis can be put forward (see Figure 15). The purpose of these classifiers is to avoid manual data processing, since in a real-life construction situation the large volume of data would require extensive manual work.

**Figure 14.** Accuracy (up to 100%) achieved for the three activity groups: Free-hand performing (Masonry) + Manual tools (Painting and Roughcasting); Manual tools (Hammering, Sawing and Screwing); Do not operate value (Wearing PPE, Sitting, Standing still) + Walking (Walking).


**Figure 15.** Comparative evaluation.

It can be concluded that the multivariate statistical analysis method alone is not able to label actions. However, multivariate analysis can speed up the labelling work, since a preliminary process can seek to cluster the data and facilitate their visual interpretation. The multivariate analysis method is more straightforward than machine learning and can achieve a moderate accuracy with fewer features to vectorise, demanding less expert knowledge and computational capacity. The enormous potential of machine learning lies in creating algorithms that, once able to interpret an activity (based on acceleration), can perform this task without the need for pre-labelling or training sets. The idealised format of an activity circuit can assist in the algorithms' calibration, since a new individual can be monitored in a known and pre-established sequence of activities/actions.

#### **5. Conclusions**

The dynamic circuit proposed in this paper makes EPM laboratory experimentation more similar to the on-site reality. The option of grouping activities by their motion characteristics proved essential for the analysis, since it was demonstrated that mixing activities with heterogeneous motions/accelerations hampers the classification process. Future research in the field of EPM and activity classification should observe this practice of clustering activities with distinct motion characteristics. Moreover, this is an accurate picture of the work performed by craft workers on site. Classifying activities and grouping them within a process analysis allows a deep understanding of the motion characteristics; process modelling analysis, however, can provide a better performance evaluation.

The experiment conducted to test the circuit's feasibility deploys electronic monitoring, using wearable devices (IMUs) to collect motion acceleration from the wrists and the dominant leg. Activities with multiple motion characteristics, such as free-hand performing (e.g., masonry), walking and do-not-operate value (e.g., wearing PPE and sitting), require more motion-analysis data points, such as wrists and legs. On the other hand, processes conducted with manual tools (e.g., painting, roughcasting, hammering, sawing and screwing) have prominent dominant-hand motion characteristics that are easily detected with just one wearable. In summary, processing the data with two approaches (i.e., machine learning and MSA) in a laboratory circuit with six subjects using three activity groups resulted in the following:


In practice, classifying workforce activities serves to better understand and map the on-site processes. Proper activity classification is crucial for modelling the construction process, applying lean concepts and eliminating unnecessary motion. Task data analysis can quantify the time a worker spends using a manual tool, walking/travelling or carrying elements, for instance. These data can be used to implement improvements, for example, providing electric tools or bench stations and reorganising the site stock to avoid long walks to collect elements and accessories.

The main contribution of this research is to establish and test an EPM laboratory circuit-based simulation that introduces a way to develop, test and improve in-house EPM solutions for deployment on site. The laboratory circuit is the appropriate testbed environment to develop data processing approaches, for example, mixing methodologies to improve the accuracy and outcome lead time. Additionally, in the laboratory, multiple wearable devices and other technologies (e.g., filming) can be combined in different ways, changing data collection points and assessing the impact of these changes on the solutions' autonomy, scalability and user comfort.

It is expected that the development of similar circuits in other locations will enable comparisons with the results presented in this paper. Further research will focus on setting a larger circuit, adding activities such as inspection duties, electric tools use and machine operations. Additionally, other mixed approaches to electronic performance monitoring will be tested using images, sound and geolocation. Finally, developing faster and more accurate data processing algorithms is a critical goal for this type of solution to be deployed on site.

**Author Contributions:** Conceptualisation, D.C.; methodology, D.C. and L.S.; software, D.C. and L.S.; validation, D.C., L.S., P.M. and J.P.M.; formal analysis, P.M., J.P.M., M.C.G. and H.S.; investigation, D.C., M.C.G. and H.S.; resources, J.P.M., M.C.G. and H.S.; data curation, D.C. and L.S.; writing original draft preparation, D.C., L.S., P.M. and J.P.M.; writing—review and editing, D.C., L.S., P.M. and J.P.M.; visualisation, D.C., L.S., P.M. and J.P.M.; supervision, J.P.M., M.C.G. and H.S.; project administration, J.P.M., M.C.G. and H.S.; funding acquisition, J.P.M., M.C.G. and H.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** Base Funding of the CONSTRUCT—Instituto de I&D em Estruturas e Construções—funded by national funds through the FCT/MCTES (PIDDAC): UIDB/04708/2020. This work is supported by the European Social Fund (ESF), through the North Portugal Regional Operational Programme (Norte 2020) [Funding Reference: NORTE-06-3559-FSE-000176].

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Field Work's Optimization for the Digital Capture of Large University Campuses, Combining Various Techniques of Massive Point Capture**

**José Javier Pérez, María Senderos, Amaia Casado and Iñigo Leon \***

Department of Architecture, University of the Basque Country UPV/EHU, Plaza Oñati 2, 20018 Donostia-San Sebastián, Spain; josejavier.perez@ehu.eus (J.J.P.); maria.senderos@ehu.eus (M.S.); amaia.casado@ehu.eus (A.C.)

**\*** Correspondence: inigo.leon@ehu.eus; Tel.: +34-943-01-7192

**Abstract:** The aim of the study is to obtain fast digitalization of large urban settings. The data of two university campuses in two cities in northern Spain were captured. Challenges were imposed by the lockdown caused by the COVID-19 pandemic, which limited mobility and affected the field work for data readings. The idea was to significantly reduce time spent in the field, using a limited number of resources and increasing efficiency as economically as possible. The research design is based on the Design Science Research (DSR) concept as a methodological approach to design the solutions generated by means of 3D models. The digitalization of the campuses is based on the analysis, evolution and optimization of LiDAR ALS point clouds captured by government bodies, which are open access and free. Additional TLS capture techniques were used to complement the clouds, together with a study of the support of UAV-assisted automated photogrammetric techniques. The results show that with point clouds overlapped with 360° images, produced with a combination of resources and techniques, it was possible to reduce the on-site working time by more than two thirds.

**Keywords:** LIDAR; TLS; UAV; point cloud; 3D modelling

#### **1. Introduction**

The global environmental situation is critical: the depletion of natural resources, global warming and CO2 emissions are leading to greater environmental awareness [1]. The building sector alone accounts for 18.4% of total anthropogenic greenhouse gas emissions [2]. It is necessary for cities and the built environment to fulfill their potential to enhance energy efficiency [3]. Digitalization, defined as the development and deployment of digital technologies and processes, is considered crucial for the transformation the construction industry requires to improve productivity, according to the World Economic Forum's report [4].

In this sense, our research took two main lines from an architectural perspective: on the one hand, the optimization of the digital capture of a constructed setting [5], and on the other, the use of digital 3D models for the environmental assessment of urban settings [6]. In early March 2020, we commenced a research project that linked both areas of research. We were then faced with the crisis caused by the COVID-19 pandemic, which led to the state of national confinement decreed on 14 March 2020 in Spain [7], as was the case in many other countries. The first task of the project consisted of the digital capture of two large university campuses. This process usually entails a great deal of on-site work. Given the confinement imposed by COVID-19, it was very difficult to spend long periods of time on site taking data readings. Mass capture of points in the urban settings was necessary to obtain multiple data (coordinates, distances, surface areas, angles, temperatures of façades, etc.), and this had to be performed at breakneck speed with the fewest possible resources.

**Citation:** Pérez, J.J.; Senderos, M.; Casado, A.; Leon, I. Field Work's Optimization for the Digital Capture of Large University Campuses, Combining Various Techniques of Massive Point Capture. *Buildings* **2022**, *12*, 380. https://doi.org/10.3390/buildings12030380

Academic Editor: Fahim Ullah

Received: 21 February 2022 Accepted: 16 March 2022 Published: 18 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Several options had to be studied to find a combination of resources and techniques. The decision was made to start the work by using Light Detection and Ranging (LiDAR) point clouds captured by manned Airborne Laser Scanning (ALS), carried out by the government in both cities (a resource that is free in many countries). The LiDAR ALS point clouds are an easily accessible and cheap resource, but their accuracy and performance need to be complemented by other capture techniques. The research conducted with the LiDAR ALS clouds, access to which is instant, open and free, including their analysis, editing and optimization, enabled the additional techniques needed to complete the final point cloud of each campus to be estimated. In this particular case study, Terrestrial Laser Scanning (TLS) was considered the fastest option to complement the final point cloud of the urban area, extending the study to the support of automated photogrammetric techniques assisted by Unmanned Aerial Vehicles (UAVs).

Design Science Research (DSR) is a methodology that can provide solutions for research through the use of three-dimensional models. It employs techniques such as case studies, data collection and document analysis, among others, and centers on creating and optimizing artifacts to improve processes and their operative performance [8]. With this methodology, research objectives are approached more pragmatically than in explanatory scientific investigation [9].

After the digital capture of the urban setting in 3D, the second phase of the project focused on the environmental assessment of the campuses. The tool used was Neighborhood Evaluation for Sustainable Territories (NEST), a tool based on life cycle assessment methodology (LCA, Spanish: ACV) [10,11]. Although some results of this assessment have already been published [12], this article does not focus on that phase; it only shows the minimum information necessary to give context to the research as a whole.

This article focuses on the results of the optimization work performed on the digital capture of the two university campuses, up to the point when the 3D simulation models are obtained. The conclusions include the result that, after first working with the LiDAR ALS point cloud of the Government of Navarre, the normal on-site reading period with TLS, estimated at 52 days for the Pamplona campus, could be reduced to 7 days. The combination of devices, software and applications used made it possible to reduce the scanning time with overlapped 360° image capture by more than 75%, compared to customary scanning times on the market. We found that even with such a fast capture time we were able to obtain errors of 1 mm, with a strength and overlap accepted as valid by the processing software. Therefore, this article could be of great benefit to the scientific community engaged in work of this nature, since it would help them to be more efficient and make effective use of resources.
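The reported field-work saving can be checked with a one-line calculation; all figures are taken from the paragraph above (52 days of conventional TLS reading versus 7 days after pre-processing the free LiDAR ALS cloud):

```python
# Field-work reduction reported for the Pamplona campus:
# a 52-day TLS campaign cut to 7 days after pre-processing
# the freely available LiDAR ALS cloud.
baseline_days = 52
actual_days = 7
reduction = 1 - actual_days / baseline_days
print(f"{reduction:.1%}")  # prints 86.5%
```

The result, roughly 86.5%, is consistent with both claims in the text: "more than two thirds" (abstract) and "more than 75%" (this paragraph).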

The article is structured as follows: The introduction consists of two sections. Section 1 contextualizes the research. Section 2 describes the case study, including the current state of research into different capture techniques. Section 3 describes the Methods and Materials. Section 4 describes the results of generation of LiDAR point clouds, modelling and simulation. Section 5 presents the Discussion, and the article ends with the final conclusions.

#### **2. Digital Survey of Large Urban Areas in a Short Time, Study Cases**

The case studies focus on two universities in northern Spain: the campus of the University of the Basque Country (UPV/EHU) in Donostia-San Sebastián (DSS) and the University of Navarra (UNAV) in Pamplona (Figure 1).

The UNAV campus has an area of approximately 113 ha, which includes large open grassy areas, slightly wooded areas and a riverbed flanked by a dense mass of trees. The buildings cover only 6.8% of the total area. The UPV/EHU university campus in DSS has a much smaller area (approximately 18 ha) and a much higher building density, with a much lower proportion of green areas. As previously mentioned, the aim of this work is to carry out the field work in the shortest time possible, with the fewest resources.

**Figure 1.** (**a**) Aerial view of the UNAV university campus in Pamplona; (**b**) aerial view of the UPV/EHU university campus in DSS.

The modeling phase in NEST requires a prior graphic survey of the current state [13]. Depending on the reason for using the model, it will require a sufficient degree of precision to accurately determine the geometric configuration of buildings and their surroundings [14]. Although the NEST 3D model does not require excessive precision [15], the survey does require definition of the building envelopes [16], marking the number of floors, window configurations and opaque elements, and of the spaces occupied by roads and highways; green spaces and trees must also be defined [17], so that forest biomass can be calculated [18–20].

Given the limited time available for carrying out fieldwork, the use of massive point capture techniques allows highly precise geometry to be obtained in digital format in a very short time. The combined use of digital geometric data collection techniques [21] is currently the most effective procedure for conducting a precise geo-referenced architectural survey [22]. In such cases, it includes a topographic survey with a total station [23], terrestrial laser scanning and short-range photogrammetry assisted by an RPA (Remotely Piloted Aircraft) or UAV [24–27].

LiDAR technology also makes it possible to acquire massive amounts of 3D geospatial information in urban scenarios [28–31]. It measures the properties of reflected laser pulses to determine the range of a distant object [32]. That range is obtained by measuring the delay time between transmission of a laser pulse and detection of the reflected signal [33]. Due to LiDAR's ability to generate 3D data with high precision and spatial resolution, a new era is opening for the development of research objectives such as the one presented [34]. LiDAR scanning can be classified into four categories: Satellite-based Laser Scanning (SLS), Airborne Laser Scanning (ALS) [35–38], Mobile Laser Scanning (MLS) and Terrestrial Laser Scanning (TLS) [39]. ALS is ideal for large areas of cities [40,41]; it can be conducted by UAVs or by manned aircraft, which usually fly at higher altitudes and capture larger areas than UAVs, though the latter are cheaper and less polluting, among other advantages [42–44]. SLS data points can be tens of meters apart, and the respective point clouds are therefore unsuitable for extracting geometries from urban features such as buildings or masses of trees [45]. TLS data has the highest point density and can be used to specify data for those urban elements at the individual level [46–48]. Some publications claim that TLS sometimes has poor mobility and occlusion issues that make it difficult to collect data on an urban scale. When TLS is not effective, MLS has been used in some research, such as for the collection and analysis of information on trees in urban areas [49,50].
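The time-of-flight ranging principle described above reduces to R = c·Δt/2, since the pulse travels to the target and back. A minimal numerical sketch (the `pulse_range` helper is illustrative, not part of any surveyed toolchain):

```python
# LiDAR ranging: distance follows from the round-trip delay of the
# laser pulse, R = c * dt / 2 (the pulse travels out and back).
C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range(delay_s: float) -> float:
    """Range in metres for a measured round-trip delay in seconds."""
    return C * delay_s / 2

# A 1-microsecond round trip corresponds to roughly 150 m.
print(round(pulse_range(1e-6), 1))  # prints 149.9
```

The halving of the product is the key detail: forgetting the factor of two doubles every measured range.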

Various techniques have been studied, and the method that best fit the objectives of this work involved the combination of different resources: ALS LiDAR clouds captured by public administrations, point clouds captured using TLS and, finally, point clouds produced from automated photogrammetry assisted by UAVs.

#### **3. Methods and Materials**

In this section, the technologies enabling the massive point capture needed to generate the point cloud of each university campus in a very short time [5] are explained, specifying the techniques, methods and materials used. The research design is based on DSR. DSR is used to design and assess man-made artifacts meant to resolve real-world problems [51]. This method helps find practical solutions for common problems affecting design, with a view to achieving expected results [8], and employs computer-based tools to streamline processes [51]. When a problem is associated with a physical object, the respective solution may appear as a 3D model, plan or drawing; when it requires optimizing an action, the solution may take the form of new digital software or be developed as a flowchart diagram [51]. DSR includes other non-habitual forms of conveying knowledge, such as models or constructs [52]. That is why DSR expresses knowledge in formats not commonly found in other scientific investigations, such as 3D models, architectures, design theories or principles and artifacts [53]. Two main activities are put forward in design science: to build the solution and to evaluate it [54]. The construct is a stage within the process of creating an artifact that can be used to resolve a specific problem. The evaluation is the action that must validate how effectively that artifact serves the purpose for which it was created. This is precisely what is conducted in this investigation: to make, evaluate and optimize a 3D digital model of the urban area, in the form of a point cloud with 360° images, that contains all the information needed to achieve the project's objectives. The construct stage must necessarily be iterative and incremental, as the evaluation phase provides the feedback needed to optimize the solution.
DSR enables relevant problems to be resolved based on applied research appearing in some scientific investigations linked to architecture [55].

#### *3.1. Analysis of LiDAR Clouds Obtained by Public Services*

In Spain, different public services offer this LiDAR data, thereby simplifying the data capture process for these kinds of projects, with the respective point clouds obtained using manned aircraft. The great advantage of these clouds is that they are public and can be consulted for free. In the case of point clouds obtained by ALS systems, the latest sensor technology has significantly increased the number of laser light beams per square meter. As a result, the density of the point clouds generated during the data collection process ranges between 12 and 30 points/m2, compared to the roughly 1 point/m2 obtained by previous sensors. The Chartered Community of Navarre was one of the first European regions to apply LiDAR technology using these new sensors, specifically the Leica Single Photon LiDAR (SPL100). The sensor captures light particles with a laser beam that is divided into a 10 × 10 matrix, operating in practice as 100 sensors in parallel, each of which is captured by an independent channel of the detector. The experimental flights were conducted in 2017; after processing and classifying the data obtained using Artificial Intelligence (AI) techniques, it was possible to cover an area of 10,391 km2, generating meshes of 1 × 1 km, with a point cloud density of 14 points/m2 and a precision of 20 cm on the XY axes and 15 cm on the Z axis.
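A back-of-the-envelope check conveys the data volumes these figures imply; all values (14 points/m2, 1 × 1 km tiles, 10,391 km2) are taken from the paragraph above, and the arithmetic is illustrative only:

```python
# Rough volume of the Navarre SPL100 survey, using the quoted figures.
density_pts_m2 = 14
tile_m2 = 1_000 * 1_000          # one 1 x 1 km mesh in square metres
area_km2 = 10_391                # total area covered by the 2017 flights

points_per_tile = density_pts_m2 * tile_m2
total_points = density_pts_m2 * area_km2 * tile_m2

print(f"{points_per_tile:,}")    # prints 14,000,000 (per 1 km tile)
print(f"{total_points:.2e}")     # prints 1.45e+11 (whole survey)
```

Fourteen million points per tile explains why the clouds are distributed as 1 × 1 km meshes and why the campus clouds later had to be cut and segmented before modelling.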

The Navarre government's partial LiDAR clouds from 2017 were initially used, and a point cloud of the entire UNAV Campus in Pamplona was composed. Different cuts to the cloud were performed at strategic points; the precision and suitability of the cloud were also analyzed to study the combination of techniques (Figure 2).

The campus cloud was segmented to form detailed sets of buildings and check their geometry at the density value of 14 points/m2. The definition of the façades appeared sufficient to obtain a 3D simulation model (Figure 3).

Dimensional checks of the result will subsequently be carried out to ascertain for which buildings the point cloud needs to be completed with other LiDAR techniques.


**Figure 2.** LiDAR 2017 point cloud, density: 14 points/m2. UNAV campus: (**a**) 3D color cloud; (**b**) vertical section of the cloud through the north façade of the central campus building.

**Figure 3.** 2017 LiDAR cloud density: 14 points/m2. UNAV campus: (**a**) 3D detail of the central campus building; (**b**) north façade of the same building.

To achieve the objectives of this research, other data of interest included the estimated approximate volume of the campuses' forest biomass; detailed tests of tree masses were accordingly conducted. It was thereby possible to verify another fundamental characteristic of the new sensors used in 2017. They enable capture of the terrestrial relief devoid of any artificial and/or natural element other than the ground (DTM, Digital Terrain Model), and of the earth's surface with all built or natural bodies on it (DSM, Digital Surface Model). The use of specific wavelengths enables penetration between tree masses, capturing the lower ground level, which facilitates the height measurement of those masses. In this regard, an example of the capture of plant masses at the UNAV university campus in Pamplona is shown below. Cloud cuts were conducted in wooded areas. In the case of the dense vegetation zone, the scanner's ability to penetrate the tree mass and record the ground level is observed [14,42] (Figure 4). Although the trees' compactness makes it difficult to fully record the respective mass, the information captured allows for approximate measurements of the height and volume of the vegetation, with a precision that can be estimated to the nearest decimeter.


**Figure 4.** UNAV campus, vegetation strip example: (**a**) 2017 LiDAR Cloud Plan, density: 14 points/m2; (**b**) profile of tree mass and level of ground under that mass.

After analyzing the possibilities of the LiDAR cloud of the UNAV campus in Pamplona, with a density of 14 p/m2, the LiDAR clouds currently available for the territory of Gipuzkoa province, where the UPV/EHU campus is located in DSS, are analyzed. In that province, LiDAR clouds captured in 2012 and 2017 are currently available (Figure 5).

The 2012 LiDAR flight presents meshes of 2 × 2 km with a density of 1 point/m2, while the 2017 LiDAR flight presents 500 × 500 m meshes with a density of 2.2 points/m2. As in the UNAV's case, a partial cloud was created for the entire UPV/EHU campus, for both the 2012 and 2017 clouds. Partial sections of the campus buildings and trees were likewise made to compare the accuracy and usefulness of the clouds.

Although specific measurements of these LiDAR clouds will be presented in Section 4 with the analysis of these two examples from the UPV/EHU campus, several limitations can already be appreciated. In the 2012 cloud, the total height of buildings could be obtained; however, the buildings' volumes cannot be made out, nor are vertical façades marked. In addition, the profile of tree masses presents excessively isolated points, and it cannot be determined whether they are masses or individual trees. In the 2017 cloud of the same campus, building heights are correctly appreciated, volumes are marked with vertical stripes and there is greater definition of the tree masses. Even at this density of 2.2 points/m2, building façades could not be modeled, nor could biomass volumes be calculated, unlike what was seen in the cloud of the UNAV campus.

**Figure 5.** 2012 flight LiDAR point cloud of the UPV/EHU university campus in DSS. Cloud density: 1 point/m2.

The precision of these clouds will condition subsequent data collection at the two campuses to complete the point cloud that allows the 3D simulation model to be achieved.

#### *3.2. Data Collection to Complete the LiDAR, MLS and TLS Clouds*

TLS and MLS technologies make it possible to obtain highly accurate point clouds. However, managing those technologies to capture urban environments of a certain size requires in-depth study to ensure that they can be effective, sustainable and relatively cheap. MLS scanners are generally much more expensive than TLS scanners. However, depending on the urban environment, they can reduce execution times and therefore be a more efficient option. In any case, the scanning of such areas must be planned very well so that the point clouds are not excessively dense and can be handled by standard hardware.

Regarding the MLS LiDAR options, some work has opted for models such as the Leica Pegasus, which allows reality to be captured from a vehicle, train or ship. It is an expensive option that requires very specific capture conditions for large areas in cities, though it is very useful for capturing linear infrastructures; it was not considered due to the characteristics of the two campuses. One option that was tested is the Leica BLK2GO handheld scanner, which captures moving images and point clouds in real time, using SLAM (Simultaneous Localization and Mapping) technology to record its course through space [56,57]. That scanner combines dual-axis LiDAR, a 4.3 Mpx 360° panoramic viewing system, a 12 Mpx high-resolution camera for detailed photos and an inertial measurement unit that enables self-navigation, capturing 420,000 pts/s with a capture range of 0–25 m. System performance based on SLAM technology offers 6–15 mm relative accuracy and 20 mm absolute positioning accuracy at maximum range [5]. Considering the size and characteristics of the campuses, the scanning process with this device was ruled out both due to execution times and the excessive amount of information that would be captured.
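Using the capture rate quoted above (420,000 pts/s), a quick estimate shows the scale of the "excessive amount of information" concern; the walk duration is an assumed example, not a figure from the study:

```python
# Data-volume estimate for the handheld BLK2GO option.
# Capture rate is the quoted 420,000 pts/s; the 10-minute walk
# is an assumed example duration.
rate_pts_s = 420_000
minutes = 10
points = rate_pts_s * 60 * minutes
print(f"{points:,}")  # prints 252,000,000
```

A quarter of a billion points from a single ten-minute walk, multiplied across two whole campuses, supports the decision to rule the device out for this task.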

Bearing in mind the accuracy of the LiDAR clouds discussed in the previous point, TLS was deemed the fastest, most efficient and least problematic option to complete the clouds, although specific locations can be complemented with captures made using UAVs. The geometric data capture technique using terrestrial laser scanning allows this capture to be performed quickly and expeditiously, capturing a large amount of information at very high speed, from medium and long distances and with a high degree of accuracy. The generated high-density point cloud can be supplemented by 360° panoramic photography taken at each scan position. The point cloud with the overlapping 360° image makes it possible to configure a three-dimensional visual environment wherein it is feasible to make millimetric measurements and develop virtual visits. Some scanners also have a built-in thermographic camera that can discern the temperature of each of the millions of points captured when scanning. The temperature of the façades is of special interest in this type of project, in which energy improvement is proposed by means of passive solutions such as the energy-minded reform of building façades. The five methodological stages followed when scanning the two campuses with TLS are described next.

#### 3.2.1. Survey of Control Points in UTM Coordinates Using a Total Station

The main objective of the control point system is to obtain a three-dimensional digital model of the survey geo-referenced in absolute UTM coordinates. Obtaining a geo-referenced model is not an essential requirement in cases where the data collection procedure is accomplished by laser scanning, since a local coordinate system can be used. However, implementing this information in the photogrammetric processing ensures greater accuracy of the three-dimensional digital model. Moreover, the control points guarantee the rigor and accuracy of the data processing and facilitate the union between different captures. Those points are materialized using adhesive targets fixed to the different supports, in the form of rigid plates of variable size. The control points located on the ground are permanently referenced by topographic nails so that they are maintained during execution of the work.
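To illustrate how control points tie a local scanner frame to absolute UTM coordinates, the sketch below fits a 2D similarity transform from two targets using the complex-number trick. This is a simplified stand-in: `fit_similarity` is a hypothetical helper, and real registration software solves a 3D least-squares problem over many control points.

```python
# Map local scanner XY coordinates to UTM XY via z -> a*z + b,
# where a encodes rotation + scale and b the translation.
def fit_similarity(local_pts, utm_pts):
    """Fit a 2D similarity transform from exactly two control points.
    Hypothetical helper; production tools use 3D least squares."""
    (p1, p2), (q1, q2) = local_pts, utm_pts
    a = (q2 - q1) / (p2 - p1)   # rotation + scale as one complex factor
    b = q1 - a * p1             # translation
    return a, b

# Two targets: measured in local scanner coords, surveyed in UTM
# (hypothetical UTM zone 30N easting/northing values).
local = (0 + 0j, 10 + 0j)
utm = (500_000 + 4_790_000j, 500_000 + 4_790_010j)
a, b = fit_similarity(local, utm)

p = 5 + 5j                      # any other scanned point
g = a * p + b                   # its geo-referenced position
print(round(g.real, 1), round(g.imag, 1))
```

In this toy case the local x-axis maps to UTM north (a = i, a pure 90° rotation), so the point (5, 5) lands 5 m west and 5 m north of the first target, which is easy to confirm by hand.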

The control point system layout follows these criteria:


#### 3.2.2. Scanning Plan

For the scanning process to be effective, it is recommended that a prior study be conducted of the scan positions for the set to be captured. With respect to university campuses, the capture focused on two important aspects:


Because the situation generated by the COVID-19 health crisis meant that movement was very limited, optimizing the fieldwork was vitally important: it meant reducing the scanning process to the shortest time possible. To be efficient, it was essential to study the campuses' cartography before proposing a scanning plan. The UNAV campus in Pamplona has terrain with large slopes in some areas, and this needs to be taken into account when preparing the scan position plan. If a plan is drawn up without taking the slopes into account, as if the terrain were flat, the distances between scan positions are merely horizontal projections of the actual distances. Since the research focus in this article is on reducing the number of scan points as much as possible, taking the slopes into account in the scan plan is important.
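The slope effect described above is simple trigonometry: the true along-slope distance equals the horizontal (map) distance divided by the cosine of the slope angle. A minimal sketch, with an assumed 30 m spacing and 30° slope as example values:

```python
import math

def ground_distance(horizontal_m: float, slope_deg: float) -> float:
    """True along-slope distance for a horizontal (map) distance.
    Illustrative helper for scan-plan spacing on sloped terrain."""
    return horizontal_m / math.cos(math.radians(slope_deg))

# A 30 m spacing drawn on flat cartography becomes ~34.6 m on a
# 30-degree slope, so a flat plan overestimates scanner coverage.
print(round(ground_distance(30, 30), 1))  # prints 34.6
```

On flat ground the correction vanishes (cos 0° = 1), which is why the Donostia campus could be planned directly from an orthophoto while the Pamplona campus could not.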

As mentioned above, the two campuses have very different characteristics that affect the scanning plan. The Donostia campus is a relatively flat urban campus without large areas of trees; its scanning plan can therefore practically be arranged using an orthophoto. The work was divided into three areas that cover the entire UPV/EHU campus. The Pamplona campus, however, is far more complex. The extent of the campus and the large number of buildings pose a challenge that cannot practically be covered in a short time by a terrestrial laser scanner. In many cases the terrain's unevenness exceeds the height of the buildings, which are located at very different heights and very far apart. Furthermore, the medium-height vegetation and, above all, the large trees in many cases prevent the capture of many building façades' geometry. Were it not for the high quality of the 2017 LiDAR clouds, this would be an overwhelming task in a short period of time, even using a scanner as versatile as the RTC 360. On the UNAV campus, simplified CAD planimetry was used to conduct a prior study of possible scan positions. Considering the high number of scan positions and the large area covered by the capture, it was decided to divide the work into seven zones. However, this depends a great deal on the power of the computer that will be used when processing the clouds.

#### 3.2.3. Laser Scanning, Pre-Processed in the Field with Mobile Devices

The building façades on the different campuses were scanned, as well as the exterior environments, with special attention given to green areas and vegetation. In the survey of the buildings, the main objective was to measure the façades' dimensions, differentiating the sizes of window openings and opaque surfaces. A minimum of three scans was conducted for each façade of the campus buildings. Diagonal scans were also performed to capture the internal faces of the façades. Depending on the distance between buildings, the remaining scans were distributed on the ground. A color point cloud treatment was conducted, since a 360◦ panoramic view was also captured at each scan point. Two terrestrial laser scanners were used: an RTC 360 and a BLK 360, both from Leica Geosystems (Figure 6). They stand out due to three characteristics: they are extremely light, no time needs to be spent levelling them, and they capture 360◦ spherical HDR images in a short time, generating a color point cloud [23]. We have worked with scanners of other brands and are aware that, in such cases, a capture with a 360◦ HDR image in under 8 min is complicated. The two Leica scanners used in this study enabled the project objectives to be achieved; the characteristics and features of the devices were essential in obtaining the results described in this article.

The first one was the BLK360, which has a registry range of 60 m and gathers 360,000 points/second. It has a pair of special features that make it very interesting: it incorporates a 360◦ thermal camera that is very useful for sustainability and energy efficiency issues, and it is very small, weighing only 1 kg. It acted as a complement in some specific tasks for the second scanner, which was the main device used in field capture. The second device, the RTC 360, has a registry range of up to 130 m, which enables it to cover large areas. It is especially interesting from an efficiency perspective, because its VIS technology automatically registers the device's displacement, without targets, from one scanning point to another, so that the partial point clouds are registered in a spatial location related to the other scans. The device's measuring rate is up to 2 million points/second, and at high resolution (3 mm@10 m) it can scan in 1:42 min without HDR. Bearing in mind that medium resolution is often enough in many cases, a point cloud with a 360◦ image can be obtained in a scan of less than 2 min. This is little time in comparison to other scanners: we have checked and found that a 360◦ image captured with other 3D laser scanners in 8 min is of poorer quality than one captured in one minute with these devices.
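The nominal resolutions above are quoted at 10 m; since the scanner's angular sampling is fixed, the point spacing on the target grows linearly with range. A minimal sketch of this scaling (our reading of the specification sheet, not a vendor formula):

```python
# RTC 360 resolutions are specified at 10 m (3/6/12 mm @ 10 m).
# Angular sampling is fixed, so spacing on the target scales
# linearly with range: spacing(d) = spacing@10m * d / 10.

def point_spacing_mm(range_m, spacing_at_10m_mm):
    """Approximate point spacing (mm) at a given range."""
    return spacing_at_10m_mm * range_m / 10.0

for res_mm in (3, 6, 12):  # high / medium / low resolution
    print(f"{res_mm} mm @ 10 m -> {point_spacing_mm(50, res_mm):.0f} mm @ 50 m")
```

Even at 50 m, medium resolution still yields centimetre-level spacing on façades, which helps explain why scan-position spacings well beyond 10 m remain workable for the campus survey.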

Unlike other scanners, the BLK and the RTC enable pre-processing work to be performed on site via a mobile device (tablet or mobile phone), thanks to the Leica Cyclone FIELD 360 application. While the scanner is capturing points, the results obtained can be checked and the processing work advanced, uniting the point clouds of each scan position. These scanners transmit a Wi-Fi network that links the mobile device to the scanner, so that all the scanner data can be transferred in real time to the Cyclone FIELD 360 application. Any tablet or mobile phone running iOS or Android can be used for this purpose. This represents an advance that further streamlines the work, besides enabling the campus digitalization work to be evaluated, optimized and validated to obtain the 3D point cloud. It also enables the basics of the DSR method to be complied with: implement, evaluate and optimize.

The pre-processing app for scan positions has several work tabs. In the "map" tab the clouds are joined in plan following the "cloud-to-cloud" method, and the accuracy of the union can be consulted in plan, section and perspective. The union must always be conducted between two nearby point clouds, which are shown in two different colors to facilitate the process (usually orange and blue-cyan). The "360" tab allows immersion in each scan point to view details. Lastly, a specific cloud, or the assembled set of clouds, can be viewed in 3D (Figure 7).

#### 3.2.4. Information Processing—Point Clouds, 360◦ Images

The processing software used, Leica Cyclone Register, presents an environment similar to that of the pre-processing but provides more tools to make the final result more accurate. Any operation carried out with the pre-processing software can be reversed, and the software enables visualization of the error, overlap and strength of each union between clouds. The process is simplified into four steps associated with four tabs: import the data, review and optimize the pre-processed data, finish (and generate different files) and produce an accuracy report. After analyzing the joints' suitability, they must be reviewed and optimized according to the link error, overlap and strength parameters. As already mentioned, measurements of distances, surfaces and angles can be made during the process. The cloud of the whole complex, or that of a specific scan position, can be displayed in 3D. To optimize the ensemble, this process is conducted "cloud by cloud" by analyzing two nearby clouds (Figure 8). At the end of the process, the error is displayed as a graph. The process can be repeated as many times as necessary.

Once the result has been achieved with the expected accuracy, the process ends. The software allows the results to be exported to different file formats and also generates a log report with a large amount of process data to validate the work carried out.

#### 3.2.5. Results and Reports

After completing the connection between clouds and verifying that the quality of the connections is appropriate for the project objectives, the software enables the generation of an exhaustive precision report.

The cloud can also be exported to multiple formats (LGS, e57, RCP, etc.) so that the project partners can work in their environments and with their software. These files are usually exported to intermediate software such as Recap, which allows the cloud to be imported into common modeling programs (Revit, Sketchup, etc.). The partial clouds of the different zones were exported to LGS format to view and extract the data from the free Jetstream Viewer. The whole set was not joined, to avoid working with an excessively dense file.

**Figure 7.** Cloud pre-processing with the Leica Cyclone FIELD 360. DSS Campus, Zone 3: (**a**) union of the clouds in plan and 360◦ panoramic images; (**b**) union in "cloud-to-cloud" section between two clouds; (**c**) 3D cloud join preview.

**Figure 8.** Editing and optimization of point clouds "cloud-to-cloud", using Cyclone software. DSS Campus, Zone 2.

#### 3.2.6. Visualization—Obtaining Data to Feed the Model

The display of the model has two graphic options: one to display the point cloud (in color, if 360◦ images were obtained) and the other to display the model in 360◦ panoramic image format. In the second option the point cloud is overlaid in the background with all the data at each point, so if we want to check point coordinates, distances between two points, surfaces in m2, angles or surface temperatures at a point, this data can be extracted quickly and easily. Furthermore, the interface allows navigation in the model with different options (flight, orbit, etc.), so that the campus can be "visited" virtually with options not possible in the real model. With this resource, part of the field work can be transferred to the office, from where the campus can be visited and measured precisely in a very short time, including places that would be inaccessible in the real model. The results can also be viewed using other 3D viewing software, such as the Leica TruView viewer, or external software such as Autodesk ReCap.

#### *3.3. Capture by UAV and Structure from Motion (SfM) Photogrammetric Processing*

The use of UAV-assisted photogrammetry makes it possible to complete the digital model of the survey [58], in cases where relevant geometric data exists outside the laser scanner's range [59]. Examples include roofs [60], eaves and, generally, all horizontal surfaces above the origin of the projected laser light beam [18]. The data on building roofs is largely meant to obtain the surface of the buildings' current solar panels, which is relevant information for the simulation of the 3D model with NEST.

Two options can be incorporated in a UAV: an MLS device that can instantly obtain LiDAR clouds, or a camera that captures data through automated photogrammetry. The first option requires higher performance of the device. An interesting option is the BLK2FLY, a very light UAV with built-in MLS. It features GrandSLAM sensor fusion of LiDAR, radar, cameras and GNSS for full scan coverage, plus optimized flight paths, and has advanced obstacle avoidance for greater flight safety. Although it is a very interesting option, because it can be integrated in the same workflow and software proposed in this article, it was ruled out for this work because it is more expensive. A similar high-priced asset is the DJI Zenmuse L1, with LiDAR built into the UAV. The advertising for the Zenmuse L1 mentions a coverage of 400 ha in one single flight at 100 m height and a speed of 13 m/s. Some unofficial practical trials mention a lower capacity, capturing 80 or 90 ha in one single flight. Other sources state that the combined capture between LiDAR and the photographic series, at 300 feet and with a GSD of 10 cm/pixel, would cover little more than 5.5 ha in a 27 min flight. Such data is doubtlessly very promising, but we should not forget that, given the recent appearance of these new tools on the market, for now at least there are insufficient practical trials and contrasted scientific studies on their performance. We will have to wait a while for further studies and contrasted research to see the results in the appropriate publications.

To supplement the clouds previously obtained, the occasional support of automated photogrammetry by means of UAV was chosen. UAV-assisted photogrammetry, or data collection using RPAs and SfM photogrammetric processing [27], facilitates data collection due to the process's simplicity and the breadth of the working range [61]. The constant innovation in this technology has produced intelligent flight planning and control applications that enable semi-automatic design and execution of data collection. These applications very considerably reduce the time devoted to fieldwork for data gathering, even for large working areas [43,44]. The flight altitude determines the width of the data capture range [62] and, at the same time, its resolution or GSD value (ground sample distance, the effective pixel size of the model's bitmap): the GSD is proportional to the working range and inversely proportional to the captured data's resolution. When the capture does not require high levels of precision, a balanced relationship between capture range and GSD value can therefore facilitate an efficient data collection process, providing optimal results. The processing of the photographic take enables a textured geometric mesh to be obtained, which can be exported to a point cloud format and thus integrated in the point cloud previously generated in the processes described above (ALS and TLS) [63].
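The altitude–GSD trade-off described above can be sketched numerically. The camera parameters below (pixel pitch, focal length, sensor width in pixels) are illustrative assumptions, not those of the UAV camera used in this study:

```python
# GSD = H * p / f : flight altitude H, physical pixel size p, focal length f.
# All camera parameters below are assumed example values.

def gsd_cm_per_px(altitude_m, pixel_um=2.41, focal_mm=8.8):
    """Ground sample distance in cm/pixel."""
    return altitude_m * (pixel_um * 1e-6) / (focal_mm * 1e-3) * 100.0

def image_footprint_m(altitude_m, sensor_px=5472, pixel_um=2.41, focal_mm=8.8):
    """Ground width covered by one image at the given altitude."""
    return gsd_cm_per_px(altitude_m, pixel_um, focal_mm) / 100.0 * sensor_px

for h_m in (30, 60, 120):
    print(f"{h_m:3d} m -> {gsd_cm_per_px(h_m):.2f} cm/px, "
          f"footprint {image_footprint_m(h_m):.0f} m")
```

Doubling the altitude doubles both the GSD (coarser resolution) and the ground footprint (wider capture range), which is precisely the balance the flight plan must strike.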

The biggest drawback when using this technology concerns the flight restrictions imposed by the respective air safety agencies and the administrative authorizations required to carry out the flights. An official drone operator in Spain with the appropriate permits can fly over almost any area, but processing such permits can take a very long time, which is a problem when planning data collection. On many occasions the work and delays involved in this administrative procedure ended up affecting the choice of one type of technology over another. At the UPV/EHU campus it was therefore not considered necessary to complement the cloud with this technology, while on the Pamplona campus it was used to complete specific elements in areas with few valid references for TLS.

An automated flight configuration simulation for data collection from a 1 ha area is shown below. To improve the geometric characterization of the vertical planes, the combined use of one nadir shot (gimbal tilt −90◦) and two oblique shots (gimbal tilt −45◦) was configured (Figure 9 and Table 1).

**Figure 9.** Automatic flight configuration for a surface of 1 ha in the UNAV university campus in Pamplona: (**a**) UAV shots and Gimbal inclinations; (**b**) data in the DJI-GS Pro app.


**Table 1.** Automated flight configuration chart for the coverage of a 1 ha area and with photogrammetric model resolution of 1 cm/pixel.

#### *3.4. Modeling and NEST-Sketchup*

The point cloud can then be imported into 3D modeling software (Autodesk Revit, Sketchup, etc.) to generate the model. The process involved two phases: first, the residential buildings on the campus were modeled (Figure 10); then the university buildings were modeled in more detail.

**Figure 10.** 3D model for the environmental assessment. UPV/EHU DSS campus, phase 1.

From the final cloud obtained, accompanied by the merged panoramic image, the data and measurements needed to complete the model were extracted. With these measurements the model was generated in NEST-Sketchup and the different renovation scenarios were simulated.

With the NEST tool, an environmental assessment can be performed from the LCA perspective using different elements, such as buildings, means of transport or urban lighting. A completely new urban area can be analyzed at the planning stage, as can an urban renewal area. This tool, developed from a doctoral thesis [64], is a plug-in for the SketchUp 3D modeling software widely used in architecture and urbanism. The interface is therefore very intuitive, based on a graphic environment that allows theoretical aspects to be related to the built reality. NEST obtains the evaluation directly from the 3D model of the urban area studied and calculates it based on a set of indicators created from a scientific perspective.

A 3D model containing the necessary information has to be created to carry out the evaluation. The first step is to model the geometry of the buildings and of the roads, parking, green spaces, etc.:


The model then needs to be fed with the information necessary to run the environmental evaluation. NEST considers four main elements in the urban scope: building features, ground typologies (green space, road, etc.), and public amenities such as urban lighting and user transport. Most tools that evaluate such environments do not assess all the LCA stages set out in standard ISO 14040 [65]; some consider only the operational energy use phase or the product phase. NEST assesses the environmental impact of more stages, as indicated in Table 2.


**Table 2.** Building life cycle stages defined by NEST.

#### *3.5. Materials for Method*

The main resources used and/or tested in the research developed for this project are summarily listed below. Resources that may have been used to obtain the ALS LiDAR clouds are not included, since those clouds were captured and processed by different public services, and it is not known precisely which resources were used.

#### 3.5.1. Material Resources Required

To apply the techniques described, material resources or devices that allow this data to be captured in the field must be used. The characteristics of these devices have been described in previous publications [5,23], so they are not repeated here (Table 3).

**Table 3.** Material resources used in the massive point capture.


#### 3.5.2. Software and Hardware

To complete the entire work process, several types of software are necessary. The software used in the phases of the LiDAR cloud work process is summarily specified: referencing of the work, capture and pre-processing of the different field scans, processing of the clouds obtained at each scan position to obtain the final set, visualization of the final product and accurate measurement of the work conducted. Some software is specific to the scanner brand used in the fieldwork, while other software can work with files generated by various brands (Table 4).




**Table 4.** *Cont*.

\* Leica.

#### **4. Results**

Although part of the results was shown in the previous section to enable a better and more graphic understanding of the methodological process, this section focuses on the partial results that will allow the 3D simulation model to be obtained. Some quantitative data on the LiDAR cloud, the error from the union of the TLS clouds and the UAV flight operation will be shown. In addition, graphic results of the cloud will be explained, so that the way data is extracted to generate the 3D model can be checked. Finally, some values from the environmental assessment will be presented, though these are not the direct objective of this publication.

#### *4.1. ALS LiDAR Clouds*

In this section, the results of the ALS LiDAR point cloud produced using the LiDAR of public services involved in the project will be analyzed. Two main aspects will be studied: the cloud's suitability for modeling the façades of campus buildings, considering that the buildings' volumes, the windows and opaque parts of façades must be defined; and the suitability for calculating biomass volumes by making sections of trees masses in the cloud.

As for the campus buildings, 20 buildings were evaluated at the UPV/EHU campus in DSS, while 31 were evaluated at the UNAV campus in Pamplona [12]. Before determining the scanning plan strategy for TLS, the 51 buildings had to be analyzed by making partial sections of the ALS cloud. Below is an example of measurements made in a building on the UNAV campus, where the cloud has a density of 14 pts/m2 (Figure 11).


**Figure 11.** 2017 LiDAR cloud, density: 14 points/m2. UNAV campus: (**a**) measurements on the point cloud of the campus's central building; (**b**) graphic survey of the same building façade.

As an example, a series of basic measurements were made to calculate building height, façade surface and openings on the north façade of the central building at the UNAV university campus in Pamplona. The point density value of 14 p/m<sup>2</sup> allows approximate measurements of part of the building elements to be obtained, whose accuracy may sometimes be sufficient for the 3D simulation model. The results obtained in the example are shown below (Table 5).


**Table 5.** Example of façade measurement table in the ALS LiDAR cloud.

It was thus possible to obtain measurements for all buildings on the UNAV campus; where any additional measurement was necessary, supplementary measurements from the cloud obtained with TLS techniques were used. On the UPV/EHU campus, the LiDAR clouds of Gipuzkoa province did not yield the same results: it was only possible to obtain measurements of façades (height and width), but not of window opening sizes. For that, a more exhaustive capture had to be performed with TLS. The data used to model buildings on the UPV/EHU campus in DSS was extracted directly from the point clouds obtained with TLS techniques.

The results were also analyzed to obtain the campuses' approximate forest biomass volume. The measurement of an isolated tree or vegetation element will be used here as an example, starting with analysis of the suitability of the LiDAR clouds at the UPV/EHU campus. In Figure 6, the 2012 cloud barely shows points of vegetation or soil. The 2017 clouds of the two campuses are analyzed in comparison (Figures 12 and 13).

**Figure 12.** Isolated vegetation element. 2017 LiDAR cloud, density: 2.2 points/m2. UPV/EHU campus in DSS.

In the cloud in Figure 12, the terrain's configuration can be observed, though it is difficult to measure tree volume. After making several measurements in that LiDAR cloud, the estimate obtained has a maximum precision of 30 cm in the XY axes and 20 cm in the Z axis. In contrast, it was verified that in the LiDAR cloud of the UNAV campus in Pamplona, the tree masses have enough precision for measurements to be made. In Figure 13 the tree's height measures 22.45 m.
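A quick way to judge whether a cloud of a given density can resolve a feature is its mean point spacing, roughly 1/√d for a density of d points/m2 (assuming an approximately uniform distribution). A short check against the two densities discussed above:

```python
import math

# Mean point spacing for an approximately uniform cloud of density d pts/m^2.
def mean_point_spacing_m(density_pts_m2):
    return 1.0 / math.sqrt(density_pts_m2)

for d in (2.2, 14.0):  # 2017 clouds: UPV/EHU (DSS) and UNAV (Pamplona)
    print(f"{d:4.1f} pts/m2 -> ~{mean_point_spacing_m(d) * 100:.0f} cm spacing")
```

The roughly 27 cm spacing of the 14 pts/m2 cloud is consistent with tree masses being measurable, while the roughly 67 cm spacing of the 2.2 pts/m2 cloud helps explain why tree volumes are hard to measure there.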

Many publications present multiple ways to calculate tree volumes [74]. Considering the requirements of the NEST evaluation software, calculations were performed in two ways. For isolated trees, such as the one in the example, the volume is approximated by a cone, cylinder or sphere [75]; in Figure 13b it is approximated by a cone, so the tree's total height and the base of the branches are measured and the volume of the cone is then calculated. For continuous masses, such as in Figure 4, partial sections of the cloud are used, and the approximate volume is extracted considering the contours of the mass. To calculate the final volume of the campus's biomass, that tree data must be completed with the volumes of green areas associated with the terrain's green surfaces. In the case of the UNAV campus, the total area was calculated as 1,547,278 m2, with a green space surface area of 1,082,210 m2, accounting for 70% of the total. At the UPV/EHU campus in DSS, the total area of the campus was estimated at 565,140 m2, with a green space area of 168,816 m2, accounting for approximately 30% of the total. Numerous publications have explained different ways of calculating forest biomass [75–77]. With this data it was possible to feed the NEST model to carry out the evaluation.
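The two calculations above can be sketched in a few lines. The canopy base radius below is an illustrative assumption, while the surface areas are the campus figures given in the text:

```python
import math

# Isolated tree: volume approximated by a cone of the measured height
# and canopy base radius (the radius here is an assumed example value).
def cone_volume_m3(height_m, base_radius_m):
    return math.pi * base_radius_m ** 2 * height_m / 3.0

print(f"Example tree (h = 22.45 m, r = 4 m): {cone_volume_m3(22.45, 4.0):.0f} m3")

# Green-space share of each campus, from the areas given above.
unav_share = 1_082_210 / 1_547_278   # UNAV, Pamplona -> ~70%
upv_share = 168_816 / 565_140        # UPV/EHU, DSS   -> ~30%
print(f"Green space: UNAV {unav_share:.0%}, UPV/EHU {upv_share:.0%}")
```

The same cone routine can be swapped for a cylinder or sphere formula for other tree shapes, as [75] suggests.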

**Figure 13.** 2017 flight LiDAR point cloud, density: 14 points/m2. Isolated vegetation element where the complete section of its mass is observed, as well as level of the ground.

#### *4.2. TLS LiDAR Clouds*

The measurements that could not be obtained from the ALS LiDAR cloud were made from the cloud supplemented by TLS techniques. All kinds of geometric data (lengths, angles, areas, etc.) and thermal data of the points can be extracted from that cloud. At the end of this section, in the visualization part of the resulting clouds, some examples of quantitative results from cloud measurements can be seen.

Although we have shown that 7 days of field work enabled sufficient results to be obtained to enter all the necessary data in NEST, we shall now show how that prior estimate of 7 days was calculated. The basis for everything is to draw up different scanning plans, placing the scan positions at different distances. The shorter the distance, the more scan points there are and, therefore, the more scanning time is needed. The scanning time can be calculated from the scanner data given above; we estimated a mean scanning time of 2 min (the maximum would be 2:42 with an HDR image in color).

3D laser scanners present data at resolutions or precisions expressed at 10 m. In the case of the RTC 360, with a measurement rate of up to 2 million points/second, the data is as follows: high resolution, 3 mm@10 m; medium resolution, 6 mm@10 m; low resolution, 12 mm@10 m. To guarantee these resolutions, the logical approach would be to establish a scanning plan with the scan points 10 m apart. However, such a decision would mean that the scan with TLS would take much longer. The UNAV campus covers 113 ha: 70% is green space, 10% is occupied by buildings and the other 20% consists of car parks, roads, etc. Three calculation scenarios were established:

In scenario (I) we set out to estimate a scanning plan with a benchmark distance of 10 m. In the green spaces, 100 scan positions/ha were estimated, making a total of 7910 scan positions. Thirty-one buildings associated with the campus and its activities had to be modelled and simulated; only the exterior geometry of the façades had to be captured, and in all 775 scans were estimated for the façade perimeters (an average of 25 scans per building). The other areas (parking, roads, etc.) were not as important for the environmental evaluation of the model, although they take up a lot of surface area, so a total of 678 scan positions was estimated. The total of 9363 scan positions, with a scanning time of 2:42 min, made for 52 days of field work, without including the displacements of the scanner from one position to another.

In scenario (II) we estimated a scanning plan at 33 m in the green spaces, with an estimate of 16 scan positions per ha and a total of 1265 scan positions. Scan positions were planned at 30 m around the building perimeters, but with a minimum of 3 scans per façade; 258 scan positions were calculated for the characteristics of the buildings. In all, 339 scan positions were estimated for the rest. The total number of scan positions was 1862, which, with a scanning time of 2:42 min, makes a total of 10.5 days of work without including the displacements of the scanner from one position to another. This would mean over two weeks' work.

In scenario (III) we estimated a scanning plan in the green spaces at distances under 50 m, with an estimate of 8 scan positions per ha and a total of 632 scan positions. The previous plan of 30 m with 258 scan positions was maintained for the building perimeters. In all, 169 scan positions were estimated for the rest. The total number of scan positions was 1059, which, with a scanning time of 2:42 min, makes a total of 6 days of field work. The displacements of the scanner from one scan position to another (a minimum estimate of 30 s each) add one more day, making a total of 7 days. The estimates were calculated for working days of 8 h/day, although in early April there were more than 12 h of natural light a day, which gave a degree of margin for contingencies within the 7 days.
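The three scenarios above reduce to simple arithmetic (mean scan time of 2:42 min, 8 h working days; the position counts are those estimated in the text):

```python
# Scan-plan scenarios for the UNAV campus:
# (green spaces, building perimeters, other areas) scan positions.
SCAN_MIN = 2.7        # 2:42 min per position (HDR colour image)
MOVE_MIN = 0.5        # minimum 30 s displacement between positions
HOURS_DAY = 8

scenarios = {
    "I   (10 m grid)": (7910, 775, 678),
    "II  (33 m grid)": (1265, 258, 339),
    "III (<50 m)":     (632, 258, 169),
}

for name, counts in scenarios.items():
    positions = sum(counts)
    scan_days = positions * SCAN_MIN / 60 / HOURS_DAY
    move_days = positions * MOVE_MIN / 60 / HOURS_DAY
    print(f"{name}: {positions} positions -> {scan_days:.1f} scan days "
          f"(+{move_days:.1f} days moving)")
```

Scenario III works out to about 6 days of scanning plus roughly one day of displacements, matching the 7-day estimate adopted for the field work.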

Having presented the estimated reduction of the scanning process in the field, we now show how the cloud processing results obtained with TLS can be improved. Since the LiDAR clouds already provide basic information onto which the TLS clouds are added, the error and overlap of the resulting clouds are not the same as when scanning a single building using only TLS techniques. The main accuracy parameters to upgrade are: set error, overlap, strength of link and cloud-to-cloud error. An example of the direct result after the scanning and pre-processing phase of Zone 2 of the UPV/EHU campus in DSS is shown (Figure 14).

**Figure 14.** Review and optimization of the scan of the Donostia-San Sebastián campus in plan, using the Leica Cyclone Register, Zone 2.

As detailed previously, the processing phase has several stages, separated into four tabs in the software. The first allows the data collected on site to be imported. In this case, as pre-processing or pre-registration had been conducted, the scan points appear linked in the import from Cyclone FIELD 360 to Leica Cyclone REGISTER 360. At this point, the processing software analyzes the joint data from the field and assigns a color to each joint based on its strength and accuracy: green indicates the highest strength and red the lowest, with two other colors, yellow and blue, in between. In the case of Figure 16, the input data for the processing shows the following results: set error 1 mm, overlap 33%, strength 34%, cloud-to-cloud error 1 mm. This data could be optimized in REGISTER 360 by optimizing the cloud-to-cloud joints, though the 1 mm assembly error is more than enough to meet the needs of the 3D model in NEST. Furthermore, in environments with large vegetation, although the scan's accuracy is high (1–3 mm error), the overlap between clouds may not be as appropriate, due to the singularities of moving branches and leaves (Figure 15).

**Figure 15.** TLS LiDAR clouds in the tree-lined area of the UPV/EHU DSS campus. Forest biomass data can be obtained from these clouds, which offset the lack of ALS LiDAR clouds.

At the UNAV campus, the processing stage was very similar to that of the Donostia campus. However, since the ALS LiDAR cloud is much more accurate, the scanning points per m2 of campus are much fewer. Considering also that 70% of the campus comprises green surfaces, there are few very reliable references between adjoining clouds. After the scan data dump, before processing, the initial set therefore had far fewer joints in green (because of that lack of strength and overlap). Although the error was acceptable (2 mm), initially the strength was only 22% and the overlap 19%. Indeed, in some areas there was a previous joint that the software stopped linking so that it could be studied and improved during processing (Figure 16).

**Figure 16.** 3D point cloud. Review and optimization of scan of the UPV/EHU Campus, DSS, Zone 1. Description of the joints before the processing and optimization stage.

As the scan positions were created at longer distances than usual to streamline the process, the pre-registration work conducted on site undergoes a revision. Since Figure 16 is in perspective, it is harder to see the color of the unions between scan points. When the data obtained in the field is imported, the software analyzes whether the points of a cloud at one scan position overlap enough with those of the previous and subsequent clouds. The software checks and colors the unions according to three concepts: error, strength of union and overlap between clouds. If a union has any undesirable parameters, the software may color it red, yellow or blue, or it may not propose the union at all because it is outside a minimum range. An optimization process then commences that analyzes and improves the cloud-to-cloud union (as in Figure 8) between the two clouds that do not have a connection line in green (Figures 14 and 16). If the overlap parameters can be improved in this optimization process, the line of union changes to green and is accepted as valid. If the union cannot be changed to green in the optimization process, there is still the option of carrying out another scan on site the next day and further strengthening the cloud of the set. This means that the field work should be processed every day, which is why it is important to use the tools mentioned in this article. The devices, software and applications enable pre-processing that greatly reduces the amount of daily processing work; they also make it easier to check that the on-site capture is satisfactory.

Once the processing stage is finished, the most appropriate file format for viewing the results and extracting the necessary information is LGS. The free Leica Jetstream Viewer software enables the viewing and consultation of data from the digital model, comprising the set of point clouds and the 360° images of each of the scan positions. This application allows a visual and metric inspection to be carried out virtually along the route. With this resource, the analysis, verification and data extraction tasks needed for the modeling and simulation process in NEST can be accomplished, such as distances, areas, angles or even surface temperatures (Figure 17).

**Figure 17.** Visualization of results of the TLS LiDAR clouds in Jetstream Viewer, obtaining quantitative data for the model: (**a**) 3D colored cloud visualization, geometric data, Gipuzkoa School of Engineering building, UPV/EHU campus in DSS; (**b**) southwest façade of the same building. Visualization from the 360° image of the building, geometric data (green and gray) and thermal data (red).

Before starting to survey the campuses with TLS, the feasibility of completing the ALS LiDAR cloud was analyzed in terms of the resources and time needed. TLS runtimes were calculated for comparison with those of UAV-assisted automated photogrammetry. In the case of the UPV/EHU campus in DSS, the configuration of the urban environment, its dimensions and the buildings' closeness enable very fast complementary capture with TLS, for which three days of additional work, in three zones, were estimated. The UNAV campus in Pamplona, however, presents groups of buildings with large distances between some of them. Capturing the entire campus with TLS would have taken several weeks and yielded an overwhelming amount of information. However, as the ALS LiDAR cloud, with a density of 14 pts/m2, provided very acceptable measurement results, the complementary survey with TLS was estimated, after drawing up scanning plans, to take seven days, one day for each zone. After the TLS calculations, the resources needed with the UAV were estimated to decide which would be the fastest and most efficient combination.

#### *4.3. Capture by UAV and SfM Photogrammetric Processing*

The combination of data collection by UAV and the SfM automated photogrammetry method facilitates documentation due to the simplicity of the process and the breadth of the working range [78,79]. The study focuses on an assessment of the operation and efficiency of the capture for these large urban areas [14], specifically on whether it could be faster and more efficient than additional captures with TLS to complete the ALS LiDAR cloud of the campuses. At the UPV/EHU campus, flight restrictions due to the location in controlled airspace made operations difficult. For that reason, it was ultimately decided not to complete the clouds with UAV-assisted photogrammetry.

The survey forecast was performed at the UNAV campus in Pamplona. The programmable application (DJI-GS Pro) allows configuration of the automatic flight mission through navigation based on satellite positioning (GNSS). The UAV used was a DJI Phantom 4 Pro, equipped with a 1" CMOS sensor that reduces radial distortion and improves the metric quality of the SfM restitution method [80]. A grid comprising 147 square sectors, each measuring 100 m on a side (an area of 1 ha), was created, establishing a low-altitude flight parameter (37.3 m) to guarantee the model's accuracy [5]. This altitude offers a GSD factor of 1 cm/pixel with a front and side overlap between photos of 75%. The combined use of nadir shots (gimbal tilt −90°) and oblique shots (gimbal tilt −45°) was determined. One nadir and four oblique shots were planned per sector, the latter aligned with the four trajectories that join the vertices of each sector to its center (Figure 18).
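The relation between the planned flight altitude and the GSD can be checked with the standard pinhole-camera formula. The sensor figures below (13.2 mm sensor width, 8.8 mm focal length, 5472 px image width) are the published specifications of the Phantom 4 Pro's 1" CMOS camera, used here as assumptions for a rough cross-check rather than as data from the article.

```python
def gsd_cm_per_px(altitude_m: float,
                  sensor_width_mm: float = 13.2,  # 1" CMOS width (assumed spec)
                  focal_length_mm: float = 8.8,   # lens focal length (assumed spec)
                  image_width_px: int = 5472) -> float:
    """Ground sampling distance in cm/pixel from the pinhole relation:
    GSD = sensor_width * altitude / (focal_length * image_width)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# At the 37.3 m flight altitude planned for the UNAV campus:
print(round(gsd_cm_per_px(37.3), 2))  # → 1.02, i.e. ≈1 cm/pixel as stated
```

Under these assumed camera parameters, the 37.3 m altitude indeed yields roughly the 1 cm/pixel GSD quoted in the text.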

The results of the resource feasibility study are presented in Table 6.



Capturing data from the entire campus would require 245 flight hours, which, added to the preparation work, could mean a field task of more than one month. As previously justified, the additional work to complete the ALS LiDAR cloud with TLS techniques was estimated to take seven days, so the UAV was used to complement the TLS data only in those of the 147 planned sectors where there are no buildings.
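The order of magnitude of that estimate can be reproduced with simple arithmetic. The 8-hour working day below is an assumption used only to convert flight hours into field days; the 245 h and 147 sectors come from the study.

```python
SECTORS = 147                # 1 ha sectors planned for the UNAV campus
TOTAL_FLIGHT_HOURS = 245.0   # flight-hour estimate for the full campus
HOURS_PER_DAY = 8.0          # assumed effective field-work day

hours_per_sector = TOTAL_FLIGHT_HOURS / SECTORS
field_days = TOTAL_FLIGHT_HOURS / HOURS_PER_DAY

print(f"{hours_per_sector:.2f} h per sector")  # → 1.67 h per sector
print(f"{field_days:.0f} field days")          # → 31 field days, i.e. over a month
```

This is why the UAV survey of the whole campus was discarded in favour of the seven-day TLS plan, with UAV flights reserved for the building-free sectors.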

**Figure 18.** (**a**) Work planning forecast in 147 sectors; (**b**) planning of flight directions in one of the sectors, from top to bottom: 1. Shooting overlap, 2. Nadir shooting/G.P.A.: −90°/C.A.: 0°, 3. Oblique shooting/G.P.A.: −45°/C.A.: 45°, 4. Oblique shooting/G.P.A.: −45°/C.A.: 135°, 5. Oblique shooting/G.P.A.: −45°/C.A.: 225°, 6. Oblique shooting/G.P.A.: −45°/C.A.: 315°, 7. Orthophoto. (G.P.A.: Gimbal Pitch Angle; C.A.: Course Angle).

#### *4.4. Modeling*

The point cloud of the whole ensemble will be used to build the 3D simulation model of each campus in NEST (Figure 19).

**Figure 19.** Final 3D model of the UNAV campus in Pamplona for simulation in NEST, (see Appendix A).

The resulting final cloud can also be segmented to model isolated buildings in greater detail and make specific simulations of the campuses (Figure 20).

**Figure 20.** BIM model in Autodesk Revit software. School of Engineering building. Campus UPV/EHU DSS, Zone 2 of the scan.

#### *4.5. NEST—Environmental Assessment Results*

As previously stated, it is not the purpose of this article to analyze the results of the assessment, which have already been dealt with in other specific articles [11,12]. In any case, to round off the project workflow, we felt it was appropriate to show a very reduced sample of the results given by NEST. An image of the simulation model and a summary table for the two campuses are presented (Figure 21 and Table 7).

**Figure 21.** NEST model with simulation of CO2 impacts. UPV/EHU DSS campus.

**Table 7.** Results obtained in the NEST simulation of the improvement scenarios for the years 2030 and 2050.


\* See Table 2.

#### **5. Discussion**

The starting hypothesis focused on the possibilities of rapidly digitizing two university campuses that occupy a large area of the cities where they are located. The lockdown caused by COVID-19 did not allow long stays in the field to collect data, so a combination of techniques was studied so that the survey could be performed in just a few days.

After analyzing the results, it can be stated that it was possible to capture both university campuses in 3D while reducing field work time to just ten days (7 + 3), with a satisfactory, economical and efficient outcome. This was achieved thanks to a combination of resources and techniques: after studying different technologies, the work was accomplished by using ALS LiDAR point clouds captured by public services, by capturing complementary point clouds with greater density and accuracy by means of TLS and, finally, by using UAV-assisted automated photogrammetric techniques to complement the previous clouds. The study of each of the technologies used to achieve the objectives of this work enabled discussion of their specific advantages and disadvantages in the specific situation addressed in this article.

The ALS LiDAR clouds obtained by public services are free and thus did not involve any investment of resources, finances or time. However, the LiDAR clouds for the two cities in question have very different qualities, which affected the approach to the work and its development. The LiDAR cloud of the UNAV campus in Pamplona, with a density of 14 points/m2, allows measurements to be made with sufficient precision to build the simulation model in NEST. However, the LiDAR cloud of the UPV/EHU campus in the city of Donostia-San Sebastián, with a maximum density of 2.2 points/m2, has many limitations. Regarding the measurement of parts of building elements, it would be limited to dimensioning the total height of the built volume, without being able to make other kinds of measurements such as the height of a standard floor, opening sizes and even, in some cases, the size of the façade surfaces. With respect to vegetation, it was also verified that the cloud's low density does not allow estimates of mass volume and, in some cases, not even basic measurements of a plant element's height. That is why it was decided to complement the LiDAR data with massive point capture based on TLS, since that technique allows capture in the shortest possible time. Considering the characteristics of the LiDAR clouds found, the number of scan positions for each campus was intensified to a greater or lesser degree. This procedure was used to test scan positions at longer distances than usual (30 m, 50 m and 80 m, without exceeding the 130 m capture distance limit of the laser scanner), bearing in mind that this TLS work sets out to complement the LiDAR cloud of the Government of Navarre. These tests showed that most scan positions at distances of 30 m give acceptable results (unions in green) and that the maximum effective range of the scanner at high resolution is actually 60 m, even though its nominal capacity is 130 m.
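The link between cloud density and measurable feature size can be made concrete with a back-of-the-envelope calculation. The uniform-distribution assumption and the rule of thumb that a feature needs several points across it are simplifications introduced here, not claims from the article.

```python
import math

def mean_point_spacing_m(density_pts_m2: float) -> float:
    """Mean spacing between ALS points, assuming a roughly uniform
    point distribution: spacing ≈ 1 / sqrt(density)."""
    return 1.0 / math.sqrt(density_pts_m2)

for name, density in [("UNAV (Pamplona), 14 pts/m2", 14.0),
                      ("UPV/EHU (DSS), 2.2 pts/m2", 2.2)]:
    print(f"{name}: ~{mean_point_spacing_m(density):.2f} m between points")
```

At 2.2 pts/m2 the mean spacing is roughly 0.67 m, so a feature on the order of one metre, such as a window opening, is sampled by only one or two points and cannot be reliably dimensioned, whereas at 14 pts/m2 the spacing of roughly 0.27 m leaves enough samples across such elements.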

Regarding the complementary work on the point cloud with TLS techniques, the UPV/EHU campus has a higher scan position density than Pamplona, since its 2017 ALS LiDAR cloud is much less precise than that of the UNAV campus. The work with TLS showed some differences at each campus. Considering the maximum range of the scanner (130 m), a minimum of three scans per building façade could initially be considered. However, due to the characteristics of each campus, the pre-registration and processing work faced different disadvantages. At the UPV/EHU campus, when joining the adjacent clouds, the pre-registration was strong enough because it is a flat area with high building density and little vegetation, so better cloud strength and overlap results were obtained even though the same scanner, with the same millimetric precision, was used. At UNAV, in contrast, the campus's size makes the buildings more distant, so when scanning a building it is not easy to record many points from an adjacent building to strengthen the joints. In addition, the dense vegetation and the unevenness of the steep terrain lower the overlap and strength of the final clouds. In any case, in Pamplona, this shortcoming was offset by the good quality of the 2017 ALS LiDAR clouds.

The fact that there are large green areas without buildings on the UNAV campus adds a difficulty for quick capture with TLS. For that reason, before starting the fieldwork with TLS, it was evaluated whether UAV-assisted automated photogrammetry could be a faster and more efficient resource. Automated photogrammetry with UAV enables capturing points in areas that cannot be reached by the TLS scanner's laser beam, such as building roofs. Automation simplifies the process by reducing data collection times and offering a greater working range, besides enabling more efficient management of the capture process. However, the UAV, although a cheaper device than the TLS, has several disadvantages for the challenge set out in this research. It requires certification with a professional pilot's license, and its use is restricted in multiple areas, both under airport control and over urban centers, where it is necessary to request authorizations well in advance. Moreover, in urban environments such as the UPV/EHU campus in Donostia-San Sebastián, ensuring flight safety over crowds of people and vehicles can be another handicap. In the case of the Pamplona campus, this last aspect is not an excessive constraint, though its large size implied that data collection in the field would take more than four weeks, well above the forecasts for TLS. Use of the UAV was therefore limited to some sectors of the campus where there are no buildings.

#### **6. Conclusions**

Considering, on the one hand, the two case studies proposed (the university campuses of the UNAV in Pamplona and the UPV/EHU in Donostia-San Sebastián) and, on the other, the challenges that conditioned achieving the research objectives, three massive point capture resources were combined. The work started by using existing public ALS LiDAR clouds. Multiple technologies were then studied, leading to the conclusion that the most appropriate technique for these two cases was to capture complementary LiDAR clouds with TLS. In specific cases, those two resources could be complemented by a massive capture of geometries and textures using UAV-assisted photogrammetry.

The ALS LiDAR point clouds captured by public services are an immediate and cheap resource, though their accuracy and efficiency depend on the quality of the devices used to capture them. Clouds with a density of 14 pts/m2, captured with sensors such as those used by the Navarre government, allow measurements to be made on parts of building elements and vegetation. Below these densities, their usefulness for making such measurements has many limitations.

Regarding the complementary techniques proposed, some publications claim that UAV-assisted photogrammetry is a faster and more efficient technique than TLS. However, this research has shown that if the work starts from existing ALS LiDAR clouds, TLS may be the faster option for completing the final point cloud of the whole urban area. The results show that, with point clouds overlapped with 360° images produced by a combination of resources and techniques, it was possible to reduce the on-site working time by more than two thirds. TLS also has an advantage over photogrammetry in that it provides 360° panoramic images overlapped with the point cloud. Taken together in a single file in .lgs format, the two resources (point cloud + 360° image) create an immersive experience in the office that makes it possible to know the campus in greater detail and to extract all the data (point coordinates, distances, areas, angles and temperatures) directly in the high-resolution 360° image. The joint file facilitates detailed in-office study of the captured urban areas to build a 3D simulation model. In addition to fieldwork for the survey, field visits for other types of data collection can accordingly be reduced. In future research, that digital model can be optimized in later stages until it becomes a very effective digital twin (ref.). University campuses can be endowed with multiple sensors and meters to monitor their environmental impacts, which can be recalculated in real time from the digital twin. This can enable optimization of the processes and work involved in managing university campuses [81–84].

The combination of ALS LiDAR clouds with TLS scanning clouds made it possible to obtain the 3D model for the environmental assessment of university campuses with NEST. The digitization of these two urban environments was conducted with fieldwork executed in a very short time, moving the work mainly to the office, with online collaboration, and adjusting to the mobility restrictions imposed due to COVID-19.

**Author Contributions:** Conceptualization, I.L. and J.J.P.; methodology, I.L. and M.S.; software, J.J.P., I.L. and A.C.; validation, M.S., A.C. and J.J.P.; formal analysis, I.L. and A.C.; investigation, I.L. and J.J.P.; data curation, J.J.P., I.L. and M.S.; writing—original draft preparation, I.L. and J.J.P.; writing review and editing, M.S., I.L. and A.C.; visualization, M.S. and A.C.; supervision, I.L. and J.J.P.; project administration, I.L.; funding acquisition, I.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Nouvelle-Aquitaine/Euskadi/Navarre Euro-region (AECT). Project co-financed through the second session of the 2019 AECT call for projects.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data can be found on the collaboration platform of the University of the Basque Country (https://ehubox.ehu.eus/login accessed on 20 January 2022) and are available for restricted access.

**Acknowledgments:** We would like to thank the University of Navarra and the Arquitectura AH Asociados studio for their work acquiring data from UNAV for the baseline inventory. Additionally, to Alba Arias, Xabat Oregi and Cristina Marieta for the work carried out in the research project.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

Procedure of the NEST-Sketchup graphic tool (Figure A2).


**Figure A2.** NEST-Sketchup procedure: (**a**) NEST Interface in Sketchup; (**b**) modeling process; (**c**) urban planning parameters; (**d**) characterization of buildings; (**e**) population distribution and transportation scenarios; (**f**) visualization of results [85].

#### **References**


### *Article* **Challenges for the Implementation of BIM Methodology in the Execution of Underground Works**

**José-Manuel Baraibar 1,\*, Jesús de-Paz <sup>2</sup> and Jokin Rico <sup>2</sup>**


**Abstract:** After a few years of coexistence between the building information modelling (BIM) methodology and the architecture, engineering, and construction professions, its main uses are often limited to 3D modelling and collision checking between different disciplines. While this way of working demonstrates opportunities for optimization and clear benefits, there is still much potential in the BIM methodology to be explored. In the scope of a particular underground work, the Arnotegi tunnels of the Bilbao Metropolitan Southern Bypass, a specific contractual framework favouring collaboration among stakeholders has been defined to implement the use of this methodology by the main participants in the project, encouraging more advanced uses such as employing the model as an integrator of the information contained in the common data environment. Due to the very essence of tunnel construction and the relative geotechnical uncertainty of the terrain, the tunnel model evolves day by day during the course of the work, with information being shared in real time between all those involved. This approach has made it possible to improve the quality of decisions and the perception of important information by presenting it in a transparent and easily interpretable way.

**Keywords:** underground works; BIM; innovation; common data environment

### **1. Introduction**

The tunnelling industry is constantly evolving. From the time of ancient civilisations to the present day, builders and engineers have always sought to apply new technologies to this age-old art [1]. The 19th century saw the great explosion of the tunnelling industry with the development of the railways. The 20th century saw the consolidation of drilling and blasting methods, and the end of the century brought a revolution in the sector with the advent of tunnel-boring machines (TBMs). Today, both TBMs and the machinery needed to bore tunnels according to the New Austrian method (NAM) incorporate a wealth of sensors and automation technology that make it possible to execute tunnels with performance and safety conditions that were unimaginable only a few years ago. Moreover, the increasing digitization of the information surrounding the design, construction, and operation of underground works is currently one of the main catalysts of the transformation of the sector [2].

In recent times, it is already a fact that building information modelling (BIM) methodology is transforming the architecture, engineering, and construction professions [3,4]. BIM is a collaborative working methodology for the creation and management of construction projects with the main objective of centralising all the relevant project information in a digital information model created by all its stakeholders.

There is no longer any doubt that BIM is progressively transforming building and civil engineering projects above ground, although the current uses are mainly for visualizing projects in 3D and for detecting collisions between different disciplines [5]. However, its use in projects related to underground works is still residual [6–8], and it is precisely in these types of projects, with high investment, great technical complexity, and a high cost in case of error, where this collaborative work methodology can find its maximum application.

**Citation:** Baraibar, J.-M.; de-Paz, J.; Rico, J. Challenges for the Implementation of BIM Methodology in the Execution of Underground Works. *Buildings* **2022**, *12*, 309. https://doi.org/10.3390/buildings12030309

Academic Editors: Fahim Ullah and Theodore Stathopoulos

Received: 27 January 2022; Accepted: 3 March 2022; Published: 5 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This article describes the use of the BIM methodology in the execution of the Arnotegi tunnel in the extension of the Bilbao South Metropolitan Bypass. The case described is an example of the application of the BIM methodology in a complex project in which, in addition to the traditional uses, an attempt has been made to advance the methodology by concentrating the information, both from the project and that generated during the works, in a common data environment linked bidirectionally with the model. This facilitates the flow of information in real time for all those involved, favouring decision-making and helping to ensure execution times and their associated costs.

This article is organised as follows: The first part includes the state of the art and the main previous references on the use of the BIM methodology in underground construction works. Subsequently, the Arnotegi tunnels project, the main stakeholders, and the adopted contractual scheme are described. Finally, the article details the specific system of implementation of the BIM methodology and the obtained results.

#### **2. Background**

The dream of managing all the tunnel information in a structured database is not new. The first relevant precedent in our environment is the TUNCONSTRUCT project (Technology Innovation in Underground Construction, funded between 2005 and 2009 within the framework of the European Research Area). This project aimed to transform the European underground construction industry into a high-tech, high value-added industry capable of providing innovative, sustainable, and cost-efficient solutions [9].

The backbone of the project consisted of the definition of an underground construction information system (UCIS) that would allow the recording of, and instant access to, all tunnel data throughout the entire life cycle, from the design and construction stage to the in-service stage (Figure 1). At the time, this UCIS system was based on the latest database technology available, and its justification lay in the fact that more and more data were being produced by different parties and sources. In addition, the project anticipated a demand to access all data from anywhere, at any time, although this situation was far from the reality at the time of its development [10].

**Figure 1.** Example of visualization of construction stage in underground works.

Subsequently, the TIAS (tunnel information and analysis system) project was launched in Greece, which consisted of the generation of a database integrating 62 tunnels on the Egnatia motorway. The data contained came from different sources: boreholes, geotechnical surveys, laboratory tests, geological behaviour, hydrogeology, design parameters, support and lining information, construction incidents, and costs. This system was intended to structure all the accumulated knowledge to provide engineers with valuable information when designing new tunnels in massifs similar to those in the area examined [11].

The initiatives described represent the most relevant attempts to date to generate structured databases for organising the information surrounding tunnel construction. To make these data even more valuable, the technology is now mature enough to be fully integrated into the methodological concept of BIM, although so far there are very few references of significant use in underground works, generally referring to case studies [7,12].

In recent years, there are references in the literature that have specifically addressed conceptual frameworks based on the BIM methodology to improve the management of underground projects using the drill-and-blast method [13].

In the execution of the Arnotegi tunnel, it has been possible to contribute to this leap in the integration of the BIM methodology, where it has been able to reveal its full potential, both as an instrument and as a procedure.

It should be noted that this transition is taking place in an environment in which, despite the specificities of tunnel projects, there are no standards at the European level for their design, let alone harmonised guidelines for the use of the BIM methodology. At the European level, tunnel design and management are developed on the basis of national knowledge and experience, with design standards imposed by each client. In 2017, the European Commission initiated the setting up of a committee to define the needs for standardisation in the design of underground infrastructures, with particular emphasis on tunnels [14]. In this context, the country most advanced in the digitisation of underground space, in line with its overall progress in the use of the BIM methodology, is likely the United Kingdom. The UK has some of the most agile and well-adapted standardisation systems for working with digital images of underground space, and a national geospatial data strategy adopted in 2020 contains a special section on digital subsurface information [15].

#### **3. The Case. Description of the Underground Works: The Arnotegi Tunnel**

The construction of the Arnotegi tunnel (Figure 2) is included in the works of the so-called Section 9A of the infrastructure of Phase IA of the Bilbao Southern Metropolitan Bypass, promoted by Interbiak-Provincial Council of Biscay. It is one of the three sections of the next expansion phase, which aims to connect the original bypass, inaugurated in 2011, with the AP-68 motorway [16].

**Figure 2.** Arnotegi road tunnel floor plan.

The area of the works is located in the municipality of Bilbao, Spain. The Arnotegi tunnel is a double-road tunnel, with one tube for each direction of traffic. The length of the tunnel in the mine corresponding to the carriageway in the Cantabria direction (Axis 1) is 1727 m, while the length of the tunnel in the mine excavated for the carriageway in the Donostia direction (Axis 2) is 1722 m. The tunnel has a truncated circular cross section, with an internal section of 85 m2 (Figure 3), and its main characteristics are shown in Table 1.

From a geological point of view, the Arnotegi tunnel runs through Cretaceous rocky terrain made up of siltstone with sandstone levels and with no significant water inflow. The tunnel was excavated using the drill-and-blast method, except in particular cases such as gallery intersections where mechanical excavation was used. According to the geotechnical specific characteristics of the excavated materials, five types of support were designed, including different amounts of bolts, shotcrete, and metallic trusses.

**Figure 3.** Functional section of the Arnotegi tunnel (5 m vertical clearance).

**Table 1.** Main geometrical characteristics in Arnotegi tunnel.


#### **4. Main Actors and Proposed Contractual Model**

The main parties involved follow the usual pattern for infrastructure projects at the national level. The promoter of the works is also the project manager. This public authority independently hires the contractor and the technical assistance team to carry out the works (Figure 4).

**Figure 4.** Diagram of the main participants in the execution of the Arnotegi tunnel. Contractual links shown.

This traditional contracting scheme can be an impediment to seeking maximum collaboration between the parties involved, which is a real necessity for the successful implementation of the BIM methodology [17], as each agent naturally tends to develop its activity within the exclusive framework of its own contract. To guarantee maximum collaboration between the parties involved, the works contract includes two relevant aspects in this respect. Firstly, the specific administrative terms and conditions define that the prescribed remuneration system is the lump-sum method. Secondly, the contract is complemented by the specific technical specifications, which, in Appendix 9, establish a series of innovative actions that must be complied with by the successful bidder. The development of these actions forms part of the object of the contract itself, and their costs are therefore understood to be included in the flat-rate offer for the overall execution of the work.

The aforementioned Appendix 9 established several innovative proposals that had to be implemented during the course of the contract. These innovative proposals include the implementation of the BIM methodology. The following section describes the particularities of the implementation of this work methodology in the Arnotegi tunnel.

#### **5. Description of the Proposed System Using BIM Methodology**

Appendix 9 of the specific technical specifications of the contract indicated as a contractual service the implementation of the BIM methodology, understood as the "preparation and development of coordinated and collaborative databases and information models with a view to improving the integration and coherence of the information throughout the life cycle of the asset". The main contractual requirement was to deliver at the end of the infrastructure works a BIM model of the Arnotegi tunnel, which included all the relevant elements of the tunnel construction, including graphic and non-graphic information, which will facilitate the coordination tasks with subsequent contracts and the subsequent operation and maintenance tasks of the infrastructure (Figure 5).

**Figure 5.** BIM methodology focused on the complete life cycle of the infrastructure: model view with geotechnical information (**left**); general model view (**centre**); model view for checking interference in facilities (**right**).

#### *5.1. BIM Execution Plan*

During the first months of execution of the contract, the BIM execution plan (BEP) was drawn up, a document that defined the bases, standards, and rules for working with this methodology. The BEP developed in detail the basic principles indicated in the specifications so that all those involved in the work could carry out their tasks in a coherent and coordinated manner. The document had a living character, undergoing continuous improvement and evolving throughout the construction process, adapting to new needs while always maintaining the objective of achieving a practical and useful BIM implementation.

It is worth highlighting three concepts that were considered the most important within the BIM implementation plan because of the advantages they provide. On the one hand, in terms of workflows, the plan established the basis for deploying the collaborative environment by defining a folder structure for the entire project, a homogeneous document-coding system, and the roles, permissions, and responsibilities of each user in the common data environment (CDE). This aspect is very important because the CDE of this project is not focused exclusively on storing and working with the BIM models: all the information of the project is stored in it.

On the other hand, focusing on the 3D models, the interference matrix made it possible to reduce the number of collisions at an early stage of the project, and the organisation of the parameters ensured that the intended uses could be achieved and that the models were ready for the exploitation phase.

Finally, the third most relevant aspect of the BEP has been the definition of a specific coding system for the processing of the work. This coding, together with the capabilities of the CDE platform used, has enabled a bidirectional link between the 3D models and the stored work documentation. This concept responds to what is known as an integrating model and allows quality control of the stored documentation, as well as the availability of the most important information for each asset in the exploitation phase.

#### *5.2. BIM Uses and Models*

Table 2 shows, in order of priority, the intended BIM uses, both in the development of the Arnotegi tunnel project itself and in the subsequent operation phase.

**Table 2.** Intended BIM uses in the Arnotegi Tunnel.


To achieve the uses defined in the table above, three main BIM models were developed: an initial model, an updated model, and a follow-up model, which was updated periodically throughout the works.

The main uses of the initial model (Figure 6), which was based on the two-dimensional project provided by the promoter, consisted of 3D design, interference detection (Figure 7), and the collaborative analysis of possible solutions to these interferences during the meetings held using the models.

**Figure 6.** Initial model in Arnotegi tunnel.

**Figure 7.** Critical areas in the initial model for interference detection.

In the second phase, the updated BIM model was developed (Figure 8). Its main uses consisted of coordinating the 3D design, not only between disciplines but also with the rest of the project lots; 4D planning; and serving as the reference base for the construction process by incorporating the resolution of each of the discrepancies identified in the initial model.

However, one of the most relevant uses of this model and of the common data environment arose from the need to use the most up-to-date information possible when designing the execution of the facilities. The successful bidders for this contract were granted access to the model and to the most up-to-date information necessary, thus minimizing possible future discrepancies and contributing efficiency and great value to the concept of "collaboration" by which the BIM methodology is characterized.

**Figure 8.** Updated model of the Arnotegi tunnel and landfill site.

Finally, during the execution of the works, progressive as-built modelling was carried out according to the real progress. The main uses of this follow-up model consisted of its geometric control (tolerances in each tunnel advance step and section entry), the integration of new control geometries, such as sensors and other monitoring devices, and even support in the analysis of alternatives to special treatment solutions for the tunnel (Figure 9). The follow-up model was updated weekly, 126 times over the duration of the works.

In addition, detailed monitoring of the sowings and of the evolution of the landfill site has been carried out, allowing an intuitive visualisation of the state of revegetation in each of the areas (Figure 10).

**Figure 9.** Follow-up model use: (**a**) check of interference between shotcrete surface after each advance step and theoretical surface from outside the tunnel; (**b**) check of interference between shotcrete surface after each advance step and theoretical surface from inside the tunnel; (**c**) analysis of micropiles umbrella possible interferences; (**d**) analysis of anchor system in tunnel portals.

**Figure 10.** Landfill monitoring.

In addition, one of the most relevant uses during this phase has been the integration of the information generated during the construction process. The technology used as the collaborative environment (CDE), Vircore [18], has allowed all the documentation stored in the document manager to be linked automatically and bidirectionally to each 3D element, guaranteeing that any type of information, regardless of its location in the folder structure, is accessible from the 3D BIM model as long as it is related to the selected 3D element (Figure 11).

#### *5.3. Level of Graphic Detail*

According to the customer's requirements, the level of detail (LOD) for all modelled elements is LOD 300, while for the final as-built models the corresponding, and achieved, LOD is LOD 500.

LOD 300 is the level at which all the tunnel elements (support passes, support elements, and other tunnel elements) are graphically defined, precisely specifying their shapes, sizes, quantities, and locations in relation to the project as a whole. These elements can be observed in detail; they always have a graphical representation and may contain non-graphical information as well.

#### *5.4. Common Data Environment*

#### 5.4.1. Multi-Project Document Management

The global management of the project was carried out using the Vircore software [18], developed by Ingecid S.L. This software provides a user interface for managing the project database and its storage container, facilitating the various operations required throughout the project.

Once the folder structure of the common data environment had been defined and agreed upon by all those involved, its component elements were defined. On the one hand, there were a series of digital models of the different disciplines, which were combined in a coordination model for their review and, if necessary, modification. On the other hand, there was all the non-modelled information generated in the different phases of the construction project, which was stored within the common data environment and could be linked to the different modelled elements where a relationship existed.

All this information (modelled and non-modelled), stored centrally, allowed the monitoring of the work to be carried out in an integrated and efficient manner.

#### 5.4.2. Model Visualization

Vircore allows either the use of its own built-in viewer for the management of IFC files or the use of other viewers, such as the Navisworks Manage graphic engine, which ensures the visualisation of models developed in multiple native formats.

#### 5.4.3. 4D Planning and Quantity Survey

Vircore can cover the remaining needs related to BIM models without requiring external tools. It includes an integrated planning module that allows multiple schedules to be managed without other software, and it also features bidirectional communication with MS Project if necessary.

In this case, the strong point of Vircore lies in the linkages: tasks can be bidirectionally linked to the 3D elements of the models, allowing the visualisation and management of interactive 4D simulations, and the planning can be associated with documentation for the project phases where this is necessary (preparation of reports, technical records, etc.).
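As a rough illustration, the bidirectional task–element linkage that underpins such 4D simulations can be sketched as follows; the class names, fields, and sample data are assumptions for the example only, not Vircore's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Task:
    """A schedule task linked to the 3D elements it builds (hypothetical structure)."""
    name: str
    start: date
    finish: date
    element_ids: list = field(default_factory=list)

def elements_built_by(tasks, day):
    """4D query: return the IDs of the 3D elements under construction on a given date."""
    return {
        eid
        for t in tasks
        if t.start <= day <= t.finish
        for eid in t.element_ids
    }

# Illustrative two-task schedule for successive tunnel advance passes.
schedule = [
    Task("Advance pass 1", date(2020, 3, 2), date(2020, 3, 4), ["PE-0001"]),
    Task("Advance pass 2", date(2020, 3, 5), date(2020, 3, 7), ["PE-0002"]),
]
active = elements_built_by(schedule, date(2020, 3, 3))  # {"PE-0001"}
```

Because tasks hold element IDs (and a viewer can hold the inverse index), the same structure supports both directions of the link: highlighting elements from the schedule and locating tasks from a selected element.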

Vircore also allows the integration of files in BC3 format and, as with schedules, it allows budgets and their work breakdown structure (WBS) to be developed from scratch.

#### *5.5. Procedure for Linking Information to the Model*

The documentation integrated into the collaborative environment is linked to the model by means of a stable algorithm, which uses part of the character string of each file uploaded to the repository as a code that links the file to the corresponding parts of the model, as previously defined (Figure 12). The document-coding system was agreed upon by the parties and integrated into the BEP.
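As a sketch of how such filename-based linking can work, the following example extracts a code from each uploaded file's name and indexes the files by model element; the coding pattern, function names, and sample data are hypothetical and do not reproduce the actual coding agreed in the BEP.

```python
import re
from collections import defaultdict

# Hypothetical document code embedded in filenames, e.g. "ARN-PE-0001" in
# "blast_ARN-PE-0001_photos.jpg" (the real pattern was defined in the BEP).
CODE_PATTERN = re.compile(r"ARN-(PE|PD)-\d{4}")

def extract_code(filename):
    """Return the model-element code embedded in an uploaded file's name, if any."""
    match = CODE_PATTERN.search(filename)
    return match.group(0) if match else None

def build_links(filenames, model_element_ids):
    """Map each 3D element code to the uploaded documents that reference it."""
    docs_by_element = defaultdict(list)
    for name in filenames:
        code = extract_code(name)
        if code in model_element_ids:
            docs_by_element[code].append(name)
    return docs_by_element

uploads = ["blast_ARN-PE-0001_photos.jpg", "survey_ARN-PD-0002.pdf", "minutes.docx"]
elements = {"ARN-PE-0001", "ARN-PD-0002"}
links = build_links(uploads, elements)
```

Because the code lives in the filename itself, the link survives moves within the folder structure, which matches the behaviour described above for the Vircore CDE.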

**Figure 12.** Detail of nomenclature assignment of different parts of the model: definition of the nomenclature of the advance pass "PE" and the bench pass "PD".

#### *5.6. Collaborative Working Scheme*

The working scheme proposed for the definition of the collaborative workflow in the application of the BIM methodology in the Arnotegi tunnel is illustrated in Figure 13.

**Figure 13.** Working scheme for collaborative workflow and references to software.

For the overall management of the project, a role system was defined that allowed collaboration between the different agents involved. In this way, it was possible to control each user's access to their assigned areas with a simple system of authorisations.
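A minimal sketch of such a role-and-authorisation scheme is shown below; the roles, folder names, and permissions are invented for illustration and do not reflect the project's actual configuration.

```python
# Hypothetical role table: each role maps actions to the CDE folders it may touch.
# "*" grants access to every folder.
ROLE_PERMISSIONS = {
    "promoter": {"read": {"*"}, "write": set()},
    "contractor": {
        "read": {"models", "reports", "site-photos"},
        "write": {"reports", "site-photos"},
    },
    "installations-bidder": {"read": {"models"}, "write": set()},
}

def can(role, action, folder):
    """Check whether a role is authorised to perform an action on a folder."""
    allowed = ROLE_PERMISSIONS.get(role, {}).get(action, set())
    return "*" in allowed or folder in allowed
```

Such a table keeps authorisation decisions in one place, so granting a new agent access (as was done for the facilities bidders) amounts to adding one entry rather than changing folder by folder.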

#### *5.7. Usage Data*

Throughout the development of the project, the use of both the Vircore CDE and the BIM models developed in it has been tracked and monitored.

The evolution of this volume of information has been used as an indirect metric to assess the effectiveness of the proposed framework. In addition, it was verified on a monthly basis that the main thematic reports whose drafting was contractually committed had been correctly uploaded to the system and linked to the model.

At the end of the underground works, the volume of information stored in the common data environment reached 160 GB and almost 50,000 files.

Figure 14 shows the evolution over time of the files uploaded to the common data environment; the sudden increase in the first months of 2020 corresponds to the start of the underground works. Figure 15 shows the types of files managed: most were PDF documents (43%) and JPG files (41%).

**Figure 14.** Temporal evolution of the number of items in the common data environment.

**Figure 15.** File types stored in the common data environment.

#### **6. Conclusions**

To make progress in the implementation of the BIM methodology in underground works projects, and to go beyond its traditional uses (3D visualisation and interference checking), it is necessary to establish a contractual framework that favours collaboration between all those involved in each project. In underground works, moreover, the execution models change every day, whether because of the difference between the theoretical blast line and the real line or because of the geotechnical uncertainty of the terrain; the BIM management framework must therefore be sensitive to this circumstance.

In the Arnotegi project, a BIM methodology with advanced uses has been implemented and contractually established, allowing a bidirectional integration between the model and the common data environment.

In addition, the BIM framework in this project supported a continuously changing follow-up model that reflected the actual geometric and geotechnical information after each blasting phase.

These advanced uses of the BIM methodology have facilitated the transmission of graphic and non-graphic information to all the decision-making elements, enabling rapid and efficient management of all decisions regarding the progress of the underground work.

These results have the limitation of having been evaluated in a single implementation experience. Since underground works are usually highly repetitive, the results can plausibly be extended to similar construction works, although it will be necessary to enlarge the sample and test them in other projects to definitively verify the advantages of the advanced uses of BIM described in this paper.

**Author Contributions:** Conceptualization, J.-M.B.; methodology, J.-M.B., J.d.-P. and J.R.; formal analysis, J.-M.B. and J.d.-P.; writing—original draft preparation, J.-M.B.; writing—review and editing, J.-M.B.; supervision, J.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality.

**Acknowledgments:** The authors want to acknowledge the promoter of the works, Interbiak, for its firm commitment to exploring the possibilities for the implementation of the BIM methodology in underground works.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Defining a BIM-Enabled Learning Environment—An Adaptive Structuration Theory Perspective**

**Theophilus Olowa <sup>1,\*</sup>, Emlyn Witt <sup>1</sup>, Caterina Morganti <sup>2</sup>, Toni Teittinen <sup>3</sup> and Irene Lill <sup>1</sup>**


**Abstract:** Digitalization of the AEC-FM industry has resulted in the reassessment of knowledge, knowledge management, teaching and learning, workflows and networks, roles, and relevance. Consequently, new approaches to teaching and learning to meet the demands of new jobs and abilities, new channels of communication, and a new awareness are required. Building Information Modelling (BIM) offers opportunities to address some of the current challenges through BIM-enabled education and training. This research defines the requisite characteristics of a BIM-enabled Learning Environment (BLE)—a web-based platform that facilitates BIM-enabled education and training in order to develop a prototype version of the BLE. Using a mixed-methods research design and an Adaptive Structuration Theory (AST) perspective for interpreting the findings, 33 features and 5 distinct intentions behind those features were identified. These findings are valuable in taking forward the development of the BLE as they suggest a BLE requires the integration of functions from three existing types of information technology application (virtual learning environments, virtual collaboration platforms, and BIM applications). This study will inform the design of a web-based BLE for enhanced AEC-FM education and training, and it also provides a starting point for researchers to apply AST to evaluate the use of a BLE in different educational and training contexts.

**Keywords:** BIM; BIM-enabled learning; BIM education; virtual learning environment; AEC-FM

### **1. Introduction**

Digitalization of the construction industry is driving changes in the required knowledge, skills, and attitudes of construction industry professionals, thus motivating the adaptation of their education and training. Building Information Modelling (BIM) is central to this digitalization, and it offers opportunities to address some of the current challenges through BIM-enabled education [1], i.e., using BIM as a vehicle for knowledge creation, sharing, transmission, and evaluation. In earlier research, the authors analyzed extant cases of BIM education and investigated the difficulties faced in designing and implementing BIM education curricula generally and BIM-enabled education curricula specifically. In doing so, the need for an integrated, BIM-enabled Learning Environment (BLE) in which educators and trainers can effectively carry out BIM-enabled education and training was identified [2,3]. A BLE is expected to provide a web-based platform through which new and existing BIM-enabled approaches can be conveniently deployed for teaching and learning activities for the Architecture, Engineering, Construction, and Facilities Management (AEC-FM) disciplines. This study aims to define the characteristics of a BLE and applies an Adaptive Structuration Theory (AST) perspective to achieve this.

AST is a development of Anthony Giddens's Structuration Theory to the context of Advanced Information Technology (AIT) use in organizations [4]. Structuration Theory aims to understand social systems through their structures—the properties, rules, and resources or sets of transformational relations that allow similar social practices to be reproduced across time and space and give them the form of systems [5] (pp. 16–25). AST considers the types of structures that are provided by AITs, i.e., structures that are embedded within the technologies themselves, and the structures that emerge in human action as people interact with these technologies [5].

**Citation:** Olowa, T.; Witt, E.; Morganti, C.; Teittinen, T.; Lill, I. Defining a BIM-Enabled Learning Environment—An Adaptive Structuration Theory Perspective. *Buildings* **2022**, *12*, 292. https://doi.org/10.3390/buildings12030292

Academic Editor: Fahim Ullah

Received: 30 January 2022; Accepted: 28 February 2022; Published: 2 March 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

DeSanctis and Poole [6] define AITs as information technologies that not only enable the accomplishment of organizational tasks but also support coordination among people and provide procedures for interpersonal exchange. As an educational and training platform, the proposed BLE must clearly achieve both—it must enable BIM-enabled educational/training tasks and also mediate interpersonal exchanges between teachers, trainers, and students—and thus may be considered an AIT in the AST sense.

DeSanctis and Poole [6] propounded the theory for understanding technology-induced organizational change and proposed a comprehensive framework to this end, which is shown in Figure 1. By applying this AST framework to the problem of BLE development, the authors' intention is to first understand and define the characteristics of the BLE as an AIT in order to develop the BLE and then, later, to study its use and impact in the organizational contexts where it is utilized for education and training. This article reports the first of these steps: research to define the BLE characteristics with reference to the AST framework in order to subsequently facilitate research, in which the AST framework is applied to study the effects of BLE implementation.

**Figure 1.** Adaptive Structuration Theory (AST) Framework (DeSanctis and Poole (1994)). Propositions: **P1**: AITs provide social structures that can be described in terms of their features and spirit. To the extent that AITs vary in their spirit and structural features sets, different forms of social interaction are encouraged by the technology. **P2**: Use of AIT structures may vary depending on the task, the environment, and other contingencies that offer alternative sources of social structures. **P3**: New sources of structure emerge as the technology, task, and environmental structures are applied during the course of social interaction. **P4**: New social structures emerge in group interaction as the rules and resources of an AIT are appropriated in a given context and then reproduced in group interaction over time. **P5**: Group decision processes will vary depending on the nature of AIT appropriations. **P6**: The nature of AIT appropriations will vary depending on the group's internal system. **P7**: Given AIT and other sources of social structure, ideal appropriation processes, and decision processes that fit the task at hand, then desired outcomes of AIT use will result.

AST maintains that the structure of an AIT may be characterized in terms of its set of structural features and its spirit. The structural features relate to the rules, resources and capabilities offered by the AIT, and they control and bring meaning to the social interactions mediated by the AIT. The spirit of an AIT refers to the overall intentions behind its set of structural features in terms of value propositions and goals for which the AIT was designed (cf. [7]). It also embraces what DeSanctis and Poole [6] referred to as the "status quo", i.e., the current interpretive account of the technology's values and purposes based on the numerous ways by which the technology is appropriated over time by different users under different conditions. As Orlikowski [8] puts it: "While technologies may appear to have objective forms and functions at one point, these can and do vary by different users, by different contexts of use, and by the same users over time". Similarly, DeSanctis and Poole [6] argue that the use of any structure in an AIT is not sacrosanct since humans, as reflective agents, may use any aspects of the technology structures in any way they wish—they referred to these as appropriation moves. The decision to appropriate a particular structure and its continuance is dependent on how favorable and satisfying the actual outcome is. An appropriation move is considered faithful, if it is in line with the design intent for which it was created, or unfaithful, if used differently from the spirit of the technology (which is not necessarily a bad thing).

This study defines the structural features and spirit of the proposed BLE as an AIT through a qualitative, interpretivist, pragmatic approach. As previously noted, this will enable BLE development in the first place and, subsequently, facilitate the study of a BLE in use. Moreover, identifying both the structural features and the spirit of a BLE will assist in categorizing the existing sources of BLE structures into domains that would enable both a comparative and gap analysis of users' requirements in delivering BIM-enabled learning. The latter is particularly necessary since the expected output of this effort is the development of a web-based BLE that will afford geographically dispersed users the opportunity to access learning materials without the constraints and limiting issues associated with hardware devices, encourage independent and lifelong learning, and also promote adaptive and personalized learning. Lastly, it will offer researchers, educators, and trainers a means to evaluate empirically, and, possibly, address the consequences arising from, teachers' and learners' appropriation moves with respect to a BLE.

In the next section, we provide a brief review of the related literature. This is followed by a description of the methodology adopted to define and specify the attributes of the proposed BLE through a series of case studies and interviews carried out in three countries. The findings of these case studies and interviews are then presented before their implications for theory and practice are discussed. Conclusions are drawn in the final section.

#### *1.1. Literature Review*

#### 1.1.1. BIM-Enabled Education

BIM education has seen an upsurge in interest in the last two decades among teaching faculty and researchers, with authors emphasizing different aspects of educational skills, attitudes, and knowledge. Moreover, the global presence of COVID-19 over the past 2 years has brought into focus the importance of digital technologies, virtual and augmented realities, and other tools that are valuable in construction engineering education [9]. BIM educational programs start with creating awareness and educating students and trainees on how to use different industry-specific BIM software packages (e.g., Revit, ArchiCAD, Navisworks, Rhino3D, Aconex, etc.) for modelling, viewing, simulating, scheduling, or data sharing (see [10–16]). Courses often begin by highlighting the benefits and barriers of BIM, including the reasons for BIM adoption in the AEC-FM industry (e.g., [17–27]), before progressing to BIM knowledge and authoring/manipulation skills (e.g., [28–31]).

Beyond developing BIM software skills, BIM technology has also been used to impart other learning such as coordination, collaboration, communication, and interpersonal relationships among students (see [16,32–34]). For instance, Barham et al. [35] experimented with BIM as a visualization tool in teaching structural detailing. Several other studies have demonstrated how researchers and practitioners are pushing the boundaries in the ways that BIM can be leveraged in construction engineering games for educational purposes (e.g., [36–42]). This mutual influence between BIM technology and BIM agents—teaching BIM technology and using BIM technology to teach—is a defining characteristic of BIM education.

Underwood et al. [1] categorized the evolution of BIM education into three progressive stages: BIM-aware, BIM-focused, and BIM-enabled education.

Both BIM-aware and BIM-focused education have been generally recognized, and initiatives to develop curricula that incorporate BIM have become widespread. A comprehensive account of BIM-enabled education cases has been documented in Abdirad and Dossick [43] and more recently updated in Olowa et al. [2].

#### 1.1.2. BIM-Enabled Learning Environments

COVID-19 has significantly underscored the demand for distributed, collaborative, self-paced, and adaptive learning. Already a decade ago, Ku et al. [40] identified these challenges and experimented with what they referred to as a BIM interactive Model (BiM)—a platform that combines a virtual environment with BIM for learning purposes—and proposed a theoretical web-based virtual world for engaging construction stakeholders in real-time social interaction using the Second Life virtual environment. They contended that integrating 2D and intelligent 3D BIM models would supplement construction education, overcome the limitation of location-based learning, and make it accessible to anyone with an internet connection. Recognizing the benefits of promoting distributed training opportunities, as suggested by Ku and his colleagues, further studies have been carried out and reported in support of this initiative (e.g., [44–49]).

Acknowledging the general consensus among previous developers and authors on the ability of a virtual learning environment (VLE) to promote off-site training and education, Shen et al. [50] used the Unity 3D game engine to create a web-based training environment for HVAC rehabilitation and improvement using a BIM model. In contrast to Ku et al. [40] and the Second Life platform, the authors argued that game engines have been sufficiently developed for BIM interoperability, thereby making game creation cheaper and easier with little to no need for programming skill. With their research, Shen et al. [50] were able to demonstrate how BIM could be leveraged for teaching at the topical level.

#### 1.1.3. Application of AST to BIM-Enabled Learning Environments

AST is used in this study as it emphasizes the importance of social structures in the development of new technologies and in the use of those technologies by people [6,51]. As Turner et al. [51] note: "AST explains the complications associated with the technology–organization connection and provides ... information on how to develop new technologies or design educational curriculums that encourage adapting new technologies". Although we have not come across any study that has applied AST in the development of a new, innovative technology (in this case, a BIM-enabled Learning Environment), AST has been extensively used in evaluating AITs relating to group decision support systems [7] and, more recently, to explore value creation at the business process level through BIM in the construction industry [52]. AST has also been used to investigate socio-technical changes brought about by AITs, such as social media interaction among researchers [53], understanding the relationship between agile methods and organizational features [54], and understanding the influence of ICT infrastructure on student teachers' use of a Student Information Management System [55].

#### **2. Materials and Methods**

According to Ma et al. [56], defining the functional requirements for an AIT involves 3 steps: identifying and isolating the relevant processes of intended users; formulating functional requirements based on the isolated processes; and revising and validating the relevant processes against the formulated functional requirements through inquiries from prospective users. With these steps in mind, an exploratory sequential mixed-methods research methodology [57] was applied in this research with the aim of specifying a BIM-enabled Learning Environment (BLE).

In preparatory work for this study, an initial, theoretical BLE concept developed by Witt and Kähkönen [58] had been applied in a BIM-enabled learning intervention trialed at Tallinn University of Technology within an existing course taught to fourth-year civil engineering students (reported in [3]). In addition to this Estonian case, two further cases of BIM-enabled learning activities, carried out at the University of Bologna, Italy and Tampere University, Finland, were analyzed in order to develop an initial list of requirements for a BLE. A desk study was also conducted to review existing academic and grey literature for materials related to existing BLE-type initiatives so as to understand the general characteristics of a BLE. These preparatory activities enabled the design of the semi-structured interview data collection strategy and instrument elaborated below.

#### *2.1. Data Collection*

#### 2.1.1. Interview Participants

For the interviews, participants were purposively selected in 3 European countries: Estonia, Finland, and Italy. These countries were selected for convenience in the context of an ongoing research collaboration between the Tallinn University of Technology, Tampere University, and the University of Bologna. The relevance criterion for participants was that they should be actively engaged with AEC-FM training and/or education and/or BIM training and/or education in any setting (e.g., academic, industry), irrespective of their mode of delivery in teaching practice. The selection of interviewees was intentionally directed towards achieving representation from as wide a range of relevant stakeholders as possible. A total of 31 participants (10 from Estonia, 9 from Finland, and 12 from Italy) were interviewed, with interviews in each country conducted by 2 or 3 different facilitators. All interviewees read and signed an informed consent form prior to their participation.

#### 2.1.2. Interview Schedule

A semi-structured interview schedule was used to elicit information regarding the ideal characteristics of a BLE based on the educator's/trainer's lived experiences and aspirations. The interview schedule commenced with an overview of the purpose and context of the research and confirmation of the interviewee's data (name, position, and affiliations). As the interviewees were expected to comment on a concept (the BLE), as opposed to an existing artefact with which they could have direct experience, it was important to establish a common understanding of the general idea of the BLE among all interviewees. For this purpose, a short (1 min) video outlining the BLE concept with commentary in the local language (Estonian, Finnish, or Italian) was played to them before a series of open-ended questions were asked as follows:


3. How do you use BIM in the delivery? (e.g., for visualizations, project data, communication, etc.)

(Alternative if organization only arranges training: How is BIM used in training delivery?)

If NO:

4. Could you use BIM to help deliver your teaching/training and for what? (e.g., for visualizations, project data, communication, etc.)

(Alternative if organization only arranges training: Could BIM be used in training delivery?)

5. Beyond your present area(s) of teaching/training, how do you think BIM could be used in BIM-enabled learning?

(Alternative if organization only arranges training: Beyond the areas of training arranged by your organization, how do you think BIM could be used in BIM-enabled learning?)

6. What functions would you like to see in a BIM-enabled Learning Environment?

#### *2.2. Data Analysis*

2.2.1. Grounded Theory Method

The analysis of the interviews was based on a Grounded Theory (GT) model because of its acclaimed usefulness in developing process-oriented, context-based descriptions and explanations of information system phenomena [59]. GT is a method of data analysis and theory generation propounded by Glaser and Strauss [60] that is based on induction. Since their initial formulation, it has evolved, with different authors suggesting additional nuances on how it should be applied, leading to different GT versions. According to Urquhart [59], the major models used in the literature are those suggested by Glaser, Strauss, and Charmaz. Despite their differences, they all agree on iteratively sampling data to generate themes (at a high level of abstraction) that are useful for developing theories grounded in the collected data. This study adopted the Straussian Theory Model (STM), with the unit of analysis being predominantly segments of the interview transcripts that convey a particular meaning. In line with the Straussian approach, extracting these segments of text is the first step of analysis, referred to as open coding. This was followed by axial coding in order to identify major categories. However, this methodology was applied as a tool for discovering associations within the data rather than as a rigid set of rules [59]. The data collection and analysis were sequential. Interviews were mostly carried out virtually (online) using MS Teams, Zoom, etc., as agreed between the facilitators and the participants. Where possible, face-to-face interviews were also conducted. In both circumstances, interview sessions were audio recorded and transcribed.
As interviews were conducted in local languages as well as in English, interview transcription and analysis were carried out by different analysts, which necessitated coordination in the form of a commonly agreed analysis template with four predetermined coding categories: demographics; subjects taught; target audience; and functional requirements. Emergent categories were then continuously added as analysts found them. These included method(s) of teaching/training, BIM uses, level(s) of BIM awareness/competency, and challenges. Structural coding was carried out using the NVivo qualitative data analysis software in some cases and the MS Word text editor in others, as not all the facilitators were familiar with NVivo. The analyses of all interviews were then aggregated in NVivo for further and final analysis. As part of this aggregated analysis, all interview references to the "spirit" attributes of the BLE were also captured through theoretical sensitivity.
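The template-based aggregation described above can be illustrated in a few lines of code. The following is a minimal Python sketch, in which the segment texts and category assignments are hypothetical examples rather than actual study data:

```python
from collections import defaultdict

# Predetermined coding categories from the commonly agreed analysis template.
PREDETERMINED = {"demographics", "subjects taught", "target audience",
                 "functional requirements"}

def aggregate_codes(coded_segments):
    """Tally coded interview segments per category and flag the
    emergent categories that analysts added beyond the template."""
    tally = defaultdict(int)
    for _segment, category in coded_segments:
        tally[category] += 1
    emergent = {c for c in tally if c not in PREDETERMINED}
    return dict(tally), emergent

# Hypothetical coded segments produced by different analysts.
segments = [
    ("Participant teaches structural design courses", "subjects taught"),
    ("Wants clash-detection exercises in the BLE", "functional requirements"),
    ("Uses a flipped-classroom approach", "method of teaching"),  # emergent
]
tally, emergent = aggregate_codes(segments)
```

In the study itself this aggregation was performed in NVivo (or MS Word); the sketch only illustrates the separation of predetermined and emergent categories.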

#### 2.2.2. Validation of BLE Features by Focus Group

The results of the interview analysis were then presented to a focus group of AEC-FM education experts for validation. For the focus group, the researchers took advantage of an online workshop in which BIM educators and enthusiasts from 5 countries participated and discussed the BLE concept and the proposed BLE features that had emerged from the interviews. Focus group participants were then asked to rate the level of importance of each proposed BLE feature identified from the interviews using an online questionnaire containing both closed- and open-ended questions. The closed-ended questions presented each identified feature with a 5-point Likert-type scale for importance ratings: "1-Not important", "2-Slightly important", "3-Moderately important", "4-Very important" and "5-Critically important". The open-ended questions were intended to elicit comments, suggestions and recommendations for additional features that would be important for a BLE but were missing from the list identified from the interviews.

#### 2.2.3. Statistical Methods

The questionnaire was fully completed and submitted by 10 respondents. The focus group's questionnaire responses were analysed using descriptive statistics, viz., a simple mean score and a relative importance index for each of the identified BLE features. Figure 2 illustrates the research process adopted for this study.

**Figure 2.** Research process adopted.

#### **3. Results**

#### *3.1. Characteristics of Participants*

The interviewed participants were from diverse backgrounds in terms of the type of organization to which they belonged, the sub-sector in which they operate, and their geographical location. Figure 3 shows three clusters of bars, which depict the distribution of the participants according to their organization type, sub-sector, and country. Across the three countries where the interviews were conducted, i.e., Estonia, Finland, and Italy, a total of six sub-categories emerged for organization type, with the most participants coming from the university (13), construction (8), and vocational education (4) sub-categories. The other sub-categories were Construction information and training NGO (1), Consultancy (1), and Real estate management and maintenance (4).

The sub-sectors to which the participants belong were also identified: education (15), general contracting (5), and real estate/facilities management (2). The individual characteristics of the focus group respondents to the validation questionnaire could not be isolated because, while it was expected that all validation workshop participants who had not been engaged in developing the research findings would complete the online questionnaire, this did not turn out to be the case.

#### *3.2. Identifying and Isolating Functional Requirements/Structural Features of the Proposed BLE*

Table 1 shows the list of 33 identified and isolated functional requirements emerging from the preparatory desk study (literature review and three case study analyses), the 31 interviews, and the focus group suggestions for additional BLE features, together with an explanatory commentary on the corresponding structural feature for the proposed BLE.


**Table 1.** Processes based on BIM structures.





#### *3.3. Validating and Revising the Structural Features of BLE*

Table 2 shows the list of structural features for a BLE based on the focus group ranking. The mean was calculated from the 5-point Likert scale ranging between 1 and 5, with 1 being "Not important" and 5 representing "Critically important". A relative importance index (RII) rank was then assigned, with the most important feature taking the lowest value (1 in this case) and the least important the highest value (i.e., 30). Three of the functional requirements (#13, #26, #27) were identified by the focus group as suggestions for additional BLE features and were therefore not included in the validation questionnaire and, consequently, not ranked by the focus group.
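The mean score and RII ranking described above can be reproduced with a short script. The following is a minimal Python sketch in which the feature names and the ten Likert ratings per feature are hypothetical, and the RII is interpreted simply as the rank of the mean score (1 = most important):

```python
def likert_summary(ratings_by_feature):
    """Compute the mean importance score (1-5 Likert) per feature and
    an RII-style rank in which 1 denotes the most important feature."""
    means = {f: sum(r) / len(r) for f, r in ratings_by_feature.items()}
    ordered = sorted(means, key=means.get, reverse=True)
    rii_rank = {f: i + 1 for i, f in enumerate(ordered)}
    return means, rii_rank

# Hypothetical ratings from 10 focus group respondents.
ratings = {
    "BIM model viewing": [5, 4, 5, 4, 5, 5, 4, 5, 4, 5],
    "File upload/download": [4, 4, 3, 4, 4, 5, 3, 4, 4, 4],
    "Progress tracking": [3, 3, 4, 3, 3, 4, 3, 3, 4, 3],
}
means, ranks = likert_summary(ratings)
```

Under these assumed ratings, "BIM model viewing" receives the highest mean (4.6) and hence RII rank 1. Ties would require an explicit tie-breaking rule, which the sketch does not address.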

**Table 2.** Revised and validated structural features.




\* Items not included in the focus group questionnaire as these emerged from focus group suggestions for additional BLE features.

#### *3.4. Spirit of the Proposed BLE*

Qualitative content analysis of the interview data also revealed insights into the attributes of the spirit of the proposed BLE. Table 3 shows the spirit attributes, or intentions, that were expressed by the participants and which informed their definition of structural features for a BLE. These attributes include collaboration, active learning, integrated learning, adaptive and personalized learning, and project process improvement.

**Table 3.** Spirit of the proposed BLE.




#### **4. Discussion**

The interview transcripts and emergent recommendations for BLE features, to an extent, appear to reflect the participants' positive and negative experiences in relation to their own education/training activities. For instance, the popularity of collaborative learning in groups and problem/project-based learning approaches is reflected in the numerous recommended features that relate to groups and collaboration (features #3, #6, #14, #15, #16, #17, #18, and #20; refer to Tables 1 and 2 above) and to generating realistic project learning contexts (features #2, #7, #8, and #9). In addition, participants complained of problems with managing software for students and interoperability (reflected in features #7 and #18) as well as the need for effective integration between systems (reflected in features #12, #32, and #33). Further challenges expressed by participants included the limited BIM skills of educators and trainers themselves, and there was some skepticism regarding educators'/trainers' motivation to welcome new modes of training using BIM models. These challenges have been identified by several researchers as impediments to BIM education generally (e.g., [1,61,62]) and, it seems, could not be addressed by specific feature recommendations for the proposed BLE.

The recommended BLE features can also be understood as corresponding to three distinct categories of function: "BIM" functions, "collaboration" functions, and "virtual learning environment" (VLE) functions, and Figure 4 depicts these categories together with their associated BLE features.

The BIM functions relate to features typically associated with BIM software such as the creation and editing of BIM models, BIM model viewing, common data environments for project data, etc. Collaboration functions allow for virtual communication, coordination, and collaboration in groups and can be readily recognized as including features commonly associated with existing virtual collaboration/video conferencing platforms such as Zoom, MS Teams, etc. Similarly, the VLE functions aggregate those features (learning progress tracking, performance monitoring, assessment and testing, feedback to learners, associated security and data protection, and so on) which would be associated with typical VLE or learning management system (LMS) platforms such as Moodle, Blackboard, etc. There are also some recommended BLE features that relate to more than one of these categories. For example, the ability to upload, store, download, and edit files is common to both VLE and collaboration categories. Similarly, the ability to simulate project development processes and associated stakeholder interactions relates to both collaboration and BIM function categories. Importantly, we note that all three functional categories must be incorporated into the proposed BLE if it is to properly support and facilitate AEC-FM training and learning.


**Figure 4.** Matrix of functional categorization of BLE features.
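The functional categorization in Figure 4 can be represented as a simple mapping from features to category sets. The following Python sketch uses illustrative stand-ins for the actual matrix; the feature names and category assignments are hypothetical:

```python
# Hypothetical mapping of a few recommended BLE features to the three
# functional categories (BIM, collaboration, VLE) discussed above.
FEATURE_CATEGORIES = {
    "BIM model creation/editing": {"BIM"},
    "Common data environment": {"BIM"},
    "Video conferencing": {"collaboration"},
    "File upload/store/download/edit": {"VLE", "collaboration"},
    "Project process simulation": {"collaboration", "BIM"},
    "Assessment and testing": {"VLE"},
}

def features_in(category):
    """Return the features associated with a given functional category."""
    return sorted(f for f, cats in FEATURE_CATEGORIES.items()
                  if category in cats)

# Features spanning more than one category, as noted in the discussion.
shared = [f for f, cats in FEATURE_CATEGORIES.items() if len(cats) > 1]
```

Representing the matrix this way makes the cross-category features (those belonging to two or more sets) directly queryable, mirroring the overlaps highlighted in Figure 4.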

These findings suggest that, when asked to specify the functionalities that would be necessary in a BLE, the interview participants have collectively drawn on their educational/training experiences of existing AITs (specifically BIM, virtual collaboration, and VLE technologies) and identified relevant functionalities from these familiar AITs to then incorporate into the new, proposed AIT (the BLE). This process closely resembles the "appropriation of structures" as conceptualized by DeSanctis and Poole [6]—see boxes one and four and proposition P1 in Figure 1. The same types of social interactions enabled by certain structures embedded within these existing AITs are considered by the interviewees to be desirable for BIM-enabled learning, and therefore similar social interactions should also be enabled by the BLE. In order to replicate these desired social interactions among and between learners and teachers using the BLE, the same enabling structures must therefore be appropriated and incorporated into the BLE specification.

DeSanctis and Poole's [6] conceptualization also points to other sources of structure in the organizational environment and task (Figure 1, box two) as well as the (AIT user) group's internal system and styles of interaction (Figure 1, box three). Whereas at the stage of designing the BLE, both user groups and tasks are as broadly defined as possible so as to allow the greatest and widest potential utility of the BLE, the organizational environments in which the BLE will be used and from which the interviewees have been drawn may be readily identified as being of two distinct types: educational and industry. It follows that the structures of the BLE will also reflect the structures from these two organizational types: structures from the education system and structures from the AEC-FM industry system. The structures embedded in education systems have been delineated by Witt and Kähkönen [58] to include the rules, resources, and roles relating to learning and teaching, and it is clear that participants' interactions and relationships with these structures have informed their suggestions offered for defining the structures of a BLE.

The contributing structures from the AEC-FM industry system relate to industry-specific roles and ways of working. The nature of the construction industry is such that it involves different stakeholders, with different responsibilities and liabilities even when they have the same product as a goal. The industry workflow demands that suppliers come in at different points in the execution and delivery of projects with clear deliverables and targets. The structures enabling these activities are reflected in the interviewees' recommendations of related structures that a robust BLE must exhibit to effectively deliver project-based learning to graduates, trainees, and professionals for industry relevance. Within the AEC-FM industry environment, its digital transformation and, specifically, its adoption of BIM is particularly important, as the BLE is predicated upon the latent benefit of BIM for the industry and also for education. BIM structures dictate how work and project data should flow with different levels of definition, how they should be shared, etc.

The emergent conception is one in which the structural features recommended by participants for the proposed BLE are those which they have identified as enabling the social interactions they consider could support BIM-enabled learning. Additionally, when we consider from where (the organizational environments from which) those participants are drawn and the types of AITs (BIM, virtual collaboration technologies, and VLEs) with which they are already familiar, it becomes clear that these (environments and AITs) are the sources of the structures that are being appropriated for incorporation into the BLE.

DeSanctis and Poole [6] consider the structure of AITs to comprise both structural features and also spirit—the overall intentions behind the set of structural features. While our data collection and validation rather emphasized the definition of the structural features (for the practical reasons of interviewees and focus group members' ease of understanding), the intentions that drive these features have also been extracted to some extent from the interview transcripts (summarized in Table 3). It is notable that many of the intentions (spirit attributes in Table 3) among educators in higher education institutions (HEIs) reflect what have previously been documented and described as educators' strategies in BIM for construction education [2]. These include integrative teaching, promoting active learning or constructivist education, promoting accessible education, and creating adaptive and personalized learning experiences. Further spirit attributes (intentions) captured included collaboration and (project process) improvements, both of which appear to reflect current intentions (particularly relating to BIM adoption) within the AEC-FM industry, thus reinforcing the notion that the recommended structures (both structural features and spirit) for the proposed BLE are indeed selected structures appropriated from existing AITs and organizational environments with which the interviewees were familiar. This is illustrated in Figure 5: concept map showing the sources of structures appropriated to define the BLE.

**Figure 5.** Concept map showing the sources of structures in a BLE.

The notion of appropriation of structures from existing AITs and organizational environments, in itself, is a useful insight for the further development of the BLE as it may be thought of as representing an integration of these AITs and environments. This phenomenon of adapting available resources underscores the need to have a defined structural starting point that will promote the delivery of BIM-enabled education in an efficient way. The development of a prototype BLE on this basis will enable a new pedagogical strategy capable of increasing students' motivation by presenting a more inclusive and sophisticated view of any AEC-FM BIM-related topic or course. Going forward, the defined structures must now inform technical system design in order to develop a prototype BLE.

Whereas DeSanctis and Poole [6] originally designed AST to assess and evaluate the outcomes of AIT use in social settings, this study has shown how it can also be employed to define an AIT (the BLE) in terms of the desired social structures (structural features and spirit) that the proposed AIT is intended to enable. We have also found AST to be a useful theoretical lens through which to interpret and understand the emergent BLE definition that has been derived. Once the BLE is developed, even in prototype form, it will be possible, and it is intended, to deploy the full AST approach to investigate how it is used by (different) social groups and thus evaluate its effectiveness in delivering BIM-enabled learning.

Regarding the limitations of this study, it should be noted that we have concentrated on defining the structural features and the spirit of a BLE using a structured set of interview questions among a few interviewees and respondents in three European countries. While we consider the findings robust, they are geographically and developmentally specific, and a larger, more geographically dispersed sample size would be beneficial for a more comprehensive identification and definition of the structures of a BLE, particularly if it were to be utilized in non-European contexts.

#### **5. Conclusions**

The digitalization of the AEC-FM industry has resulted in a demand for the reassessment of knowledge, knowledge management, teaching and learning, workflows and networks, individual roles, and relevance. Consequently, new teaching and learning platforms to cater to the requirements of new jobs and abilities, new channels of communication, and a new awareness are all required. BIM is a central feature of this digitalization, and it also offers opportunities to address some of the current challenges through BIM-enabled education and training. While BIM has become standard in industry, it is still being determined how it can be fully leveraged in training and education. To facilitate BIM-enabled learning, a platform—the BIM-enabled Learning Environment (BLE)—through which new and existing BIM-enabled approaches can be conveniently deployed for teaching and learning activities in the AEC-FM disciplines is needed.

This study aimed to define the characteristics of the proposed BLE. Within an exploratory sequential mixed-methods approach, preliminary data were collected through the qualitative analysis of three case studies as well as a study of the academic and grey literature. This led to a series of 31 semi-structured interviews being carried out in three European countries (Estonia, Finland, and Italy). A qualitative, grounded-theory-inspired content analysis of the interview transcripts was applied to identify and isolate the desired functionalities of the BLE and the broader intentions behind these functionalities. The identified and isolated features of the BLE were then validated and added to in a focus group validation exercise using a quantitative questionnaire with a Likert-type scale for importance ranking. Thus, a comprehensive list of BLE features was defined and validated, and each feature's ranking in terms of its relative importance was determined. In addition, the general intentions underlying the set of identified features were described.

Adaptive Structuration Theory (AST) was applied as a theoretical lens through which to interpret and understand the emergent findings in terms of the BLE's structural features (functionalities) and spirit (intentions behind the recommended functionalities). While, to the authors' knowledge, the application of AST for the design of an Advanced Information Technology (AIT) (the BLE) is a first, the AST lens did enable us to appreciate that the structures of the proposed BLE (its structural features and spirit) were not new in themselves but were rather being appropriated from other, existing AITs (BIM, virtual collaboration technologies, and VLE platforms) with which the interview participants were already familiar. In addition, and, in a sense, providing the sources of structure to the existing AITs, structures were also appropriated from the organizational environments that the participants came from. These insights are valuable in taking forward the development of the BLE into an actual, usable prototype as they suggest the functional integration of features from three defined AIT sources. The AST framework also provides a sound basis for future investigations of the BLE in use—which would be the typical application of the AST framework to study AIT use in a given social/organizational context.

Plainly, there are remaining challenges and doubts among stakeholders about how best to implement the BLE in training and whether the new processes will be worth the effort. This skepticism is understandable when we remember that change is turbulent and not easily embraced by all. The situation becomes more complicated when trainers envisage putting in disproportionate additional effort to bring a new learning style to bear. However, the development of an easy-to-use, open, and accessible platform with a repository of example BIM-enabled exercises is one way this skepticism could be addressed.

The findings of this study have a wide range of implications for both theory and practice and in guiding future research direction. First and foremost, from a practical point of view, it provides the basis for the actual development of a prototype BLE. It also provides decision makers in software development organizations (especially those relating to the development of BIM applications for industry) insights and improvement opportunities to develop products that can be more easily integrated into AEC-FM education. Additionally, educational policy decision makers at relevant governmental levels should consider promoting more collaboration between developers of technologies for industry, users of technology, and educators/trainers—not only from the point of view of preparing industry workers with appropriate technology knowledge and skills but also in order to maximize the degree to which the technologies can be used to enhance education and training. Future research will focus on the development of a prototype BLE and its subsequent evaluation in use.


**Author Contributions:** Conceptualization, T.O. and E.W.; methodology, T.O. and E.W.; formal analysis, T.O., C.M., T.T. and E.W.; writing—original draft preparation, T.O.; writing—review and editing, E.W.; visualization, T.O. and E.W.; supervision, E.W. and I.L.; project administration, E.W. and I.L.; funding acquisition, E.W. and I.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by the BIM-enabled Learning Environment for Digital Construction (BENEDICT) project (grant number: 2020-1-EE01-KA203-077993), Integrating Education with Consumer Behavior relevant to Energy Efficiency and Climate Change at the Universities of Russia, Sri Lanka, and Bangladesh (BECK) project (grant number: 598746-EPP-1-2018-1-LT-EPPKA2-CBHE-JP), Building Resilience in Tropical Agro-Ecosystems (BRITAE) project (grant number: 610012-EPP-1- 2019-1-LK-EPPKA2-CBHE-JP) and Strengthening University-Enterprise Collaboration for Resilient Communities in Asia (SECRA) project (grant number: 619022-EPP-1-2020-1-SE-EPPKA2-CBHE-JP) all co-funded by the Erasmus Programme of the European Union. The European Commission support to produce this publication does not constitute an endorsement of the contents which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


### *Article* **An Empirical Analysis of Barriers to Building Information Modelling (BIM) Implementation in Wood Construction Projects: Evidence from the Swedish Context**

**Lina Gharaibeh 1,\*, Sandra T. Matarneh 2, Kristina Eriksson 1 and Björn Lantz 1**


**Abstract:** Building information modelling is gradually being recognised by the architecture, engineering, construction, and operation industry as a valuable opportunity to increase the efficiency of the built environment. In the wood construction industry, BIM is becoming a necessity due to the industry's high level of prefabrication and its complex digital procedures involving wood sawing machines and sophisticated cuttings. However, the full implementation of BIM is still far from reality. The main objective of this paper is to explore the barriers affecting BIM implementation in the Swedish construction industry. First, an extensive literature review was conducted to extract barriers hindering the implementation of BIM in the construction industry. Second, barriers to the implementation of BIM in the wood construction industry in Sweden were extracted using the grounded theory methodology to analyse expert input on the phenomenon of low BIM implementation in the wood construction industry in Sweden. Thirty-four barriers were identified. The analysis also led to the development of a conceptual model recommending solutions to overcome the identified barriers and help maximise BIM implementation within the wood construction industry. Identifying the main barriers affecting BIM implementation is essential to guide organisational decisions and drive policy, particularly for governments that are considering articulating regulations to expand BIM implementation.

**Keywords:** building information modelling (BIM); wood construction; grounded theory

### **1. Introduction**

The adoption of building information modelling (BIM) is growing at an exponential rate [1]. The global market for BIM was valued at USD 5.4 billion in 2020, and it is expected to grow to USD 10.7 billion by 2026 [2]. BIM is gradually being recognised by the architecture, engineering, construction, and operation (AECO) industry as a valuable opportunity to increase the efficiency of the built environment [2]. The adoption of BIM in different areas provides promising opportunities; for example, in the scope of facility management, BIM can reduce the operation and maintenance costs by providing a unified data source for accurate information about the facility [1]; BIM can reduce building energy consumption and support energy analysis [3]; BIM can facilitate sustainability and Life Cycle Assessment (LCA) [4–6]; and BIM, with its visualisation tools and 4D and 5D capabilities, can support scheduling and budgeting activities [7,8], increase quality [9], and reduce occupational risks [10].

One of the primary factors significantly driving market expansion is the rapidly expanding construction sector, substantial technology improvements, and government initiatives to mandate BIM implementation [4,5]. BIM is hailed as the answer to open communication, cost cutting, and energy efficiency. BIM is making its way from trendy ephemera to national legislation in many countries, such as the UK. In fact, the UK government has mandated the utilisation of BIM in every government construction project [6]. Europe is another key player in the global BIM movement. There is a clear push to achieve a fairer understanding of the practice. For example, the French government established a construction research and development initiative to support the establishment of BIM standards in infrastructure development [7]. As a result, the French BIM roadmap was established, and the country mandated BIM. Another European country that has adopted a national BIM strategy is Germany. This was accomplished by standardising methods and advocating for BIM to be made a requirement in public infrastructure projects [8]. For more than a decade, Nordic countries have been implementing BIM in both the public and private sectors [11]. The requirement originates from initiatives in which there is a lack of clear communication across construction project phases [2,10].

**Citation:** Gharaibeh, L.; Matarneh, S.T.; Eriksson, K.; Lantz, B. An Empirical Analysis of Barriers to Building Information Modelling (BIM) Implementation in Wood Construction Projects: Evidence from the Swedish Context. *Buildings* **2022**, *12*, 1067. https://doi.org/10.3390/buildings12081067

Academic Editor: Fahim Ullah

Received: 22 June 2022; Accepted: 20 July 2022; Published: 22 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

As for Sweden, in the absence of official government mandates for BIM, the shift towards implementing BIM in the Swedish construction industry is led by pioneers of the industry. The early steps towards BIM implementation were taken by representatives from several companies forming a BIM alliance or BIM network [12]. In this network, the focus was to pave the way for BIM implementation in Sweden by addressing barriers and promoting the use of BIM. The shift towards BIM is ultimately receiving more attention; however, as in many other countries, old approaches are still practised, and the full potential of BIM is yet to be achieved [2,10,13], meaning there are still barriers preventing and delaying the full implementation of BIM.

Focusing on the wood construction industry, BIM is becoming a necessity; this is due to its high level of prefabrication and complex digital procedures using wood sawing machines and sophisticated cuttings [11,14]. Currently, 3D models for machine production are being used to some extent, and digitalisation and automation of prefabrication are constantly developing [14]. However, the full implementation of BIM is still far from reality [15,16]. The move towards BIM is evident in the wood construction industry in Sweden in general. Nevertheless, it is still not fully implemented, and the level of BIM implementation in the wood construction industry is considered lower than in some other developed countries [17,18]. This emphasises the necessity for identifying and understanding barriers that are hindering full BIM implementation and paving the way for solutions to overcome these barriers.

The main aim of this paper is to explore the barriers affecting BIM implementation in the Swedish construction industry. These barriers are identified as issues that are preventing a large segment of industry practitioners from shifting towards a BIM-enabled project environment or are hindering the full implementation of BIM in the optimum way to realise the potential that BIM offers for the industry. This research focuses on the wood construction industry in Sweden, and as such, this research can guide the industry to generate more practical and effective BIM application strategies, thus increasing the level of BIM implementation in Sweden. To achieve this, firstly, an extensive systematic literature review was conducted to extract barriers hindering the implementation of BIM in the construction industry in general, considering the lack of literature focusing on BIM implementation in the wood construction industry. Secondly, barriers to the implementation of BIM in the wood construction industry in Sweden were extracted using the grounded theory (GT) methodology to analyse expert input on the phenomenon of low BIM implementation in the wood construction industry in Sweden. The barriers from both grounded theory and the literature review are summarised to identify the final main BIM implementation barriers. Based on the research findings and discussions, recommendations and proposals for supporting the implementation of BIM are proposed and discussed. This research offers a valuable starting point for further research to facilitate and increase the level of BIM implementation in the wood construction industry by scrutinising the main barriers that are currently preventing the full implementation of BIM and highlighting proposed solutions to overcome these barriers.

The research is organised as follows. Section 2 outlines the systematic literature review conducted to extract and understand the barriers hindering BIM implementation in the wood construction industry and highlights the research gaps and limitations in this area. Section 3 illustrates the methodology used in this study, including the analytical framework, data collection and analysis, and grounded theory. Section 4 presents and discusses the barriers identified from GT, complementing the literature review findings, with a focus on the wood construction industry, in addition to recommendations concerning possible application issues. Conclusions and opportunities for future research are offered in Section 5.

#### **2. Literature Review**

As technology has advanced in recent years, the use of BIM has proven to be an innovative, useful, and effective factor in the development of sustainable projects [19–21]. The benefits of BIM have been recognised particularly in design, performance assessment, visualisation, management, and, more recently, the operations and maintenance of projects [22–24]. However, despite the continuous increase in the use of BIM, the anticipated return on the investment has not yet been realised [25,26]. Advocates of BIM argue that proper implementation holds benefits and value that far surpass the initial cost of investment and that, to date, the envisioned implementation of BIM has not been achieved [27].

In recent years, several initiatives concerning BIM implementation have been launched in several countries [20]; similarly, numerous studies in the literature have sought to create suitable strategies for BIM implementation [28,29]. This ongoing and increasing interest from both academia and industry proves that the proper and full implementation of BIM is still far from reality, and barriers to full implementation still exist.

#### *2.1. BIM Implementation Barriers*

The literature review critically examined articles focusing on BIM implementation barriers, several of which examined the barriers in a selected country or region. Research undertaken by Daneshvartarigh [27] and Rossi [30] identified barriers to BIM implementation with a focus on the European region. Similar studies have been conducted in Canada [31], the UK [32–34], New Zealand [35], Poland [36], and the Middle East [28,37,38].

Other studies tackled BIM implementation barriers with a focus on a specific aspect. A study by Saltarén and Gutierrez-Bucheli [39] examined barriers affecting BIM implementation in public infrastructure projects, while other scholars identified barriers related to BIM implementation in facility management activities [6,29,40]. Some recent research focused on investigating barriers to integrating BIM in building sustainability assessment [6,41–44], BIM for smart building energy and efficiency [31], BIM for prefabricated construction [45–47], BIM in renovation processes [27], and BIM for industrialised building construction [48].

The utilisation of BIM for prefabrication and modular construction has also received attention, and barriers concerning this subject were examined in several studies [6,42,43].

The examination of previous literature revealed an obvious lack of studies focusing on identifying the level of BIM implementation in Sweden. Only one paper was found that investigates the current use, barriers, and driving forces of BIM implementation among mid-sized contractors; its authors concluded that the main barriers are a lack of demand from clients and the absence of internal demand in companies [49]. Furthermore, the literature review conducted in this study revealed a lack of interest in exploring the barriers hindering the expansion of BIM implementation in the wood construction industry in general and in Sweden specifically. Few articles have studied BIM adoption in the wood construction industry. For example, [50] investigated BIM implementation barriers, strategies, and best practices in wood prefabrication for SMEs in Canada. One of the main barriers was related to the effort needed to create BIM software libraries and the programs required to exchange information between BIM models and production equipment.

Table 1 illustrates the BIM implementation barriers as identified from examining and analysing the recent literature. The articles used for extracting the barriers were selected based on two refining criteria: (1) recency, as this research aimed to capture the current position of the construction industry, considering that the state of the art in BIM-related research is moving at a fast pace; and (2) relevance, whereby barriers were extracted from research focusing mainly on developed countries. As this research focuses on the Swedish construction industry, it is logical to consider barriers from similarly developed countries, as the level of income plays a major role in a country's capability to adopt the latest technologies while encountering fewer difficulties than developing and low-income countries.

**Table 1.** Barriers to BIM implementation as identified from the literature.


#### *2.2. Research Gaps and Limitations*

Previous literature has revealed that BIM is not fully utilised and adopted in the industry, despite the realisation of the potential and opportunities that BIM offers for construction projects [17,23,29,51]. The existing literature also revealed an obvious lack of studies focusing on identifying the level of BIM implementation in the wood construction industry in general and in Sweden in particular. Table 2 summarises selections of similar studies and highlights the gaps and limitations in the currently available literature.

**Table 2.** Relevant literature focus and limitation.


In view of the limitations presented in Table 2 and considering the lack of research focusing on wood construction in particular, further research is needed, especially to address certain gaps: (1) to identify barriers that are limiting BIM implementation in the wood construction industry and (2) to identify solutions to overcome these barriers, which will influence the level of BIM implementation in the wood construction industry.

Accordingly, this research aimed to identify barriers that are hindering the full implementation of BIM in wood construction projects in Sweden by investigating the phenomenon of low BIM utilisation in the wood construction industry in Sweden. The research adopted the grounded theory methodology for the theoretical identification of the factors that are causing this phenomenon.

#### **3. Research Methods**

Technology adoption in general is affected by several factors and complex interactions between project stakeholders. The level of BIM implementation in construction firms is impacted by various barriers and difficulties, and since this research aimed to identify these barriers from experts and practitioners in the industry, adopting a qualitative theory-building methodology was deemed appropriate for systematically identifying and interpreting them [22]. Achieving this aim requires deep insight into the central phenomenon and its causal factors. Accordingly, the research adopted a qualitative approach, which started by gaining a holistic understanding of the most recent barriers to BIM implementation in developed countries through a systematic literature review covering articles from the last four years (2019–2022) in the Scopus database.
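As a minimal sketch, the screening step described above (recency within 2019–2022 and relevance to developed countries) can be expressed as a filter over article records; the records and field names below are hypothetical and do not represent the actual review dataset.

```python
# Hypothetical article records; a real review would screen Scopus exports.
articles = [
    {"title": "BIM barriers in the UK construction sector", "year": 2021, "developed_country": True},
    {"title": "Early BIM adoption survey", "year": 2014, "developed_country": True},
    {"title": "BIM uptake in a developing region", "year": 2022, "developed_country": False},
]

selected = [
    a for a in articles
    if 2019 <= a["year"] <= 2022   # criterion 1: recency
    and a["developed_country"]     # criterion 2: relevance
]

print([a["title"] for a in selected])
```

Only the first record passes both criteria; the 2014 article fails the recency window, and the third fails the relevance criterion.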

Considering the lack of research available on the subject, this study had few prior theoretical assumptions to draw on. Thus, it was essential to start with actual observations, summarise experiences from the original data, and then build theory to identify barriers to BIM implementation in the wood construction industry. The grounded theory method is considered to be the most scientific qualitative research method [22]. Its primary goal as a qualitative research technique is to develop a theory based on empirical evidence. The foundation of the grounded theory approach is the continuous use of the comparison principle throughout the theory-building process: researchers draw comparisons from the first set of data to trigger thought and to understand the key features of research phenomena fully and succinctly, and data can be sorted using comparison to determine the relationships between different phenomena [56]. Hence, this exploratory topic is well suited to the grounded theory method. Grounded theory is a bottom-up method used to develop a substantive theory: it examines fundamental ideas that capture the phenomena through a systematic data collection process and then builds pertinent theories by connecting these ideas. However, the major advantage of grounded theory is not that it is empirical in nature but rather that it abstracts new concepts and ideas from empirical facts [17].

Grounded theory has been broadly used in construction management and engineering science domains, such as identifying project performance risks. Shi et al. (2022) adopted grounded theory in research to identify planning risks in prefabricated buildings [57]. Grounded theory was found to be suitable for engineering studies that adopt qualitative methods and investigate behaviours or traits in industries, such as safety performance precautions [58] and maintenance management [24].

#### *3.1. Research Framework*

The research commenced with a literature review to establish a strong foundation for this research and to direct the research towards bridging the research gaps in the current state of the art. The systematic literature review analysed related studies on the subject of BIM implementation barriers in the construction industry to extract those gaps and to formulate the research question that would later be the basis for the grounded theory methodology. The research framework utilises two methods of data collection: the first is the literature review, which uncovers the barriers to BIM implementation in the construction industry in general, and the second method is grounded theory, which is aimed at extracting new barrier factors that are specific to the wood construction industry in Sweden from expert interviews. This research then recaps the research findings and provides recommendations to stimulate the implementation of BIM in the wood construction industry in Sweden. Figure 1 illustrates the research framework.

#### *3.2. Grounded Theory (GT)*

Grounded theory qualitative research intends to investigate the complex set of factors associated with the central phenomenon and to describe participants' perspectives regarding these factors. In typical GT research, a main open question is identified, rather than an objective or a hypothesis [59]. Several sub-questions should follow the main open question. The questions and the setting of the data collection have an exploratory nature to allow factors to emerge and develop around the main phenomenon. This research investigated the phenomenon of the implementation of BIM in wood construction projects in Sweden, and the GT method was employed to answer the open research question, which is: What are the barriers hindering the full implementation of BIM in the wood construction industry? The research employed the qualitative method of open-ended semi-structured interviews to collect data from industry practitioners around the identified phenomenon.

#### *3.3. Data Collection*

Interviewing is the most commonly utilised data collection method in grounded theory [54,59]. The interviews are designed to build concepts and theory and to allow data to emerge spontaneously until the extracted facts are "grounded" in the analysis [59]. This research utilised semi-structured open-ended interviews following a qualitative research strategy [54]. This method was found to be suitable for collecting experts' perceptions surrounding the identified phenomenon. Semi-structured interviews with industry practitioners were conducted to identify the barriers to BIM implementation in the wood construction industry. The interviews led to a better understanding of the phenomena [54] and to gathering truths about the reality of BIM implementation [59].

#### 3.3.1. Participant Sampling

In GT, participant selection is considered iterative [56]. The selection began with theoretical sampling, meaning that a small group of participants was selected loosely based on the initial research question. This loose selection was conducted using purposive and convenience sampling methods [60]. Purposive sampling means that participants were selected based on predefined criteria; in this study, the criteria included participants from the Swedish construction industry and Swedish academics from construction-related fields.

Moreover, participants were selected from a database related to the Tillverka i trä (Wood Manufacturing) project, which involves several Swedish companies and organisations and focuses on wood industry innovation and is thus of strong relevance for this study [61]. The selection from a defined database relates to the convenience sampling method, where easy access to participants was needed (Bryman and Bell, 2015). The interviews were conducted in steps: at the end of each group of interviews, the data were analysed, and the following group of interviews was determined. In GT, data collection and analysis continue until additional excerpts no longer add to the coding categories, which is referred to as "theoretical saturation" [55].

#### 3.3.2. First Round of Data Collection

Twelve interviews were conducted in the first stage with representatives from industry and academia. The academic point of view was included to assess the gap between research and industry practice in areas related to BIM implementation. The first round of interviews was transcribed, and the analysis was initiated using open coding, the process of breaking the transcripts into excerpts. These excerpts are then continuously compared and contrasted in what is known as the "constant comparative method", the core of the grounded theory method [55]. The process was conducted in two rounds: the second round of interviews involved six participants, bringing the total number of interviews to eighteen. The descriptive data of the participants are presented in Table 3. The average duration of the interviews was 45 min, and the open-ended questions covered several themes focusing on knowledge of BIM, the use of BIM within the company, and the difficulties and constraints preventing the implementation of BIM.

**Table 3.** Descriptive data of participants.


The participants were asked to provide answers to the core research question and describe the main difficulties or barriers that are preventing the full implementation of BIM in their working environments. Then, based on their answers, participants were presented with several open-ended sub-questions, such as: How do you see the level of BIM knowledge at your company? How sufficient are the current BIM libraries regarding wood objects? What are the data formats that are used to exchange information in your projects? These questions, among others, were used to urge the participants to tackle a broader view of the barriers within their own work environment.

#### *3.4. Data Analysis*

The data analysis was initiated by breaking the transcripts of the interviews into excerpts using open coding. NVivo software was used in this research to create "nodes" to encapsulate these groups of excerpts. Coding is essential for successfully implementing a grounded theory methodology [55], as coding provides the link between collecting the data and developing the evolving theory to derive explanations for the defined phenomenon. This research followed the coding process offered by grounded theory, starting with the initial coding, followed by more focused coding, and finishing with theoretical coding [57].

At first, twelve interviews were conducted. The transcriptions were performed immediately after each interview to ensure the quality of the extracted data. The transcripts of the first batch of interviews were analysed, coded, and constantly compared. This act of comparison is an essential part of the grounded theory method and is known as the constant comparative method [57], where excerpts of raw data are sorted and organised into groups according to attributes in a structured way to formulate a new theory. The process of coding in GT is carried out in three successive stages, which are initial, axial, and theoretical coding. Figure 2 illustrates the coding map followed in the data analysis.

**Figure 2.** Coding map: codes, groups, and categories.

#### 3.4.1. Initial Coding (Open Coding)

Initial coding is an inductive activity. The purpose of this stage was to inductively generate as many ideas as possible from the raw data. Several nodes were identified, and preliminary categories were established. This stage also involved deriving patterns and searching for similarities. Several factors for low BIM implementation were mentioned by participants; these factors were initially classified and grouped into several nodes based on similarities and connections using various criteria for classifications.

The initial coding assigned a code (B#) to any barrier mentioned in the transcripts. The same code was given to similar factors; for example, if one factor was mentioned by several participants using different words, these factors were given the same code to avoid duplication. The initial coding resulted in the identification of 35 barriers, coded as B1 through B35.
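As an illustration of this deduplicating assignment, the open-coding step can be sketched as follows. The phrases, codes, and exact-match rule are hypothetical simplifications; the actual coding was performed qualitatively in NVivo, not by string matching.

```python
# Hypothetical codebook: excerpts describing the same barrier in different
# words share one code (B#); a genuinely new barrier mints the next free code.
CODEBOOK = {
    "software too expensive": "B1",
    "high licence costs": "B1",          # same barrier, different wording
    "no wood object libraries": "B2",
    "ifc geometry quality is low": "B3",
}

def assign_code(excerpt: str, codebook: dict) -> str:
    """Return the existing code for a known phrasing, or mint a new one."""
    key = excerpt.lower().strip()
    if key not in codebook:
        # Count distinct codes, not phrasings, so synonyms do not inflate B#.
        codebook[key] = f"B{len(set(codebook.values())) + 1}"
    return codebook[key]
```

For example, `assign_code("High licence costs", CODEBOOK)` reuses `B1`, whereas an unseen phrasing such as a government-mandate barrier would receive the next code, `B4`.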

#### 3.4.2. Focused Coding (Axial Coding)

During the axial coding stage, decisions were made regarding the initial codes based on their prevalence or importance and on the extent of their contribution to the analysis. The factors were highlighted based on their importance and the frequency at which they were mentioned. The factors or barriers were also analysed in conjunction with other aspects, such as the background of the participant, the type of company, and the type of project. During axial coding, comparisons between codes were constantly made to find similarities and connections and to group factors.

Axial coding resulted in grouping the barriers into eight groups (G1–G8) based on the causes that these barriers stemmed from. The groups are shown in Figure 2. The number or frequency at which each barrier was mentioned during the interviews is shown in the figure as a number next to each barrier.
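The bookkeeping behind this grouping step can be sketched as counting how often each coded barrier appears across transcripts and then grouping barriers by their underlying cause. The codes, group labels, and transcripts below are hypothetical examples, not the study's data.

```python
from collections import Counter

# Hypothetical data: barrier codes mentioned in each interview transcript.
transcripts = [
    ["B1", "B2"],
    ["B1", "B3", "B2"],
    ["B2"],
]

# Hypothetical mapping from barrier code to cause-based group (G1-G8 in the paper).
groups = {
    "B1": "G2 Resources and cost",
    "B2": "G1 IT and software",
    "B3": "G1 IT and software",
}

# Mention frequency per barrier, as annotated next to each barrier in Figure 2.
frequency = Counter(code for t in transcripts for code in t)

# Group (code, count) pairs by cause.
by_group = {}
for code, count in frequency.items():
    by_group.setdefault(groups[code], []).append((code, count))
```

Here `frequency["B2"]` is 3 (mentioned in all three transcripts), and the "G1 IT and software" group collects both B2 and B3.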

#### 3.4.3. Theoretical Coding (Selective Coding)

In theoretical coding, the factors were refined into final categories in the theory, and relationships were drawn. The barriers at this stage were clearly defined and classified, and the data were considered ready to derive analysis results. Broader groupings and nodes were deduced at this stage. Selective coding produced three main categories for the barriers, as shown in Figure 2, which are C1: technology- and resource-related barriers; C2: people-related barriers; and C3: process-related barriers.

#### 3.4.4. Second Round of Data Collection—Theoretical Saturation

After the first round of analysis and coding was finished, six more interviews were conducted, transcribed, and coded using the predefined codes from the first round. The coding in the two stages was carried out by different researchers to ensure the validity of the coding process. The aim of conducting the analysis in GT in multiple stages is to achieve theoretical saturation [57]. The first round of interviews was coded and analysed, and initial barriers were identified. In GT, several iterations are conducted using the same codes. After analysing the second round of interviews, it was noticed that all identified barriers already existed in the defined codes, meaning that the additional transcripts did not expand upon the previously analysed codes; hence, the mentioned barriers and factors were already identified, and the coding process was theoretically saturated.
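The saturation criterion described above can be sketched as a simple set check: the second round is coded against the first round's codebook, and saturation holds when no new codes are needed. The code sets below are hypothetical.

```python
def is_saturated(existing_codes: set, new_round_codes: set) -> bool:
    """True when the new round introduces no codes beyond the existing ones."""
    return new_round_codes <= existing_codes

round_one = {f"B{i}" for i in range(1, 36)}   # the 35 barriers from round one
round_two = {"B3", "B12", "B27"}              # hypothetical: all already known
```

With these sets, `is_saturated(round_one, round_two)` holds; had the second round yielded any code outside B1–B35, saturation would not have been reached and further interviews would be required.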

#### *3.5. Theory Building*

After theoretical coding and achieving theoretical saturation, the analysis comprehensively identified barriers that are hindering the full implementation of BIM in the wood construction industry. This identification of factors and development of concepts in GT is known as the storyline method [57]. This storyline interpreted how companies are still facing barriers and difficulties that are limiting the move of the industry towards achieving the full potential of BIM in wood construction projects in Sweden. Participants in the interviews also shared possible solutions and suggestions to overcome these barriers, which complement the findings of the literature review and helped to derive and map the framework for BIM implementation barriers and the proposed solutions that this research recommends for the wood construction industry in Sweden.

#### **4. Results and Discussion**

The analysis of the interviews provided insights about the participants' perceptions of barriers that are hindering the implementation of BIM in the wood construction industry. Using the GT methodology, 35 barriers were identified and grouped into eight groups under three main categories.

#### *4.1. Comparison between GT Results and Literature Review Results*

The examination of the identified barriers compared with the results of the literature review revealed many similarities. Barriers related to resources and cost (G2) [36,49,55,62,63], skills and knowledge (G3) [9,21,29,31], human behaviour (G5) [44,49,55], and governmental barriers (G7) [21,31,49] were all previously identified in similar studies in the literature review, as shown in Table 1. Other barriers mentioned by participants were found to relate to the wood industry in particular, such as interoperability issues [46,64] and insufficient object libraries (G1) [50].

Although similar barriers have been identified in the literature concerning the construction industry in general, wood construction practitioners highlighted that interoperability issues between BIM software and wood manufacturing and fabrication still exist. Efforts are being made to create a smoother information flow between design, manufacturing, and construction, including a seamless, direct flow of information between 3D models and CNC machines. However, several participants mentioned that manufacturers are still required to produce their own drawings and fabrication details, with a significant amount of rework in most cases, and that the majority of wood fabricators still rely on 2D drawings for their cutting machines. Moreover, several participants stated that the available object libraries for wood items are not sufficient. In particular, because wood objects require a high level of detail, participants still need to produce their own objects in some cases, which requires time and effort; accordingly, in some projects the decision is made to create enlargements manually in CAD rather than adding objects to the BIM model.

Other barriers related to industry culture were identified (G4). As the wood construction supply chain involves wood manufacturers and suppliers, the design and manufacturing integration process requires greater collaboration and introduces more challenges to the implementation of BIM, since manufacturers and wood fabricators work with different formats and software than the consultants involved in the design phase. Participants also mentioned barriers concerning the sizes and types of businesses in the wood construction industry: most wood construction companies in Sweden are small family-owned firms, following processes that have been in place for years, and motivation for change is lower than in larger multinational firms. These eight groups of barriers are discussed in this section, and their corresponding codes are examined inductively.

#### *4.2. Technology- and Resource-Related Barriers (C1)*

#### 4.2.1. G1: Information Technology and Software

Barriers related to IT and software were mentioned by most participants. Interoperability issues are still being faced, whether between different software packages or between project teams and stakeholders. Several data formats are still being used to exchange information, which results in data loss and wasted time. Some participants argued that BIM is not adequately suited to some disciplines due to its lack of capability to perform specific tasks; one participant who works with structural analysis stated that IFC has low-quality geometry and that they find other formats and tools more suitable. Interestingly, some participants claimed that, in some cases, such as small unique projects, traditional methods can be more effective, such as sketching and reusing typical details from previous projects in AutoCAD. As the vision of BIM requires that everyone be on board, the reluctance of one project team to implement BIM leads to missing information in the BIM model and reduced usage of the model. Efforts are still needed to address the IT-related issues of BIM: a seamless flow of data between project phases and along the supply chain will increase the effectiveness of the project, and better standardisation of data formats and guidelines is needed to overcome the issues of differing formats and software between project stakeholders. The wood construction industry will benefit from solving interoperability issues between BIM models and CNC machines, and efforts should be made to enhance the available libraries of wood objects, leading to more informative BIM models in wood projects.

#### 4.2.2. G2: Resources and Cost

Participants disagreed about the initial cost of BIM, its running costs, and whether this cost is worth the investment. BIM requires introducing new software, such as Revit, Tekla, and Navisworks, which can replace or be used in addition to traditionally used software, such as 2D CAD. Starting to use new software imposes changes on many levels, such as server capacity, device specifications, new licences, and IT personnel capabilities. Additionally, training to use the new software is inevitable and requires additional time and cost. The introduction of new software also involves changes to processes and ways of working, and transition periods can be messy and costly. Other participants reported that proper implementation of BIM adds time to the work schedule: new time-consuming activities are needed, such as adding objects and specifications to the model, and if a fully detailed model is not requested by the client, designers will often disregard such details. We conclude that evidence of BIM's feasibility is required and that more research should be conducted to measure the return on investment in BIM at the level of both projects and organisational performance. Proof of the benefits of BIM for saving time and cost is needed to make it more appealing for investors to take the step.

#### 4.2.3. G3: Skills and Knowledge

Any company that decides to shift its processes towards implementing BIM will need to hire new specialised staff, train its existing staff, or both. Some participants mentioned that proper BIM training can be hard to find, as can skilled technicians.

#### *4.3. People-Related Barriers (C2)*

#### 4.3.1. G4: Construction Industry Culture

The construction industry has been criticised by participants for being fragmented and reluctant to change. Moreover, the industry profit margin is sensitive, and companies will avoid any risk when it comes to cost. The industry is resistant to change, construction contracts are still demanding 2D drawings as a project deliverable, and the lack of demand for BIM from clients is still noticeable. The industry needs to realise the benefits that BIM offers for decision makers to change their mindsets and accept the change. Accepting a BIM model as a legal project document in the contract might reduce the need for producing 2D drawings in every project.

#### 4.3.2. G5: Human Behaviour

Similar to industry culture barriers, a change in the mindsets of human resources in the industry is needed. Taking the lead towards change needs to come from management, and resistance to change from all people involved should be reduced by increasing the knowledge about BIM and developing the skills of human resources, which will reduce the fear of risk and encourage people to take steps towards new technologies and development.

#### *4.4. Process-Related Barriers (C3)*

#### 4.4.1. G6: Organisational

At an organisational level, some barriers were attributed to the fact that Swedish wood construction companies are relatively small and, in most cases, have been family-owned for generations. Barriers such as lack of leadership and reluctance to change can have a greater effect in small companies. Some participants claimed that the ownership of the project can itself be a barrier to BIM implementation, a norm seen in wood construction projects where the entire project process is handled in-house. When the design, production, and construction of a project are handled by the same company, the need for integration processes and advanced communication systems becomes less pressing; accordingly, these companies are not very tempted to change their working methods to adopt BIM unless it is required by other stakeholders, such as clients or facility managers.

#### 4.4.2. G7: Governmental

Most participants stated that they were not aware of any BIM government mandates in Sweden, and there was disagreement among responses on whether government enforcement is required. The move towards BIM in the Swedish wood construction industry was initiated and supported by industry representatives and alliances with little governmental participation. However, the involvement of the public sector and official mandates and requirements from authorities, similar to other countries, can facilitate, speed up, and organise the move towards BIM. Other acts of government support can be funding BIM initiatives, providing training and technical support, and mandating BIM at least in public construction contracts.

#### 4.4.3. G8: Process and Guidelines

Unclear information requirements and unclear strategies were frequently mentioned barriers among participants. Several participants stated that, often, companies do not know how to start implementing BIM and that clear BIM implementation processes and guidelines are not available. A participant from the BIM alliance stated that current efforts are being put towards issuing BIM standards in Sweden, which is believed to address several issues related to data formats and requirements. The need for clear official standards is evident to achieve more efficiency in BIM implementation.

Other barriers concerning the construction process were mentioned by participants, such as not involving all stakeholders from the initial project stages, especially wood manufacturers and facility managers. Some participants revealed that the reason for this late involvement lies in procurement strategies and cost issues: suppliers are rarely identified at the concept stage, and thus, to avoid commitment to other parties, material vendors are in most cases involved only after the design is finalised, which entails abortive work and additional hours and effort from these stakeholders. On a similar note, several participants reported that one reason for not implementing BIM properly is that proper implementation requires increased effort in the preliminary stages of the project. The traditional method of project design usually starts with a concept stage comprising simple plans and sketches, where minimal cost and effort are expended, as the project budget is still unclear and negotiations are ongoing. Having to produce a BIM model with sufficient detail at this stage poses risks to budgets and moves cost from later stages to upfront in the cost plan.

The three categories and their corresponding barriers illustrated above are complementary and synergistic. The proper implementation of BIM will require overcoming barriers from the three categories simultaneously. Figure 3 illustrates a conceptual model showing the relationship between the three theoretical categories and the recommended solutions extracted from the research analysis.

#### 4.4.4. C1: Technology- and Resource-Related Barriers

The construction industry is known to be conservative when it comes to adopting new technologies, and the wood construction industry is no exception. The results of the research revealed that adopting BIM entails issues related to resources, systems, and costs; overcoming these barriers will pave the way for more companies to join the shift towards BIM. BIM utilisation might be initiated by hiring expert staff to set guidelines and work maps and to train the company's staff in new software and methods. The standardisation efforts under category C3 will help overcome barriers related to data exchange and requirements and provide a clearer path for newcomers.

#### 4.4.5. C2: People-Related Barriers

Overcoming barriers related to resources and information technology (C1) will facilitate overcoming people-related barriers. People are less likely to resist a change that they believe is efficient and necessary and will find the move toward BIM more appealing when they do not face technical and resource obstacles. Some participants suggested that companies move towards implementing BIM in small steps rather than shifting substantially all at once; this will lighten the burden and facilitate the transition period. The results of the research also showed a serious need for evidence of tangible outcomes of implementing BIM, and more research should focus on measuring the return on investment in BIM through actual case studies, which would encourage construction industry stakeholders to invest in BIM.

**Figure 3.** The proposed conceptual model to overcome BIM implementation barriers.

#### 4.4.6. C3: Process-Related Barriers

Several participants stressed the need for official BIM standards. The lack of standardised processes and guidelines is a main cause of inconsistent formats and interoperability issues (C1). Clear guidelines for BIM implementation are needed, among which a mandate to get every project party on board should be emphasised. Involving all stakeholders throughout the project lifecycle will minimise issues of resource allocation and duplicated effort (C1) and encourage more participants in the project supply chain to engage in the BIM process, which will increase the industry's level of BIM implementation in the long run (C2).

#### **5. Conclusions**

The level of BIM implementation varied among different businesses, and its implementation commenced at different maturity levels and stages, ranging from no utilisation at all to being used to some extent. Although many entities and organisations are striving towards excelling in BIM and are showing extraordinary interest in it, barriers still exist and are creating obstacles in the way forward.

The reluctance of the wood construction industry to adopt BIM is evident, and research on BIM implementation in wood construction is scarce. Additional research is needed to identify the reasons behind the slow adoption in the Swedish wood construction industry. Thus, the contributions offered by the present paper are twofold: first, barriers were identified through a robust review of research in the wood industry context and refined through semi-structured interviews with experts in Sweden; second, the study offers a road map (considering the identified barriers) for industry stakeholders to move toward more digitalised wood construction in Sweden. In addition to these contributions, the study highlights opportunities for decision makers to encourage the wider adoption of BIM in the wood construction sector. The study results show that interoperability issues, insufficient object libraries, lack of time, high initial cost, lack of knowledge, and resistance to change are the main issues that require the utmost attention. Thus, the following recommendations can be made in consideration of the barriers identified in this study:


Notwithstanding the contributions presented in this research, acknowledging its limitations is essential prior to deriving any conclusions from the findings. The results should be considered with caution, as they are based on the feedback of industry experts in Sweden and may therefore reflect operational and social particularities of Swedish wood construction practices. Future research could investigate barriers to BIM adoption in the wood construction field in other countries and compare findings. Additionally, to offer more generalisable findings, future researchers are recommended to explore the barriers to BIM adoption using probability sampling techniques. Finally, the root causes of the barriers, possible interconnections between them, and possible strategies to overcome them could be further explored in future research.

**Author Contributions:** Conceptualization, L.G., S.T.M., K.E. and B.L.; methodology, L.G. and S.T.M.; software, L.G. and S.T.M.; validation, L.G. and S.T.M.; formal analysis, L.G. and S.T.M.; investigation, L.G. and K.E.; resources, L.G., S.T.M. and K.E.; data curation, L.G. and K.E.; writing—original draft preparation, L.G. and S.T.M.; writing—review and editing, L.G., S.T.M., K.E. and B.L.; visualization, L.G. and S.T.M.; supervision, K.E. and B.L.; project administration, K.E. and B.L.; funding acquisition, none. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding, and the APC was funded by University West, Trollhättan, Sweden.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Key Adoption Factors for Collaborative Technologies and Barriers to Information Management in Construction Supply Chains: A System Dynamics Approach**

**Fawad Amin <sup>1</sup>, Khurram Iqbal Ahmad Khan <sup>1,</sup>\*, Fahim Ullah <sup>2</sup>, Muwaffaq Alqurashi <sup>3</sup> and Badr T. Alsulami <sup>4</sup>**


**Abstract:** Construction processes are complex and dynamic. Like its other components, the construction supply chain (CSC) involves multiple stakeholders requiring varying levels of information sharing. In addition, the intensity and diversity of information in CSCs require dexterous management. Studies reveal that information complexity can be reduced using collaborative technologies (CTs). However, the barriers to information management (IM) hinder the CTs' adoption process and cause complexity in CSCs. This research identifies barriers to IM and factors affecting the adoption of CTs in developing countries. In order to understand and address complexity, the system dynamics (SD) approach is adopted in this study. The aim is to investigate if SD can reduce information complexity using CTs. Causal loop diagrams (CLDs) were developed to understand the relationship between the IM barriers and CT adoption factors. The SD model, when simulated, highlighted three main components, i.e., complexity, top management support, and trust and cooperation, among others, as factors affecting the adoption of CTs. Addressing these factors will reduce information complexity and result in better IM in construction projects.

**Keywords:** construction supply chain; collaborative technologies; information complexity; information management; system dynamics

#### **1. Introduction**

The construction industry, like its counterparts, involves complexity due to its dynamic nature and ever-changing processes [1]. The construction supply chain (CSC) deals with managing material, information, and financial flows in a multi-stakeholder system. The key stakeholders include general contractors, subcontractors, and suppliers [2,3]. Among the construction process flows, information plays an essential role in benefiting enterprises and enabling supply chain integration [4]. Construction processes are information-centric, and the way the involved actors manage the associated information directly affects the performance of CSCs [5]. The construction industry has a temporary supply chain that changes from project to project. The large number of stakeholders involved requires information at each stage of the construction project [2].

The involvement of numerous stakeholders and other participants in the CSC has made it more complex than general supply chains [6]. The diversity and intensity of information in CSCs require careful management [7]. There are different barriers to information management (IM) in supply chains. Some examples include failure of information systems functionality, lack of information exchange, communication issues, lack of an IM system, lack of information availability, lack of information quality, implementation cost, and lack of leadership skills [8]. These barriers hinder the process by not allowing the information to be managed, processed, and communicated. Accordingly, these barriers induce information complexity throughout the supply chain [2]. The situation is further exacerbated in developing countries due to less sophisticated systems and reliance on traditional management approaches [2].

**Citation:** Amin, F.; Khan, K.I.A.; Ullah, F.; Alqurashi, M.; Alsulami, B.T. Key Adoption Factors for Collaborative Technologies and Barriers to Information Management in Construction Supply Chains: A System Dynamics Approach. *Buildings* **2022**, *12*, 766. https://doi.org/10.3390/buildings12060766

Academic Editor: Jurgita Antucheviciene

Received: 2 May 2022; Accepted: 1 June 2022; Published: 5 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Chen and Kamara [9] argued that the construction industry is information-intensive from initiation to execution. Efficient IM is an important competitive advantage in the construction industry because of the diverse and intense nature of its information [9]. With the use of information technology, IM can benefit CSCs. Further advances in information technology can enhance IM in the construction sector, specifically on construction project sites, by providing timely, relevant, and necessary information that helps key stakeholders make informed and improved decisions [10].

Recent developments in technology have enabled global construction organizations to access information easily on their premises. These technologies help manage supply chain information to ensure a smoother process. In addition, the introduction of different collaborative tools within construction projects supports the flow of information between different project members [10]. These working partners often operate from different locations, where two-way information flow is essential to support ongoing construction tasks. However, access to such information is usually restricted, particularly in developing countries. This can be attributed to multiple reasons, including lack of trust, poor data archiving, and rigid traditional management practices.

It can be argued that timely collection and dissemination of information to project teams would resolve risks and reduce unexpected construction problems [11]. Bertelsen [12] described the construction process as a complex system. Traditional planning and management approaches, however, assume that construction is an ordered, linear process [13,14]. The frequent delays in the completion of construction projects suggest that the process is not as predictable as it may appear or is assumed to be [10]. In fact, construction is a non-linear, complex, and dynamic process that requires sophisticated systems to manage it [10].

In order to address the CSCs' complexity, an SD approach is adopted in this study. SD is a tool used to enhance the learning of complex systems and facilitate the understanding of complex dynamic systems [15,16]. The approach addresses complexity and involves interactive modeling tools to represent feedback structures in complex non-linear systems [17]. The strength of the SD approach lies in breaking down complex systems into simpler and understandable sub-systems. The SD approach addresses complexity and process relationships based on non-linear feedback systems [1]. It employs causal loop diagrams (CLDs) to reveal the underlying causal feedback mechanisms [18,19]. Adopting SD can lead to a greater understanding of complex issues in processes such as CSCs.

Therefore, the current study focuses on finding a solution using the SD approach to address the information complexity and IM issues in CSCs. The research focuses on addressing information complexity using collaborative technologies (CTs) to improve the performance of CSCs.

The rest of the study is organized as follows. First, the study identifies barriers to IM in CSCs in the literature review section. It is followed by identifying factors affecting the adoption of CTs for IM in CSCs. The relationship between barriers and factors is used to derive a solution by finding out the factors acting as barriers to IM in the method section using various scoring mechanisms. The important factors are identified to minimize barriers to IM and discussed in the results section. Finally, the study is concluded, and limitations and future directions are presented. Overall, fewer barriers lead to less information complexity, leading to better IM in CSCs.

#### **2. Literature Review**

#### *2.1. The Need for Management of Information Flow*

Management of information flow plays an important role in the success of construction projects. This is because the construction environment is information-intensive, from its design offices to the working sites [20]. IM is the overall management and control of an organization's investment in information. This means that where good IM practice exists, information intended for decision-makers must be relevant, reliable, complete, and available when needed [5]. The core belief of CSCs is that proper IM brings value. However, this value can only be obtained when the information is used to improve the efficiency of people and systems for making informed decisions [2].

Chen and Kamara [9] state that construction projects involve several stakeholders, coordinating and collaborating for a short term to develop the required project. Design and construction are separate phases in the construction process. In the design phase, information in the form of the client's input is utilized to develop the final design of the project. In the construction phase, this information is required to be transferred to field personnel to construct the project or provide a service [9]. The construction phase is the most challenging phase in IM, with design teams, contractors, subcontractors, and suppliers. One of the principal causes of delays in on-site construction work is waiting for design information [9]. Construction personnel in the field require a large amount of information, such as project design and construction drawings, to support their ongoing work. However, the majority of information that is received by construction personnel onsite is paper-based [21]. This is especially evident in developing countries. The limitation of paper-based files has become a major constraint in managing on-site information and associated communication [9].

#### *2.2. Barriers to Information Management*

There are different barriers to IM that cause information complexity throughout the construction system. The first objective of this study is to identify these barriers from the existing literature. Therefore, a systematic literature review was conducted to identify the barriers to IM in CSCs. The review comprised retrieving and examining literature published from 2000 to date. Various sources, including books, research articles, and conference papers, were utilized to conduct a complete, comprehensive, and exhaustive review. The articles were retrieved from literary search engines including Google Scholar, Scopus, Web of Science, Emerald Insight, Taylor and Francis, American Society of Civil Engineers, Elsevier-Science Direct, Springer, and MDPI. This is in line with recently published articles on conducting literature reviews [22–24]. The review mechanism described in the referred articles was followed to identify the barriers to IM in this study.

#### *2.3. Collaboration Technologies*

A CT is a set of hardware and software that can provide communication support to participants and support them in using the technology for collaborations in projects [25]. CTs provide a centralized outlet for all construction-related documents, processes, and communications. All the relevant parties involved in the project must have the same up-to-date information and should provide their real-time inputs and updates to the CT to capitalize on its holistic benefits [25].

CTs have been introduced in construction projects to support the information flow between different project stakeholders. Different CTs are used in CSCs, such as electronic data interchange, mobile computing technologies, building information modeling, auto-identification using data capture technologies, and cloud computing [10]. Cloud computing is a recent, innovative, and cost-effective technology. The key aspects of cloud computing technology include low cost, simple use, and easy accessibility. It can make use of the existing IT infrastructure, which can be configured according to the requirements of the construction organization [1].

Other CTs include the disruptive big9 technologies, including drones, the internet of things, clouds, software as a service, big data, 3D scanning, wearable technologies, virtual and augmented realities, and artificial intelligence and robotics [26]. In addition, other CTs, such as machine learning, 3D printing, laser scanning, and blockchains, have also increased collaboration among construction project stakeholders [27,28].

Further, the SD approach can help address barriers to IM using CTs by taking inherent complexities into account. There are numerous CTs available; however, CSCs should adopt innovative and cost-effective technologies to address their inherent complexities.

#### *2.4. Factors Affecting the Adoption of Collaboration Technologies*

CTs are used to support information flow between different project stakeholders. However, several factors affect the adoption of such CTs. Therefore, the second objective of this study is to identify factors that affect the adoption of CTs for managing information in CSCs. Again, these factors are extracted from the published literature using an extensive literature review process.

#### *2.5. System Dynamics Approach*

The SD approach is used to handle and simulate complex systems. It provides tools for understanding the concept of complex systems such as CSCs [15,16]. The SD approach uses a perspective based on information feedback and delays to understand the dynamic behavior of complex systems [29]. The dynamic system has a certain internal structure that is affected by uncertain and complex external conditions. Its fundamental principle is to use system modeling, send the model to a computer, and verify its validity to provide a basis on which to work out a strategy and support decision-making [30]. Khan et al. [1] explained that the strength of the SD approach lies in breaking down complex systems into understandable sub-systems. Furthermore, the SD approach addresses the complexity of process relationships based on non-linear feedback systems. Therefore, it can help improve information flow and IM through CTs, ultimately leading to improved productivity in CSCs [1].

The SD approach imitates the human process of decision-making. Humans draw assumptions about various causes and effects of different components of a system, including their functions [31]. These assumptions are known as mental models that help make sense of the system. However, limitations as a part of the human mind often produce deficiencies, resulting in incomplete causal reasoning and misperceptions [31]. The SD approach addresses these deficiencies with explicit methods for representing, testing, and, ultimately, modifying mental models [31]. Various computer software platforms, such as Vensim, Powersim, Stella/iThink, and AnyLogic, are available to construct computer simulation models using the graphical language of SD [31].

SD has proven to be an effective approach for handling construction project complexity [1,19,29]. It has been used by Khan et al. to manage complexity in construction projects [1]. To understand a complex problem, it is necessary to focus on and understand the relationships and interconnectivity in the system [1,30]. The focus must be on the system rather than its sub-components. Accordingly, the SD approach is adopted in this study to manage information complexity using CTs in CSCs. The focus is on finding a solution to address complexity in the CSC to enhance the overall efficiency of supply chain management in construction projects.

#### **3. Research Methodology**

The methodology adopted in this research requires data from the literature and the field. A hybrid approach consisting of inductive and deductive methods was adopted in this study. It consisted of a combination of questionnaires and expert opinions for validation purposes. The literature data were acquired from different research articles, and field data was collected via questionnaire-based surveys. Overall, a four-stage research process was followed in this study, as shown in Figure 1. The stages are subsequently explained.

**Figure 1.** Schematic representation of research methodology.

#### *3.1. Stage 1*

In the first stage, the research problem was identified from the literature using the research gap, which led to the formulation of the problem statement and research objectives. It was identified from previous studies that there are several key factors for CT adoption. Further, multiple barriers hinder the IM flow through CSCs. Accordingly, an in-depth investigation of these factors and barriers is needed, which is targeted in the current study. Considering the trends and research gaps, the research objectives of the study were finalized in Stage 1. Table 1 shows the barriers to IM, along with the relevant references and the literature score. This literature score is assigned based on the frequency of their occurrence in the literature and their significance, as explained in subsequent paragraphs. Table 2 shows the key factors for adopting CTs in CSCs, along with the relevant references and the literature score.


**Table 1.** Barriers to information management.

**Table 2.** Factors affecting the adoption of collaboration technologies.




#### *3.2. Stage 2*

In the second stage, a detailed literature review was performed. The literature was retrieved and reviewed using the process mentioned in Section 2.2. The barriers to IM can cause information complexity in CSCs. This complexity needs to be managed for the holistic adoption and implementation of IM in CSCs. SD has been proven in the published literature to be efficient for handling such complexities [1,19,29]. Therefore, the SD approach is adopted to address such complexity by using CTs in CSCs. The barriers to IM and factors affecting the adoption of CTs for IM were identified by critically examining the literature, as previously explained. As a result, 43 barriers and 60 factors were identified. The identified barriers and factors were then ranked through content analysis. The content analysis consisted of (i) literature analysis and (ii) preliminary survey analysis.

In the literature analysis, the identified barriers and factors were given a literature score based on the frequency of their occurrence in the literature and their significance. These were assessed by each respective author of this study on a three-point Likert scale (1 = Low, 3 = Medium, and 5 = High) [56]. Hence, the literature score for each barrier and factor was calculated as the product of its impact score and its frequency, scaled by the maximum possible score and the number of reviewed papers. Equation (1) was used to calculate the literature score, where *N* is the total number of papers considered to identify the barriers or factors, *A* is the maximum possible score, and frequency depicts the repetition of barriers or factors in the reviewed papers. The literature score was converted into a normalized score by dividing the individual literature score of each barrier or factor by the sum of all literature scores. The normalized scores were then arranged in descending order, and the cumulative score was calculated.

$$\text{Literature Score} = \text{Impact Score} \times \frac{\text{Frequency}}{A \times N} \tag{1}$$
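As a minimal illustration of Equation (1) and the subsequent normalisation (the function names and sample numbers below are assumed for the sketch, not taken from the paper's data):

```python
def literature_score(impact_score, frequency, a_max, n_papers):
    """Equation (1): impact score times frequency, scaled by the
    maximum possible score A and the number of reviewed papers N."""
    return impact_score * frequency / (a_max * n_papers)

def normalize(scores):
    """Convert raw literature scores into shares that sum to 1."""
    total = sum(scores)
    return [s / total for s in scores]

# Hypothetical example: three barriers scored on the 1/3/5 scale
# (A = 5), identified across N = 20 reviewed papers.
raw = [literature_score(5, 12, 5, 20),
       literature_score(3, 8, 5, 20),
       literature_score(1, 4, 5, 20)]
shares = normalize(raw)  # normalized scores, ready for cumulative ranking
```

The normalized shares would then be sorted in descending order and accumulated, as described above.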

After the literature analysis, a preliminary survey was performed to include input from field professionals. A preliminary survey questionnaire was prepared and circulated among experts from developing countries to rank the identified barriers and factors according to their experience. Thirty (30) responses were collected from respondents from different developing countries. The field score was calculated based on the preliminary survey and normalized accordingly. The combined literature and field scores were used to determine the combined normalized scores.

One-way ANOVA was performed to determine whether there was any statistically significant variation between the ranks of the factors when assessed through different weighting ratios, i.e., 40/60, 50/50, 30/70, etc. A *p*-value of 1 between the combinations of different ratios indicates no significant disparity. After the ANOVA analysis, a 60/40 weighting distribution (60% field, 40% literature) was adopted. The 60/40 distribution gives a sizeable representation to field professionals, i.e., 60%, making the study more robust, while providing a reasonable emphasis on the literature, i.e., 40%. This is in line with the study of Jahan et al. [24], who used the same ratio to highlight key factors influencing profitability in construction projects. Eighteen (18) barriers out of forty-three (43) and twenty-one (21) factors out of sixty (60) were selected on the simple-majority principle of having an above-50% cumulative impact [56,57]. Tables 3 and 4 show the details and combined normalized scores of barriers and factors, respectively.
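The 60/40 combination and the above-50% cumulative cut-off can be sketched as follows (all scores and barrier names are hypothetical, purely for illustration):

```python
def combine_and_shortlist(field_norm, lit_norm, w_field=0.6, w_lit=0.4):
    """Combine normalized field and literature scores (60/40 here),
    then keep items, highest first, until their cumulative share of
    the combined score passes 50% (simple-majority cut-off)."""
    combined = {k: w_field * field_norm[k] + w_lit * lit_norm[k]
                for k in field_norm}
    total = sum(combined.values())
    shortlisted, cum = [], 0.0
    for name, score in sorted(combined.items(), key=lambda kv: -kv[1]):
        shortlisted.append(name)
        cum += score / total
        if cum > 0.5:
            break
    return shortlisted

# Hypothetical normalized scores for four barriers:
field = {"B1": 0.40, "B2": 0.30, "B3": 0.20, "B4": 0.10}
lit   = {"B1": 0.25, "B2": 0.25, "B3": 0.30, "B4": 0.20}
kept = combine_and_shortlist(field, lit)
```

With these sample numbers, the two highest-scoring barriers already exceed the 50% cumulative threshold and would be shortlisted.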

#### *3.3. Stage 3*

In the third stage, the collection and analysis of data were performed. Based on the content analysis, the barriers and factors that were subsequently used in the final questionnaire were shortlisted. As the focus of the study was on developing countries, the questionnaire was only circulated to respondents from such countries.

An influence matrix questionnaire was developed using Google Docs [18] to collect the survey data, comprising two sections. The first section inquired about personal information, including the respondent's academic qualifications, years of professional experience, type of organization, and country of work. The second section asked the respondents to rate the influence of each IM barrier on each factor affecting the adoption of CTs on a three-point Likert scale (1 = Low, 3 = Medium, and 5 = High); it was also used to identify the pertinent polarity. The questionnaire was distributed via online platforms such as Facebook®, LinkedIn®, and official emails.


**Table 3.** Assessed barriers to information management.

**Table 4.** Assessed factors affecting the adoption of collaboration technologies.


A total of 62 responses were gathered from 14 developing countries. As a generally accepted rule, the central limit theorem is satisfied with a sample size of 30 or above [58]. Once the data were collected, they were arranged, and the responses were evaluated for reliability and consistency using basic statistical tools. Cronbach's coefficient alpha method was used to measure the reliability and consistency of the collected data. The minimum acceptable value for Cronbach's alpha is 0.7 [59]. The collected data had a Cronbach's alpha value of 0.78, which represents the reliability and consistency of the data. After evaluating the collected survey data, the Relative Importance Index (RII) method was adopted to rank important relations. The RII is a statistical method to determine the ranking of different factors [18]. Equation (2) was used to calculate the RII in this study.

$$\text{Relative Importance Index} \left(RII\right) = \frac{\sum W}{A \times N} \tag{2}$$

where

*W* = weight assigned to the Likert scale (ranging from 1 to 5),

*A* = maximum weight assigned to the scale (i.e., 5 in this study), and

*N* = total number of respondents (i.e., 62 in this study).

RII has minimum and maximum values of 0 and 1, respectively.

The greater the value of RII, the more important the factor or relation will be. According to Rooshdi et al. [60], RII has been categorized into five levels. The RII scores range from 0 to 0.2 as 'Low', 0.2 to 0.4 as 'Medium–Low', 0.4 to 0.6 as 'Medium', 0.6 to 0.8 as 'High-Medium', and 0.8 to 1 as 'High'. In order to reduce the data set, relationships with RII ≥ 0.8 were considered most important in this study. The collected survey data revealed 20 relations between barriers and factors as most important (i.e., RII ≥ 0.8). Table 5 shows the final shortlisted relations of barriers and factors. The barriers were connected to the factors based on the influence matrix results. Further, the polarity was determined by the respondents and selected on the basis of a simple majority.
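The reliability check and the RII ranking described above can be sketched in a few lines (the response data below are hypothetical; the category bands follow Rooshdi et al. [60]):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column = one question, rows = respondents)."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

def rii(weights, a_max=5):
    """Equation (2): RII = sum(W) / (A * N), with N = len(weights)."""
    return sum(weights) / (a_max * len(weights))

def rii_level(score):
    """Five RII bands following Rooshdi et al. [60]."""
    bands = [(0.2, "Low"), (0.4, "Medium-Low"), (0.6, "Medium"),
             (0.8, "High-Medium"), (1.0, "High")]
    for upper, label in bands:
        if score <= upper:
            return label

# Hypothetical: five respondents rate one barrier-factor relation.
relation_rii = rii([5, 5, 3, 5, 5])  # 23 / 25 = 0.92 -> 'High'
```

Only relations whose RII reaches the 'High' band (≥ 0.8) would survive the data-reduction step described above.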

**Table 5.** Shortlisted barriers and factors.


#### *3.4. Stage 4*

In the final stage, the shortlisted relations (as shown in Table 5) were used to develop a CLD, indicating the significant loops. The CLD in this study was developed using Vensim® software. Developing CLDs is an iterative practice in which connections among all variables are progressively identified and arranged using professional acumen. All eight barriers to IM, shortlisted in 20 relations, were used as top variables. All barriers were connected with other variables (factors) in the direction of impact. Each arrowhead carries a negative or positive polarity, indicating an inverse or direct relation with the next variable in the loop. The closed chains of causes and effects are called feedback loops [61]. Each loop was identified as a reinforcing or balancing loop based on its overall polarity.
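The overall-polarity rule can be stated compactly: a loop's polarity is the product of its link polarities, so a loop with an even number of negative links is reinforcing and one with an odd number is balancing. A minimal check (the sign lists below are illustrative, not the paper's data):

```python
def loop_type(polarities):
    """Classify a feedback loop from its link polarities ('+'/'-'):
    an even count of negative links makes the loop reinforcing ('R'),
    an odd count makes it balancing ('B')."""
    negatives = sum(1 for p in polarities if p == "-")
    return "R" if negatives % 2 == 0 else "B"

# e.g., a four-link loop with two negative links is reinforcing,
# while a three-link loop with one negative link is balancing.
```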

The development of the CLD paved the way for the formulation of the associated SD model. The SD model was simulated using Vensim® software. The model consists of three stocks governed by flow rates (inflows and outflows) and the variables used in the CLD. Inflow and outflow equations were also developed for these three stocks from the data acquired through the survey. Stocks can be accumulated, and they depict the state of the system that generates the information upon which decisions and actions are based [61].
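The stock-and-flow mechanics can be illustrated with a minimal Euler-integration sketch (this is what SD tools such as Vensim perform internally; the rate functions and numbers below are hypothetical, not the model's actual equations):

```python
def simulate_stock(initial, inflow, outflow, dt=1.0, steps=10):
    """Euler integration of a single SD stock:
    stock(t + dt) = stock(t) + (inflow - outflow) * dt,
    where inflow/outflow are functions of the current stock level."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += (inflow(stock) - outflow(stock)) * dt
        history.append(stock)
    return history

# Hypothetical: a 'complexity' stock fed at a constant rate and
# drained at 10% per period (e.g., by top management support).
trace = simulate_stock(
    initial=100.0,
    inflow=lambda s: 5.0,       # constant complexity added per period
    outflow=lambda s: 0.1 * s,  # fraction of current complexity resolved
)
```

In this toy run the stock decays toward the equilibrium where inflow equals outflow (here 50), mirroring how a balancing structure stabilises a stock.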

After the development of the SD model, simulations were run to check the behavior over time for all stocks using graphs. The model was also validated using different structural validation tests [62], such as boundary adequacy, structure, parameter, and extreme condition tests. Furthermore, for its validation, the SD model and its results were also presented to construction industry professionals for their expert opinion. Thus, experts from different construction organizations in developing countries validated the model. Finally, model results were analyzed, and conclusions were drawn based on the SD analysis and the research objectives.

#### *3.5. Demographics of Survey Respondents*

Different professionals from developing countries responded to the questionnaire survey. Most respondents belonged to the contractor (29%) and consultant (27%) organizations. Qualification-wise, 52% of responses were from M.Sc. degree holders and 19% were from Ph.D. degree holders. In addition, 31% had 6 to 10 years of professional experience, while 21% had 21 years or above of professional experience, as shown in Table 6.


**Table 6.** Frequency distribution of responses.

#### *3.6. Geographical Distribution of Responses*

The survey collected 62 responses, including 29% national (Pakistan) and 71% international responses. Responses were received from many countries, including Pakistan (29%), India (23%), UAE (18%), Bangladesh (5%), Malaysia (5%), Iran (5%), Brazil (3%), Jordan (3%), Saudi Arabia (2%), Morocco (2%), Kuwait (2%), Qatar (2%), Turkey (2%), and Oman (2%), as shown in Figure 2. As the focus of the study was limited to developing countries, responses were requested only from respondents in developing countries.

**Figure 2.** Geographical distribution of responses.

#### **4. Results and Discussions**

This section shows the results and analysis of the proposed SD model and provides relevant discussions. First, the CLD developed, with all its reinforcing and balancing loops, is explained. This is followed by the discussion of the SD model, with all its components and simulation graphs.

#### *4.1. Causal Loop Diagram (CLD)*

The CLD developed in this study illustrates a total of eight significant reinforcing and balancing loops, as shown in Figure 3. The reinforcing loops are marked with the letter 'R', while balancing loops are marked with 'B'. The CLD consists of two types of variables: barriers to IM and factors affecting the adoption of CTs. All loops are identified and explained below.

**Figure 3.** Causal loop diagram.

#### 4.1.1. Reinforcing Loop R1

The loop (R1) shows that 'Failure of information systems functionality' decreases technical feasibility, as evident from Figure 4. A decrease in technical feasibility decreases top management support, thereby increasing complexity. Further, an increase in the complexity of the system increases the failure of information systems functionality.

**Figure 4.** Loop R1.

4.1.2. Reinforcing Loop R2

The loop (R2) shows that 'Lack of information exchange' increases complexity, leading to a corresponding increase in security issues, as evident from Figure 5. Thus, an increase in security issues increases the lack of information exchange in CSCs.

**Figure 5.** Loop R2.

4.1.3. Reinforcing Loop R3

The loop (R3), as evident from Figure 6, shows that 'Lack of an Information Management System (IMS)' decreases technical feasibility, which decreases top management support in construction projects. Further, decreasing top management support increases the chances of not utilizing (or a lack of) an IM system. Hence, top management support is needed to adopt and implement an IMS in CSCs.

#### 4.1.4. Reinforcing Loop R4

The loop (R4), as evident from Figure 7, shows that 'Lack of leadership skills' decreases top management support and leads to a corresponding decrease in CEO knowledge of the project. This further leads to an increase in the lack of leadership skills.

**Figure 7.** Loop R4.

4.1.5. Balancing Loop B1

The loop (B1), as evident from Figure 8, indicates that 'Communication issues' lead to a corresponding decrease in trust and cooperation throughout CSCs. Further, a decrease in trust and cooperation leads to increased complexity in projects, which eventually leads to a corresponding increase in communication issues.

4.1.6. Balancing Loop B2

The loop (B2), as evident from Figure 9, indicates that 'Lack of information availability' decreases trust and cooperation and increases complexity in projects. This leads to a corresponding increase in security issues and a lack of information availability. Hence, the availability of information is necessary to resolve all issues in CSCs.

**Figure 9.** Loop B2.

#### 4.1.7. Balancing Loop B3

The loop (B3), as evident from Figure 10, indicates that 'Lack of information quality' decreases trust and cooperation, increasing the complexity of projects. There is a corresponding increase in the lack of information quality with increased complexity. Hence, information quality also plays a role in dictating the level of complexity in projects.

#### 4.1.8. Balancing Loop B4

The loop (B4), as evident from Figure 11, indicates that 'Implementation cost' can increase the overall cost of technology, decreasing perceived benefits and, eventually, decreasing top management support. In addition, it causes a decrease in regulatory support, which can increase the implementation cost. Therefore, to manage information in a CSC, it is important to manage the cost of technology.

#### *4.2. System Dynamics Model*

After the development of the CLD, the SD model was developed and simulated using Vensim® software. The SD model consists of three main components (stocks): (a) Complexity, (b) Trust and Cooperation, and (c) Top Management Support, governed by inflows and outflows. The equations used in the SD model were developed using the data collected through surveys, as previously explained. The SD model is shown in Figure 12.

**Figure 12.** System dynamics model for managing information complexity.

#### *4.3. Simulation Results and Discussion*

The simulation conducted in this study represents the system's behavior over a time period of 6 months, generally taken as the project duration for a small-scale CSC. Multiple equations were used to simulate the stocks and flows, as presented below.

Equation for the first stock (Complexity) = [(0.049 × Lack of information exchange) + (0.050 × Trust and Cooperation) + (0.050 × Top Management Support)]

Equation for the second stock (Trust and Cooperation) = [(0.051 × Lack of information quality) + (0.049 × Lack of information availability) + (0.051 × Communication issues)]

Equation for the third stock (Top Management Support) = [(0.053 × Lack of leadership skills) + (0.050 × Perceived benefits) + (0.050 × Technical feasibility)]
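To make the simulation mechanics concrete, the three stock equations above can be integrated numerically with a simple Euler scheme. The sketch below is illustrative only: the initial stock levels, the constant auxiliary inputs, and the flow signs (chosen to reproduce the compounding trend of 'Complexity' and the draining trends of the other two stocks, as described in the results) are assumptions, not values from the paper.

```python
# Illustrative Euler integration of the three stocks over 6 months.
# Coefficients come from the stock equations above; initial values and
# the constant auxiliary inputs are hypothetical placeholders.
DT = 0.25  # time step in months

def simulate(months=6):
    complexity, trust, support = 10.0, 100.0, 100.0  # assumed initial levels
    lack_exchange = lack_quality = lack_availability = 50.0
    comm_issues = lack_leadership = perceived_benefits = tech_feasibility = 50.0
    history, t = [], 0.0
    while t < months:
        # Net flows: 'Complexity' compounds, the other two stocks drain.
        d_complexity = 0.049 * lack_exchange + 0.050 * trust + 0.050 * support
        d_trust = -(0.051 * lack_quality + 0.049 * lack_availability
                    + 0.051 * comm_issues)
        d_support = -(0.053 * lack_leadership + 0.050 * perceived_benefits
                      + 0.050 * tech_feasibility)
        complexity += d_complexity * DT
        trust += d_trust * DT
        support += d_support * DT
        t += DT
        history.append((round(t, 2), complexity, trust, support))
    return history

for t, c, tr, s in simulate():
    print(f"t={t:4.2f}  complexity={c:7.2f}  trust={tr:7.2f}  support={s:7.2f}")
```

Because the declining trust and support stocks feed the complexity inflow, complexity grows quickly at first and then eases, mirroring the behavior-over-time graphs discussed below.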

The analysis was performed using Vensim® software. First, the model was drawn, and all variables were added to the model in the software. Then, upon running the simulation, the graph shown in Figure 13 was obtained, depicting the results of the SD model.

**Figure 13.** Simulation result of all variables.

The decreases and increases in the curves of the simulation graphs are explained subsequently. The simulation presents a behavior-over-time graph. Figure 13 shows a simulation graph of all three variables assessed in this study. There is an abrupt decrease in two variables, trust and cooperation and top management support, when these are simulated against project complexity. Therefore, it can be deduced that with increasing project complexity, the support from top management and the level of trust and cooperation among project stakeholders decline. Conversely, controlling project complexity will increase top management support and the trust and cooperation among project stakeholders.

The graph of 'Complexity' shows a compounding trend that implies that factors in the loop play a positive role, as shown in Figure 14. 'Complexity' is at a minimum at first, but with time, it increases. This increase is rapid in the initial days and then slowly eases with time till the end of the simulation period. The inflow of 'Complexity' consists of lack of information exchange, trust and cooperation, and top management support, which increases the complexity of the system. In order to reduce 'Complexity,' the impact of these variables needs to be addressed in CSCs.

The 'Trust and Cooperation' graph shows a draining process that implies that factors in the loop play a negative role, as evident from Figure 15. 'Trust and Cooperation' is at a maximum at first, but with time, it decreases rapidly. This decrease is rapid in the initial days and slows down with time. The inflow of 'Trust and Cooperation', consisting of lack of information quality, lack of information availability, and communication issues, decreases the trust and cooperation among project stakeholders in the CSC system. In order to increase 'Trust and Cooperation', the impact of these variables must be addressed in the CSC.

The graph of 'Top Management Support' also shows a draining process that implies that factors in the loop play a negative role, as evident from Figure 16. It is at a maximum at first, but it decreases rapidly in the initial days and then slows down with time. The inflow of 'Top Management Support', consisting of lack of leadership skills, perceived benefits, and technical feasibility, decreases the support from top management in the CSC system. In order to increase 'Top Management Support', the impact of these variables must be accounted for and addressed in CSCs.

**Figure 16.** Simulation result of 'Top Management Support'.

#### *4.4. Model Validation*

An SD model addresses the problem and provides a solution for complexity in various systems and processes. In order to put confidence in a simulation model so that it shows the right behavior for the right reasons, it must be validated using different validation tests [62]. Model validation is a continuous and repetitive process. Therefore, the model must be validated from the beginning of its development until its completion. The same concept was followed in this study.

Furthermore, the model and its results were presented to different construction industry professionals to capture their expert opinion for its validation. The model was validated by fourteen (14) experts belonging to different organizations in the construction industry of developing countries. Five (5) experts were from contractor organizations, and three (3) each were from consultant, client, and academic organizations. The different validation tests performed on the SD model developed in this study are explained below:


Simulations show that the model exhibits results consistent with published studies. Thus, the parameters of the SD model used in this study are verified.

4. The extreme condition test is used to confirm the logical behavior of the model when extreme values are assigned to selected variables [62]. Extreme values are assigned to selected variables (stocks/exogenous variables), and the model-generated behavior is compared to the reference behavior of the system. Simulation results show that the model shows meaningful results even if the values are increased by 50% in the current study. Therefore, the current SD model withstands the extreme conditions test and can be used in CSCs.
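A minimal illustration of the extreme-condition idea, using a hypothetical one-stock model rather than the paper's model: the exogenous inputs are scaled by 50% and the resulting trajectory is checked for directionally meaningful behavior.

```python
# Sketch of an extreme-condition test: scale the exogenous inputs by 50%
# and confirm the stock's behaviour stays directionally meaningful.
# The one-stock model below is a hypothetical stand-in for illustration.
def run_stock(inputs, months=6, dt=0.25, initial=10.0):
    stock, t, path = initial, 0.0, []
    while t < months:
        inflow = 0.05 * sum(inputs)  # net flow from the input variables
        stock += inflow * dt
        t += dt
        path.append(stock)
    return path

baseline = run_stock([50.0, 50.0, 50.0])
extreme = run_stock([v * 1.5 for v in [50.0, 50.0, 50.0]])

# The extreme run should dominate the baseline at every step while
# remaining finite and monotone -- no pathological behaviour.
assert all(e > b for e, b in zip(extreme, baseline))
print(baseline[-1], extreme[-1])
```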

#### **5. Conclusions**

The SD model reflects complex interacting systems comprising different components that capture information complexity in CSCs. SD is used to manage information complexity using CTs in CSCs. To understand a complex problem, it is necessary to focus on the relationships and interconnectivity in the whole system instead of focusing on the constituent parts only. The SD model provides insight into important barriers to IM and their relations to factors affecting the adoption of CTs in CSCs. It supports CSCs in predicting and analyzing the system's behavior and managing information complexity accordingly. To manage information complexity, the SD approach determines the factors affecting the adoption of CTs that should be addressed to improve CSCs in developing countries. As these factors are addressed, barriers to IM in CSCs will be reduced, resulting in more efficient and appropriate information management.

This study contributes to the body of knowledge by assisting industry professionals in developing countries in understanding the dynamics of information complexity in CSCs. The SD model highlights the main factors affecting the adoption of CTs. Addressing these factors will reduce information complexity and result in better IM in construction projects. The research has practical implications, including using the SD approach to help address information complexity and the adoption of CTs in CSCs. Such adoption will enable collaboration among construction project stakeholders, empowering CSC managers to increase their productivity and performance.

The developed mechanism has successfully achieved the research objectives. These include identifying the barriers to IM and the factors affecting the adoption of CTs, finding interrelation among the barriers and the factors, and identifying the critical components instigating information complexity in CSCs. However, it must be kept in mind that the developed SD model is limited to the factors and barriers identified in the current study. The model can only facilitate the decision-making process by allowing relationships and interdependencies to explain the behavior of a complex system based on the input variables. Thus, these models cannot provide any project-specific advice to professionals. For this purpose, it is necessary to use the model in collaboration with some case-based systems to experience real-time problems occurring in CSCs and provide a practical solution. Similarly, the degree of mutual influence of factors was not captured in this study. Instead, the polarity, factors' RII, and direction of relations were decided following the respondents' opinions. Such influence can be captured in a future study. Further, future studies can use the developed model in real-time for various case studies and obtain relevant results. A similar study, if repeated in a developed country, may provide useful results for holistic comparisons of CSCs in developed and developing countries.

**Author Contributions:** Conceptualization, F.A. and K.I.A.K.; methodology, F.A., K.I.A.K. and F.U.; software, F.A. and K.I.A.K.; validation, F.A., K.I.A.K., F.U., M.A. and B.T.A.; formal analysis, F.A., K.I.A.K. and F.U.; investigation, F.A. and K.I.A.K.; resources, F.U., M.A. and B.T.A.; data curation, F.A., K.I.A.K., F.U., M.A. and B.T.A.; writing—original draft preparation, F.A. and K.I.A.K.; writing review and editing, K.I.A.K. and F.U.; visualization, F.A. and K.I.A.K.; supervision, K.I.A.K. and F.U.; project administration, F.A., K.I.A.K., F.U., M.A. and B.T.A.; funding acquisition, M.A. and B.T.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data are available from the first author and can be shared upon reasonable request.

**Acknowledgments:** The authors would like to acknowledge the Taif University Researchers Supporting Project (number TURSP-2020/324), Taif University, Taif, Saudi Arabia, for supporting this work. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4390001DSR01).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Civil Infrastructure Damage and Corrosion Detection: An Application of Machine Learning**

**Hafiz Suliman Munawar 1, Fahim Ullah 2,\*, Danish Shahzad 3, Amirhossein Heravi 2, Siddra Qayyum <sup>1</sup> and Junaid Akram <sup>4</sup>**


**Abstract:** Automatic detection of corrosion and associated damage to civil infrastructures such as bridges, buildings, and roads, from aerial images captured by an Unmanned Aerial Vehicle (UAV), helps to overcome the challenges and shortcomings (objectivity and reliability) associated with manual inspection methods. Deep learning methods have been widely reported in the literature for civil infrastructure corrosion detection. Among them, convolutional neural networks (CNNs) display promising applicability for the automatic detection of image features that are less affected by image noise. Therefore, in the current study, we propose a modified version of a deep hierarchical CNN architecture, based on 16 convolution layers and a cycle generative adversarial network (CycleGAN), to predict pixel-wise segmentation in an end-to-end manner using images of the Bolte Bridge and sky rail areas in Victoria (Melbourne). The proposed model network is based on the learning and aggregation of multi-scale and multilevel features while moving from the low convolutional layers to the high-level layers, with the inclusion of CycleGAN reducing the consistency loss in images. Standard approaches use only the last convolutional layer, but our proposed architecture differs from these approaches and uses multiple layers. Moreover, we have used guided filtering and Conditional Random Fields (CRFs) methods to refine the prediction results. Additionally, the effectiveness of the proposed architecture was assessed using benchmarking data of 600 images of civil infrastructure. Overall, the results show that the deep hierarchical CNN architecture based on 16 convolution layers produced superior performance when evaluated against different methods, including the baseline, PSPNet, DeepLab, and SegNet.
Overall, the extended method displayed the Global Accuracy (GA); Class Average Accuracy (CAC); mean Intersection Of the Union (IOU); Precision (P); Recall (R); and F-score values of 0.989, 0.931, 0.878, 0.849, 0.818 and 0.833, respectively.

**Keywords:** artificial intelligence; building corrosion detection; building damage detection; civil infrastructure crack detection; civil infrastructure inspection; image processing; machine learning; unmanned aerial vehicles

#### **1. Introduction and Background**

Corrosion is the degradation of material properties due to environmental interactions that ultimately cause the failure of civil infrastructures [1]. Corrosion is classified into eight categories based on the morphology of the attack and the type of environment to which the structure is exposed. Uniform or general corrosion is the most dominant type of corrosion, which includes rusting of steel bridges, rusting of underground pipelines, tarnishing of silver, and patina formation on copper roofs and bronze statues [2]. The World Corrosion Organization has delineated corrosion as an extremely hazardous phenomenon that has caused significant damage to global infrastructure amounting to 2.5 trillion USD [3]. Therefore, it is one of the major defects evident in the structural materials or systems and has a considerable impact on the economy and on safety [4].

**Citation:** Munawar, H.S.; Ullah, F.; Shahzad, D.; Heravi, A.; Qayyum, S.; Akram, J. Civil Infrastructure Damage and Corrosion Detection: An Application of Machine Learning. *Buildings* **2022**, *12*, 156. https://doi.org/10.3390/buildings12020156

Academic Editor: Łukasz Sadowski

Received: 7 December 2021 Accepted: 31 January 2022 Published: 1 February 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Early detection with concurrent maintenance is essential for maintaining safety and reducing the costs associated with corrosion-related damages [4,5]. Therefore, it is of extreme importance to monitor the condition and health of civil infrastructures to assure the safety of human lives and reduce the associated financial losses in line with modern smart city initiatives [6,7]. Traditionally, human-based visual inspection systems have been used to monitor the structural health of civil infrastructures [8]. However, these visual inspection methods are highly subjective, mainly due to variable personal evaluation approaches used by the involved personnel. Therefore, visual inspection of civil infrastructures such as bridges can lead to highly variable outcomes that depend on multiple factors [9]. These factors include but are not limited to the height of UAV, the field of view (FoV), camera pose, weather conditions, and photometric quantities [10]. As a result, these visual inspection systems display numerous limitations in terms of monitoring corrosion in civil infrastructures, which ultimately lead to lower overall structural reliability [11], pose threats to human lives [12], and result in a huge economic burden due to high maintenance costs.

Corrosion displays two main visual characteristics: a rough surface texture and a distinctive product color. Therefore, it is recommended to develop algorithms for corrosion detection based on texture analysis and color analysis [9]. These two features can be applied on a stand-alone basis or implemented in a pattern recognition technique [9,10]. Furthermore, digital image processing unlocks various real-life opportunities to explore a variety of environments [11,12]. In comparison to conventional techniques, digital image processing provides an economical, easy-to-handle, fast, and accurate approach for large civil infrastructures such as metal bridges, tunnels, poles, and ships [13,14]. Moreover, structural monitoring of civil infrastructures is an area that has gained considerable importance in terms of assessing the fitness and health of infrastructures. For at least half a century, numerous structural monitoring approaches have been applied to civil infrastructures using modern sensor-based technology, including load cells, transducers, strain gauges, thermistors, accelerometers, anemometers, microphones, and internet camera technology [15,16].

However, during the last two decades, a shift has been observed to computational approaches to monitor the structural health of infrastructures [17]. The computer-based methods are designed to assist in measuring aging in infrastructures to provide timely responses to ensure safety and reduce economic losses [17]. Nowadays, the research focuses on replacing visual inspection methods with more efficient computer vision-based methods that can efficiently assess buildings, roads, bridges, and underground tunnels to identify damage-sensitive features [18]. Tian et al. [19] used computer vision to identify metal corrosion through the design of a combined Faster-Region Based Convolutional Neural Networks (Faster R-CNN) model and the Hue-Saturation-Intensity (HSI) color feature, to achieve higher correctness and recall rate. Hoang [20] has proposed a computer vision and data-driven method for detecting pitting corrosion. This method is based on history-based adaptive differential evolution with linear population size reduction (LSHADE) integrated with various image processing techniques and support vector machine (SVM) [21]. Numerous other computer-based image processing techniques have been reported to assess corrosion in civil infrastructures. For example, Pragalath et al. [22] have reported using a fuzzy logic framework combined with a recently developed image processing algorithm to measure damages to or the structural health of civil infrastructures. To address the pitting corrosion that causes serious failures to infrastructures, Hoang developed a method based on computer vision, constituting a data-driven approach [20]. Over the recent years, technological breakthroughs have been achieved in developing computer vision techniques to assess civil infrastructures. This has also increased the applicability of artificial intelligence (AI) models based on artificial neural networks (ANNs) and convolutional neural networks (CNNs). Huang et al. [23] have proposed a damage identification system in steel frames through a neural network integrated with time series and variable temperatures. Liu et al. [24] used the CNN (Faster R-CNN) architecture and a Visual Geometry Group-19 model (VGG-19 model) to quantify and assess corrosion in civil infrastructure. Atha and Jahanshahi [4] developed various CNN models for corrosion detection. Additionally, a CNN-based deep learning method for damage detection (i.e., cracks) has been reported by Munawar et al. [25]. Suh and Cha [26] used a modified version of the Faster R-CNN to evaluate and quantify multiple types of structural damages. Furthermore, Atha and Jahanshahi [4] reported using different CNN architectures to assess corrosion on metal structures. It was shown that CNNs perform better in comparison to advanced vision-based corrosion detection methods.

The advancements in machine learning and AI have also led to a greater emphasis on applying hybrid approaches that ultimately display improved accuracy and reliability [27]. Accordingly, the detection of structural corrosion in infrastructures has been demonstrated using drone images (for automated image analysis) [28]. Furthermore, an ensemble deep learning approach was used in a relevant study, displaying better efficacy than the existing approaches [29]. In addition, the PAnoramic surface damage DEtection Network (PADENet) has been reported for assessing corrosion on metal structures. The PADENet method is based on an unmanned aerial vehicle (UAV) for capturing panoramic images, a distorted panoramic augmentation method, the use of multiple projection techniques, a modified CNN (faster region-based), and training through transfer learning on VGG-16 [30].

The reported literature highlights the importance of image processing and machine learning over traditional corrosion detection and assessment methods in civil works [31]. Most prominently, utilizing the ImageNet Large-Scale Visual Recognition Challenge (LSVRC-2010) dataset and deep CNNs for the training and classification of approximately 1.2 million high-resolution images has produced promising results [32]. Accordingly, numerous studies based on CNN architectures have been reported extensively in the literature [4]. Sensor-based technologies, by contrast, are very expensive, require substantial resources to deploy, and still need fusion-based techniques to improve accuracy. Sensor-based systems are also time-consuming and require extensive manual work and engineering effort. Moreover, the quality of the sensor output significantly affects the final decision.

A visual inspection, however, provides a solution that lacks consistency and sustainability. Moreover, many municipalities do not conduct infrastructure inspections appropriately and frequently due to a lack of viable methods and human resources. This situation increases the overall risk posed by deteriorating structures. Apart from visual inspection methods, modern municipalities with sufficient resources address the problem by employing quantitative determination-based solutions, such as a mobile measurement system (MMS) or laser scanning. However, although quantitative inspections are accurate and consistent, such comprehensive inspections are too expensive for most municipalities to conduct [33].

In the current study, we aim to develop a robust CNN-based classifier for corrosion detection on civil infrastructures such as bridges. Our proposed CNN-based methodology can be effectively used for large-scale corrosion detection as it is based on a cycle generative adversarial network (CycleGAN). The applicability of the generative adversarial network (GAN) has gained phenomenal progress in deep learning methods during recent years. It provides a novel strategy for model training using a max–min two-player game [34]. Initially, a fully connected layered generator configuration was used for GAN to generate images from random noised images. The CycleGAN provides effective training without using paired data for image style transfer and can prevent the overfitting problem during training. Owing to the wider applicability of CycleGANs, corrosion detection on surfaces can be treated as an image-to-image translation problem. The CycleGAN is a style-changing image generation network; thus, the generated data will not differ much from the original data. Thus, this study proposes the automatic detection of corrosion with the application of CycleGAN, minimizing manual defect detection. The proposed novel methodology is effective for identifying possible cracks and corrosion. CycleGAN has been found to enhance corrosion detection accuracy and performs much better than other traditional methods. It employs a novel compositional layer-based architecture for generating realistic defects within various image backgrounds with different textures and appearances. It can also mimic the variations of defects and offer flexible control over the locations and categories of the generated defects within the image background.
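The cycle-consistency idea behind CycleGAN can be illustrated with a toy example: after mapping an image to the other domain and back, the reconstruction should match the input. The linear "generators" below are purely illustrative stand-ins for CycleGAN's deep CNN generators.

```python
import numpy as np

# Toy illustration of CycleGAN's cycle-consistency loss with linear
# "generators" G: X -> Y and F: Y -> X (real CycleGAN uses deep CNNs).
def G(x):  # hypothetical forward mapping (e.g., clean -> corroded style)
    return 2.0 * x + 1.0

def F(y):  # hypothetical inverse mapping
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    # L1 reconstruction error after a full cycle in each direction.
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = np.array([0.1, 0.5, 0.9])
y = np.array([1.2, 2.0, 2.8])
print(cycle_consistency_loss(x, y))  # ~0: F inverts G here, up to float error
```

In training, this reconstruction penalty is added to the adversarial losses, which is what keeps the generated data close to the original data, as noted above.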

Overall, the CNN-based architecture proposed in the current study is highly efficient in detecting corrosion features with greater accuracy due to pixel-wise segmentation. Moreover, the proposed architecture helps to reduce the processing overhead. However, besides its various advantages, the current approach displays certain limitations and challenges due to the inclusion of CycleGAN, such as instability, mode collapse, and non-convergence. These arise from inappropriate architectural design, the objective function, and the choice of optimization algorithm. Additionally, the one-to-one mappings used by the CycleGAN lead to a limitation where the model associates a single input image with a single output image, which is not suitable for application in complex environments. Therefore, the current architecture considers manipulating the photometric values and HSV (Hue, Saturation, Value) for data augmentation.
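A minimal sketch of the kind of HSV-based photometric augmentation mentioned above, operating on a single RGB pixel for clarity; the jitter bounds and pixel values are illustrative assumptions, not the study's parameters.

```python
import colorsys
import random

# Sketch of HSV-based photometric data augmentation: jitter the hue,
# saturation, and value of an RGB pixel within small bounds.
def augment_pixel(rgb, max_jitter=0.1, rng=random.Random(0)):
    r, g, b = [c / 255.0 for c in rgb]
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + rng.uniform(-max_jitter, max_jitter)) % 1.0
    s = min(max(s + rng.uniform(-max_jitter, max_jitter), 0.0), 1.0)
    v = min(max(v + rng.uniform(-max_jitter, max_jitter), 0.0), 1.0)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return tuple(int(round(c * 255)) for c in (r2, g2, b2))

print(augment_pixel((180, 90, 40)))  # a slightly shifted rust-like tone
```

In practice the same perturbation would be applied image-wide (e.g., via a vectorized color-space conversion) rather than per pixel.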

In the current study, after the introduction section, we elaborate on the detailed architecture and the methodology used for the corrosion detection of civil infrastructures such as bridges. After the methodology section, the results are elaborated in the results and discussion section using an image dataset from two locations, namely the Bolte Bridge and sky rail areas in Victoria (Melbourne), as shown in Figure 1a,b. The list of abbreviations used in this study is given in the Abbreviations section.

**Figure 1.** The images of (**a**) Bolte Bridge and (**b**) sky rail areas in Victoria (Melbourne).

#### **2. Materials and Methods**

In the methodology section, we provide the details of the dataset, followed by a discussion of the image processing procedure and the proposed scheme used for the study. The methodology of the proposed method is presented in Figure 2.

**Figure 2.** (**a**) Images collected from the selected locations. (**b**) The structure degradation detection framework used in the current study.

#### *2.1. Data Collection*

In the current study, corrosion images were collected from two locations in Melbourne, Victoria, Australia: the Bolte Bridge and sky rail areas. The images of the case projects are presented in Figure 1a,b, respectively. Images were collected from both locations to assess the safety and structural health of the case projects using the corrosion dataset gathered there. Techniques capable of detecting and localizing infrastructural damages such as corrosion efficiently and cost-effectively are preferred [35]. To perform these tasks without interfering with operational processes, UAVs, which are flexible and cost-effective means of capturing images, should be utilized [36]. In the current study, all images were captured using the DJI-M200 UAV, which is capable of vertical take-off and landing (VTOL). Additionally, the DJI-M200 quadcopter is equipped with three important components: a Global Navigation Satellite System (GNSS) receiver, a barometer, and an Inertial Measurement Unit (IMU) [35].

In the current study, the corrosion detection procedure started with the collection of an image dataset. Images were extracted from public datasets such as the Crack Forest Dataset (CFD), Crack500, and GAPs; additional datasets were provided by VERIS, Australia. The images were captured using an Unmanned Aerial Vehicle (UAV) with a digital camera. The raw images were processed through SegNet for label generation and cropped to the desired height and width to form our dataset. For the current study, the target structures were the Bolte Bridge and the sky rail located in Melbourne, Victoria, Australia (Figure 1a,b). The final dataset contained a total of 1300 images with dimensions of 4864 × 3648. The number of cropped images retained from each image was manually supervised based on corrosion level. Each image used was 7 megabytes (MB) in size and in JPEG format (Figure 2a). The structure degradation detection framework used in the current study is presented in Figure 2b.
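The cropping step can be sketched as a simple tiling of each large aerial image into fixed-size patches. The 512-pixel tile size below is an assumption for illustration, as the paper does not state the crop dimensions.

```python
import numpy as np

# Sketch of the cropping step (assumed tile size): split a large aerial
# image into fixed-size patches for training, discarding partial edges.
def tile_image(image, tile_h=512, tile_w=512):
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile_h + 1, tile_h):
        for x in range(0, w - tile_w + 1, tile_w):
            tiles.append(image[y:y + tile_h, x:x + tile_w])
    return tiles

# A 3648 x 4864 image (height x width) yields 7 x 9 = 63 full 512px tiles.
image = np.zeros((3648, 4864, 3), dtype=np.uint8)
print(len(tile_image(image)))  # 63
```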

Moreover, the images included in the final dataset were divided into four corrosion levels: no corrosion, low-level corrosion, medium-level corrosion, and high-level corrosion, as shown in Table 1. A total of 4.26%, 1.64%, 0.75%, and 93.25% of the pixels corresponded to low, medium, high, and no corrosion, respectively, as labeled by the expert conducting this study. The 6.75% corroded pixels of the final dataset were split between training and validation at a ratio of 80:20: training was performed with 5.4% of the pixels of the final dataset, whereas the validation set comprised 1.25% (see Table 1).


**Table 1.** The percentages of pixels in images used for model training and validation.

The camera installed on the UAV was used to capture the images. The lens collects light from the object to create an image (Figure 3). The size and location of the image depend on the location of the object and the focal length of the lens. The relationship between the object distance (*o*), the focal length (*f*), and the image distance (*i*) is given by 1/*o* + 1/*i* = 1/*f*. The object distance is the distance from the object to the thin lens; the image distance is the distance from the thin lens to the image. The focal length is a characteristic of the thin lens, and it specifies the distance at which parallel rays come to a focus. The field calibration at a 50 m flight height was carried out with a focal length of 8.4 mm, a format size of 7.5 mm, and a principal point of 3.8 × 2.5. The DJI M200 drone used has a focal length of 24 mm, an operating frequency of 2.4000–2.4835 GHz, an obstacle sensing range of 2.3–98.4 feet, a maximum speed of 50.3 mph (S-mode/A-mode) and 38 mph (P-mode), and a maximum wind resistance of 12 m/s.
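The thin-lens relation above can be solved for the image distance given the calibration values in the text; a minimal sketch (the numbers are purely illustrative):

```python
def image_distance(o_m: float, f_m: float) -> float:
    """Solve the thin-lens equation 1/o + 1/i = 1/f for the image distance i (metres)."""
    return 1.0 / (1.0 / f_m - 1.0 / o_m)

# Calibration values from the text: 50 m flight height, 8.4 mm focal length.
# At o >> f the image forms essentially at the focal plane.
i = image_distance(o_m=50.0, f_m=0.0084)
```

For the 50 m flight height the image distance is only about 1.4 µm beyond the focal plane, which is why distant scenes are in focus with the lens at its focal length.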

**Figure 3.** The lens collects light from the object to create an image.

The height and width distributions of the corrosion images included in the final dataset are presented for the four corrosion levels in the current study (Figure 4). An explanation of the spatial location of the corroded areas is also provided in Figure 4. The overall height and width distributions of the corrosion pixels included in the data were obtained using WANDB and PyTorch sessions. Note that the plot only shows the distribution of corroded pixels. To define positive-class images, blobs/connected components comprising more than 3% corroded pixels are considered after performing a morphological closing operation. Using this procedure, the frequency of corroded pixels was depicted for our data. The data distribution and spatial location are used to locate the labels or bounding boxes of the corrosion data. Herein, the axis values define the size distribution of our dataset. Notably, the frequency or spatial distribution of the corrosion pixels is not concentrated in one place but rather is well distributed. Moreover, the distribution of the corrosion pixels is not skewed to the left or right side of the plot; rather, it is centralized, which shows that the data approximately follow a Gaussian distribution. This also indicates that our selected dataset of corrosion pixels is not biased.

**Figure 4.** The height, width, and spatial extent of the corroded pixels in our dataset.

The final dataset was divided into four corrosion levels. The levels were manually defined based on the segmentation task, since all images are randomly cropped in our architecture to 544 × 384 × 3. Further, a morphological closing operation using a 3 × 3 disk was applied to categorize images into sub-classes. Closing was chosen because it fills gaps that are smaller than or the same size as the structuring element.


• High-level corrosion: Images having more than 15% of corroded pixels.

#### Image Pre-Processing and Data Augmentation

After collecting the dataset, pre-processing was performed on the captured images to remove unwanted objects and noise, followed by adjusting the image brightness and size. Data augmentation is an important component of training deep networks. For the current study, the dataset was augmented 16-fold using the following steps:


#### *2.2. Manual Supervision*

The captured dataset was segmented by the trained SegNet model. However, due to its limited accuracy, manual supervision is required; therefore, we performed per-pixel annotation and supervised it manually. Pre-trained semantic segmentation models are not appropriate for use on general corrosion images; therefore, the deep corrosion dataset was used to train the SegNet.

The SegNet deep convolutional architecture contains a sequence of encoders, decoders, and a pixel-wise classifier. Generally, the encoder is based on one or more convolutional layers, batch normalization, and rectified linear unit (ReLU) non-linearity units, followed by non-overlapping max-pooling and sub-sampling [37]. To reduce the overall model parameters and retain the class boundaries in segmented images, the decoder upsamples the sparse encoding using the stored max-pooling indices. For this study, end-to-end model training was performed using the stochastic gradient descent method [37], and the general SegNet architecture was modified to a Bayesian SegNet. The modification of the architecture was based on SegNet [37] and SegNet-Basic [38]. Additionally, for SegNet's encoder, a VGG-16 network with 13 convolutional layers was used, followed by 13 corresponding decoders [39].
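The pooling-index mechanism the SegNet decoder relies on can be sketched in plain NumPy (a minimal illustration, not the actual SegNet implementation): 2 × 2 max-pooling records where each maximum came from, and unpooling writes the pooled values back to those positions, producing the sparse upsampled map that the decoder then densifies with convolutions.

```python
import numpy as np

def max_pool_2x2(x: np.ndarray):
    """2 x 2 max-pooling that also returns the argmax index inside each window."""
    H, W = x.shape
    blocks = x.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
    idx = blocks.argmax(axis=1)                 # position of the max in each window
    pooled = blocks.max(axis=1).reshape(H // 2, W // 2)
    return pooled, idx

def max_unpool_2x2(pooled: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """Scatter pooled values back to their recorded positions (sparse output)."""
    h, w = pooled.shape
    blocks = np.zeros((h * w, 4), dtype=pooled.dtype)
    blocks[np.arange(h * w), idx] = pooled.ravel()
    return blocks.reshape(h, w, 2, 2).transpose(0, 2, 1, 3).reshape(h * 2, w * 2)

x = np.array([[1., 3., 0., 2.],
              [4., 2., 1., 0.],
              [0., 1., 5., 6.],
              [2., 3., 7., 8.]])
pooled, idx = max_pool_2x2(x)          # pooled = [[4., 2.], [3., 8.]]
sparse = max_unpool_2x2(pooled, idx)   # non-maxima become zeros
```

Because only the indices (not the full feature maps) are stored, this upsampling path keeps class boundaries sharp at a very small memory cost, which is the design point of SegNet.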

The CNN is composed of multiple blocks, where the first block is used for feature extraction from an image through several convolution kernels. This procedure is repeated multiple times, and the values of the last feature map are transformed into a vector that forms the output of the first block and serves as the input for the next block. During processing in the second block, the input vector values are modified using several linear combinations and activation functions, returning the output in the form of a new vector. The final block contains as many elements as there are classes, where each element lies between 0 and 1; the class probabilities are calculated using a logistic or softmax activation function. Regarding layers, there are four types in a CNN, namely, (1) the convolutional layer, (2) the pooling layer, (3) the ReLU correction layer, and (4) the fully connected layer. The convolutional layer comes first and is responsible for receiving input images and computing the convolution of each filter across them.

Moreover, the pooling layer is conventionally positioned between two convolutional layers and receives the feature maps, applying pooling operations that reduce image size while preserving important characteristics. It also plays an important role in reducing the number of parameters and calculations within the network, thus improving overall efficiency and avoiding over-learning. Another important component of the CNN is the ReLU correction layer, an activation function that replaces all negative input values with zeros [40]. Herein, a constant feature size of 64 and a total of four layers were used for each encoder and decoder in the smaller SegNet-Basic network. We therefore use SegNet-Basic as the smaller model for our analysis, as it provides a conceptually effective representation of the larger architecture. After performing image labeling, all labeled images were re-examined manually using MATLAB for the delicate handling of all corrosion/non-corrosion labels.

#### *2.3. Image Classification and Processing*

Generally, the image needs to be resized to the input size of the classification network, after which high-level features are extracted via the convolutional layers. The fully connected layers then utilize the extracted features for image classification based on corrosion level. Ultimately, a particular image is classified into one of four levels: no corrosion, low-level corrosion, medium-level corrosion, or high-level corrosion. Notably, both raw and augmented images were used for training our network. Data augmentation based on random image cropping and patching for deep CNNs was used for label generation and corrosion detection throughout the training procedure. Additionally, the input images were resized to 256 × 256 due to the rotation-based transformations. Model training and testing were executed using a single NVIDIA TITAN X graphics card [40].

Considerable advancements have been achieved in classifying and processing images using highly expressive deep CNNs with numerous parameters [41]. However, high-performance CNNs are prone to over-fitting, which may result from memorizing non-generalizable image features present in the training set. It is therefore extremely important to use a sufficient set of training samples, because an insufficient set can cause the model to over-fit [42]. However, collecting sufficiently abundant samples is prodigiously costly, which motivates the use of various data augmentation methods, including flipping, resizing, and random cropping. These augmentation techniques increase the variation in the images and help prevent model over-fitting [43].
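The flipping and random-cropping augmentations mentioned above can be sketched as follows; the crop size, number of crops, and random seed are assumptions for the sketch, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a random size x size patch from an H x W x C image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def augment(img: np.ndarray, size: int = 32):
    """Return flipped and randomly cropped variants of one image."""
    variants = [img, np.flip(img, axis=1), np.flip(img, axis=0)]  # original + flips
    variants += [random_crop(img, size) for _ in range(2)]        # random crops
    return variants

img = rng.random((64, 64, 3))
out = augment(img)
```

In practice such transforms are chained (and resizing added) so that one source image yields many training variants, which is how a 16-fold expansion of the dataset is obtained.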

A detailed description of the method adopted in the current study and the proposed CNN architecture is provided in the following subsections.

#### 2.3.1. U-Net Model Architecture

A U-Net implementation was used as the model architecture, through which important model configurations, including the activation function and depth, can be passed as arguments during model creation. The U-Net implementation uses CNNs for precise and quick image segmentation. Such U-Net architectures have emerged as prevalent end-to-end architectures for performing semantic segmentation and can produce very promising results [44]. The overall U-Net encoder-decoder architecture used in the study for corrosion detection is presented in Figure 5. The steps followed are given below:


**Figure 5.** The U-Net architecture used for crack detection.

The architectural details described above lead to an elegantly designed U-shaped convolutional network that provides localization and context-based solutions. The accuracy of the U-Net architecture is 6.1% higher than that of the FCN (Fully Convolutional Network) architecture for the 256 × 256 dataset. For the 512 × 512 dataset, although the validation accuracy of U-Net is 1% lower than FCN's, U-Net has a better memory footprint than the other variants. SegNet has a design similar to U-Net; the main difference is the depth (channels), whereas the overall accuracy is almost similar. Therefore, the choice between SegNet and U-Net is often considered a developer preference.

Each blue box in Figure 5 represents a multi-channel feature map, with the number of channels noted on top of the box and the *x*-*y* size at its lower-left edge; the white boxes correspond to copied feature maps, and the arrows indicate the different operations.
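The copy-and-concatenate operation behind those arrows can be sketched in NumPy (shapes are illustrative, not the study's exact channel counts): the deeper decoder map is upsampled and stacked channel-wise with the encoder map copied across the skip connection.

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of a C x H x W feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_concat(encoder_feat: np.ndarray, decoder_feat: np.ndarray) -> np.ndarray:
    """Concatenate the copied encoder map with the upsampled decoder map."""
    up = upsample2x(decoder_feat)
    assert up.shape[1:] == encoder_feat.shape[1:], "spatial sizes must match"
    return np.concatenate([encoder_feat, up], axis=0)  # channels stack

enc = np.ones((64, 96, 96))     # encoder feature map: 64 channels
dec = np.ones((128, 48, 48))    # deeper decoder map before upsampling
fused = skip_concat(enc, dec)   # channel dimension becomes 64 + 128
```

This channel-wise concatenation is what lets U-Net combine coarse context (from the deep path) with fine localization (from the copied encoder features).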

We employed a multi-scale feature Decode and Fuse (MSDF) model with a conditional random field (CRF) layer for boundary detection in the current study (Figure 6). Furthermore, we also used this model to implement probabilistic inference over the segmentation model. Therefore, for the given training data *X* with labels *Y* and probability distribution *p*, we use the Bayesian SegNet to model the posterior distribution over the convolutional weights (*W*), denoted by the following expression:

$$p(\mathcal{W} \mid X, Y) \tag{1}$$

Since the posterior distribution is intractable, an approximation of the weight distribution is essential, for which we utilize variational inference. This allows learning of a distribution *q*(*W*) over the network weights, achieved through minimization of the Kullback-Leibler (*KL*) divergence between the approximating distribution and the full posterior, described as follows:

$$KL(q(\mathcal{W}) \parallel p(\mathcal{W} \mid \mathcal{X}, \mathcal{Y})) \tag{2}$$

For each *K* × *K* dimensional convolutional layer *i* with units *j*, vectors of Bernoulli-distributed random variables *bi* and variational parameters *Mi* (in the approximating variational distribution *q*(*Wi*)) allow one to achieve an approximated model of the Gaussian process, which can be described mathematically as shown in Equation (4); the Bernoulli variables are sampled as shown in Equation (3).

$$b\_{i,j} \sim \text{Bernoulli}(p\_i) \quad \text{for } j = 1, \ldots, K\_i \tag{3}$$

**Figure 6.** A representation of multiple layer CNN architecture based on 16 convolutional layers.

For this study, we used a standard dropout probability of 50% (*pi* = 0.5) for the neurons to avoid overfitting during training. For network training, the utilization of stochastic gradient descent allows effective learning of the weight distribution, which assists in better explaining the data and avoiding model over-fitting. The dropout method aids in obtaining a posterior distribution of SoftMax class probabilities: the means of the sampled probabilities are employed for the segmentation prediction, while the output uncertainty associated with each class is modeled by the variance estimate, obtained by taking the mean of the variance measurements per class.
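Given *T* stochastic forward passes with dropout kept active, the per-class mean and variance described above can be aggregated as follows (synthetic softmax samples stand in for real network outputs in this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_dropout_summary(samples: np.ndarray):
    """samples: T x C x N array of softmax outputs from T stochastic passes.
    Returns the mean prediction (C x N) and a per-class uncertainty value."""
    mean_pred = samples.mean(axis=0)                 # Monte Carlo mean -> prediction
    var_per_pixel = samples.var(axis=0)              # C x N variance per pixel
    class_uncertainty = var_per_pixel.mean(axis=1)   # mean variance per class
    return mean_pred, class_uncertainty

# Synthetic data: T=20 passes, C=4 classes, N=6 pixels; normalize to softmax-like rows.
raw = rng.random((20, 4, 6))
samples = raw / raw.sum(axis=1, keepdims=True)
mean_pred, class_unc = mc_dropout_summary(samples)
labels = mean_pred.argmax(axis=0)                    # predicted class per pixel
```

Pixels whose samples disagree across passes receive a high variance, flagging uncertain regions such as ambiguous corrosion boundaries.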

#### 2.3.2. CycleGAN

CycleGANs offer effective network training without requiring manually labeled Ground Truths. This is mainly because CycleGANs allow the translation of corrosion images to Ground Truth-like images with similar structural patterns [45]. The proposed approach will help governments/municipalities by providing a scalable, real-time, robust, and efficient technique for cost-effective inspection to maintain structures or detect structural damage. We propose a conceptual framework for an automated corrosion detection system using modern machine learning techniques (deep neural networks), which can be easily scaled to any edge device, e.g., a Jetson Nano or Coral Dev Board. Two separate datasets are required for training a CycleGAN: the corrosion images {*xi*} in the corrosion image set (*X*) and the images {*yi*} in the structure library (*Y*). The network topology is based on forward and reverse GANs that perform image translations from *X* to *Y* (*F* : *X* → *Y*) and from *Y* to *X* (*R* : *Y* → *X*). The overall topology is presented in Figure 7. The CycleGAN is used to transfer the characteristics of one image to another.

The CycleGAN addresses these problems through image reconstruction. Generally, the CycleGAN has two core components: the generator and the discriminator. The generator produces samples from the desired distribution, whereas the discriminator determines whether a sample comes from the real distribution or was generated. The generator comprises three parts: (1) an encoder, (2) a transformer, and (3) a decoder. The steps for the CycleGAN are given below:


The encoder receives the input images and extracts features using convolutions; it is based on three convolutions that reduce the image to 1/4 of its original size. After the activation function is applied, the encoder's output is passed to the transformer, which is based on 6 or 9 residual blocks depending on the input size. Similarly, the transformer's output is passed to the decoder, which uses two fractionally strided deconvolution blocks to enlarge the representation back to the original size [34].

Generally, the CycleGAN uses two discriminators, *Dx* and *Dy*. The discriminator *Dx* distinguishes between {*xi*} and the reverse-translated images {*R*(*yi*)} under the reverse adversarial loss *Ladvr*. Further, the discriminator *Dy* distinguishes between {*yi*} and the translated images {*F*(*xi*)} to overcome differences in domain consistency and data imbalance. Finally, the objective function can be described using Equation (5), where *Ladvf* indicates the forward adversarial loss.

$$L = \left(L\_{advf} + L\_{advr}\right) + \lambda \left(L1\_{fc} + L1\_{rc}\right) \tag{5}$$

In the objective function, *λ* controls the weight between the two losses (the adversarial and the cycle-consistent loss), and *L*1*fc* and *L*1*rc* represent the two cycle-consistent losses with *L*1-distance formulas in the forward and reverse GAN, respectively [45].
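A minimal numerical sketch of Equation (5): the adversarial terms are stood in by precomputed scalars, only the *L*1 cycle terms are computed from arrays, and the default *λ* = 10 is an assumption (a common choice in CycleGAN work), not a value from the study:

```python
import numpy as np

def l1_cycle(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean L1 distance between an image and its cycle reconstruction."""
    return float(np.abs(original - reconstructed).mean())

def cyclegan_objective(l_adv_f: float, l_adv_r: float,
                       x, x_cycled, y, y_cycled, lam: float = 10.0) -> float:
    """L = (L_advf + L_advr) + lambda * (L1_fc + L1_rc), as in Equation (5)."""
    return (l_adv_f + l_adv_r) + lam * (l1_cycle(x, x_cycled) + l1_cycle(y, y_cycled))

x = np.zeros((4, 4)); x_cycled = x + 0.1     # forward cycle error of 0.1 per pixel
y = np.ones((4, 4));  y_cycled = y           # perfect reverse cycle
L = cyclegan_objective(0.5, 0.4, x, x_cycled, y, y_cycled)
```

Raising *λ* pushes the optimizer to preserve the structure pattern across the cycle at the expense of the adversarial realism terms.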

#### *2.4. Loss Formulation*

The loss formulation is based on the adversarial loss and the cycle-consistency loss. The adversarial loss provides the benefit of obtaining structured images. However, when used alone, it is insufficient to translate a corrosion image patch to the desired structure patch or vice versa; the consistency of the structure pattern between the input and output images is not guaranteed. Thus, an extra cycle-consistency term is necessary for effective network training and for maintaining structure pattern consistency between the input and output [34].

#### 2.4.1. Adversarial Loss

Adversarial networks are trained through a min-max two-player game. Training based on these networks can yield realistic images generated from noise. Therefore, the alternate optimization of the following objectives of the adversarial networks is very important.

$$\max\_{D} V\_{D}(D, G) = E\_{x \sim p\_{data}(x)}[\log D(x)] + E\_{z \sim p\_{z}(z)}[\log(1 - D(G(z)))] \tag{6}$$

$$\max\_{G} V\_{G}(D, G) = E\_{z \sim p\_{z}(z)}[\log D(G(z))] \tag{7}$$

In Equations (6) and (7), *D* refers to the discriminator, *G* to the generator, *z* to the noise vector input into the generator, and *x* to a real image in the training set; the expectations are taken over the data and noise distributions, respectively. *G* generates images *G*(*z*) that resemble images from *X*, while *D* distinguishes between the real samples *x* and the generated samples *G*(*z*). Equations (6) and (7) are maximized by *D* and *G*, respectively, leading to adversarial learning.
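The two objectives can be evaluated numerically for a batch of discriminator outputs; the probability scores below are synthetic stand-ins for *D*'s outputs:

```python
import numpy as np

def v_d(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    """Discriminator objective, Equation (6): E[log D(x)] + E[log(1 - D(G(z)))]."""
    return float(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

def v_g(d_fake: np.ndarray) -> float:
    """Non-saturating generator objective, Equation (7): E[log D(G(z))]."""
    return float(np.log(d_fake).mean())

d_real = np.array([0.9, 0.8, 0.95])   # D's scores on real images
d_fake = np.array([0.1, 0.2, 0.05])   # D's scores on generated images
vd, vg = v_d(d_real, d_fake), v_g(d_fake)
```

A perfect discriminator (scoring real images 1 and fakes 0) attains the maximum V_D = 0, while the generator improves by pushing the fake scores, and hence V_G, upward.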

#### 2.4.2. Cycle-Consistency Loss

For the dataset *X*, each sample *x* should return to the original patch after a processing cycle through the network (*x* → *F*(*x*) → *R*(*F*(*x*)) ∼ *x*). Similarly, for each structure image *y* in the structure set, the network should allow the return of *y* back to the original image (*y* → *R*(*y*) → *F*(*R*(*y*)) ∼ *y*). The expectations are taken over the corresponding data distributions. These constraints lead to the formulation of the cycle-consistency loss, which can be defined using Equation (8).

$$L\_{cyc}(F, R) = E\_{x \sim p\_{data}(x)}\left[\|R(F(x)) - x\|\_1\right] + E\_{y \sim p\_{data}(y)}\left[\|F(R(y)) - y\|\_1\right] \tag{8}$$

A general mathematical expression can be used for the image edge detection problem, where an image in the data layer is represented as a multidimensional matrix *Xn* and the set of edge-feature pixels of the Ground Truth is denoted by *Yn*, as shown in Equation (9).

$$Y\_n = \left\{ y\_j^{(n)}, \; j = 1, \ldots, |X\_n| \right\} \tag{9}$$

In Equation (9), |*Xn*| denotes the number of pixels in image *n*. The probability that a predicted pixel is labeled as an edge by the annotators is represented as follows:

$$P(y\_j = 1 \mid X\_n) \tag{10}$$

The values are obtained as zero and one, where zero means that the edge pixel predicted by the model is not the Ground Truth label, and one shows that the edge pixel predicted by the model is labeled by all annotators.

Each pixel is annotated, and a value of 0.7 was set as the threshold (*μ*) in the loss function based on the training data. A pixel is not used if the value of *P*(*yj* = 1|*Xn*) is greater than 0 and less than *μ*, because such a pixel is considered controversial. The training data can be described using Equation (11).

$$S = \{ (X\_n, Y\_n), \; n = 1, 2, \ldots, N \} \tag{11}$$

where *N* is the number of images used for training. For every pixel, the loss functions are computed for both the top-down architecture and the decoder subnet regarding the Ground Truth pixel label, using Equations (12) and (13).

$$\text{If } P(y\_j = 1 \mid X\_n) > \mu \text{, then:} \tag{12}$$

$$l\_{side}(W, X\_n) = -\beta \sum\_{j \in Y^{+}} \log P\left(y\_j = 1 \mid X\_n, W\right) - (1 - \beta) \sum\_{j \in Y^{-}} \log P\left(y\_j = 0 \mid X\_n, W\right) \tag{13}$$

If *P*(*yj* = 1|*Xn*) < *μ*, then *lside*(*W*, *Xn*) = 0. The class-balancing weights in this loss are given by Equation (14).

$$\beta = \frac{|Y^{-}|}{|Y^{-}| + |Y^{+}|}, \qquad 1 - \beta = \frac{|Y^{+}|}{|Y^{-}| + |Y^{+}|} \tag{14}$$

where *Y*+ represents the edge labels and *Y*− refers to the non-edge labels.
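The class-balancing weights in Equation (14) can be computed directly from the label counts; the counts used here are illustrative:

```python
def class_balance(n_edge: int, n_non_edge: int):
    """Equation (14): beta weights the rare edge class by the non-edge share."""
    total = n_edge + n_non_edge
    beta = n_non_edge / total       # |Y-| / (|Y-| + |Y+|)
    return beta, 1.0 - beta         # 1 - beta = |Y+| / (|Y-| + |Y+|)

beta, inv = class_balance(n_edge=50, n_non_edge=950)
```

Because edge pixels are rare, *β* is close to 1, so the few positive (edge) terms in Equation (13) are up-weighted relative to the abundant negatives.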

It is extremely important to consider the fusion loss while calculating the total loss function for a multi-scale model. This method for calculating the fusion loss assumes that the top-down architecture has the same loss function as the decoder subnet. Further, the value *W* is learned and subsequently updated during training. The improved loss function is represented using Equation (15).

$$l(W) = \sum\_{n=1}^{N} \left( \sum\_{j=1}^{M} l\_{side}^{(j)}(W, X\_n) + l\_{fuse}(W, X\_n) \right) \tag{15}$$

where *N* represents the number of training images and *M* denotes the total number of Holistically-Nested Edge Detection (HED) dsn-layers and decoder-subnet layers (set to 6 for this study). Similarly, *lfuse*(*W*, *Xn*) represents the loss function associated with the fusion layer.

#### 2.4.3. Model Parameters

The network was built on top of existing implementations (i.e., the Fully Convolutional Network (FCN), Deeply Supervised Network (DSN), Holistically-Nested Edge Detection (HED), and SegNet) and was trained using the Caffe library. Stochastic Gradient Descent (SGD)-based optimization was used for the model. The parameters considered for this study and the associated tuned values are presented in Table 2.

**Table 2.** Model parameters tuned for the network.


In the current study, we utilized batch normalization, side-output layers, and loss-function elaboration in our network, principally to distinguish the four levels of corrosion. For our model, the learning rate, loss weight of each side-output, loss weight of the fusion layer, momentum, and weight decay were set to 1 × 10<sup>−4</sup>, 1.0, 1.0, 0.9, and 2 × 10<sup>−4</sup>, respectively (Table 2). The size of the images was scaled according to the size of the input images. The in-depth analysis of local images was performed using image segmentation.

Moreover, variable mini-batch sizes can be used depending on GPU capacity. For the current study, a mini-batch size of 1 was used to reduce training time. The learning rate starts at 0.0001 and is later adjusted based on momentum and weight decay; this optimizes training and helps avoid getting stuck in local minima. The described architecture improves convergence and, subsequently, accuracy. It also eliminates the need for pre-trained models for network training.
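The SGD update using the momentum and weight-decay values above can be sketched as follows; this is a generic momentum/L2-decay update rule for illustration, not the Caffe internals:

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=1e-4, momentum=0.9, weight_decay=2e-4):
    """One SGD step with momentum and L2 weight decay (values from Table 2)."""
    g = grad + weight_decay * w                 # decay adds 2e-4 * w to the gradient
    velocity = momentum * velocity - lr * g     # momentum accumulates past updates
    return w + velocity, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
grad = np.array([0.5, 0.5])
w, v = sgd_step(w, grad, v)
```

The momentum term smooths successive updates, which is one reason such training can escape shallow local minima that plain gradient descent would settle into.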

#### **3. Model Development and Training**

This section explains the development and implementation of the model developed in this study.

#### *3.1. Model Training*

Various tools, functions, and criteria, such as a loss function, an optimizer (Stochastic Gradient Descent (SGD)), a device (CPU/GPU), a learning rate scheduler, an epoch counter (the number from which to initiate training), and training and validation data loaders, were used to train the U-Net architecture. Training the U-Net consists of iterating over the training data loader and sending the batches through the network in training mode. Training the U-Net architecture produces outputs in three forms: accumulated training loss, validation loss, and learning rate. The learning rate range test results on our dataset (for both the training and validation loss) were analyzed using matplotlib. The data used in the current study do not display overfitting or bias. Since we use both training and test sets, we adopt a better, generalized strategy that can also be applied to unseen data to give better performance and accuracy. For both the training and test sets, a comparable performance was achieved after 50 iterations (Figure 8).

**Figure 8.** The plots for iterations versus accuracy for the training and validation data sets.

#### *3.2. Evaluation Metrics*

For the evaluation of semantic segmentation, three common metrics were used in the study: we calculated the Global Accuracy (GC), the Class Average Accuracy (CAC), and the mean Intersection over Union (IoU) over all classes. Additionally, three other metrics, namely precision (P, Equation (19)), recall (R, Equation (20)), and F-score (F, Equation (21)), were used for the evaluation. The GC estimates the percentage of correctly predicted pixels and is calculated using Equation (16). Here, *nii* denotes the number of pixels of class *i* correctly predicted as class *i*, and *ti* the total number of pixels of class *i*.

$$\text{GC} = \sum\_{i} n\_{ii} \Big/ \sum\_{i} t\_i \tag{16}$$

The CAC estimates the predictive accuracy over all the classes and is presented by Equation (17).

$$\text{CAC} = \left(\frac{1}{n\_{cls}}\right) \sum\_{i} n\_{ii} / t\_i \tag{17}$$

The mean IoU over all classes was calculated using Equation (18), with *nii*, *nji*, and *ti* defined as above.

$$\text{IoU} = \left(\frac{1}{n\_{cls}}\right) \sum\_{i} n\_{ii} \Big/ \left( t\_i + \sum\_{j} n\_{ji} - n\_{ii} \right) \tag{18}$$
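Equations (16)-(18) can all be computed from one confusion matrix (rows: true class, columns: predicted class); the matrix below is illustrative:

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """Global accuracy, class-average accuracy, and mean IoU (Equations 16-18).
    conf[i, j] = number of pixels of true class i predicted as class j."""
    n_ii = np.diag(conf).astype(float)
    t_i = conf.sum(axis=1).astype(float)        # pixels of each true class
    pred_i = conf.sum(axis=0).astype(float)     # pixels predicted as each class
    gc = n_ii.sum() / conf.sum()                # Equation (16)
    cac = (n_ii / t_i).mean()                   # Equation (17)
    iou = (n_ii / (t_i + pred_i - n_ii)).mean() # Equation (18)
    return gc, cac, iou

conf = np.array([[50, 10],
                 [ 5, 35]])
gc, cac, iou = segmentation_metrics(conf)
```

Note that mean IoU is the strictest of the three, since every false positive and false negative enlarges the union in its denominator.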

Further, P, R, and F were also calculated to evaluate the semantic segmentation using Equations (19)–(21).

$$P = \frac{\text{no. of true positives}}{\text{no. of true positives} + \text{no. of false positives}} \tag{19}$$

$$R = \frac{\text{no. of true positives}}{\text{no. of true positives} + \text{no. of false negatives}} \tag{20}$$

$$F = \frac{2PR}{P + R} \tag{21}$$

Since the P and R metrics do not consider the true negative rate, the Receiver Operating Characteristic (ROC) curve was also employed for the evaluation. The True Positive Rate (TPR) and False Positive Rate (FPR) are calculated using Equations (22) and (23), respectively, and the Area Under the ROC Curve (AUC) is obtained from the resulting curve.

$$\text{TPR} = \frac{\text{no. of true positives}}{\text{no. of true positives} + \text{no. of false negatives}} \tag{22}$$

$$\text{FPR} = \frac{\text{no. of false positives}}{\text{no. of false positives} + \text{no. of true negatives}} \tag{23}$$
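Equations (19)-(23) reduce to a few ratios over the confusion counts; the counts below are illustrative:

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, F-score, and FPR (Equations 19-23)."""
    p = tp / (tp + fp)          # precision, Equation (19)
    r = tp / (tp + fn)          # recall; identical to TPR, Equations (20)/(22)
    f = 2 * p * r / (p + r)     # F-score, Equation (21)
    fpr = fp / (fp + tn)        # false positive rate, Equation (23)
    return p, r, f, fpr

p, r, f, fpr = binary_metrics(tp=80, fp=20, fn=10, tn=90)
```

Recall and TPR are the same quantity, which is why the ROC analysis adds only the FPR (and thus the true negatives) to the picture.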

#### *3.3. Training and Test Accuracy*

To assess the accuracy on the training and test sets, the mean average precision (mAP) and IoU parameters were used in the current study. The P values were plotted on the *x*-axis and the R values on the *y*-axis, respectively. Precision (P) quantifies correct positive predictions among all positive predictions, whereas recall (R) quantifies correct positive predictions among all positive samples in the dataset. These parameters were used for the evaluation of image segmentation. Here, the IoU metric quantifies the percentage overlap between a target mask and the predicted output; in other words, it measures the number of overlapping or common pixels between a target mask and the predicted output. For the selected data, the value of IoU\_min was increased from 0.1 to 0.9 while calculating the error at each point, as shown in Figure 9.

**Figure 9.** The precision-recall curves for the data set.

It was observed that when IoU\_min was equal to 0.1, the value of mAP was at its maximum (0.77). However, as IoU\_min increased from 0.1 to 0.9, an observable decrease in mAP was seen for our dataset (Figure 9). The threshold must not fluctuate; otherwise, the average accuracy will be compromised. In the current study, we computed the P-R curves by changing the prediction confidence threshold, repeating the process for different IoU thresholds used to establish a match. Since the computation of a single P and R score at a specified IoU threshold did not adequately describe the behavior of our model's full P-R curve, we used the average precision to integrate the area under the P-R curve.

#### *3.4. Evaluation of Network Performance*

The network for corrosion detection was trained using six approaches, namely, (1) DeepCorrosion-Basic, (2) DeepCorrosion-BN, (3) DeepCorrosion-CRF, (4) DeepCorrosion-GF, (5) DeepCorrosion-CRF-GF, and (6) DeepCorrosion-Aug. The HED and loss-function architecture were used in DeepCorrosion-Basic, which was trained on a set of 3000 original images. DeepCorrosion-BN is a modified version of DeepCorrosion-Basic in which batch normalization layers are added before every activation operation. The architectures of DeepCorrosion-BN and DeepCorrosion-CRF are similar, differing in the addition of the CRF after the network. The same holds for DeepCorrosion-GF, a DeepCorrosion-BN-based version with the addition of a guided filtering module. The DeepCorrosion-CRF-GF approach is a linear combination of DeepCorrosion-CRF and DeepCorrosion-GF and is described using Equation (24).

$$P = \beta P\_{CRF} + (1 - \beta) P\_{GF} \tag{24}$$

where P is the prediction map and β is the balancing weight with a value of 0.5. The precision-recall curves generated by the threshold segmentation method used in the current study are shown in Figure 10 below.
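The linear combination in Equation (24) is a simple per-pixel blend of the two prediction maps; the maps below are illustrative:

```python
import numpy as np

def combine_predictions(p_crf: np.ndarray, p_gf: np.ndarray, beta: float = 0.5):
    """Equation (24): blend the CRF and guided-filter prediction maps."""
    return beta * p_crf + (1.0 - beta) * p_gf

p_crf = np.array([[0.2, 0.8], [0.6, 0.4]])
p_gf = np.array([[0.4, 0.6], [0.2, 0.8]])
p = combine_predictions(p_crf, p_gf)   # elementwise average for beta = 0.5
```

With β = 0.5 the blend is an elementwise average, giving neither refinement branch priority over the other.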

We used a PSPNet-based multi-focus method for image fusion. Moreover, the DeepLab method was used for semantic image segmentation using deep CNNs, atrous convolution, and fully connected CRFs. Herein, the Pyramid Scene Parsing Network (PSPNet) is used to extract the focused region of an image, while DeepLab is utilized to optimize and refine the segmentation map. We used both a baseline and an extended architecture for the current study: the extended architecture adds data augmentation, the fusion method, and learning rate decay, whereas the baseline method does not apply data augmentation. The results of both the baseline and extended methods are presented in Figure 10.

The P-R curves obtained for both methods indicate that the extended method performs better, as indicated by its F-score of 0.833 compared to the baseline method's F-score of 0.806 (Figure 10).

**Figure 10.** The precision-recall curves generated by the threshold segmentation.

#### **4. Results and Discussions**

Early detection and concurrent maintenance form an integral part of the structural health monitoring systems used for civil infrastructure. If left unaddressed, damage to infrastructure can lead to serious economic losses and the loss of human lives [4]. Therefore, designing cutting-edge technologies to monitor the health of infrastructure such as bridges is of extreme importance for ensuring the health of civil infrastructure, protecting human lives, and reducing financial losses [17]. However, the traditional manual or human-based visual inspection methods conventionally used to monitor the structural health of civil infrastructure have several limitations [46]. To overcome these limitations, it is necessary to use advanced computer vision-based and artificial intelligence (AI) techniques, such as artificial neural networks (ANNs) and convolutional neural networks (CNNs), to assess the health of civil infrastructure.

Therefore, in the current study, we proposed a modified deep hierarchical CNN architecture, based on 16 convolution layers and a cycle generative adversarial network (CycleGAN), to predict pixel-wise segmentation in an end-to-end manner. A set of 1300 images was collected from two target locations, namely the Bolte Bridge and sky rail areas in Victoria (Melbourne), using the DJI-MJ200, to train our deep learning model. To detect corrosion on civil infrastructure such as bridges, an in-depth analysis of local images was conducted using image segmentation, performed by representing the subject as binary images. The total image set was divided into a training set and a test set. Representative images collected from the target locations, along with the corresponding segmentations used in the current study, are presented in Figure 11.

**Figure 11.** (**a**–**z1**) Results of several samples with corrosion and water straining.

The segmentation was performed by representing subjects as binary images. A pixel-wise segmentation map was used for each image, allowing exact coverage of the corrosion regions. The images were set to a pixel size of 544 × 384. The corrosion images were chosen from a wide range of scales and scenes to achieve a universal representation of corrosion. Furthermore, the images included in the dataset were divided into four corrosion levels: no corrosion (93.25%), low-level corrosion (4.23%), medium-level corrosion (1.32%), and high-level corrosion (1.20%), as previously presented in Table 1. Thus, about 95.16% non-corrosion pixels and 4.84% corrosion pixels were included in the training set.
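The severe class imbalance reported above (over 93% non-corrosion pixels) is typically countered by weighting the segmentation loss by inverse class frequency. The following sketch is illustrative only; the label map is synthetic and the weighting scheme is an assumption, not the paper's method:

```python
import numpy as np

# Hypothetical label map at the paper's 544 x 384 resolution:
# 0 = none, 1 = low, 2 = medium, 3 = high corrosion (Table 1 fractions)
labels = np.random.default_rng(0).choice(
    4, size=(384, 544), p=[0.9325, 0.0423, 0.0132, 0.0120])

counts = np.bincount(labels.ravel(), minlength=4)
fractions = counts / counts.sum()
# Inverse-frequency weights, normalised, to counter class imbalance in the loss
weights = 1.0 / np.maximum(fractions, 1e-8)
weights /= weights.sum()
print(fractions.round(3), weights.round(3))
```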

In comparison, 94.01% non-corrosion pixels and 5.99% corrosion pixels were included in the test set (Table 1). The statistics for the annotation of the major textures and scene distribution are shown in Figure 11. Figure 11a,b represent the input and output water straining images of the case study projects, whereas Figure 11d,f,h represent the water straining input images used to obtain the output images in Figure 11e,g,i, respectively. The input and output images for corrosion are presented in Figure 11j–m, where the left panel corresponds to the input and the right panel to the output images. The corrosion images were chosen to achieve a universal representation of corrosion, as shown previously in the data distribution in Figure 4. Figure 11a–m depicts water straining, and Figure 11c shows the suppression of the background image to extract water straining. Figure 11n represents water straining, and Figure 11o depicts corrosion severity. There are many different shades of grey, but from a practical standpoint, a grey spectrum can be divided into a limited number of levels, i.e., grey levels (Figure 11p–z1).

The colored image is converted to grayscale with predetermined grey levels to build the matrix. For example, Figure 11p shows a grey-scale patch, and Figure 11r represents its corresponding grey levels. Figure 11r,u,x,z1 represent the detected corrosion. Despite other structures in the images, the prediction gives a high score to the areas where positive classes are present. For the proposed CNN framework, the total corrosion pixels included in the training and test sets were further distributed into significant and weak corrosion pixels, respectively (Figure 11). This classification was performed to distinguish corrosion pixels based on pixel width. For example, corrosion with a pixel width of 0 to 5 was categorized as corrosion and water straining.
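The grey-level quantization described above can be sketched as follows; the choice of 8 levels and the standard luminance coefficients are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def to_grey_levels(rgb, levels=8):
    """Convert an RGB image to grayscale, then quantise to `levels` grey levels."""
    # ITU-R BT.601 luminance weighting (an assumed, common choice)
    grey = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # Map [0, 255] onto integer levels 0..levels-1
    return np.clip((grey / 256.0 * levels).astype(int), 0, levels - 1)

# Synthetic image at the paper's 544 x 384 resolution
img = np.random.default_rng(1).integers(0, 256, size=(384, 544, 3))
q = to_grey_levels(img)
print(q.min(), q.max())
```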

Similarly, corrosion with a pixel width above 5 depicts object features. Moreover, a data augmentation approach based on random image cropping and patching was used for the proposed deep CNN architecture for label generation and corrosion detection during training. The frequency of corrosion pixels was predicted, and related labels or bounding boxes were located using data distribution and spatial location.
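A random image cropping and patching step of the kind described above can be sketched as follows (an illustrative implementation in the spirit of RICAP; in practice the segmentation labels would be patched identically, which this sketch omits):

```python
import numpy as np

def random_crop_patch(images, out_h=384, out_w=544, rng=None):
    """RICAP-style augmentation: patch random crops of four images onto one canvas."""
    rng = rng or np.random.default_rng()
    h = int(rng.integers(1, out_h))   # split point along height
    w = int(rng.integers(1, out_w))   # split point along width
    sizes = [(h, w), (h, out_w - w), (out_h - h, w), (out_h - h, out_w - w)]
    anchors = [(0, 0), (0, w), (h, 0), (h, w)]
    out = np.zeros((out_h, out_w, 3), dtype=images[0].dtype)
    for img, (ph, pw), (y, x) in zip(images, sizes, anchors):
        top = int(rng.integers(0, img.shape[0] - ph + 1))
        left = int(rng.integers(0, img.shape[1] - pw + 1))
        out[y:y + ph, x:x + pw] = img[top:top + ph, left:left + pw]
    return out

rng = np.random.default_rng(2)
imgs = [rng.integers(0, 256, size=(384, 544, 3)) for _ in range(4)]
patched = random_crop_patch(imgs, rng=rng)
print(patched.shape)
```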

Batch normalization, side-output layers, and an elaborated loss function were used in our network, principally to distinguish the four corrosion levels in the current study. Batch normalization reduces over-fitting of the network and significantly boosts performance. In addition, both conditional random fields and faster/efficient guided image filtering methods can be implemented to refine the dense regions. In the current study, model parameters such as the learning rate, loss weight of each side-output, loss weight of the final fused layer, momentum, and decay were considered, as shown in Table 2. A mini-batch size of 1 was used to reduce the time required for training.
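The decay of the learning rate mentioned above can follow, for example, an inverse-time schedule; the values below are illustrative only, and the actual settings are those listed in Table 2:

```python
def decayed_lr(base_lr, step, decay):
    """Inverse-time learning-rate decay, a common SGD convention (an assumed schedule)."""
    return base_lr / (1.0 + decay * step)

# Illustrative base learning rate and decay factor, not the paper's values
for step in (0, 1000, 10000):
    print(step, decayed_lr(1e-6, step, decay=5e-4))
```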

We proposed two models in this study: the baseline and extended architectures (Figure 10). No data augmentation is applied in the baseline method, whereas the extended architecture adds data augmentation, a fusion method, and decay of learning rates. The performance of the two proposed methods was also compared against three further methods: PSPNet, DeepLab, and SegNet. The extended method displayed superior performance compared to all other models, as shown by the performance evaluation metrics presented in Table 3. The performance of the models was assessed using parameters such as GC, CAC, mean IOU, P, R, and F-score.
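The evaluation metrics GC (global accuracy), CAC (class average accuracy), and mean IOU can all be derived from a pixel-level confusion matrix; the following minimal sketch uses a toy two-class matrix (numbers illustrative, not from Table 3):

```python
import numpy as np

def segmentation_metrics(conf):
    """Global accuracy, class-average accuracy, and mean IoU from a KxK
    confusion matrix (rows = ground truth, cols = prediction)."""
    tp = np.diag(conf).astype(float)
    gt = conf.sum(axis=1).astype(float)     # pixels per true class
    pred = conf.sum(axis=0).astype(float)   # pixels per predicted class
    gc = tp.sum() / conf.sum()
    cac = np.mean(tp / np.maximum(gt, 1))
    miou = np.mean(tp / np.maximum(gt + pred - tp, 1))
    return gc, cac, miou

# Toy corrosion-vs-background confusion matrix, purely illustrative
conf = np.array([[9500, 120],
                 [  80, 300]])
gc, cac, miou = segmentation_metrics(conf)
print(round(gc, 3), round(cac, 3), round(miou, 3))
```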


**Table 3.** The statistics for the used threshold segmentation methods.

In addition, cross-validation was performed on each architecture, and the prediction of each method was assessed using the described evaluation metrics. The statistical parameters for all five models, including the proposed methods (extended and baseline), PSPNet, DeepLab, and SegNet, are presented in Table 3. The SegNet method is sensitive to noise and incorrectly extracts regions with more shadows and stains, and its lack of contextual information for capturing different types of cracks may lead to errors in the extracted crack width. While PSPNet effectively eliminates the influence of noise, it loses more detailed information. DeepLab performs better at extracting light cracks and can eliminate most noise interference, but non-existent cracks are produced at large dilation rates, and its single convolution kernel size may cause a loss of crack information.

Similar to our study, various authors have described the importance of CNN-based deep learning methods for damage detection of civil infrastructures, with variable performance statistics, and the utility of CNNs for corrosion detection has been extensively reported in the literature. Cha et al. [18] adopted Faster R-CNN for the detection of multiple damage types on civil infrastructures. The authors used a total of 2366 images with 500 × 375 pixels, and the evaluation of their model revealed average precision values of about 90.6%, 83.4%, 82.1%, 98.1%, and 84.7% for the concrete crack, medium steel corrosion, high steel corrosion, bolt corrosion, and steel delamination damage types, respectively. For that study, the CNN-based method showed better accuracy.

Moreover, Atha and Jahanshahi reported using the CNN-based networks ZF Net and VGG16, compared with three other proposed CNNs (i.e., Corrosion7, Corrosion5, and VGG15), for the detection of corrosion [4]. Overall, for these models, the F-scores were between 91.96% and 98.64%, and the P and R values were 90–98.31% and 65.64–98.64%, respectively [4]. The fine-tuned VGG16 network outperformed all other methods, showing F-score, R, and P values of 98.47%, 98.31%, and 98.64%, respectively [4]. Additionally, Forkan and coworkers reported the development of the CorrDetector framework for corrosion detection and feature extraction, based on a novel ensemble of deep learning CNNs. Several methods were evaluated in the study, including a simple CNN, the *λC* model, Corrosion 5, Corrosion 7, Inception V3, MobileNet, ResNet50, and VGG16 [29]. Overall, their proposed *λC* model displayed the best performance for segment-level and image-level predictions compared to all other methods, with the highest accuracy (92.50%), precision (96.01%), recall (95.91%), and F-score (98%) for image-level predictions [29].

However, considering the performance of the five models in the current study (i.e., extended, baseline, PSPNet, DeepLab, and SegNet), it is evident that the extended method outperformed all other methods, with GC, CAC, mean IOU, P, R, and F-score values of 0.98, 0.93, 0.87, 0.84, 0.81, and 0.83, respectively, as shown in Table 3. Notably, our extended and baseline models display high global accuracies of about 0.989 and 0.983, respectively. Considering the overall evaluation parameters, our proposed models perform within a comparable range of previously reported studies [4,29]. The lowest performance in our study was obtained with the SegNet architecture, as indicated by its evaluation parameter values, while the PSPNet and DeepLab architectures performed better than SegNet. Most prominently, the baseline method also outperformed the PSPNet, DeepLab, and SegNet architectures. Comparing the proposed baseline and extended methods, the performance of the extended method was slightly better. Notably, the baseline generalized less well because it uses a weighted average instead of simple fusion averaging, whereas the extended architecture performs CRF-based fusion. Therefore, the network architecture proposed in the current study displays a considerable improvement in overall performance compared to the other methods included in this study (i.e., PSPNet, DeepLab, and SegNet) and methods reported in the literature. The variations observed across the results of the methods are mainly due to intrinsic differences between the corrosion detection and edge detection methods. We can therefore conclude that the extended model is well suited for real-time processing of corrosion images, while the lowest performance for corrosion assessment was obtained using SegNet.

Generally, the major challenge in damage identification is characterizing the unknown relationships between measurements and damage patterns for the subsequent selection of damage indicators. This requires dense sensor arrays and complex data-processing algorithms, producing computational complexity. The CNN-based architecture proposed in the current study is highly efficient at detecting corrosion features with greater accuracy due to pixel-wise segmentation, and it helps overcome processing overhead and computational intensity. Additionally, the CNN-based architecture allows for the exploration and learning of abstract features and complex classifier boundaries that distinguish various attributes of the problem under study. Moreover, the proposed architecture is less prone to noise in images and thus allows for the efficient detection and localization of corrosion. Using CycleGAN and U-Net together in the CNN-based architecture provides the advantage of shifting the target domain distribution to fit the source distribution. This maintains the segmentation performance due to the GAN's ability to enforce segmentation-specific similarity of the U-Net features.

Overall, the proposed architecture will assist in improving convergence and accuracy and eliminate the requirement of a pre-trained model for network training. Additionally, the proposed U-Net-CycleGAN-based CNN architecture allows for efficient adaptation of images across various platforms, leading to effective generalization to other platforms in real-world scenarios without requiring new annotation. The approach presented in the current study will ultimately assist governments and municipalities by providing a scalable, real-time, robust, and efficient technique for the cost-effective inspection and detection of structural damage for maintenance purposes. We describe an automated conceptual framework for corrosion detection using modern machine learning techniques based on deep neural networks that can easily be scaled to any edge device, e.g., a Jetson Nano or Coral Dev Board.

#### **5. Conclusions**

The manual inspection of damage incurred due to corrosion on various surfaces is an extremely challenging and time-consuming task, and such methods display shortcomings in terms of objectivity and reliability. To address these issues, automatic detection of corrosion from images of civil infrastructure is used to reduce the potential loss to degraded infrastructures by identifying possible cracks and corroded regions. Generally, a broad range of AI and machine learning/deep learning methods have been effectively applied to detect corrosion on bridges or sky rails. Among the reported deep learning methods, CNNs learn image features automatically, without hand-crafted feature extraction, which helps reduce the effects of noise. Owing to these facts, we proposed a deep hierarchical CNN architecture based on 16 convolution layers in combination with CycleGAN to predict pixel-wise segmentation in an end-to-end manner for the detection of corrosion on the Bolte Bridge and sky rail projects in Victoria, Melbourne. The proposed architecture is based on the extended FCN and the DSN, where the DSN provides direct and highly integrated supervision of features at each convolutional stage.

Moreover, the sophisticated design of the proposed network model allows for the effective learning and aggregation of both multi-scale and multilevel features during the training procedure. Therefore, our proposed network differs from standard architectures that rely solely on the last convolutional layer. We have also used guided filtering and CRF methods to refine the overall predictions. The effectiveness of the proposed architecture is evident in the results, which show that the deep hierarchical CNN architecture based on 16 convolution layers produced improved performance. The extended method outperformed all other methods, including the baseline, PSPNet, DeepLab, and SegNet, as indicated by model performance metrics such as global accuracy (0.989), class average accuracy (0.931), mean IOU (0.878), precision (0.849), recall (0.818), and F-score (0.833).

Overall, superior performance was obtained with our proposed extended method; however, the approach has certain limitations due to the inclusion of CycleGAN. Although CycleGAN enabled exceptional performance, several challenges are associated with its use, such as training instability, mode collapse, and non-convergence. These mainly arise from ineffective architectural design, an inappropriate objective function, or the optimization algorithm. Another limitation of CycleGAN is its one-to-one mapping: the model associates a single input image with a single output image, which is not suitable for describing relationships across complex domains.

The focus of the study is limited to the methodology (CycleGAN), corrosion, external environmental factors, and the severity of corrosion. Future studies can test the proposed method for corrosion detection on different shapes. Additionally, the current study does not consider manipulating photometric values and HSV for data augmentation; future researchers can use these to achieve better accuracy. Detection under multi-angle and multi-light conditions can also be taken into consideration, and future studies can compare the performance of CycleGANs with other methods.

**Author Contributions:** Conceptualization, H.S.M. and F.U.; methodology, H.S.M., F.U. and D.S.; software, H.S.M. and F.U.; validation, H.S.M., F.U., D.S., S.Q. and J.A.; formal analysis, H.S.M. and F.U.; investigation, H.S.M. and F.U.; resources, F.U.; data curation, H.S.M., F.U., D.S., A.H., S.Q. and J.A.; writing—original draft preparation, H.S.M. and F.U.; writing—review and editing, H.S.M., F.U., D.S., A.H., S.Q. and J.A.; visualization, H.S.M. and F.U.; supervision, F.U.; project administration, F.U. and A.H.; funding acquisition, F.U. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data are available with the first author and can be shared with anyone upon reasonable request.

**Acknowledgments:** This research is conducted under the project "2021 HES Faculty Collaboration Grant" awarded to Fahim Ullah at the University of Southern Queensland.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**


#### **References**


### *Article* **Plastic Deformation Analysis of a New Mega-Subcontrolled Structural System (MSCSS) Subjected to Seismic Excitation**

**Muhammad Moman Shahzad 1,2,\*, Xun'an Zhang 1, Xinwei Wang 1, Mustapha Abdulhadi 1, Yanjie Xiao <sup>1</sup> and Buqiao Fan <sup>1</sup>**


**Abstract:** This paper examines the plastic deformation and seismic structural response of a mega-subcontrolled structural system (MSCSS) subjected to strong seismic excitations. Different MSCSS configurations were modeled with nonlinear finite elements, and nonlinear dynamic analyses were performed to examine their behaviors. This paper introduces a novel and optimized MSCSS configuration, configuration 30, which demonstrates remarkable results for the reduction of plastic strain. Utilizing a steel plate shear wall (SPSW) enhances the seismic structural integrity of this system. This configuration improved the mean equivalent plastic strain of columns and beams by 51% and 80%, respectively. In addition, a comparison between unstiffened and ring-shaped infill panels of SPSWs demonstrates that ring-shaped infill panels offer greater lateral stiffness and energy dissipation, with a 44% reduction in maximum equivalent plastic strain. Compared to configuration 1, configuration 30 exhibited the most controlled structural response: the minimum residual story drift improvement was 70% in the first, second, and third substructures, and the maximum coefficient of variation (COV) was 16% and 32% in the acceleration and displacement responses, respectively.

**Keywords:** new mega-subcontrolled structural system (MSCSS); steel plate shear wall (SPSW); infill panel; plastic deformation; nonlinear dynamic analysis; energy dissipation

### **1. Introduction**

Due to urbanization and limited land availability in metropolises, there has been an escalation in the demand for high-rise buildings. According to the Council on Tall Buildings and Urban Habitat (CTBUH) in 2019, 368 high-rise buildings have been constructed that are more than 100 m tall and, up to now, 5129 high-rise buildings have been constructed that are more than 150 m tall [1]. In recent years, buildings have become slender and taller, which is only possible due to advancements in construction techniques and ultra-strength materials [2]. Due to their slenderness and light weight, such tall buildings are prone to random vibrations and excitations. As high-rise buildings are more prone to lateral loads, structures can be made safe by enhancing energy dissipation and lateral stiffness while optimizing weight. Wind loads depend on structural height, while seismic loads depend on structural weight, soil-structure interaction, earthquake duration, magnitude, and distance to the epicenter. Random excitations from lateral loads reduce the load-bearing capacity of structural foundations, shortening their lifespan. The structural fundamental period depends on soil-structure interactions and soil properties, and under prolonged dynamic loading, the fundamental period may change [3–5].

To control the high-rise building structural response against severe seismic and wind loading, mega-substructures (MSSs) have been used worldwide as a typical structural design. The mega-structural frame and multistory substructures are key components of MSSs.

**Citation:** Shahzad, M.M.; Zhang, X.; Wang, X.; Abdulhadi, M.; Xiao, Y.; Fan, B. Plastic Deformation Analysis of a New Mega-Subcontrolled Structural System (MSCSS) Subjected to Seismic Excitation. *Buildings* **2022**, *12*, 987. https://doi.org/10.3390/ buildings12070987

Academic Editor: Fahim Ullah

Received: 10 June 2022 Accepted: 5 July 2022 Published: 11 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In the MSS configuration, the mega-structural frame has a rigid connection with multistory substructures. The main structural strength component in MSSs is the mega-structural frame, and multistory substructures serve as the occupants' residency. Feng and Mita introduced a passive mega-subcontrolled structure (PMSCS) [6]. In their design, fixed-end conditions are used between the megastructure and substructure. In the PMSCS, the substructures act as a tuned mass damper (TMD). Furthermore, there is no need to add an additional 1% mass over the structure, as the mass ratio between the megastructure and substructure is 100% compared to the 1% of a traditional TMD. Chai and Feng improved Feng and Mita's work on dynamic structural response to wind. In their model, a cantilever beam represented the megastructure and a concentrated mass represented the substructure [7]. Later, Lan proposed a multidegree freedom system for his analytical model by replacing each substructure with a concentrated mass [8].

In the past decade, Zhang established a new configuration for PMSCSs, named the mega-subcontrolled structure system (MSCSS), which showed dominant shearing and bending in their substructures and mega-frames, respectively. The relative stiffness ratio (RD) between the substructure's shear stiffness and the mega-bending frame's stiffness affects structural vibration control. Vibration control is effective if RD is less than 0.477, but if it increases, first modal vibrations will not be damped effectively. This newly proposed PMSCS performs better in terms of energy dissipation and self-control in structural vibrations caused by seismic excitations. Later, the effects of friction and magnetorheological dampers used in MSCSS substructures were studied [9–11]. Recent studies [12,13] have examined a variety of controlling techniques and contrasted them with the use of an MSS to reach their conclusions. The results showed that the MSCSS has a better structurally controlled response under seismic activity; i.e., it experiences less structural accelerations and displacements than previous models. A further application of friction dampers was made in an MSCSS substructure to improve the substructure's responses to wind and seismic excitations [14,15]. Recent research on an MSCSS viscous damper control mechanism involved optimizing damper locations and parameters. Rubber bearings were installed at the tops of additional columns between floor mega-beams and substructural components to reduce earthquake response. In a high-intensity earthquake, the MSCSS sustains moderate damage while the MSS collapses [16,17]. The MSCSS has a 30% lower failure probability than the MSS, and the megastructure has a 50% lower failure probability than its substructure. MSCSS showed a response control rate of over 10% and a base shear control rate of 20% during long-period ground motions [18,19]. 
In the MSCSS, the controlling effectiveness of structural acceleration and displacement at the top of the mega-frame with various arrangements and numbers of substructures ranged from 42% to 70% [20], whereas the controlling effectiveness of substructural acceleration and displacement ranged from 20% to 65%. Using mid-story isolation and inverted V-bracing (chevron), the MSCSS has demonstrated remarkable stabilizing effects against earthquake-induced structural vibration and shows significant improvements, particularly under a service load, with an average structural acceleration response reduction of 49.7% under the El Centro earthquake. The presence of chevron bracing increases the structural stiffness, which reduces the excessive structural displacement caused by a shift in the structural period [21].

Despite recent developments in structural response control of MSCSS that show significantly improved results, the investigation of plastic deformation in a structure under intense seismic excitations still has not been fully explored and therefore requires further attention.

In this paper, the performance of different configurations of MSCSS under intense seismic excitations is investigated. An improved and new configuration of MSCSS is suggested that has a steel plate shear wall (SPSW) as a resisting structural system for lateral loads. The new proposed configuration of the MSCSS shows a remarkable improvement in the structural response under seismic excitations. The results of drift and plastic dissipation are presented in the first half of this study to comparatively calculate and compare the seismic response. In the second half, the results of the time histories for the new and proposed configuration of MSCSS are presented to quantify the effect of an SPSW in the MSCSS.

#### **2. Scope**

Utilizing the performance of various MSCSS configurations under intense seismic excitations, a new and improved configuration is proposed. The structural response is studied in relation to various types and configurations of SPSW infills in MSCSS. Due to the high dissipation of hysteretic energy, infill panels exhibit significant post-buckling strength and ductility under tension, according to previous research. Consequently, tension-only yielding behavior is observed in the infill panels, and plastic hinges form at the ends of horizontal boundary elements (HBEs), resulting in adequate seismic performance of SPSW systems [22,23]. HBEs reduce tension fields in unstiffened infill panels by increasing structural stiffness and stability. Vertical boundary elements (VBEs) prevent plastic hinges and yielding in the infill panel. High-rise buildings need more brittle infill panels. Infill panels connected only with HBEs dissipate less energy and carry less load than fully connected infill panels but demonstrate the same lateral deformation resistance [24–26]. The ductility and post-buckling strength of SPSW infills provide seismic stability. SPSWs are commonly deployed over multiple stories, making them a cost-effective way to resist lateral loads in high-rise buildings. As infills are relatively thin compared to conventional concrete shear walls, they decrease the wall's dead load and increase the occupant floor space. SPSWs are easy to install, making them appropriate for new structures as well as for retrofitting existing structures. However, under earthquake loads, they buckle during compression because of their slenderness [27–29].

A new SPSW system was developed during the last decade, which exhibits a unique pattern of rings connected by diagonal links, with circular cut-outs. This ring-shaped steel plate shear wall (RS-SPSW) resists out-of-plane buckling by deforming the rings into an ellipse. Elongation of the ring shrinks the diagonal links during tension, preventing the web from sagging perpendicular to the tension field [30,31]. Low-rise buildings subjected to near-field ground motions showed the best performance from unstiffened SPSW systems, whereas RS-SPSW and honeycomb-SPSW systems demonstrated good resistance to induced structural acceleration and dampening effects [26]. Prior to suggesting a new and improved configuration of MSCSS that addresses common problems faced by structural engineers in high-rise building design, in this research, plastic deformation and plastic and viscous dissipation in various MSCSS configurations under seismic loads are studied.

This work was conducted using MSCSSs with various configurations. The MSCSS seismic response was studied using nonlinear time-history analysis (NTHA) to improve structural strength and stiffness while reducing structural weight, resulting in a more cost-effective solution. Different durations of seismic waves were used to investigate the seismic performance of the suggested MSCSS to draw a comprehensive conclusion.

#### **3. Generation of Ground Motion**

To examine the seismic response and performance of different MSCSS configurations, nonlinear dynamic analysis is conducted. For this purpose, 11 renowned seismic time histories are used. Five of the 11 ground motions are near-fault time histories with a distance to the rupture surface (Rrup) of less than 15 km, while the others are far-fault. The number of ground motions is chosen for the nonlinear dynamic procedure (NDP) according to ASCE 7-16. This larger number of motions was chosen to obtain more accurate estimates of the mean structural responses, and it has the benefit of indicating a significant likelihood that the structure will not achieve the 10% target collapse reliability for Risk Category I and II structures if unacceptable responses are found for more than one of the 11 motions [32]. The total duration of these ground motions varies between 20 s and 56 s, and the interval is defined according to Kawashima and Aizawa as the span between the first and last exceedances of ±0.1 g in ground acceleration [33]. Time histories are scaled to a design spectrum with a peak ground acceleration (PGA) of 1 g. The design spectrum is conceived under the guidelines of ASCE/SEI 41-17 [34], with **SXS** and **SX1** being 2.50 and 1, respectively. A long-period transition of 8 s along with 5% damping for soil type D is utilized. The spectral matching method, primarily recommended in 1987–1988, is used for scaling the time histories. For spectral matching, SeismoMatch is used, which is based on the RSPMatch wavelets algorithm introduced in 1992 and later improved in 2006 [35–39]. Table 1 contains details on the selected ground motions, and Figure 1 depicts the design spectrum as well as the spectral matching. Figure 1d compares the time history of the original Nahanni record with the converted one and shows that the characteristics of the ground motions are preserved after spectral matching.
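The interval definition of Kawashima and Aizawa described above corresponds to the bracketed duration between the first and last exceedances of ±0.1 g; a minimal sketch on a synthetic record (illustrative only, not one of the 11 selected motions):

```python
import numpy as np

def bracketed_duration(acc_g, dt, threshold=0.1):
    """Bracketed duration: time between the first and last acceleration
    exceedances of the threshold (in g), per the Kawashima-Aizawa definition."""
    idx = np.flatnonzero(np.abs(acc_g) > threshold)
    if idx.size == 0:
        return 0.0
    return (idx[-1] - idx[0]) * dt

# Synthetic pulse-like 20 s record sampled at 0.01 s, illustrative only
t = np.arange(0, 20, 0.01)
acc = 0.5 * np.exp(-0.3 * (t - 8) ** 2) * np.sin(8 * t)
print(round(bracketed_duration(acc, dt=0.01), 2))
```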


**Figure 1.** Spectral acceleration: (**a**) spectra before matching, (**b**) mean spectral matching, (**c**) spectra after matching, and (**d**) time history of Nahanni.

#### **4. Computation Model**

The analytical models of the MSCSS with mid-story isolation [21] and the newly proposed MSCSS with steel plate shear wall are represented as a simplified three-lumped-mass model based on Kelly's two-lumped-mass model for base isolation structures, which used an equivalent linear model to represent the isolation system's hysteretic behavior [40,41]. Figure 2 depicts an elevation view, as well as plans for various sections in MSCSS with mid-story isolation and a newly proposed MSCSS.

**Figure 2.** Configurations: (**a**) MSCSS with mid-story isolation [21] and (**b**) newly proposed MSCSS. Dimensions are in meters. (Adapted with permission from [21]).

Figure 3 depicts the computational model of the newly proposed MSCSS, which includes a bottom structure, an isolation system, and a superstructure. Except for the isolation system, all structural elements are assumed to remain elastic during seismic excitation.

The dynamic equation of motion of the superstructure (i.e., above the mid-story isolator) of the proposed MSCSS under seismic excitation can be expressed as:

$$M\_{TPS}\ddot{X}\_{\text{T}} + C\_{TPS}\dot{X}\_{\text{T}} + K\_{TPS}X\_{\text{T}} = -\Gamma\_{TPS}\ddot{X}\_{\text{g}} \tag{1}$$

where $\Gamma_{TPS} = \mathrm{diag}[M_{TPS}]$ is the mass vector of the superstructure; $X_T = [X_p, X_s]^T$, with $(n-1) + (n-1)(n_z+1)$ variables, is the lateral deformation vector of the superstructure relative to the mid-story isolator; and $X_p = [X_{p_2}, X_{p_3}, \ldots, X_{p_n}]^T$ with $(n-1)$ variables, $X_s = [X_{s_1}, X_{s_2}, \ldots, X_{s_{(n-1)}}]^T$ with $(n-1)(n_z+1)$ variables, and $X_{s_i} = [X_{i,1}+X_{SW_{i,1}},\ X_{i,2}+X_{SW_{i,2}},\ \ldots,\ X_{i,j}+X_{SW_{i,j}},\ \ldots,\ X_{i,n_z}+X_{SW_{i,n_z}},\ X_{l_i}]^T$ with $(n_z+1)$ variables are the lateral deformation vectors of the mega-structure, the substructures, and the $i$th substructure, respectively. $X_{l_i}$ is the relative lateral deformation of the LRB placed over the additional column, $X_{SW_{i,j}}$ is the relative lateral deformation of the shear-wall infill panel at the $j$th floor of the $i$th substructure, $n$ is the total number of mega-structure stories, and $n_z$ is the total number of stories in each substructure.

• The mass matrix $M_{TPS}$ in Equation (1) can be expressed as:

$$M_{TPS} = \begin{bmatrix} M_P & 0 \\ 0 & M_s \end{bmatrix},\quad M_s = \mathrm{diag}\left[M_{s_1}, M_{s_2}, \ldots, M_{s_{(n-1)}}\right],\quad M_{s_i} = \mathrm{diag}\left[(m_{i,1}+m_{SW_{i,1}}),\ (m_{i,2}+m_{SW_{i,2}}),\ \ldots,\ (m_{i,n_z}+m_{SW_{i,n_z}}),\ m_{l_i}\right] \tag{2}$$

where $M_P$, $M_s$, and $M_{s_i}$ are the diagonal mass matrices of the super mega-structure, the substructures, and the $i$th substructure, respectively. The mass of the shear-wall infill panel at the $j$th floor of the $i$th substructure is denoted $m_{SW_{i,j}}$.

**Figure 3.** Computing model of newly proposed MSCSS [21]: (**a**) whole system, (**b**) superstructure, (**c**) ith Substructure, (**d**) mid-story isolator, (**e**) bottom structure, and (**f**) three-lumped-mass structural model. (Adapted with permission from [21]).

• The damping matrix $C_{TPS}$ of the superstructure in Equation (1) can be expressed as:

$$C_{TPS} = \begin{bmatrix} C_{PA} & C_c \\ C_c^T & C_s \end{bmatrix},\quad C_s = \mathrm{diag}\left[C_{s_1}, C_{s_2}, \ldots, C_{s_{(n-1)}}\right],\quad C_{PA} = C_P + \mathrm{diag}\left[C_{i+1,1} + C_{l_i} + adC_{i,1} + adC_{i,2}\right] \tag{3}$$

where $C_P$ and $C_s$ are the damping matrices of the super mega-structure and the substructures. The $[(n-1)] \times [(n-1)(n_z+1)]$ matrix $C_c$ in Equation (3) is the coupling damping matrix between the super mega-structure and the substructures. $C_{l_i}$ is the damping coefficient of the LRB placed over the additional column of the $i$th substructure, and $adC_i$ is the damping coefficient of the additional column of the $i$th substructure. $C_c$ can be expressed as:

$$\begin{aligned}
C_c[i,(i-1)(n_z+1)+(n_z+1)] &= -C_{l_i} \\
C_c[i,(i-1)(n_z+1)+n_z] &= -adC_{i,1} \\
C_c[i,(i-1)(n_z+1)+(n_z-1)] &= -adC_{i,2} \\
C_c[i,i(n_z+1)+1] &= -C_{i+1,1} \\
C_c(\text{rest}) &= 0
\end{aligned} \tag{4}$$

• The stiffness matrix $K_{TPS}$ of the superstructure in Equation (1) can be expressed as:

$$K_{TPS} = \begin{bmatrix} K_{PA} & K_c \\ K_c^T & K_s \end{bmatrix},\quad K_s = \mathrm{diag}\left[K_{s_1}, K_{s_2}, \ldots, K_{s_{(n-1)}}\right],\quad K_{PA} = K_P + \mathrm{diag}\left[K_{i,1} + K_{l,i-1}\right],\ \text{for } i = 2, 3, \ldots, n \text{ and } K_{n,1} = 0 \tag{5}$$

where $K_P$ and $K_s$ are the stiffness matrices of the super mega-structure and the substructures. The $[(n-1)] \times [(n-1)(n_z+1)]$ matrix $K_c$ in Equation (5) is the coupling stiffness matrix between the super mega-structure and the substructures. It can be expressed as:

$$\begin{aligned}
K_c[i,(i-1)(n_z+1)+(n_z+1)] &= -K_{l_i}, && i = 1, 2, \ldots, (n-1) \\
K_c[i,i(n_z+1)+1] &= -K_{i+1,1}, && i = 1, 2, 3, \ldots, (n-2) \\
K_c(\text{rest}) &= 0
\end{aligned} \tag{6}$$

Appendix A contains details on the assemblies for matrices CTPS and KTPS.

The dynamic equation of motion of the mid-story isolator of the proposed MSCSS under seismic excitation can be expressed as:

$$m\_{\rm lds} \ddot{X}\_{\rm lds} + \mathcal{C}\_{\rm lds} \dot{X}\_{\rm lds} + \mathcal{K}\_{\rm lds} X\_{\rm lds} = -\Gamma\_m \ddot{X}\_{\rm g} - \{B\} \begin{Bmatrix} \ddot{X}\_{P\_1} \\ \ddot{X}\_{\rm lds} \\ \ddot{X}\_{\rm TPS} \end{Bmatrix} \tag{7}$$

where $X_{lds}$ is the lateral displacement of the mid-story isolator relative to the bottom face of the mid-story isolator and $\Gamma_m = I_e M_{TPS} I_e^T + m_{lds}$.

$$\{B\} = I_{ee}\begin{bmatrix} I_e M_{TPS} I_e^T + m_{lds} & 0 & 0 \\ 0 & I_e M_{TPS} I_e^T & 0 \\ 0 & 0 & M_{TPS} \end{bmatrix},\quad \begin{aligned} C_{lds} &= C_l + C_{ld} \\ K_{lds} &= K_l + K_{ld} \\ m_{lds} &= \tfrac{1}{2}m_{p_1} + m_l \end{aligned} \tag{8}$$

where $K_{ld}$ and $C_{ld}$ are the stiffness of the limit spring and the damping coefficient of the limit damper for the top-structure displacement, respectively. $I_e$ is a unit row vector, and $I_e$ and $I_{ee}$ in Equation (8) can be expressed as:

$$\begin{aligned}
I_e &= [1, 1, \ldots, 1]_{(n-1)+(n-1)(n_z+1)} \\
I_{ee} &= [1, 1, I_e] = [1, 1, \ldots, 1]_{(n+1)+(n-1)(n_z+1)}
\end{aligned} \tag{9}$$

The dynamic equation of motion of the bottom structure of the proposed MSCSS under seismic excitation can be expressed as:

$$M_{sub}\ddot{X}_{p_1} + C_{p_1}\dot{X}_{p_1} + K_{p_1}X_{p_1} = -\Gamma_b\ddot{X}_g - \{A\}\begin{Bmatrix} \ddot{X}_{p_1} \\ \ddot{X}_{lds} \\ \ddot{X}_{TPS} \end{Bmatrix} \tag{10}$$

where $X_{p_1}$ is the lateral displacement relative to the ground, $K_{p_1}$ is the bending stiffness, $M_{sub} = \frac{1}{2}m_{p_1}$, $\Gamma_b = I_e M_{TPS} I_e^T + m_{lds} + M_{sub}$, and $\{A\}$ can be expressed as:

$$\{A\} = I_{ee}\begin{bmatrix} I_e M_{TPS} I_e^T + m_{lds} & 0 & 0 \\ 0 & I_e M_{TPS} I_e^T + m_{lds} & 0 \\ 0 & 0 & M_{TPS} \end{bmatrix} \tag{11}$$

The overall dynamic equation of motion for proposed MSCSS under seismic excitation can be expressed as:

$$\begin{aligned}
&\begin{bmatrix} M_{sub} & 0 & 0 \\ 0 & m_{lds} & 0 \\ 0 & 0 & M_{TPS} \end{bmatrix}\begin{Bmatrix} \ddot{X}_{p_1} \\ \ddot{X}_{lds} \\ \ddot{X}_{TPS} \end{Bmatrix} + \begin{bmatrix} \{A\} \\ \{B\} \\ \{0\} \end{bmatrix}\begin{Bmatrix} \ddot{X}_{p_1} \\ \ddot{X}_{lds} \\ \ddot{X}_{TPS} \end{Bmatrix} + \begin{bmatrix} C_{p_1} & 0 & 0 \\ 0 & C_{lds} & 0 \\ 0 & 0 & C_{TPS} \end{bmatrix}\begin{Bmatrix} \dot{X}_{p_1} \\ \dot{X}_{lds} \\ \dot{X}_{TPS} \end{Bmatrix} \\
&\quad + \begin{bmatrix} K_{p_1} & 0 & 0 \\ 0 & K_{lds} & 0 \\ 0 & 0 & K_{TPS} \end{bmatrix}\begin{Bmatrix} X_{p_1} \\ X_{lds} \\ X_{TPS} \end{Bmatrix} = -\begin{Bmatrix} \Gamma_b \\ \Gamma_m \\ \Gamma_{TPS} \end{Bmatrix}\ddot{X}_g
\end{aligned} \tag{12}$$

Because the coordinates $X_{p_1}$, $X_{lds}$, and $X_{TPS}$ in Equation (12) are measured relative to different reference positions, the equation cannot be solved directly in this form. Hence, the coordinates of each mass point are transformed to be relative to the ground and can be expressed as:

$$\begin{aligned}
Y_{l_i} &= X_{l_i} + X_{lds} + X_{p_1} \\
Y_{i,j} &= X_{i,j} + X_{lds} + X_{p_1} \\
Y_{p_i} &= X_{p_i} + X_{lds} + X_{p_1} \\
Y_{lds} &= X_{lds} + X_{p_1} \\
Y_{p_1} &= X_{p_1}
\end{aligned} \tag{13}$$

Hence, Equation (12) can be written as:

$$\begin{aligned}
&\left(\begin{bmatrix} M_{sub} & 0 & 0 \\ 0 & m_{lds} & 0 \\ 0 & 0 & M_{TPS} \end{bmatrix} + \begin{bmatrix} \{A\} \\ \{B\} \\ \{0\} \end{bmatrix}\right) R^{-1}\begin{Bmatrix} \ddot{Y}_{p_1} \\ \ddot{Y}_{lds} \\ \ddot{Y}_{TPS} \end{Bmatrix} + \begin{bmatrix} C_{p_1} & 0 & 0 \\ 0 & C_{lds} & 0 \\ 0 & 0 & C_{TPS} \end{bmatrix} R^{-1}\begin{Bmatrix} \dot{Y}_{p_1} \\ \dot{Y}_{lds} \\ \dot{Y}_{TPS} \end{Bmatrix} \\
&\quad + \begin{bmatrix} K_{p_1} & 0 & 0 \\ 0 & K_{lds} & 0 \\ 0 & 0 & K_{TPS} \end{bmatrix} R^{-1}\begin{Bmatrix} Y_{p_1} \\ Y_{lds} \\ Y_{TPS} \end{Bmatrix} = -\begin{Bmatrix} \Gamma_b \\ \Gamma_m \\ \Gamma_{TPS} \end{Bmatrix}\ddot{X}_g
\end{aligned} \tag{14}$$

Appendix A contains details on matrix R.
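The three-lumped-mass idealization behind Equations (7)-(14) can be illustrated with a much simpler numerical toy model. The sketch below (all masses, stiffnesses, damping, and the excitation are assumed for illustration and are not the paper's values) integrates a bottom-structure/isolation-level/superstructure chain with the explicit central-difference scheme and reports the peak drift between the isolation level and the superstructure mass:

```python
import numpy as np

def peak_super_drift(k_iso, dt=0.001, tmax=10.0):
    """Three-lumped-mass chain (bottom structure, isolation level,
    superstructure) under harmonic ground acceleration, integrated with
    the explicit central-difference scheme. Returns the peak relative
    displacement between the isolation level and the superstructure."""
    m = np.array([4.0e5, 1.0e5, 6.0e5])           # lumped masses (kg)
    k = np.array([8.0e8, k_iso, 4.0e8])           # story stiffnesses (N/m)
    # tridiagonal stiffness matrix of the shear chain
    K = np.diag(k + np.append(k[1:], 0.0)) - np.diag(k[1:], 1) - np.diag(k[1:], -1)
    C = 0.05 * K                                   # stiffness-proportional damping
    n = int(tmax / dt)
    ag = 2.0 * np.sin(2.0 * np.pi * np.arange(n) * dt)  # 1 Hz ground motion (m/s^2)
    u_prev = np.zeros(3)
    u = np.zeros(3)
    drift = 0.0
    for i in range(n):
        # explicit update: M u'' = p, with a backward-difference velocity
        # estimate for the damping force
        p = -m * ag[i] - K @ u - C @ ((u - u_prev) / dt)
        u_next = 2.0 * u - u_prev + dt**2 * (p / m)
        drift = max(drift, abs(u_next[2] - u_next[1]))
        u_prev, u = u, u_next
    return drift
```

With a soft isolation stiffness the superstructure drift drops well below the value obtained with a near-rigid isolation level, which is the qualitative behavior the mid-story isolation system exploits.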

The mass ratios and nominal frequencies [41] of the three-lumped-mass model are as follows:

$$\mathbf{r}\_{\text{TPS}} = \frac{\mathbf{M}\_{\text{TPS}}}{\mathbf{m}\_{\text{lds}}}; \ \mathbf{r}\_{\text{bot}} = \frac{\mathbf{M}\_{\text{bot}}}{\mathbf{m}\_{\text{lds}}} \tag{15}$$

$$\omega_{TPS} = \sqrt{\frac{K_{TPS}}{M_{TPS}}};\quad \omega_{bot} = \sqrt{\frac{K_{p_1}}{M_{bot}}};\quad \omega_{bot}^{*} = \sqrt{\frac{K_{p_1}}{M_{bot} + M_{TPS} + m_{lds}}};\quad \omega_{lds} = \sqrt{\frac{K_{lds}}{m_{lds}}} \tag{16}$$

After applying the classical damping assumption, the damping ratios of the superstructure, bottom structure, and isolation system, the first modal damping ratio, and the first modal participation mass ratio can be written as follows:

$$\zeta_{TPS} = \frac{C_{TPS}}{2\,M_{TPS}\,\omega_{TPS}};\quad \zeta_{bot} = \frac{C_{p_1}}{2\,\omega_{bot}^{*}\left(M_{bot}+M_{TPS}+m_{lds}\right)};\quad \zeta_{lds} = \frac{C_{lds}}{2\,\omega_{lds}\left(M_{TPS}+m_{lds}\right)} \tag{17}$$

$$\zeta_1 = \frac{\zeta_{lds}}{1 + \frac{2(1+r_{TPS})}{r_{bot}}\left(\frac{\omega_{lds}}{\omega_{bot}}\right)^2 + \frac{2\,r_{TPS}}{1+r_{TPS}}\left(\frac{\omega_{lds}}{\omega_{TPS}}\right)^2} \tag{18}$$

$$\Gamma_1 = \frac{r_{bot} + 2(r_{bot}+r_{TPS}+1)\left(\frac{\omega_{lds}}{\omega_{bot}}\right)^2 + \frac{2\,r_{TPS}\,r_{bot}}{1+r_{TPS}}\left(\frac{\omega_{lds}}{\omega_{bot}}\right)^2}{(1+r_{bot}+r_{TPS})\left(\frac{r_{bot}}{1+r_{TPS}} + 2\left(\frac{\omega_{lds}}{\omega_{bot}}\right)^2 + \frac{2\,r_{TPS}\,r_{bot}}{(1+r_{TPS})^2}\right)} \tag{19}$$

From Equation (18), if the effective lateral stiffness of the isolation system is much smaller than the elastic lateral stiffnesses of the superstructure and the bottom structure, the first modal damping ratio approaches the damping ratio of the isolation system. The relationship between the superstructure and bottom structure frequencies is as follows [42,43]:

$$
\omega\_{\rm bot} = \omega\_{\rm TPS} \sqrt{1 + \mathbf{r}\_{\rm TPS}} \tag{20}
$$

From Equation (20), the higher-order modal coupling is independent of the isolation frequency, which means that higher-order modal coupling is prevented in the mid-story isolation system.
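With assumed, purely illustrative lumped values (not the paper's), Equations (15)-(17) and the tuning condition of Equation (20) can be evaluated directly:

```python
import math

# Illustrative lumped properties (assumed values, not the paper's)
M_TPS, m_lds, M_bot = 6.0e5, 1.0e5, 4.0e5     # masses (kg)
K_TPS, K_p1, K_lds = 4.0e8, 8.0e8, 5.0e6      # stiffnesses (N/m)
C_TPS, C_p1, C_lds = 2.0e6, 3.0e6, 4.0e5      # damping coefficients (N s/m)

# Mass ratios, Equation (15)
r_TPS = M_TPS / m_lds
r_bot = M_bot / m_lds

# Nominal frequencies, Equation (16)
w_TPS = math.sqrt(K_TPS / M_TPS)
w_bot = math.sqrt(K_p1 / M_bot)
w_bot_star = math.sqrt(K_p1 / (M_bot + M_TPS + m_lds))
w_lds = math.sqrt(K_lds / m_lds)

# Damping ratios, Equation (17)
z_TPS = C_TPS / (2.0 * M_TPS * w_TPS)
z_bot = C_p1 / (2.0 * w_bot_star * (M_bot + M_TPS + m_lds))
z_lds = C_lds / (2.0 * w_lds * (M_TPS + m_lds))

# Equation (20): the bottom-structure stiffness that tunes
# w_bot = w_TPS * sqrt(1 + r_TPS), suppressing higher-order modal coupling
K_p1_tuned = M_bot * (w_TPS * math.sqrt(1.0 + r_TPS)) ** 2
```

The last line simply inverts Equation (20) to find the bottom-structure stiffness that satisfies the decoupling condition for the chosen superstructure.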

#### *4.1. Development of the Finite Element Model*

To investigate the structural control response for different configurations of the MSCSS under robust seismic excitations, finite element models are used. The traditional MSS, used in the Bank of China in Hong Kong and the Tokyo Metropolitan Government Building, is selected as the base design for different configurations of the MSCSS. The structural height of the MSCSS configuration is 144 m with a width of 40 m. A total of four mega-frames are present in the structure, and each mega-frame has its own eight-story substructure with a 4 m story height.

ABAQUS is used for preparing and analyzing the finite element models. Except for the floors and the SPSW infill panels, the whole structure is created with deformable wire elements in a 3D modeling space; deformable shells are used for the floors and infill panels. The three-node quadratic beam element in space (B32) permits axial stretching as well as uniaxial and biaxial bending. The finite element models in this study omit the fish plate connecting the infill panels to the boundary members because this approximation has a negligible effect on the results [44]. The eight-node shell element S8R5, with five degrees of freedom (DoFs) per node, is used to model the infill panels and floors, and hourglass control greatly improves convergence. Because the rotational and translational DoFs are independent, the model includes rotation about the out-of-plane axis, and transverse shear deformation is therefore considered along the cross-section.

The infill panels are assigned the material properties of ASTM A36 steel, the section members those of ASTM A992 grade 50 steel, and the floor slabs those of concrete class C30. The steels are modeled with an isotropic elastoplastic law with kinematic hardening so that the Bauschinger effect under the large strain reversals of cyclic loading is captured. For the analysis, the dynamic explicit solver, based on the central-difference scheme, is preferred over the dynamic implicit solution.

#### *4.2. Reasons for the Preference of the Dynamic Explicit Solver*

The dynamic explicit solver is preferred over the dynamic implicit solver for the following reasons. With the small time increments of the dynamic explicit solver, seismic loads are modeled more precisely, and convergence is easier.
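A caveat of the explicit central-difference scheme is conditional stability: the time increment must stay below roughly $2/\omega_{max}$. A minimal sketch with assumed SDOF values:

```python
def peak_free_vibration(omega, dt, n_steps):
    """Undamped SDOF free vibration, u'' = -omega^2 u, advanced with the
    explicit central-difference recurrence; stable only for dt < 2/omega."""
    u_prev = u = 1.0          # unit initial displacement, approximately at rest
    peak = 1.0
    for _ in range(n_steps):
        # u_{n+1} = 2 u_n - u_{n-1} - (omega*dt)^2 u_n
        u_next = 2.0 * u - u_prev - (omega * dt) ** 2 * u
        peak = max(peak, abs(u_next))
        u_prev, u = u, u_next
    return peak
```

With omega = 10 rad/s, a step of dt = 0.01 s (well below 2/omega = 0.2 s) keeps the amplitude bounded near 1, while dt = 0.25 s diverges within a few dozen steps; hence the small increments noted above.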

#### *4.3. MSCSS Configurations*

In this paper, 33 different MSCSS configurations are used to investigate the structural response under seismic vibrations and to propose an optimal design for an improved MSCSS. An improved optimal design enhances structural stability, especially under plastic deformation, and reduces the seismic structural response. The details of these configurations are summarized as follows.



**Table 2.** Ring-Shaped Infill Panel Dimensions (5 m × 4 m).

**Figure 4.** Ring-shaped infill panel.



Figures A1–A3 in Appendix C show details about the configurations.

According to the modal analysis results, the maximum fundamental natural period, 5.529 s, occurs for configurations 13 and 22, both of which have 5 mm thick infill panels. Configuration 30 is the most rigid, with the greatest structural stiffness, as its natural period of 3.704 s is the minimum among all configurations. The natural vibration period of configuration 1 is 4.32 s. Figure 5 depicts the first three mode shapes of configurations 1 and 30, and Table 4 lists the fundamental natural periods of all configurations in detail.
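The natural periods reported in Table 4 come from modal analysis in ABAQUS; the underlying idea can be reproduced for a simple shear-building idealization (assumed stiffness and mass values) by solving the generalized eigenproblem $K\phi = \omega^2 M\phi$:

```python
import numpy as np

def natural_periods(k_story, m_story):
    """Natural periods (s), longest first, of a shear-building model with
    story stiffnesses k_story (N/m) and lumped story masses m_story (kg)."""
    k = np.asarray(k_story, dtype=float)
    m = np.asarray(m_story, dtype=float)
    # tridiagonal stiffness matrix of the shear chain
    K = np.diag(k + np.append(k[1:], 0.0)) - np.diag(k[1:], 1) - np.diag(k[1:], -1)
    # symmetrize the generalized problem: A = M^{-1/2} K M^{-1/2}
    s = 1.0 / np.sqrt(m)
    A = (s[:, None] * K) * s[None, :]
    w2 = np.linalg.eigvalsh(A)            # ascending omega^2
    return 2.0 * np.pi / np.sqrt(w2)      # descending periods
```

Stiffening any story shortens the fundamental period, mirroring the contrast between configurations 13/22 and configuration 30.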

**Figure 5.** Modal Analysis Results: (**a**) Conf.1 mode 1, (**b**) Conf.1 mode 2, (**c**) Conf.1 mode 3, (**d**) Conf.30 mode 1, (**e**) Conf.30 mode 2, and (**f**) Conf.30 mode 3.




#### **5. Comparative Analyses**

For the parametric study, 32 different MSCSS configurations are used to evaluate the optimal configuration. The Nahanni time history is used in this section, as it is a near-fault ground motion with a high Arias intensity of 3.9 ms−1, and configuration 1 is used as the base case. The following observations are made after the nonlinear dynamic analyses.
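Arias intensity, used above to characterize the Nahanni record, and the bracketed strong-motion interval of Kawashima and Aizawa mentioned earlier can be computed directly from an acceleration history. A sketch with assumed sampling and rectangle-rule integration:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def arias_intensity(accel, dt):
    """Arias intensity I_A = pi/(2g) * integral of a(t)^2 dt, in m/s."""
    return np.pi / (2.0 * G) * np.sum(np.square(accel)) * dt

def bracketed_duration(accel, dt, threshold=0.1 * G):
    """Interval between the first and last exceedance of |a| >= threshold
    (the +/-0.1 g criterion attributed to Kawashima and Aizawa)."""
    idx = np.flatnonzero(np.abs(accel) >= threshold)
    return 0.0 if idx.size == 0 else (idx[-1] - idx[0]) * dt
```

Records with high Arias intensity concentrate large accelerations over their duration, which is why the Nahanni motion is a demanding test case.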

#### *5.1. Settlement at the Top of the Structure*

Structural settlement is observed at the top of the structure in all MSCSS configurations. This settlement occurred during the structural response under the nonlinear dynamic analyses and is illustrated in Figure 6. Configuration 1 showed a settlement of 1.56 m, equivalent to 39% of a story height. The configurations with SPSWs showed much improvement, with a maximum settlement of 0.9 m observed in configuration 8. The major cause of this settlement is plastic deformation of the mega-beam bracings. When the bracings between mega-beams are replaced by SPSW infill panels, the settlement response improves further, with an average of 0.36 m across configurations, an approximately 77% improvement over configuration 1, and a maximum of 0.78 m in configuration 11. Compared with configuration 1, configuration 2 showed 8.33% more settlement, as more plastic deformation occurred in the X-bracing at the center of the structure along with deformation in the mega-columns. Comparing the settlement at the top of the structure reveals some trends in the behavior of the different configurations. In configurations without an SPSW, the back right corner settles less than the front right corner, but this trend reverses when an SPSW replaces the chevron and X-bracings at the center of the structure. Configurations with ring-shaped infill panels in their SPSWs also exhibited a 12% improvement over regular infill panels without circular cut-outs. Plastic deformation in the mega-column bracing at the first floor also causes settlement at the top of the structure. Therefore, when ring-shaped SPSW infill panels are used only at the first floor in the mega-columns, this settlement is reduced by 97% relative to configuration 1, and the maximum reduction of 98.25% occurs in configuration 32, where the settlement is 2.7 cm.

#### *5.2. Equivalent Plastic Strain*

To investigate the plastic deformation in the MSCSS configurations, the equivalent plastic strain is evaluated in the beams and columns, as these are the major load-bearing components. As a general trend, the equivalent plastic strain in the beams is higher than that in the columns, as illustrated in Figure 7. A maximum equivalent plastic strain of 0.794 is observed in the beam of configuration 2, while a maximum equivalent plastic strain of 0.232 is observed in the column of configuration 11; these values indicate complete failure of the beam and column. Configurations with infill panels between the mega-beams showed improved structural stability compared to the other configurations, as their equivalent plastic strains in beams and columns remain below 0.18; configuration 14 showed a maximum equivalent plastic strain of 0.173 in its beam. When infill panels are used between mega-columns, the equivalent plastic strain is further reduced and its maximum remains under 0.09, except in configurations 26 and 27, whose maximum equivalent plastic strains in the beam are 0.0968 and 0.0963, respectively. Configurations with ring-shaped infill panels in their SPSWs showed improved results compared to traditional infill panels, especially for the plastic strains in beams.

Plastic deformation occurs in all configurations in the mega-frame at the second and third stories, where major plastic deformation occurs in the bracing and mega-columns. Up to configuration 9, the bracing between mega-beams 1 and 2 showed plastic deformation, triggering catastrophic structural deformation, particularly in the third substructure above the second mega-beam. Because of this deformation, the entire structure above the second mega-beam tilted to the right, as shown in Figure 8, which illustrates where the plastic deformation occurs in configurations 1, 2, and 30. The configuration with the maximum plastic deformation was configuration 2, in which severe buckling of beams and columns was also observed in the second and third substructures. From configuration 10 onward, with infill panels substituted between the mega-beams, the mega-beams show no signs of plastic deformation, preventing the structure from sustaining catastrophic damage. The bracing between mega-frames in configuration 30 showed signs of plastic strain, as shown in Figure 8c.

**Figure 8.** Equivalent plastic strain nephogram: (**a**) configuration 1, (**b**) configuration 2, and (**c**) configuration 30.

The structure underwent plastic deformation at 2.3 s, when the ground acceleration was 0.31 g. The maximum plastic deformation occurs in the columns at location (32.8,20,36~33.5,20,36) in configuration 1. Due to failure at the beam-column joint, the plastic strain reached 0.0842; it remained constant for 8.74 s but jumped to 0.097 after 9.2 s, when the ground acceleration reached 1.14 g. The trend is similar in the other configurations; the maximum equivalent plastic strain in configuration 29 is 0.063, the minimum among all configurations. The plastic deformation trend in the beams is similar to that in the columns, and the maximum equivalent plastic strain in configuration 29 is 0.0748, again the minimum, as illustrated in Figure 9a,b. In addition, the beam of configuration 1 has a maximum equivalent plastic strain of 0.5444.

**Figure 9.** Equivalent plastic strain in different configurations under the Nahanni earthquake: in the (**a**) column and (**b**) beam.

#### *5.3. Energy Dissipation*

Seismic energy is dissipated both through the viscous behavior of the structure and through the plastic deformation of its structural members. All configurations are therefore evaluated in terms of viscous and plastic dissipation. A maximum viscous dissipation of 377 kJ occurs in configuration 10, as depicted in Figure 10b, while configuration 29 dissipates 320.8 kJ. Configuration 30 dissipates 338 MJ of energy through plastic deformation, as shown in Figure 10a, which is the minimum among all configurations. Additionally, configuration 30 has a viscous dissipation of 302.0 kJ, better than half of the other configurations. The plastic dissipation of configuration 30 becomes constant by the end of the ground motion, while that of the others is still rising. Both plastic and viscous dissipation reach their peaks after 9.2 s. Overall, configurations with infill panels between the mega-beams showed an improved control response under seismic excitations.
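The two dissipation measures compared in Figure 10 can be post-processed from response histories with helpers along these lines (illustrative functions, assumed inputs; trapezoidal loop area for hysteretic work, rectangle rule for viscous work):

```python
import numpy as np

def viscous_energy(c, velocity, dt):
    """Cumulative viscous dissipation E_v(t) = integral of c * v(t)^2 dt (J)."""
    return np.cumsum(c * np.square(velocity)) * dt

def hysteretic_energy(force, disp):
    """Energy dissipated by a hysteretic element over a closed loop:
    the integral of f du, evaluated with the trapezoidal rule (J)."""
    return float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp)))
```

For a rectangular force-displacement loop of width 1 m and height 2 N, `hysteretic_energy` returns the enclosed area, 2 J.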

**Figure 10.** Energy dissipation in different configurations under the Nahanni earthquake: (**a**) plastic dissipation and (**b**) viscous dissipation.

#### *5.4. Peak Transient Story Drift*

Peak transient story drift is one of the most important indicators of structural response. Configurations with less than 0.2 m of settlement at the top of the structure and an equivalent plastic strain of less than 0.1 in beams and columns are selected for the comparative study in this section. The mega-frame of configuration 1, the base case, exceeded its structural response limits between 40 m and 60 m in height and showed its maximum drift of 0.0399 at 80 m. The entire structure above the second mega-beam tilted to the right because of catastrophic plastic deformation at mega-beams 1 and 2 of configuration 1, causing the peak transient story drift to exceed the safety limits; the equivalent plastic strain nephogram for configuration 1 in Figure 8a confirms this. The other configurations did not exceed the structural limits, as shown in Figure 11. In the mega-frame, zero peak transient story drift is observed from 4 m to 8 m at the infill panels, after which the drift increases and then decreases from 32 m to 40 m. The first substructure occupies 40 m to 72 m and shows the maximum transient drift in the whole structure. The drift in the mega-frame at the second and third substructures is smooth and exhibits no sudden peaks. The transient-drift trend indicates that most of the plastic deformation in the mega-frame occurs at 72 m. There is a slight dip at 72 m, where the second substructure starts, while the initial increase in drift at 36 m is due to the presence of the LRBs.
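Peak transient and residual story drifts of the kind plotted in Figures 11 and 12 can be extracted from story displacement histories with a small helper (hypothetical function; displacements in meters, one column per story, ground assumed fixed):

```python
import numpy as np

def story_drifts(displ_hist, heights):
    """Peak transient and residual story drift ratios from story
    displacement time histories (shape: n_steps x n_stories)."""
    u = np.asarray(displ_hist, dtype=float)
    # interstory displacements: prepend the (fixed) ground column, then diff
    rel = np.diff(np.column_stack([np.zeros(len(u)), u]), axis=1)
    ratios = rel / np.asarray(heights, dtype=float)
    peak = np.max(np.abs(ratios), axis=0)        # peak transient drift per story
    residual = np.abs(ratios[-1])                # drift remaining at the end
    return peak, residual
```

The peak values govern in-event safety checks, while the residual values at the final time step capture the post-earthquake concern discussed below.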

Among the substructures, the first substructure of configuration 1 exceeded its performance limits, while the other selected configurations and substructures remained within their limits. The drift in the selected configurations is approximately 40% of that in configuration 1, an improvement of approximately 60% in structural performance. Configurations 24, 25, and 26 showed a smooth increase in drift from 44 m to 52 m, after which the drift became almost constant, with the drifts of these configurations close to one another. The other selected configurations showed a sudden peak at 48 m followed by a gradual decrease. In the second and third substructures, the peak transient story drift changes smoothly and steadily, within 10% in the second substructure and within 25% in the third. In the second and third substructures, all the selected configurations also showed approximately 60% less peak transient story drift than the base case.

**Figure 11.** Peak transient story drift of different configurations under the Nahanni earthquake: (**a**) mega-frame, (**b**) first substructure, (**c**) second substructure, and (**d**) third substructure.

#### *5.5. Residual Story Drift*

The same configurations examined for peak transient story drift are also investigated for residual story drift, and the trends are similar. In the mega-frame, zero residual drift is observed from 4 m to 8 m, after which the drift increases and then decreases from 32 m to 40 m. The first substructure occupies 40 m to 72 m and shows the maximum residual drift in the whole structure. Configurations 26, 27, 30, and 33 do not exceed the limits, as shown in Figure 12a, while the other configurations go beyond the residual story drift limits, which is a post-earthquake concern. Most configurations exceed the limits between 44 m and 60 m in the mega-frame, while configurations 24 and 25 exceed them from 24 m to 96 m. At the top of the structure, configuration 30 showed the greatest improvement in structural performance, with a residual story drift 81.15% less than that of configuration 1.

In the first and second substructures, configurations 1, 24, and 25 exceed the limits over the whole substructure, while in the first substructure, configurations 29, 31, and 32 exceed them from 48 m to 56 m. When mega-beams 1 and 2 of configuration 1 experienced catastrophic plastic deformation, the entire structure above the second mega-beam tilted to the right, exceeding the allowable residual story drift limits; the equivalent plastic strain nephogram for configuration 1 in Figure 8a further validates this explanation. In the second and third substructures, configurations other than 1, 24, 25, and 26 show no sudden changes. Across all substructures and the mega-frame, configuration 30 exhibits the best structural response control and does not cross its performance limits, as depicted in Figure 12. Minimum improvements of 70%, 88.73%, and 85.25% were observed in the first, second, and third substructures, respectively, compared to the base case.

**Figure 12.** Residual story drift of different configurations under the Nahanni earthquake: (**a**) megaframe, (**b**) first substructure, (**c**) second substructure, and (**d**) third substructure.

#### **6. Selection of the Optimized Design**

After the parametric study of the 33 MSCSS configurations under the Nahanni ground motion, configuration 30 emerges as the optimal design. It demonstrates the minimum plastic dissipation with medium-to-high viscous dissipation, and its maximum equivalent plastic strains in the beam and column are 0.0643 and 0.0893, respectively, which are 64% and 83.6% less than in the base case. Configuration 30 also demonstrates the minimum settlement at the top of the structure, 3.4 cm, which is 97.82% less than that of configuration 1. The residual story drift also remains within its limits, with an average improvement of 85% and a minimum of 70% in the substructures compared to the base case. Moreover, in the substructures of configuration 30, the residual story drift remains below the limit (0.01) by a minimum of 21% and an average of 65%.

#### *Selection of Optimized Infill Panels*

In this paper, three different infill panels are used: a conventional unstiffened infill panel and two ring-shaped infill panels of different dimensions. The ring-shaped infill panels showed improved seismic performance compared to the conventional panels. Configuration 6, which uses ring-shaped infill panels, shows a 19.18% improvement in settlement at the front right corner and a 10% improvement at the back right corner compared with configuration 8, which uses conventional unstiffened infill panels, as shown in Figure 13. Similarly, the maximum equivalent plastic strain in the column of configuration 6 is 4% less than that in configuration 8, and the maximum equivalent plastic strain in the beam is 44% less when the ring-shaped infill is used. The ring-shaped infill panels used in configuration 6 have a mass of 448.17 kg per 5 m × 4 m panel, whereas the conventional unstiffened infill panel of the same dimensions is 2.14 times heavier.

Conventional unstiffened infill panels have buckling issues because of their low lateral stiffness and energy dissipation. Ring-shaped infill panels have circular cut-outs and diagonal links, whose ring deformation properties lessen the buckling effect.

**Figure 13.** Seismic performance of different infill panels in configurations under the Nahanni earthquake: (**a**) settlement at the top of the structure and (**b**) maximum equivalent plastic strain.

Ring-shaped infill panels with larger-radius cut-outs showed slightly better performance: configuration 12 has Ro = 0.44 m, whereas configuration 15 has Ro = 0.42 m. The settlement at the front right corner at the top of the structure is 0.46 m in configuration 12 versus 0.47 m in configuration 15; at the back right corner, it is 0.54 m versus 0.56 m. The trend is the same for the maximum equivalent plastic strain in the beams and columns between the large and small cut-outs. Configuration 12 showed slightly improved plastic-deformation performance under seismic excitation, as shown in Figure 14. Additionally, the infill panels of configuration 12 are approximately 6% lighter than those of configuration 15. Both configurations showed the same plastic dissipation, but the viscous dissipation in configuration 12 was 16% higher than in configuration 15. Therefore, the ring-shaped infill panels of configuration 12 are selected as the optimized infill panels.

**Figure 14.** Seismic performance of different infill panels in the configurations under the Nahanni earthquake: (**a**) plastic dissipation and (**b**) viscous dissipation.

#### **7. Nonlinear Dynamic Analysis**

For further investigation, a nonlinear dynamic procedure (NDP) is carried out on the new optimized MSCSS, configuration 30, and compared with the base case. The NDP, also known as nonlinear time-history analysis [34], provides the most realistic structural inelastic response, as it includes elasto-plastic behavior.
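The time-stepping idea behind such an analysis can be illustrated on a single-degree-of-freedom system with the classic Newmark-β (average acceleration) scheme. The sketch below is illustrative only; the system, parameters, and excitation are our own assumptions, not the paper's finite element model:

```python
import numpy as np

def newmark_sdof(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = -m*ag(t)
    for a linear single-degree-of-freedom system."""
    n = len(ag)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m
    # effective stiffness is constant for a linear system
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        # effective load at step i+1 from the state at step i
        p = (-m * ag[i + 1]
             + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                    + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p / keff
        v[i + 1] = (gamma * (u[i + 1] - u[i]) / (beta * dt)
                    + (1 - gamma / beta) * v[i]
                    + dt * (1 - gamma / (2 * beta)) * a[i])
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
    return u, v, a

# usage: 1 Hz oscillator, 5% damping, 1 s sinusoidal pulse excitation
m, k = 1.0, (2 * np.pi)**2
c = 2 * 0.05 * np.sqrt(k * m)
t = np.arange(0, 10, 0.01)
ag = np.where(t < 1.0, 3.0 * np.sin(2 * np.pi * t), 0.0)
u, v, a = newmark_sdof(m, c, k, ag, dt=0.01)
```

A nonlinear analysis replaces the constant `k` with a state-dependent restoring force and iterates within each step, but the time-marching skeleton is the same.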

Configuration 30 showed more consistent maximum floor acceleration and displacement responses after nonlinear dynamic analysis under the selected ground motions. The maximum coefficient of variation (COV) in configuration 30 is 16% in the acceleration response and 32% in the displacement response, whereas in configuration 1 it is 15% and 134%, respectively. The maximum floor acceleration at the top of the structure in configuration 30 is 19.65 ms−<sup>2</sup> under the Imperial Valley 1979 ground motion, with a mean of 17.32 ms−<sup>2</sup> over the selected ground motions, while configuration 1 has a mean of 28.06 ms−<sup>2</sup> and a maximum of 36.13 ms−<sup>2</sup> under the Landers ground motion, as illustrated in Table 5. Configuration 1 showed catastrophic results in the maximum floor displacement at the top of the structure under the Landers ground motion because of the formation of a soft story at the second mega-beam location: buckling failure occurs in the columns of the right half of the structure at heights from 68 m to 76 m. Due to the column failures, a major collapse occurs, and the maximum floor displacements are 18.26 m, 16.46 m, and 7.83 m at the top of the structure (144 m height), the top of the fourth substructure (136 m height), and the top of the third substructure (100 m height), respectively. The mean maximum floor displacement at the top of the structure in configuration 1 is 3.55 m, while configuration 30 has a mean of 1.53 m with a maximum of 2.46 m under the Taft ground motion, as illustrated in Table 6.
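The consistency figures above are coefficients of variation (COV = standard deviation / mean) of the peak responses over the ground-motion suite. A minimal sketch follows; the peak values in the array are placeholders for illustration (only the 19.65 ms−<sup>2</sup> maximum quoted above is from the study):

```python
import numpy as np

# hypothetical peak roof accelerations (m/s^2) for one configuration
# under several ground motions -- placeholder values for illustration
peaks = np.array([17.1, 19.65, 15.8, 16.9, 18.2])

# coefficient of variation: sample standard deviation over the mean
cov = peaks.std(ddof=1) / peaks.mean()
print(f"COV = {cov:.1%}")
```

A small COV, as reported for configuration 30, indicates that the peak response is largely insensitive to which record of the suite is applied.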


**Table 5.** Max. Floor Acceleration at the Main Points of Structure.

The time histories of the acceleration response of both configurations under the Landers earthquake showed that the structure experienced its maximum acceleration between 10 and 20 s, as the ground motion reached its peak of 11.28 ms−<sup>2</sup> at 10.2 s, as shown in Figure 15. During this period, the structure also experienced its maximum fluctuations. After 20 s, the structural acceleration of configuration 30 was notably smoother than that of configuration 1. However, the displacement response of configuration 1 showed that after 18.3 s the structure tilted toward the right side and did not return to its central axis. This leaning is caused by plastic deformation in the columns, particularly the mega-columns on the right side of the structure between 68 m and 76 m in height. After 20 s, complete failure is triggered in the columns, and the structure collapses entirely after 40 s. The maximum equivalent plastic strain in the column elements occurs at (40, 0, 74~40, 0, 74.4) and is 0.583, while in the beam elements it occurs at (40, 3.25, 76~40, 3.9, 76) and is 4.7. Configuration 30 showed a controlled displacement response, with the displacement damping out after 37.5 s and no structural failure. Its maximum equivalent plastic strain in the column elements occurs at (40, 6.5, 8) and is 0.076, and in the beam elements at (36.75, 6.5, 72~37.4, 6.5, 72) and is 0.144.


**Table 6.** Max. Floor Displacement at the Main Points of Structure.

**Figure 15.** Structural response time histories under Landers earthquake: (**a**) acceleration at the top of the structure, (**b**) acceleration at the top of the fourth substructure, (**c**) acceleration at the top of the third substructure, (**d**) acceleration at the top of the second substructure, (**e**) displacement at the top of the structure, (**f**) displacement at the top of the fourth substructure, and (**g**) displacement at the top of the third substructure and (**h**) displacement at the top of the second substructure.

The time history of the equivalent plastic strain shows that, in configuration 1, plastic deformation starts in the column element at (40, 0, 74~40, 0, 74.4) after 1.2 s and in the beam element at (40, 3.25, 76~40, 3.9, 76) after 0.9 s. The equivalent plastic strain reaches 0.2 in the column after 34.6 s and in the beam after 18.3 s, when the ground displacement is at its maximum. In configuration 30, plastic deformation starts in the column after 12 s and reaches 0.02 at 19.3 s, and in the beam after 10 s, reaching 0.02 at 13 s; the equivalent plastic strain never reaches 0.2, as illustrated in Figure 16.

**Figure 16.** Max. equivalent plastic strain under Landers earthquake: (**a**) in beam and (**b**) in column.

The equivalent plastic strain in the columns of configuration 30 never crosses 0.2 under the selected ground motions, and in the beams it crosses 0.2 only under the Taft and Kobe ground motions, as shown in Figure 17a. The maximum equivalent plastic strain in the beam elements under both ground motions occurs at (36.1, 0, 16~36.75, 0, 16). Under Taft, the equivalent plastic strain crosses 0.2 at 43.4 s, after the maximum ground displacement of −0.5664 m at 43.016 s. Under Kobe, it crosses 0.2 at 30.4 s, although the maximum ground displacement of −0.76422 m occurs at 21.96 s. Among all selected ground motions, the maximum equivalent plastic strain in the columns is 0.187 at (0, 40, 8~0, 40, 8.4) under Kocaeli.

**Figure 17.** Max. equivalent plastic strain in configuration 30 under selected earthquake: (**a**) in beam and (**b**) in column.

Under Taft, configuration 30 leans toward the left side because of plastic deformation in the mega-columns, which causes severe damage to the mega-frame of the structure. In configuration 1, the structure leans toward the right side, and this leaning causes more severe damage than in configuration 30. The structural acceleration response showed the same trend as under the Landers earthquake. The tilting of the structure in configuration 30 started after 20 s, when the structure had already experienced the maximum ground acceleration of −9.12 ms−<sup>2</sup> at 6.45 s, as shown in Figure 18. Because of the leaning, the structure also showed a reduction in height: the front right and left corners at the top of the structure exhibit downward displacements of 10.5 cm and 36 cm, respectively, compared with 3.48 m and 1.58 m in configuration 1. Configuration 30 also showed improved downward displacement at the top of the structure under Kobe, with 10.9 cm at the right front corner and 12 cm at the left front corner. Under Taft, configuration 30 dissipated 770 MJ of energy through plastic deformation, versus 600 MJ under the Kobe earthquake, approximately 21.9% less. Plastic dissipation under Taft in configuration 30 is approximately 30% less than in configuration 1.

**Figure 18.** Structural response time histories under Taft earthquake: (**a**) acceleration at the top of the structure, (**b**) acceleration at the top of the fourth substructure, (**c**) acceleration at the top of the third substructure, (**d**) acceleration at the top of the second substructure, (**e**) displacement at the top of the structure, (**f**) displacement at the top of the fourth substructure, (**g**) displacement at the top of the third substructure, and (**h**) displacement at the top of the second substructure.

In configuration 30, plastic dissipation occurs mostly through plastic deformation of the SPSWs; the core structural elements remain intact, and the structure neither collapses nor suffers severe damage. In configuration 1, by contrast, plastic dissipation occurred through plastic deformation of the beams and columns, which triggered severe damage and collapse of the structure. Compared to configuration 1 under the selected ground motions, configuration 30 showed a minimum improvement of 13% in beam equivalent plastic strain, with a mean of 51%, whereas in the columns the minimum improvement was 75%, with a mean of 80%, as shown in Figure 19c.

**Figure 19.** Parametric comparison: (**a**) plastic dissipation, (**b**) viscous dissipation, (**c**) equivalent plastic strain improvement, and (**d**) settlement at the top of the structure.

#### **8. Conclusions**

In this paper, nonlinear dynamic analysis was conducted on different configurations of a mega-subcontrolled structural system (MSCSS) to investigate seismic structural response and plastic deformation. The following conclusions are drawn on the basis of results from finite element and parametric studies:


Overall, the nonlinear dynamic analysis shows that the proposed configuration improves seismic performance through increased energy dissipation and lateral stiffness. The results of this study pave the way for future investigations into the seismic performance of MSCSSs featuring steel plate shear walls. Stochastic optimal design control will be used in future work to investigate the uncertainty related to seismic excitation, in an effort to enhance the controllability index.

**Author Contributions:** M.M.S.: Conceptualization (lead); methodology (lead); software (lead); writing—original draft (lead); formal analysis (lead); writing—review and editing (equal); X.Z.: funding; review and editing (equal); conceptualization (supporting); writing—original draft (supporting); writing—review and editing (equal); X.W., M.A., Y.X. and B.F.: writing—review and editing (supporting); methodology (supporting); software (supporting). All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is financially supported by the National Natural Science Foundation of China under grant no. 51878274.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

In Equation (3), the submatrix *CPA* takes the same form as *KPA* in Equation (5) below, with damping terms in place of stiffness terms:

$$C\_{PA} = \begin{bmatrix} C\_{p1,1} + C\_{2,1} + C\_{l,1} & C\_{p1,2} & \cdots & C\_{p1,(n-1)} \\ C\_{p2,1} & C\_{p2,2} + C\_{3,1} + C\_{l,2} & \cdots & C\_{p2,(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ C\_{p(n-1),1} & C\_{p(n-1),2} & \cdots & C\_{p(n-1),(n-1)} + C\_{n,1} + C\_{l,(n-1)} \end{bmatrix}\_{(n-1)\times(n-1)}$$

where *CPi,j* (*i*,*j* = 1, 2, 3, ..., (*n* − 1)) is an element of the damping matrix *Cp* of the megastructure, *Ci*,1 (*i* = 2, 3, 4, ..., *n*) is the shear damping value of the *i*th substructure, and *Cli*−1 (*i* = 2, 3, 4, ..., *n*) is the damping of the (*i* − 1)th substructure's lead rubber bearing over the addition column.

In Equation (3), the submatrix *CS* is composed of the diagonal matrices *CSi*−1, each of which can be built as follows:

$$
\mathbf{C}\_{S,i-1} = \begin{bmatrix}
C\_{i-1,1} + C\_{i-1,2} & -C\_{i-1,2} & 0 & \cdots & 0 \\
-C\_{i-1,2} & \ddots & \ddots & & \vdots \\
0 & -C\_{i-1,j} & C\_{i-1,j} + C\_{i-1,j+1} & -C\_{i-1,j+1} & 0 \\
\vdots & & \ddots & \ddots & -C\_{i-1,p\_i} \\
0 & \cdots & 0 & -C\_{i-1,p\_i} & C\_{i-1,p\_i} + C\_{l,i-1}
\end{bmatrix}
$$

where *Csubi*−1,*j* (*i* − 1 = 1, 2, 3, ..., (*n* − 1); *j* = 1, 2, 3, ..., (*nz* + 1)) and *CSWi*−1,*j* are the floor and shear wall damping values for the *j*th floor of the (*i* − 1)th substructure, respectively, while *Cai*−1 and *Cli*−1 are the damping values of the (*i* − 1)th substructure's addition column and lead rubber bearing, respectively.

In Equation (5), the submatrix *KPA* can be built as follows:

$$K\_{PA} = \begin{bmatrix} K\_{p1,1} + K\_{2,1} + K\_{l,1} & K\_{p1,2} & \cdots & K\_{p1,(n-1)} \\ K\_{p2,1} & K\_{p2,2} + K\_{3,1} + K\_{l,2} & \cdots & K\_{p2,(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ K\_{p(n-1),1} & K\_{p(n-1),2} & \cdots & K\_{p(n-1),(n-1)} + K\_{n,1} + K\_{l,(n-1)} \end{bmatrix}\_{(n-1)\times(n-1)}$$

where *KPi,j* (*i*,*j* = 1, 2, 3, ..., (*n* − 1)) is an element of the stiffness matrix *Kp* of the megastructure, *Ki*,1 (*i* = 2, 3, 4, ..., *n*) is the shear stiffness value of the *i*th substructure, and *Kli*−1 (*i* = 2, 3, 4, ..., *n*) is the shear stiffness of the (*i* − 1)th substructure's lead rubber bearing over the addition column.

In Equation (5), the submatrix *KC* can be built as follows:

$$K\_{C} = \begin{bmatrix} \begin{pmatrix} -K\_{l1} & -K\_{2,1} \end{pmatrix} & 0 & \cdots & 0 \\ 0 & \begin{pmatrix} -K\_{l2} & -K\_{3,1} \end{pmatrix} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \begin{pmatrix} -K\_{l(n-1)} & -K\_{n,1} \end{pmatrix} \end{bmatrix}\_{(n-1)\times(n-1)}$$

In Equation (5), the submatrix *KS* is composed of the diagonal matrices *KSi*−1, each of which can be built as follows:

$$
\mathbf{K}\_{S,i-1} = \begin{bmatrix}
K\_{i-1,1} + K\_{i-1,2} & -K\_{i-1,2} & 0 & \cdots & 0 \\
-K\_{i-1,2} & \ddots & \ddots & & \vdots \\
0 & -K\_{i-1,j} & K\_{i-1,j} + K\_{i-1,j+1} & -K\_{i-1,j+1} & 0 \\
\vdots & & \ddots & \ddots & -K\_{i-1,p\_i} \\
0 & \cdots & 0 & -K\_{i-1,p\_i} & K\_{i-1,p\_i} + K\_{l,i-1}
\end{bmatrix}
$$

where *Ksubi*−1,*j* (*i* − 1 = 1, 2, 3, ..., (*n* − 1); *j* = 1, 2, 3, ..., (*nz* + 1)) and *KSWi*−1,*j* are the floor and shear wall stiffness values for the *j*th floor of the (*i* − 1)th substructure, respectively, while *Kai*−1 and *Kli*−1 are the shear stiffness values of the (*i* − 1)th substructure's addition column and lead rubber bearing, respectively.
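Assembling such a tridiagonal (shear-building) submatrix is mechanical once the story stiffnesses are known. The sketch below uses our own naming, not the authors' code, and omits the addition-column and bearing terms for brevity:

```python
import numpy as np

def shear_building_stiffness(k):
    """Assemble the tridiagonal lateral stiffness matrix of a shear
    building from story stiffnesses k[0..p-1], where k[j] is the
    stiffness of the story below DOF j (DOF -1 is the base).
    Mirrors the K_S,i-1 pattern above."""
    p = len(k)
    K = np.zeros((p, p))
    for j in range(p):
        K[j, j] += k[j]              # story below DOF j
        if j + 1 < p:
            K[j, j] += k[j + 1]      # story above DOF j
            K[j, j + 1] = -k[j + 1]  # coupling to the DOF above
            K[j + 1, j] = -k[j + 1]  # symmetry
    return K

# three stories with stiffnesses 4, 3, 2 (arbitrary units)
K = shear_building_stiffness([4.0, 3.0, 2.0])
# diagonal: k1+k2, k2+k3, k3 ; off-diagonal: -k2, -k3
```

The same loop, run per substructure with the damping values, yields the corresponding *CSi*−1 blocks.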

In Equation (14), the submatrix *R* can be built as follows:

$$R = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ \vdots & \vdots & 0 & \ddots & 0 \\ 1 & 1 & 0 & \cdots & 1 \end{bmatrix}\_{\left[ (n+1)+(n-1)(n\_z+1) \right] \times \left[ (n+1)+(n-1)(n\_z+1) \right]}$$

#### **Appendix B**

The dimensions of the ring-shaped infill panels used in configuration 12 are as follows:

**Table A1.** Ring-Shaped Infill Panel Dimensions.


The dimensions of the steel infill panels used in configurations 15 and 16 are revised as follows:



The following are the details of the revised mega-column 1 (MC1) and substructure column 1 (SSC1) in configurations 17 and 18:

**Table A3.** Revised Section of MC1 and SSC1.


The following section members' profiles have been updated in configuration 21:

**Table A4.** Revised Section Members.


The following are the revised section members in configuration 23:

**Table A5.** Revised Section Members.


In configuration 25, MC Bracing 1 and MB 1~3 are revised, and their details are as follows:

**Table A6.** Revised Sections of MC Bracing1 & MB 1-3.


The following are the revised lead rubber bearing (LRB) parameters and substructure beams (SB) of substructures 2 to 4 in configuration 26:

**Table A7.** Revised Section of SB2~4.



**Table A8.** Revised LRB Parameters.

The first three mega-beams (MB 1~3) have been revised, and infill panels have been added in the mega-columns of the first substructure on floors 1 and 2; their details are as follows:

**Table A9.** Revised Section MB1~3.


**Table A10.** Ring-Shaped Infill Panel Dimensions (6.5 m × 4 m).


For configuration 32, the revised bracing in mega-columns for the entire structure (MC Bracing 1 and 2~4) is as follows:



For configuration 33, the revised substructural beams for the entire structure (SB 1 and SB 2~4) are as follows:

**Table A12.** Revised Section Members of SB 1 & SB 2~4.


#### **Appendix C**

Configuration 1 serves as the base structure for configurations 2 to 22, while configuration 23 serves as the base for configurations 24 to 33. Structural members shown in red indicate modifications relative to the base structure.

**Figure A1.** Structural configuration: (**a**) Conf. 1, (**b**) Conf. 2, (**c**) Conf. 3, (**d**) Conf. 4, (**e**) Conf. 5, (**f**) Conf. 6, (**g**) Conf. 7, (**h**) Conf. 8, (**i**) Conf. 9, (**j**) Conf. 10, (**k**) Conf. 11, and (**l**) Conf. 12.

**Figure A2.** Structural configuration: (**a**) Conf. 13, (**b**) Conf. 14, (**c**) Conf. 15, (**d**) Conf. 16, (**e**) Conf. 17, (**f**) Conf. 18, (**g**) Conf. 19, (**h**) Conf. 20, (**i**) Conf. 21, (**j**) Conf. 22, (**k**) Conf. 23, and (**l**) Conf. 24.

**Figure A3.** Structural configuration: (**a**) Conf. 25, (**b**) Conf. 26, (**c**) Conf. 27, (**d**) Conf. 28, (**e**) Conf. 29, (**f**) Conf. 30, (**g**) Conf. 31, (**h**) Conf. 32, and (**i**) Conf. 33.

#### **References**


### *Article* **Modeling Profitability-Influencing Risk Factors for Construction Projects: A System Dynamics Approach**

**Shah Jahan 1, Khurram Iqbal Ahmad Khan 1,\*, Muhammad Jamaluddin Thaheem 2, Fahim Ullah 3, Muwaffaq Alqurashi <sup>4</sup> and Badr T. Alsulami <sup>5</sup>**


**Abstract:** This study addressed the complexity involved in integrating the causative risk factors influencing construction profitability. Most existing studies cover the individual effects of profitability-influencing factors; the few that address their systemic impact do not incorporate the associated complexity and dynamics, presenting the gap targeted by the current study. The study aimed to assess the causative interrelations and interdependencies between profitability-influencing risk factors (PIRFs) through systems thinking (ST) and system dynamics (SD) modeling. The SD approach was used to evaluate the integrated impacts on profitability-influencing risk categories (PIRCs) in construction projects. The causative factors affecting construction profitability were identified through a comprehensive literature review, ranked using content analysis, and categorized into significant issues. Through 250 structured surveys and 15 expert opinion meetings, the path for quantitative and qualitative evaluations was prepared. Following these investigations, a causal loop diagram (CLD) was established using the ST technique, and the integrated effect was quantified using SD modeling. The study finds that rising material costs, supply chain processes, payment issues, planning and scheduling problems, financial difficulties, and effective control of manpower and equipment resources are the most critical PIRFs. The integrated effects of PIRFs on PIRCs were quantified using SD modeling. This study helps field professionals understand profitability-influencing factors, diagnose issues, and integrate impacts for decision-making and policy formulation. For researchers, it presents a list of factors that can be investigated in detail, along with the holistic interrelationships established between them.

**Keywords:** causal loop diagram; profitability-influencing risk factors; profitability-influencing risk categories; system dynamics; systems thinking

### **1. Introduction and Background**

The construction industry is highly complicated, due to the unique nature of projects and the involvement of multiple stakeholders [1]. It becomes more challenging when the engaged stakeholders demand different profit levels to keep them in the construction business. In addition, these stakeholders are the sources of multiple risk factors [2]. Therefore, multiple challenges exist in assessing profit, due to the various risk factors encountered during the execution stages of construction projects. Generally, construction projects fail to achieve good profitability due to issues related to time, cost, and scope [3]. This is more evident in developing countries that struggle to meet project objectives due to multiple constraints. Profitability is one of the most important goals, and an essential element of satisfaction for construction project stakeholders. This applies chiefly to stakeholders involved in the construction business, and may not be true for public sector organizations, whose aim is the development of projects for public benefit; there, profitability may not be the key concern and other social objectives may be prioritized. Nevertheless, profitability is a key goal for most private construction organizations. Therefore, it is important to highlight the profitability-influencing risk factors (PIRFs) and issues resulting in project complexity, increasing the impacts, and ultimately reducing profitability [4].

**Citation:** Jahan, S.; Khan, K.I.A.; Thaheem, M.J.; Ullah, F.; Alqurashi, M.; Alsulami, B.T. Modeling Profitability-Influencing Risk Factors for Construction Projects: A System Dynamics Approach. *Buildings* **2022**, *12*, 701. https://doi.org/10.3390/buildings12060701

Academic Editor: Audrius Banaitis

Received: 13 April 2022; Accepted: 20 May 2022; Published: 24 May 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The impact assessment of PIRFs on profitability becomes a critical requirement and is significant to every stakeholder [1] because a reasonable profit level is needed to maintain and enhance their business relationships. Moreover, well-defined and time-based impact assessment of profitability during the construction phases reduces the chances of failure, and improves project performance [5]. The causes, effects, interrelationship, and interdependency of the PIRFs are essential, and need particular focus while integrating them into the construction feedback-based system. These PIRFs exhibit complicated and causal relationships. Accordingly, there is a need to understand the involvement of complexity in the systematic integration of the causative risk factors and categories concerned with profitability [6].

Further, comprehensive and in-depth knowledge of the complexity and integration of PIRFs is needed to analyze their dynamic effects on profitability-influencing risk categories (PIRCs). Such assessment of PIRFs, and their categorization into PIRCs, is crucial for the survival of firms engaged in construction businesses [7,8]. This approach provides a unique angle, with critical insights into the factors influencing construction profitability and the associated risks. The link between risk and profitability is rarely addressed in the literature. A risk-based profitability model is a way to explore the causation of various issues in construction projects. Accordingly, it becomes important for construction firms to review past failures in terms of the risks encountered, take corrective measures, and focus on future profitability [9]. This is pivotal to continuous organizational improvement and growth in the era of globalization.

Due to the complexities involved in PIRFs' assessment, the ST technique may be used to assess issues in which the impact of individual components is considered in the context of the whole system [10]. Causal loop diagrams (CLDs) are used in this technique to discover key processes for gathering input, and their impact on project goals [11]. In a complex system, ST manages several variables at the same time, and highlights connections and interdependencies between them. For example, in the construction industry, PIRF and PIRC are two dimensions with inherent complexities and dynamics. Accordingly, ST is an acceptable technique for this research to identify and evaluate the causal feedback impacts and links between profitability factors.

The PIRFs in construction projects are interdependent. However, the existing studies are conducted in isolation, focusing on individual factors. Previous studies show their impacts on profitability without understanding the complexities involved while integrating their causal interdependencies. There is a limited understanding of the dynamic risks faced in construction projects. Accordingly, the effect of the influencing factors on profitability needs investigation. Further, there is a need to explore the causal relationships between PIRFs of the construction projects, and the assessment of causal interrelations between key influencing factors. As previously discussed, ST and associated system dynamics (SD) are widely used for establishing and assessing such causal relationships. Therefore, this study presented the modeling of the key PIRFs and the holistic integration and evaluation of their combined effects on PIRCs using ST and SD. Further, the current study proposed management strategies for handling issues about crucial PIRFs, in order for construction projects to enhance profit.

The current study is a novel approach to PIRF and PIRC assessment, integration, and quantification. Profitability is crucial in construction projects. There is limited understanding of construction profitability risks, complexity, and dynamics. The assessment of interdependencies between the key PIRFs remains a key issue in the literature, that, to date, is not well explored. This study addressed the complexity involved while integrating the causative PIRFs influencing construction profitability using SD. The complexity was addressed by formulating the causative CLDs (reinforcing and balancing loops) to show the qualitative impact. Afterward, SD modeling and simulation were used for the assessment of the quantitative impact within the system.

In terms of the knowledge gap, most of the studies in this field identify the individual effects of PIRFs. Very few focus specifically on the systematic impact of PIRFs and PIRCs; however, these studies do not integrate and incorporate the associated complexity. Further, so far, the SD approach has not been used for PIRF and PIRC assessment in construction projects. The current study targeted these gaps, and presented an SD approach for establishing the relationships between PIRFs and PIRCs, assessing and integrating them, and their quantifications, into construction projects.

To achieve the holistic goals, the current study has the following objectives:


To achieve the objectives, this research involved the formulation of qualitative reinforcing, balancing causal feedback loops, and a quantitative SD model of PIRFs. The overall aim was to understand their integrated influence on construction profitability. It specifically addressed the complexity, reinforcing and balancing causal impacts, and dynamics within the construction system related to profitability, which is rarely addressed in other studies. The expected contribution of the study is the system-based learning of the impacts of risks on profitability in construction projects.
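To make the SD mechanics concrete, the sketch below integrates a single balancing feedback loop (payment-delay pressure eroding a profit-margin stock) with explicit Euler stepping, which is how SD tools advance stocks over time. Every variable name and rate constant here is a hypothetical illustration, not a calibrated value from this study:

```python
# Minimal system-dynamics sketch: one stock (profit margin) eroded by a
# risk pressure that builds with payment delays -- a balancing feedback
# loop. All variables and rates are hypothetical illustrations.
DT = 0.25                  # time step (months)
STEPS = 120                # simulate 30 months

profit_margin = 0.15       # stock: profit as a fraction of contract value
risk_pressure = 0.0        # stock: accumulated payment-delay pressure (0..1)

history = []
for _ in range(STEPS):
    # flows, computed from the current state
    delay_inflow = 0.02 * (1.0 - risk_pressure)      # pressure builds, saturating
    erosion = 0.05 * risk_pressure * profit_margin   # loop: pressure erodes profit
    # Euler integration of both stocks
    risk_pressure += DT * delay_inflow
    profit_margin -= DT * erosion
    history.append(profit_margin)
```

The qualitative behavior (a gradually saturating pressure and a slowly eroding margin) is exactly what a CLD's balancing loop predicts; SD modeling adds the quantification.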

This study helps the field professionals learn about PIRFs, the causal relations, and their importance in construction projects. Using the current study, professionals can diagnose profitability issues, and assess their impacts for improved decision-making, in order to manage risks and enhance profitability.

The rest of the paper is organized as follows. Section 2 presents the pertinent literature. Section 3 presents and discusses the method and data collection approaches adopted in the current study. Section 4 presents and discusses the study results, including the CLDs and related SD analyses. Finally, Section 5 concludes the study and presents the key takeaways, limitations, and future directions for expanding upon the current study.

#### **2. Literature Review**

Risk is an uncertain event that, if it occurs, has an impact on a project's objectives. There are inherent risks in construction work that impact and reduce the overall profitability of construction projects during the execution phases. The likelihood of an actual risk event, or a combination of risk events, during the whole construction process concerns the stakeholders [12]. PIRFs involve many variables, and it is often difficult to determine cause and effect, interdependencies, and correlations. These PIRFs play a significant role in decision-making and performance. In addition, these PIRFs affect other construction dimensions such as supply chains [13], cash flows [14], contingency [15], and project complexity [16].

Multiple factors are involved in various industries that influence profit margins [2]. Business failure is closely related to such profitability-impacting variables. Any firm can improve its financial benefits by exploring and suggesting preventive measures regarding profit-influencing variables [17]. The profitability levels of construction projects vary due to their complexities and challenging objectives, often constrained by time and money. The political, economic, cultural, and legal aspects can negatively impact the level of profitability of construction projects [18]. The performance of construction firms can be measured using profitability analysis. It involves a systematic approach to defining, analyzing, and evaluating various profit-influencing variables in construction projects [19].

It is observed that construction profitability depends on multiple variables, and there are numerous criteria for assessing a firm's profitability [20]. Multi-criteria approaches exist for construction project profitability analysis and prediction. Overall, the success of a project is linked with profitability and its influencing variables; accordingly, the relationship is established with the profitability-influencing factors at different levels in view. During the execution phase, investigating the construction process helps avoid profit failure as unpredictable and complicated scenarios occur [19]. It is necessary to highlight and discuss such failure-related factors (in other words, the PIRFs) to enhance the success and performance of construction projects in terms of profitability.

Construction projects come across complicated and unpredictable challenges during the execution phase [17]. Construction organizations have developed tools and procedures to decrease possible losses and make their projects more profitable. These tools and strategies are based on the experience and knowledge of each firm's engineers [21]. The profitability-influencing variables concern the initiation, bidding, contracting, execution, and closing stages of construction projects. However, a holistic assessment of these negative factors, or PIRFs, is rarely reported in the published literature.

The construction supply chain has four significant participants: client, main contractor, consultant, and subcontractor. The main contractor drives the construction supply chain. The main loop of the construction supply chain is between the client and the main contractor [22]. A study was carried out to identify the factors influencing the relationship between these participants, and their effect on the overall project performance and profits [23,24]. The main objective of the research was to identify the factors affecting the supply chain environment of a construction project, either directly or indirectly, causing significant variation in the project profits and performance parameters. It is concluded that the project team could control and develop factors over time, such as trust, risk management, and joint working, in order to improve profit margin.

There is profitability variation for different types of projects across the construction industry, due to various factors, such as unpredictable and complicated scenarios [17]. It is essential to evaluate these situations in construction operations, to reduce losses and increase profits. Profit projection is difficult due to the nature of construction operations, intricate procedures, tough environment, organizational structure, and several other factors. As a result, construction projects are often delayed and go over budget, especially in developing countries [25]. Furthermore, the construction sector is complicated by the presence of many specialist contractors, resulting in fragmented construction projects.

Construction companies have long sought to anticipate project cash flow at the outset of a project, which is intimately linked to payment terms and financing schedules. The variables impacting cash flow in international construction projects vary widely, owing to a variety of external and internal risks that reduce profit margins. Financial and project-specific variables influence cash flow soundness [26]. Changes to the original design, variations to the works, production goal slippage, delays in deciding on variations/day works, and claims settlement delays are all significant contributors to loss of profit. Based on these characteristics, and the periodic variability readings, multiple linear regression models were created and applied in construction projects. The established models offer construction contractors essential information about the expected effects of key factors on cash flow baseline forecasting at various construction phases, allowing them to take proactive measures and avoid losses [27].

A project's contingency cost is calculated depending on the degree of unanticipated circumstances. Breaking down the project into major work packages is one way to do this. Then, for each work package, independent factor analysis is performed, treating them as discrete projects to secure profitability at the activity level. Potential sources are discovered by a factor analysis, based on the views of experienced project workers. Consequently, each work package's risk-adjusted target cost is determined. The contingency amount necessary to finish the work package is calculated using this goal and base cost estimate in the relevant study [28].

In underground construction works, it is imperative to identify the factors involved in the starting phases of the project [29]. As underground constructions are comparatively more complex, they contain unpredictable and variable subsurface conditions. Many uncertain variables encountered in project implementation dynamically affect the project's duration and profit [30]. The contracting parties, such as owners, designers, contractors, subcontractors, and suppliers, also add to these projects' complexity. Therefore, assessment of the profitability-influencing factors is not easy or straightforward. Instead, it is complicated, and involves establishing multiple linear and nonlinear relationships.

#### *2.1. Systems Thinking and Complexity*

Systems thinking (ST) aims to understand a system's fundamental structure in order to deduce its behavior [11]. Ullah and Sepasgozar [31] state that the interaction between a system's components is fundamental to ST. The ST method helps analyze the feedback behavior of each variable and its influence on other variables, since variables in a system have intricate interactions among themselves. Accordingly, this approach is favored for investigating systems with complex relationships, such as the construction PIRFs. This technique, however, focuses on the whole system, rather than a limited project perspective, when determining these linkages. As a result, a practical and efficient strategy for comprehending and addressing system complexity is needed [20]. This limitation is usually addressed in the next stage of ST applications, i.e., SD modeling.

#### *2.2. System Dynamics*

The theory of nonlinear dynamics and feedback control established in mathematics, physics, and engineering forms the foundation of SD [11]. SD techniques are used to study human and technical system behaviors. To answer key real-world issues, SD relies on cognitive and social psychology, organizational theory, economics, and other social sciences [32]. The SD technique may be used to create 'micro worlds' that explain real-world situations in a clear, practical, organized, and accessible way [33]. The ability to deconstruct complicated systems into easily understandable subsystems is a key strength of the SD process. SD is a nonlinear feedback system that handles complexity and process linkages [34,35]. Thus, it is suited for assessing the complicated relations of the PIRFs in this study.

#### **3. Methodology**

The research study focused on the causal impact of PIRFs and PIRCs on construction projects. A mixed method was adopted in this study: qualitative and quantitative. A qualitative approach was used to evaluate the causal interconnection, interdependencies, and impacts of such PIRFs and PIRCs on construction projects. Further, the qualitative integrated effects of causative influencing factors were assessed using ST feedback loops. The quantitative assessment was conducted using the SD approach, in line with relevant studies [36,37]. The processes involved in the methodology are shown in Figure 1.

**Figure 1.** Schematic demonstration of methodology.

#### *3.1. Desk Study Phase*

A detailed review of relevant published research articles was carried out in the desk study phase. Accordingly, the problem statement and the research objectives were developed. Relevant literature was synthesized to recognize the PIRFs and related issues concerned with the profitability of construction projects. An extensive literature study was undertaken using these aims as a guide. Research articles and conference proceedings were searched using the keywords "construction profitability," "profit in construction projects," "construction profitability impacting factors," "profit influencing factors," and "construction profit performance" on relevant literary databases. A number of scholarly research platforms, including Google Scholar, Scopus, Emerald Insight, Taylor and Francis, American Society of Civil Engineers, Elsevier-Science Direct, Springer, MDPI, and SAGE, were consulted and utilized during this process, following recent articles [37–39]. As a result, as indicated in Table 1, 60 relevant publications were collected to identify major causal elements impacting the profitability of construction projects.

**Table 1.** Identification of profitability-influencing risk factors (PIRF).


The significant problems related to the PIRFs (listed in Table 1) in construction projects are listed in Table 2. These problems arise from the impacts of the controlling factors, which enhance or reduce construction profits.

The PIRFs are causative risk factors that significantly impact profitability [4] in a construction project, especially in the execution stages [12]. These were extracted through a literature review in this study, as listed in Table 1. The causes help to diagnose profitability issues and their impact. They further support improved decision-making, in terms of causal significance, in order to reduce the negative effects and enhance profitability [2].

Since there is no standard classification of risk, several methods regarding risk classification are reported in published literature, with the most common type based on the nature of risk [54]. Keeping in view the identified causative influencing factors, and related issues from the set of selected literature, the PIRCs in this study were derived mainly considering supply chain [13], cash flow [14], contingency [15], and project complexity [16], as shown in Table 2. The PIRFs have specific natures and causes, whereas the PIRCs group combinations of different factors, and their corresponding issues, under larger categories. This study analyzes the causal interdependencies of PIRFs on the PIRCs in construction projects.


**Table 2.** Profitability-influencing risk categories (PIRC) and issues.

A content analysis was carried out on the PIRFs (causes only), comprising a two-part literature review of relevant publications and a preliminary pilot survey, used to compute and assign literature and industry scores to the factors, following recent approaches [37]. The frequency of occurrence of each PIRF was noted and compiled in the first part of the literature study, giving the frequency scores. Then, on a five-point Likert scale (1 = very low, 2 = low, 3 = medium, 4 = high, and 5 = very high), the different authors (of reviewed papers) assigned a qualitative score to the PIRFs in the second part, to mark their contextual significance [36]. Finally, each factor was given a combined score by multiplying its frequency and qualitative scores. The literature ratings were adjusted before being ranked in decreasing order.
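The two-part scoring can be sketched in a few lines. The factor names, frequencies, and Likert scores below are purely illustrative, not the study's data:

```python
# Sketch of the literature-scoring step: frequency of occurrence multiplied by
# the qualitative Likert score, normalized, and ranked in decreasing order.
# All values here are hypothetical placeholders.
pirfs = {
    "Price fluctuations": (12, 5),   # (frequency, qualitative score 1-5)
    "Payment issues": (9, 4),
    "Scope variations": (7, 3),
}

# Combined score = frequency x qualitative score, then normalized to 0-1.
combined = {f: freq * qual for f, (freq, qual) in pirfs.items()}
max_score = max(combined.values())
normalized = {f: s / max_score for f, s in combined.items()}

# Rank in decreasing order of normalized literature score.
ranked = sorted(normalized.items(), key=lambda kv: kv[1], reverse=True)
for factor, score in ranked:
    print(f"{factor}: {score:.2f}")
```

The same procedure would be repeated per factor for the full set of 60 publications before the normalization step.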

In addition, to establish the connection to the local context, a preliminary survey was conducted to identify the PIRFs influencing construction profitability in developing countries. This survey targeted 50 construction industry professionals appointed at various levels in construction projects, including project managers (11), construction managers (8), project directors (7), project/site engineers (10), design team leaders (9), and contract engineers (5).

The key questions inquired about each respondent's level of understanding of the topic, knowledge of risk-based digital tools, and the implementation of risk management systems in their respective organization for managing profitability. As a result, 30% of the respondents reported a moderate understanding, and 55% an advanced understanding, of the topic. Further, 40% had average knowledge, and 45% good knowledge, of risk tools and their implementation, which corroborates the data quality of the current study.

Most of the field specialists were associated with contractors (24), consultants (14), and clients (12). The average experience of the group of 50 respondents was more than 15 years. Further, the distribution is 5–10 years (15), 11–15 years (12), 16–20 years (13), and over 20 years (10). Most of the responses were from the respondents associated with large-size organizations engaged with the construction of hydropower, road, and building projects. The numbers and nature of risk observed by the specialists of the organizations dealing with projects were reported as medium to high. Based on expert feedback, normalized industry scores were calculated using mode values against all responses obtained from the survey. These were subsequently ranked in descending order [36].

Overall, three surveys were conducted in this study. The first survey was carried out for ranking PIRFs, using weighted normalized and cumulative scoring of literature/industry. The second survey was carried out to assess the interrelationship and impact of PIRFs and PIRCs, in order to compute the relative importance index (RII) of responses. Finally, a third survey was carried out to confirm the linkages and polarities for the development of CLD, speed and strength of influences, and loop prioritization, in line with the recent studies [36,37].

#### *3.2. Data Collection and Analysis Phase*

During the analysis phase, factors with lower values were screened out using a simple additive weighting procedure, and the critical factors were ranked appropriately [36,60]. First, the factors were given cumulative scores based on alternative weighting distributions (literature/industry), such as 30/70, 40/60, and 50/50, determined using the literature and industry scores as previously discussed. Then, a statistical check (one-way ANOVA) was run across the weighting distributions to examine whether there was a statistically significant change between the rankings of the different variables. A *p*-value of 0.85 indicated no statistically significant difference, so the weighting distribution was finalized in consultation with industry experts. As a result, 15 key PIRFs of construction profitability were ranked, as indicated in Table 3. These were chosen based on a 60% cumulative score to capture the maximal effect, employing a 40/60 weighting distribution.
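A minimal sketch of the weighting-and-screening step follows. The normalized literature/industry scores are hypothetical (the actual study drew on 60 publications and 50 survey responses), and factors are retained until their cumulative share of the total weighted score reaches 60%:

```python
# Simple additive weighting under a 40/60 literature/industry distribution,
# followed by a 60% cumulative-score cutoff. Scores are illustrative only.
factors = {
    "Rising cost of material": (0.9, 1.0),   # (literature, industry), both 0-1
    "Payment issues": (1.0, 0.8),
    "Planning and scheduling issues": (0.6, 0.7),
    "Inclement weather": (0.3, 0.2),
}
w_lit, w_ind = 0.4, 0.6

weighted = {f: w_lit * lit + w_ind * ind for f, (lit, ind) in factors.items()}
ranked = sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)

# Keep factors until the running total covers 60% of the overall score.
total = sum(weighted.values())
selected, running = [], 0.0
for name, score in ranked:
    if running / total >= 0.6:
        break
    selected.append(name)
    running += score
print(selected)
```

In the study itself, this screen is what reduces the candidate set to the 15 key PIRFs of Table 3; an ANOVA across the alternative distributions (30/70, 40/60, 50/50) would then confirm the rankings are insensitive to the choice of weights.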


**Table 3.** Significant profitability-influencing factors risk (PIRF).

As the content analysis was based on secondary data, primary data gathering was required to enhance reliability and establish the context for the current study. To guarantee and enhance the credibility and efficacy of this study, an international survey was conducted, which included additional analysis and data collection from developing countries. The World Economic Forum's Inclusive Development Index (IDI, 2018), which assesses each country's development, was used to select these nations.

To begin the data gathering process, a structured impact matrix questionnaire was created using Google Docs. It was divided into two sections: the first section asked respondents for personal information, such as their country of origin, educational background, work experience, and organizational role, and the second section asked them to rate the impact of each causative factor on construction profitability using a three-point scale scored 1 (low), 3 (medium), and 5 (high).

To assure a representative sample, a sample size of 96 or more was needed, as highlighted in a similar study [61]. By examining their profiles on research networking sites, such as ResearchGate, and social media, including Facebook, Twitter, and LinkedIn, 305 construction management field specialists were contacted to provide relevant data. These specialists represent the four primary internal stakeholder categories, client, contractor, consultant, and designer, and come from both multinational and local (Pakistan) construction organizations [62].

A valid response rate of 82 percent was obtained via the online survey, yielding 250 valid replies. The reliability, consistency, and normality of these replies were checked using IBM SPSS Statistics. The RII approach was used to rank the derived relationships based on the importance assigned by respondents. In construction projects, the data gathered through the questionnaires identified 60 influencing linkages between PIRFs and PIRCs. The RII was used to calculate importance indices for each relationship [63] and to identify the most immediate causative variables and PIRCs in construction projects. Equation (1) was used for this purpose.

$$\text{Relative Importance Index} = \frac{\sum W}{A \times N} \tag{1}$$

where *W* = weights assigned on the Likert scale;

*A* = the maximum assigned weight;

*N* = total number of respondents.

The minimum and maximum values of RII are 0 and 1, respectively. It is vital to remember that evaluating all impacts, rather than direct causes, does not accurately portray the system's structure [10].

To classify the replies according to significance levels, a criterion similar to Rooshdi, et al. [64] was used, which defines RII scores from 0 to 0.2 as "Very Low," 0.2 to 0.4 as "Low-Medium," 0.4 to 0.6 as "Medium," 0.6 to 0.8 as "Medium-High," and 0.8 to 1 as "Very High."

To restrict the data to a smaller collection of summary variables, relations with RII values of 0.8 or greater were regarded as most important or most urgent, as shown in Table 4, and were, therefore, evaluated for further analysis using ST.
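Equation (1) and the significance banding can be sketched as follows. The responses shown are hypothetical, and the five-band scale follows the Rooshdi-style criterion cited above:

```python
# RII = sum(W) / (A * N), per Equation (1); bounded between 0 and 1.
def relative_importance_index(weights, max_weight):
    return sum(weights) / (max_weight * len(weights))

def classify(rii):
    # Five-band significance scale after Rooshdi et al. [64].
    if rii <= 0.2:
        return "Very Low"
    if rii <= 0.4:
        return "Low-Medium"
    if rii <= 0.6:
        return "Medium"
    if rii <= 0.8:
        return "Medium-High"
    return "Very High"

# Hypothetical replies on the three-point scale scored 1/3/5 for one
# PIRF-PIRC relationship (N = 8 respondents, A = 5).
responses = [5, 5, 3, 5, 5, 3, 5, 5]
rii = relative_importance_index(responses, max_weight=5)
print(rii, classify(rii))   # relations scoring "Very High" (>= 0.8) are retained
```

With these sample replies, the relationship would fall in the "Very High" band and therefore survive the 0.8 cutoff used to build Table 4.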


**Table 4.** Scrutiny of immediate PIRFs based on RII.

#### *3.3. Demographics of Survey Respondents*

In the third survey conducted in this study, several construction field practitioners and professionals were engaged. These respondents were appointed at various levels in their organizations, occupying various roles and responsibilities. Additionally, they have considerable experience in executing complex construction projects. Further, these respondents belong to different developing countries of the world. International responses made up around 54%, and the remaining 46% were from the Pakistani construction industry. Qualification-wise, 66% of respondents have master's degrees, and 6% have a PhD/D.Eng., meaning that 72% of replies are from highly qualified professionals. The remaining 28% of responses come from B.Eng./BS degree holders. The experience and higher qualifications of the professionals responding to the current questionnaire survey underline the credibility of the responses.

The survey results show that 15% of respondents are field professionals with more than 15 years of experience in different types of construction projects, indicating that responses come from highly knowledgeable and practiced professionals. A total of 35% of respondents have 10–15 years of experience, and 32% have between 5 and 10 years. This means a total of 82% of the respondents have more than five years of experience, while the remaining 18% have less than 5 years of construction experience. Considering the organizations, most of the respondents are from multinational organizations (48%). Likewise, 27% are from government departments, and 25% are from semi-government and other organizations.

#### *3.4. Systems Thinking and System Dynamics Modeling Phase*

In this phase, various CLDs were created, and ST and SD analyses were conducted accordingly. In order to create a CLD, the data is gathered in the form of expert opinions. Interviews with industry professionals, with an average of over 20 years of experience, were performed during the first phase. These respondents were working for client, contractor, designer, and consultant companies, each with their own tasks and responsibilities. There were project team leaders (4), contract and procurement experts (2), construction managers (3), planning engineers (2), and site engineers (4), among the most important positions. This step aimed to recognize linkages between variables and give polarity to relationships between the most direct causal links in chronological order, following relevant studies [37]. Using this information, a CLD was created that revealed six major loops.

CLDs represent the causal interdependencies in the form of links and loops within the system, to visually represent the causal relationships. In CLD, there are two types of loops, i.e., reinforcing and balancing loops. In the CLDs, "R" denotes the reinforcing loop and "B" denotes the balancing loop. Arrows are used to connect the factors in the diagram, indicating their directional influence. Each arrowhead is given a polarity to represent the link between the two variables. A negative polarity (−) suggests an inversely proportional connection, while a positive (+) indicates a directly proportional link. CLDs are created through Vensim software, to analyze the impact of the causal relationships, keeping in view the negative (−) and positive (+) polarities in the system.
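As a minimal illustration of the loop conventions just described, a loop's type follows from the parity of its negative links: an odd number of "−" links yields a balancing loop, otherwise the loop is reinforcing. The link chains below are hypothetical, not taken from the study's CLD:

```python
# A feedback loop is balancing ("B") when it contains an odd number of
# negative (-) links, and reinforcing ("R") otherwise.
def loop_type(polarities):
    return "B" if polarities.count("-") % 2 == 1 else "R"

# Hypothetical link chains for illustration:
r_loop = ["+", "+", "+"]   # e.g. price rise -> material cost -> supply disruption
b_loop = ["+", "-", "+"]   # one inverse link makes the loop balancing
print(loop_type(r_loop), loop_type(b_loop))
```

This parity rule is what Vensim applies implicitly when labeling the "R" and "B" loops in a CLD.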

As a result, a stock and flow diagram was created in this study to assess the combined effect of each influencing element on the profitability risk categories, including cash flow, supply chain, contingency, and project complexity. For the created system of construction profitability, many simulations were run to provide dynamic and integrated causal effects of elements on one another and the profitability risk categories. Finally, conclusions were drawn based on the project goals and analyses.

In the second phase, the same experts involved in CLD preparation were asked to characterize each feedback loop according to its intensity and speed of effect. Using this loop-based categorization method, the system's most essential loops were determined [65]. For the interviews, the sample size was determined using the notion of sample saturation [66]. To achieve data saturation, many interviews are necessary; however, by the 15th interview, sample saturation was attained, demonstrating that the replies were consistent and confirming the validity of the data.

#### **4. Results, Analyses, and Discussions**

#### *4.1. Impacts of Influencing Factors on Construction Profitability*

The interdependencies between important aspects of construction profitability and the associated PIRFs and PIRCs necessitated mixed analyses: qualitative and quantitative. As a result, the assessment was conducted using the CLDs (Figure 2), which aid in a better understanding of how construction project profitability is driven inside a dynamic system. This also helped visualize the influence of various PIRFs and PIRCs in construction projects.

**Figure 2.** Causal loop diagram based on systems thinking of balancing and reinforcing loops.

The CLD constructed in this study has six important loops, representing the causality of PIRFs affecting one of the PIRCs. It is made up of four reinforcing, or positive, loops that cause a change in the same direction (growing or decreasing), represented by the symbol "R". On the other hand, it has two balancing, or negative, loops that move variables in the opposite direction (i.e., oppose the change in every cycle), marked by the letter "B". The loops are all recognized and discussed in the following sections.

#### *4.2. Reinforcing Loop-R1 (Impacting Supply Chain-PIRC)*

Figure 3 shows that the abrupt disturbance in the market conditions is due to frequent fluctuations in price. As a result, the supply of the materials cannot be provided at a constant price rate, leading to the rising cost of construction materials. The disturbance in the supply chain process is observed due to the increasing cost of construction materials, which finally impacts the overall construction project supply cost and time. This reduces the estimated profit of the construction project.

**Figure 3.** Impact on profitability risk category–supply chain (reinforcing loop-R1).

#### *4.3. Reinforcing Loop-R2 (Impacting Cash Flow-PIRC)*

Figure 4 shows that the supply of the materials cannot be provided at a constant price rate, due to the rise in the cost of construction materials. As a result, the construction contractor spends more money procuring materials than the planned and forecasted cash flow reserved for purchasing. The disturbance in the cash flow leads to an increase in the construction contractor's financial difficulties. Accordingly, the contractor cannot fulfill the payment agreement to all concerned payment-receiving stakeholders. Finally, the loop variables impact construction projects' overall cash flow behavior.

**Figure 4.** Impact on profitability risk category–cash flow (reinforcing loop-R2).

#### *4.4. Reinforcing Loop-R3 (Impacting Contingency-PIRC)*

Figure 5 shows that the risk factor "financial difficulties", from the previously derived loop, creates issues such as cost overruns, time delays, sacrificed quality, and, sometimes, scope variations. The economic challenges are also responsible for the ineffectiveness and inefficiency of executing critical activities in construction projects. In addition, the cost, time, quality, and scope issues increase the overall project complexity in different directions and stages. This further exacerbates the unexpected risks/situations for the contracting parties. As in other construction projects, time contingency and cost contingency are used to counter uncertain problems. Finally, the loop variables impact the overall construction project time and cost contingency.

**Figure 5.** Impact on profitability risk category–contingency (reinforcing loop-R3).

#### *4.5. Reinforcing Loop-R4 (Impacting Project Complexity-PIRC)*

Figure 6 shows that the risk factor "financial difficulties" creates complexities for the construction project in terms of cost overruns, time delays, sacrificed quality, and scope variations. The complexities in the execution stage of the project are responsible for the increase in schedule risks and associated issues. Moreover, the scheduling issues increase the chance of time overruns and the overall project duration. The inclement weather cycle becomes prominent in the construction project, due to such time overruns and increased project durations. The impact of the inclement weather cycle increases the project complexities and the duration. Finally, the loop variables show an integrating impact that reinforces the project complexity.

**Figure 6.** Impact on profitability risk category–complexity (reinforcing loop-R4).

#### *4.6. Balancing Loop-B1 (Impacting Project Complexity-PIRC)*

Figure 7 shows that the risk factor "project complexity" negatively impacts the schedule and project duration, and creates multiple problems related to time overruns. In a construction project, time is money for the construction contractors. Thus, when encountering an increase in the project duration, the contractor has to manage the human resources and equipment resources efficiently. Therefore, effective resources management becomes necessary to deliver the project within the specified contract period. Additionally, adequate resources management, including effective time management, helps reduce the complexities involved during the execution stage of the project. Accordingly, loop B1 shows a balancing impact on project complexity.

**Figure 7.** Impact on profitability risk category–complexity (balancing loop-B1).

#### *4.7. Balancing Loop-B2 (Impacting Project Complexity-PIRC)*

Figure 8 shows that the scope variations in the construction projects are responsible for the changes in the specification, which leads to a lack of material availability. As a result, the stakeholders must spend more money on material management. Construction site management becomes difficult due to the lack of material availability, which usually delays the execution of the construction activities, causing cost and time overruns. Poor site management demands more sharing of information. Therefore, effective information sharing, and management through information technology, becomes necessary. It helps reduce complexities in the execution stage of the project and aids effective time management for the construction activities. Accordingly, loop B2 also shows a balancing impact on project complexity.

**Figure 8.** Impact on profitability risk category–complexity (balancing loop-B2).

#### *4.8. Loop Analysis and Validation*

The speed and degree of effect of system outputs provide thorough criteria for loop classification. This category serves as a filter, making it easier to prioritize important tasks. The following order is used to prioritize the influence of all loops in the CLD: fast–strong, fast–weak, slow–strong, and slow–weak, in line with the recent studies [65]. For each feedback loop, Table 5 summarizes the findings. Reinforcing loops have a long-lasting resonant effect, while balancing loops have a fading effect that shifts with time.
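The fast–strong > fast–weak > slow–strong > slow–weak ordering can be expressed as a simple sort. The speed/strength labels below are hypothetical placeholders, except for R2, which the study classifies as fast and strong:

```python
# Loop prioritization: fast-strong first, slow-weak last. Labels other than
# R2's are illustrative, not Table 5's actual classifications.
priority = {("fast", "strong"): 0, ("fast", "weak"): 1,
            ("slow", "strong"): 2, ("slow", "weak"): 3}

loops = {"R2": ("fast", "strong"), "R1": ("slow", "strong"),
         "B1": ("slow", "weak"), "R3": ("fast", "weak")}

ordered = sorted(loops, key=lambda name: priority[loops[name]])
print(ordered)
```

Whatever the expert-assigned labels, this filter surfaces the fast, strong loops (here R2) as the ones demanding immediate managerial attention.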


As loop R2 has such a powerful, quick, and reinforcing impact, it is deemed crucial in the current study. R1, R3, R4, B1, and B2 are less important loops. Before moving on to SD modeling and the conclusion of the research, the member-checking approach, also known as respondent validation, was used to confirm the reliability of the data and validate the CLD [67]. The CLD was returned to the participants involved in the expert opinion session to verify whether the dependencies still hold. Accordingly, each participant interpreted the data and confirmed the correlations as they saw them during the first interviews, adding to the findings' credibility.

#### *4.9. System Dynamic Modeling and Simulations*

The developed CLD was converted into a stock and flow diagram to predict the behavior of the variables within the system over time. The pictorial view of SD modeling is shown in Figure 9. Figure 9 includes key causative PIRFs and the PIRC in the developed construction profitability system. PIRCs, i.e., supply chain, cash flow, contingency, and project complexity, serve as the leading stocks in SD modeling. The price fluctuation acts as an exogenous variable, and is subjected to different input values. These values range from 0–100%, depending on the economic conditions faced by the construction projects. Many simulations were conducted to judge the impact on the PIRCs, due to the system integration and nonlinear dynamic relationships among the interdependent variables (PIRFs).

**Figure 9.** System dynamic modeling.

Vensim is used to conduct the SD analyses. The inputs take the form of stocks and flows, integrated using Euler's method over time steps between the initial and final times. The software integrates the causal feedback links and loops, and produces time-based simulation results as tables, graphs, causes trees, and causes-strip displays, alongside the model equations. In this research, PIRCs are stocks, denoted by "C", and PIRFs are flows, denoted by "F". The governing equations, in terms of stocks and flows, are given in Equations (2)–(5).

$$\text{PIRC-C1} = (0.5 \times \text{F12} + 0.5 \times \text{F8}) \tag{2}$$

$$\text{PIRC-C2} = (0.5 \times \text{F14}) \tag{3}$$

$$\text{PIRC-C3} = (0.5 \times \text{F1} - \text{F5} + 0.5 \times \text{F2} - \text{F4}) \tag{4}$$

$$\text{PIRC-C4} = (0.5 \times \text{F6}) \tag{5}$$

where the pertinent variables include cash flow (C1), contingency (C2), project complexity (C3), and supply chain (C4) as various categories (PIRCs). The PIRFs include financial difficulties due to cash flow problems (F1), inclement weather conditions (F2), increase in project durations (F3), ineffective control of the manpower and equipment resources (F4), ineffective information sharing and use of information technology (F5), interrupted supply chain process (F6), lack of material availability (F7), payment issues (F8), planning and scheduling issues (F9), poor site management and supervision (F10), price fluctuations (F11), the rising cost of material due to market fluctuation (F12), scope variations and scope changes (F13), unexpected situations during project execution (F14), and fluctuations in market conditions (F15).
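As a rough sketch of how Euler integration advances Equations (2)–(5), the following accumulates each PIRC stock month by month. The flow values are hypothetical constants chosen only for illustration; in the actual model, each flow is itself a function of the CLD's feedback links, which is what produces the nonlinear trajectories in Figures 10–12:

```python
# Euler-integration sketch of the stock and flow model in Equations (2)-(5).
# Flow magnitudes are hypothetical monthly rates, not the study's values.
def simulate(months=12, dt=1.0):
    F = {1: 0.05, 2: 0.04, 4: 0.01, 5: 0.01,
         6: 0.02, 8: 0.06, 12: 0.08, 14: 0.03}
    C1 = C2 = C3 = C4 = 0.0                                  # stocks start empty
    for _ in range(int(months / dt)):
        C1 += dt * (0.5 * F[12] + 0.5 * F[8])                # cash flow, Eq. (2)
        C2 += dt * (0.5 * F[14])                             # contingency, Eq. (3)
        C3 += dt * (0.5 * F[1] - F[5] + 0.5 * F[2] - F[4])   # complexity, Eq. (4)
        C4 += dt * (0.5 * F[6])                              # supply chain, Eq. (5)
    return C1, C2, C3, C4

print(simulate())
```

The hypothetical rates here were chosen so that the stock ordering after 12 months (cash flow largest, then complexity, contingency, supply chain) mirrors the relative ordering the study reports in Table 6; with constant flows the accumulation is linear, whereas the feedback-driven flows of the real model yield the exponential growth described below.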

The simulation results presented in Figures 10–12 show that the causative influences of the PIRFs are increasing functions. Each value starts at zero and rises exponentially over the course of a year to a maximum effect value. The causal feedback influence of the other interdependent variables in the system causes this nonlinearity. Price changes of 10%, 50%, and 100% (the last representing an extreme condition) were used in the experiment.

**Figure 10.** Dynamic integrated impacts on profitability risk categories over time (price fluctuations = 10%).

**Figure 11.** Dynamic integrated impacts on profitability risk categories over time (price fluctuations = 50%).

**Figure 12.** Dynamic integrated impacts on profitability risk categories over time (price fluctuations = 100%).

The overall results concerned with SD integrated impacts on PIRCs are shown in Table 6. The maximum effects are observed for the category of cash flow (74.18%), followed by project complexity (52.26%), contingency (17.22%), and supply chain (14.21%).


**Table 6.** Results of system dynamic modeling and simulations.

#### *4.10. Discussion on SD Simulation*

From the SD simulation, it is observed that the link from the PIRC "cash flow" to financial difficulties is the most important. This is in line with other relevant studies [2,26]. The link is a key source of contractors' payment risks and creates cash flow problems in construction projects during execution. The second most important link is from the PIRC "project complexity" to unexpected situations. A similar relation is highlighted by Ortiz et al. [58]: an increase in project complexity leads to many unexpected situations during project execution. The published articles mentioned previously only show qualitative impacts, and a holistic mixed assessment has yet to be reported. To address this, the current study describes causal impact through both qualitative and quantitative simulations and derives the relevant values. The first link, "cash flow–financial difficulties," has a simulation value of 6.8%, and the second link, "project complexity–unexpected situations," has a value of 2.9%, over a period of 12 months against an input price fluctuation of 10%.

Similarly, the simulation values for 20% price fluctuations are 13.70% and 5.81%, respectively. In future studies, any effect in the system can be observed for any discrete period, and simulation results can be obtained and compared for specific data sets for all causal links using the developed SD model. Accordingly, all loops can be highlighted and explored to judge the causative qualitative and quantitative impact values of PIRFs and PIRCs for construction projects.

#### **5. Conclusions**

The current study's findings show that the rising cost of materials, supply chain issues, payment issues, planning and scheduling issues, financial difficulties, and ineffective control of manpower and equipment resources are the most critical PIRFs that have a causal impact on construction profitability. The interrelationships between these PIRFs, and the associated four PIRCs, are established through ST. These are iterated and quantified through SD modeling. A total of six causal feedback loops are developed here, in which four loops are reinforcing, i.e., R1 to R4, and two loops are balancing, i.e., B1 and B2. Overall, loop R2, dealing with the PIRC "cash flow," is considered critical because it has a strong, fast, and reinforcing influence. R1, R3, and R4 are the least important loops, followed by B1 and B2.

The quantification of the integrated influences of variables in causal feedback loops is achieved through the SD tool (Vensim). These are converted into stock (PIRC) and flow (PIRF) diagrams. The SD integrated impacts of PIRFs on PIRCs follow the pattern of "cash flow (74.18%)," "project complexity (52.26%)," "contingency (17.22%)," and "supply chain (14.21%)".

ST helps managers grasp management difficulties, not through computations, but by deductions from the system's behavior. The quantitative information (impacts) of the variables in the system over a certain time period is explained by SD. However, it is important to remember that qualitative and quantitative models may only help with decision-making by enabling linkages and interdependencies to describe complicated system behavior. They advise field practitioners on particular project-related issues. In addition, the model must be combined with case-based, or expert, systems to provide real-time advice to the project team on issues that arise throughout construction projects.

This study helps field professionals learn about PIRFs, diagnose associated issues, and assess their impacts. This helps in decision-making and policy formulation based on the PIRFs and PIRCs' causal significance, reduces the dynamic effect of their integration, and enhances profitability in construction projects. This study is limited to causes only and does not consider the positive aspects (opportunities). In the future, the causal integration and interrelationships of PIRCs, i.e., supply chain, cash flow, contingency, and project complexity, can be explored. Exploring other categories, and their integration and the risks associated with these interdependencies, may be recognized, and a risk-based model can be developed as a future priority.

**Author Contributions:** Conceptualization, S.J. and K.I.A.K.; methodology, S.J. and K.I.A.K.; software S.J. and K.I.A.K.; validation, S.J. and K.I.A.K.; formal analysis, S.J. and K.I.A.K.; investigation, S.J. and K.I.A.K.; resources, S.J., K.I.A.K., M.J.T., F.U. and M.A.; data curation, S.J., K.I.A.K., M.J.T., F.U., M.A. and B.T.A.; writing—original draft preparation, S.J., K.I.A.K. and F.U.; writing—review and editing, S.J., K.I.A.K., M.J.T., F.U., M.A. and B.T.A.; visualization, S.J. and K.I.A.K.; supervision, K.I.A.K. and F.U.; project administration, K.I.A.K. and F.U.; funding acquisition, M.A. and B.T.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all respondents involved in the study.

**Data Availability Statement:** Data is available with the first author and can be shared upon reasonable request.

**Acknowledgments:** The authors would like to acknowledge Taif University Researchers Supporting Project number (TURSP-2020/324), Taif University, Taif, Saudi Arabia for supporting this work. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code: (22UQU4390001DSR04).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Development and Application of Smart SPIN Model: Measuring the Spectrum, Penetration, Impact and Network of Smart City Industries in South Korea**

**Sungsu Jo <sup>1</sup> and Sangho Lee <sup>2,\*</sup>**


**Abstract:** The research agenda on smart cities has increasingly extended not only to the socio-economic relations between technologies and cities but also to the industrial economic ecosystem. The purpose of this study is to provide an analytical method for the characteristics of a smart city's ecology and industry. To this end, we developed a smart SPIN (Spectrum, Penetration, Impact and Network) model and applied it to analyze the ecology of the Korean smart city industry in general. This model consists of a smart spectrum model, a smart penetration model, a smart impact path model and a smart network clustering model, and it shows great potential as an analytical method for the smart city industry ecosystem. As source data, we used input–output tables from 1960, 1985 and 2015, which we reclassified into 25 and 8 industries related to the smart city ecosystem; additionally, we applied the 2015 GDP deflator. The results of the analysis are as follows. First, regarding spectrum, the number of smart industries is increasing, which means the scope and area of the smart city industry are expanding. Second, the smart penetration analysis shows that the smart industry can be applied to other industries; in other words, traditional industries can cross over to and utilize smart technology. Third, in the smart impact path analysis, production paths are increasing while the parameter paths did not show a triple parameter path; this means the value chain of the smart city industry is highly diversified, but the structure of the industry is weakening. Fourth, the smart network analysis shows that the important clusters are centered on traditional industries and do not appear around smart industry centers, which means the impact of the smart city is not yet strong. Our analysis shows that, today, the Korean industrial ecosystem of smart cities is interacting with existing industries and raising them to a smarter, more intelligent level. Thus, this kind of analysis is needed in order to find an optimized smart city industry ecosystem.

**Keywords:** smart city industry; industrial ecosystem; spectrum; penetration; impact; network; input–output table

#### **1. Introduction**

The World Economic Forum has stated that the most important issues in the world at present are the fourth Industrial Revolution (IR) and the Industrial Internet of Things (IIoT) [1]. The fourth IR and the IIoT will accelerate IT convergence with existing sustainable businesses and industries, along with hyper-connected societies, and are expected to create new business models [2]. From a sustainability perspective, there have been various city models converging with information and communication technologies and ecology technologies, such as the resilience city [3], u-eco city [4], smart city [5] and floating city [6]. The smart city is one such model: a burgeoning area matching the fourth IR. Smart cities can be applied as a model to solve existing urban problems while simultaneously creating brand new industries. In general, smart cities have a special role in managing the physical

**Citation:** Jo, S.; Lee, S. Development and Application of Smart SPIN Model: Measuring the Spectrum, Penetration, Impact and Network of Smart City Industries in South Korea. *Buildings* **2022**, *12*, 973. https:// doi.org/10.3390/buildings12070973

Academic Editor: Fahim Ullah

Received: 23 May 2022 Accepted: 6 July 2022 Published: 8 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

infrastructure of the city [7]. They utilize communication technology for transportation, supply, electricity, sewage, water supply and management. This technology is then embedded in the cities, such as mobile and sensor networks, which can be controlled through wireless technology and monitored through communication with personal devices linked with building sensors [8]. Smart cities give us a new paradigm in which urban areas can converge with communication technology [9]. They join various technologies and industries such as knowledge, construction and energy. Through this, they implement a smart economy, smart environment, smart mobility, and therefore, a smart life [8,10].

The ecosystem of the smart city industry works closely with the ecology of technology and the corporate ecosystem [11]. Smart cities have settled in as an important concept as well as a paradigm in industrial and technological ecosystem aspects [12]. Consequently, there have been more studies on the ecology of smart cities recently. Typically, studies have focused on smart city industrial ecosystems [10,12], smart city governance ecosystems [13,14], smart city service and technology ecosystems [15–17] and smart city data management ecosystems [18,19]. One study aims to clarify the concepts of enterprise architecture, big and open linked data analytics and the smart city, and how they relate to each other [18]. Another draws on orchestration to address multi-layer tensions in smart city data ecosystems and presents a case study of London's city data ecosystem between 2017 and 2019 [19].

However, most studies are discursive in nature, structured on the smart city concept to measure economic efficiency rather than offering empirical means to examine its industrial mechanisms. There are some empirical studies, but they suffer from the limits of the analysis methodology and fragmented data on the ecosystem of the smart city industry. Most empirical analyses evaluate the smart city industry ecosystem through a piecemeal approach, due to analytical limitations, and focus on only a part of the smart city industry's ecological framework. Most empirical studies on the smart city industry have utilized input–output tables and have centered on economic impacts such as the forward linkage effect, backward linkage effect and multiplier effect [20–23]. For example, the economic ripple effect of smart city construction was investigated through the cases of Dongtan and Seoul using regional input–output tables [20,22]. Jeong's study calculated the effect of smart city development on the entire industry [21]. Another study classifies the industry and analyzes the ripple effect by viewing Internet of Things technology as a smart city industry [23].

Recently, these studies have added importance to the value chain in the industrial ecosystem using structural path analysis with input–output tables [10,12]. Analyzing clusters is important while performing industrial ecosystem analysis. Industry clusters help strengthen local regional economic structures and internal growth agents and positively develop the local economy through technology and knowledge promotion [24]. Distinguishing the industry cluster helps prioritize industry value chain partners as well as decision-making for industry policies [25]. Despite this, studies on the industrial clusters related to the smart city industry still have much work to do. From this viewpoint, current studies utilize a methodology that considers the relationship between industrial clusters and smart city industries to study the ecosystem of the industry. As a result, this study aims to develop an analytical model for the smart city ecosystem and apply it to case studies on changes in the Korean smart city industry and ecosystem. The introduction of different empirical approaches through the smart SPIN model can provide better insight and understanding of how smart city capital and capacity can structurally shape industrial convergence outcomes within an economic context.

The remainder of this study is structured as follows. Section 2 presents the theoretical background and previous studies for the identification of the smart city industry ecosystem in the economy. Section 3 presents the smart SPIN model and data analysis, a detailed methodological framework that enables us to quantify the smart sector's influence on the whole industry in terms of the smart city. Section 5 discusses the implications that derive from the results and concludes the paper, while Section 6 presents policy remarks and recommendations for future research. By analyzing the ecosystem of the smart city industry, this study identifies which industries need to be strengthened in the era of the fourth Industrial Revolution and suggests a policy direction for the industry.

#### **2. Literature Review**

#### *2.1. Smart City Industry*

Defining the industry scope has become difficult because of the development of communication technology, acute competition at the global level and rapidly integrating, converging technologies [26]. The trend of the smart city industry has not been clearly defined as a result of the convergence of science and technology, and the growth of science and technology integration has slowed down as well. Therefore, this study attempted to define the smart city industry through related literature and case studies. Case studies related to smart city industry building technology were also considered.

Early studies on the definition of the smart city industry were classified diversely according to the respective researchers' study objectives and subjective opinions. Cho et al. [27] analyzed the Korean smart city industry to determine its national economic impact. They reclassified the smart city industry into 15 industry categories. These categories consist of larger groups regarding personal life (5), mechanical devices (7) and public administration and services (3).

Jeong [21] classified the following as smart city industries: communication, broadcasting, visual and sound devices, transport facility, construction, other business services, education institute, culture and entertainment services. Lim et al. [22] studied the Seoul case to suggest smart city policy directions. They defined the smart city industry as having two parts: infrastructure and utilization. They categorized the smart city industry into eight wide classifications such as electric and electronic devices, construction, real estate, business services and so on.

Kim et al. [23] looked into the supply and demand of sensors for the Internet of Things (IoT). Simultaneously, they utilized the Delphi method to analyze the relationship between IoT sensors and the smart city industry. This study has reclassified the smart city industry into 30 small categories, based on the converging characteristics of science and technology. It aimed to verify the reclassification of the smart city industry and traditional industry by analyzing existing literature. The smart industry is reclassified and built on information and communication technology (ICT), as well as software and hardware. In contrast, the traditional industry is reclassified as the construction industry and public administration services industry, the former being a definitive classification. As we have noted previously, smart cities are integrated not only in the construction industry but also in the service and manufacturing industries as well.

Overall, research on smart city industry classification has been conducted in consultation with smart city experts and the smart city industry has been classified differently according to the smart city concept definition. According to the previous research, the smart industry was classified into IT manufacturing (e.g., computer, and electronic and electricity equipment), IT service (e.g., communication, S/W and broadcast) and knowledge service (e.g., education, health and welfare, culture and sports). In order to define the smart industry, this study identified the main technologies applied to the smart city through the smart-x case such as smart car [28], smart building [29], smart farm [30] and smart factory [31]. This smart-x industry expects production to grow exponentially by 2026 [32–35].

The following procedure was performed to connect the technology and services of the smart-x case with the industry. (1) The elements constituting the smart-x case such as service, technology and infrastructure were identified. (2) Technology and industry were reconnected based on the Harmonized Classification System of ICTs developed by TTA (Telecommunications Technology Association) in South Korea. The linked industries were finally applied to the Bank of Korea's input–output table.

Therefore, this study overcame the limited industry classification to define smart city industry classification: technology derived from the cases of smart cars, smart buildings, smart factories and smart farms. We focused on the technology and mapping through the input–output table approach as an acceptable method for industry classification [12]. The smart-x industry had 20 common industries classified as IT manufacturing, IT services and knowledge services (See the Appendix A, Table A1).

In our study, smart industry is defined as: IT manufacturing (semiconductor manufacturing, electronic display manufacturing, printed circuit board manufacturing, other electronic components manufacturing and computers and peripherals manufacturing); IT service (information service, software development supply services and communication and so on); knowledge service (research and development, building and civil engineering services, scientific and technical services and so on). In other words, all industries other than the smart industry are classified as traditional industries. For this study, we defined smart city industry as the conversion of the smart city industry with traditional industries (Figure 1).

**Figure 1.** Concept of the smart city focused on industries.

#### *2.2. Industrial Ecosystem of Smart City*

#### 2.2.1. Theory of Ecosystem

The word 'ecosystem' has been used in various fields, such as the industry ecosystem, business ecosystem, innovation ecosystem and urban ecosystem. Industries began discussing the key concepts of the ecosystem where the flows of materials and energy effectively match [36]. Additionally, the labor division in the industry value chain started to increase the interdependency between business and industry, and this served as a base requirement that activated the discussion. Research on the industrial ecosystem started with Ayres in 1989, who, while studying the environment and industry, introduced the concept of industrial metabolism and related studies. Tibbs [37] focused on the harmony between industry and the natural ecosystem to help one understand the basic mechanisms of the natural ecosystem.

The industrial ecosystem (industrial value chain) in the business ecosystem focuses on the relationship between enterprises [38]. The industrial value chain is an industrial and economic concept based on technical and economic relationships between industries [39]. The industry ecosystem in the innovation ecosystem is shaped by the relationships among performers (material resources, human resources) and entities (institute systems) [40]. It is differentiated from the research economy and commercial economy as well as innovation ecosystems based on the energy cycle. The urban ecosystem recognizes a city as an ecological unit and analyzes it in terms of material and energy cycles. It regulates the various inputs and outputs: if the strength of the metabolism increases, the inputs and outputs increase. When there is a mismatch between input and output in the urban ecosystem, intervention from the external environment is necessary to balance the ecosystem. We should comprehend a city as a system and attempt to match its ecological structure. In order to understand the urban ecosystem, one should know that a city is alive and interacts as a system. Thus, we need a comprehensive and macro-level perspective on the urban ecosystem.

To summarize, the theoretical framework of the ecosystem for the study is as follows: (1) ecosystems converge on the defined keyword of interaction; (2) economic, industrial, technical perspectives and ecosystems accommodate a new thing appearing to characterize and increase its scope and territory; (3) an ecosystem has the characteristics to create new value through convergence between technologies and businesses; and (4) optimal ecosystems can be made and characterized by forming clusters integrated among different industries [12].

#### 2.2.2. Previous Study of Smart City Industrial Ecosystem

Smart city industrial ecosystems have seen little research and progress worldwide. The most representative studies are qualitative, such as system architecture design and governance establishment for smart city ecosystems [19,41–43] and smart city innovation ecosystems [44], whereas quantitative studies address the changes of smart city industry convergence [10,12,44].

The study by Abella et al. [41] presented the continuous reuse of data, which are produced, collected, processed, treated and circulated from smart cities. This produced an ecosystem model that created new value. This data ecosystem model consists of three stages. The first stage is framed to validate the reuse of open data. The second stage utilizes value created by continuous reuse of open data. The third stage is economic and social value creation based on the first and second stages. This study builds a staged smart city data ecosystem and provides smart city services according to citizens' practical needs.

Gupta's study [19] examined the orchestration governance of a smart city data ecosystem through a London case study. This form of governance zeroes in on openness, diffusion and sharing. He asserted that the openness of archives decreased the duplication of administration and, at the same time, maximized efficiency where techniques and organizations had flexible systems. Diffusion was implemented through innovative organization based on literacy; in other words, it required an organization capable of obtaining information and the power to understand that information. He claims that shared ownership was based on co-work at public institutions in London.

Ahlers et al. [42] presented a system architecture for an IT-oriented smart city ecosystem. Its architecture aims to integrate systems with the multiple stakeholders that operate and manage smart cities. In addition, it is a supporting system that can be easily replicated into other smart city projects. This architecture will investigate the influences of the energy transition into urban management and planning, the integration of eMaaS (e-Mobility as a service) into positive energy communities and the growth of local trading markets and new business models [42].

Linde et al. [43] provide insight into how to develop dynamic capabilities to innovate the smart city industry ecosystem. The dynamic functions are configuring ecosystem partnerships, value proposition deployment and governing ecosystem alignment. This study offered insights into the specific micro-foundations or seizing and reconfiguring capabilities, which are necessary to orchestrate ecosystem innovation through a multiple case study of smart city initiatives [43].

Kim et al. [44] focused on the activation of smart city innovative ecosystems. They argue that the smart city innovation ecosystem defines a view of traditional cities combined with the Korean smart city model. The study proposed a framework on smart city innovative ecosystems, consisting of six key points: physical resource; virtual assets; human resource; economic assets; governing system; and socio-culture. Additionally, they suggested ways to activate smart-city-related businesses and analyzed the smart city innovative ecosystem centered around the six key points.

The study by Jo and Lee [10] defined smart city industry as consisting of IT manufacturing, IT services and construction based on knowledge services. Data for this study used input–output tables from the years 1980 and 2012. This study focused on analyzing qualitative change, quantitative change and conversion change in the smart city industry ecosystem. The results of analysis and the smart city industry were growing from both the qualitative and quantitative points of view. The smart city industry is led by conversion changes, which the analysis revealed to be a central force for strengthening the ecosystem. However, when we examine industrial perspectives in general, traditional industries show greater conversion changes than the smart city industry. Traditional industries are leading changes in conversion, which means smart cities are still at the beginning stage in Korea.

Jo et al.'s study analyzed how the ecosystem of the smart city industry is changing from a sustainable perspective. This study used the input–output model and structural path analysis. The analysis data are input–output tables from 1960 to 2015. As a result of the analysis, it was confirmed that the smart city industry is replacing other industries in the overall industry structure and creating a new value chain [12].

Most of the existing research has been limited to qualitative work such as system development and design and governance establishment. Quantitative research on industrial and economic aspects has relied on the input–output model and structural path analysis, and has limitations in that it cannot deviate from the analysis of input–output coefficients and multiplier coefficients. Therefore, this study developed a smart SPIN model based on the input–output model, structural path analysis and network analysis, applying a new methodology that differs from existing studies.

#### **3. Research Model and Data**

#### *3.1. Development of a Smart SPIN Model for Analyzing the Industrial Ecosystem in Terms of Smart Cities*

This study developed a new analytical method: smart SPIN model. The smart SPIN model formulation uses data from input–output tables and consists of four models: smart spectrum model (SSM), smart penetration model (SPM), smart impact path model (SIM) and smart network clustering model (SNM). Our smart SPIN model was formulated by fixing and upgrading (as well as integrating) existing analytical tools. In order to understand the smart SPIN model, it is necessary to understand models such as the input–output model [45], structural path analysis [46] and social network analysis [47].

The outline for understanding the model is as follows. First, the new interpretations of the input–output model are SSM and SPM. Analysis using the existing input–output model focuses on measuring the ripple effect using the multiplier coefficient such as employment inducement effect, income inducement effect and value-added inducement effect. However, it is not suitable for analyzing the ecosystem characteristics of industries, such as the expansion of the industrial scope and the analysis of inter-industry convergence.

Therefore, in this study, a smart spectrum model was presented as a model to analyze how much scope the smart industry has. In addition, a smart penetration model was developed to measure how much the smart industry penetrates (or converges) into the traditional industry and the smartization of the traditional industry.

Second, this study presented a SIM that reinterpreted structural path analysis in a macroscopic dimension. As described in the literature review, studies using the existing structural path analysis are limited as they focus on finding new paths and extinct paths. In other words, it is difficult to find the characteristics of the industry creating a new value chain. Therefore, this study focused on the smart industry and analyzed the macroscopic smart city industry ecosystem called the industrial value chain through the total number and pattern of industrial paths. In particular, this model can analyze industry intervention paths that have not been analyzed in other studies. This is an analysis of how important the smart industry is among other industries.

Third, we developed a SNM. This is an application of the existing network analysis. Existing network analysis focuses on the study of industry centrality indicators. Analysis using centrality can confirm the relationship between industries, but there is a limit to finding clusters between industries. This study presented a method that enables clustering based on the existing network analysis. The smart SPIN model presented in this study can be developed by interpreting the existing model from a new perspective, focusing on the smart city industry rather than focusing on various industries. Detailed smart SPIN modeling is given in Figure 2.

**Figure 2.** Concept of the smart SPIN model.

#### 3.1.1. Smart Spectrum Model

The smart spectrum model can analyze the scope of the smart industry in its ecosystem. The scope of smart industries is the sum of the number of smart industries among whole industries in input–output tables (See Figure 3). The scope of the smart city industry is analyzed by the sum of the number of industries (Equation (1)). The model shows how many smart industries are within the nation's industries.


**Figure 3.** Example of the calculation of the smart spectrum.

$$\text{Ss} = \left(\sum\_{j=1}^{n} I n\_j\right) / n \tag{1}$$

where:

*Ss*: smart spectrum;

*Inj*: industry placed in rows in the input–output table;

*n*: total number of whole national industries.

$$\begin{cases} In\_j = 1 & \text{If } In\_j \in \text{Smart Industries} \\ In\_j = 0 & \text{otherwise} \end{cases} \tag{2}$$
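As a minimal illustration, Equations (1) and (2) amount to counting the smart industries in the table and dividing by the total number of industries *n*. The sketch below uses a toy industry list (the names are invented for demonstration and are not the Bank of Korea classification):

```python
# Sketch of the smart spectrum (Equations (1)-(2)): the share of smart
# industries among all n industries in the input-output table.

def smart_spectrum(industries, smart_set):
    # Eq. (2): indicator is 1 for a smart industry, 0 otherwise
    indicator = [1 if ind in smart_set else 0 for ind in industries]
    # Eq. (1): sum of indicators divided by n
    return sum(indicator) / len(industries)

# Illustrative industry list (not the study's classification)
industries = ["semiconductor", "software", "construction", "agriculture"]
smart = {"semiconductor", "software"}
print(smart_spectrum(industries, smart))  # 2 of 4 industries are smart -> 0.5
```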

#### 3.1.2. Smart Penetration Model

The smart penetration model is calculated from the technical coefficients (input–output coefficients; see Figure 4). This model analyzes the smart input and the level of convergence, and can be written as Equation (3). The smart industry input shows, for each unit of production in every industry, the portion of intermediate input supplied by the smart industry compared to traditional industries.

**Figure 4.** Example of the calculation of the smart penetration.

$$Sp = \left(\sum\_{i=1}^{n} \sum\_{j=1}^{n} \left(a\_{ij} \cdot In\_i\right)\right) / n \tag{3}$$

where:

*Sp*: smart penetration;

*aij*: technical coefficient;

*Ini*: industry that appears in a column of the input–output table;

*n*: total number of national industries.

$$\begin{cases} In_i = 1 & \text{if } In_i \in \text{Smart Industries} \\ In_i = 0 & \text{otherwise} \end{cases} \tag{4}$$
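Equations (3) and (4) can be sketched the same way: sum the technical coefficients of the rows belonging to smart industries and divide by the number of industries. The coefficient matrix below is fabricated purely for illustration.

```python
# Sketch of Equations (3)-(4): smart penetration Sp sums the technical
# coefficients a_ij over rows i that belong to smart industries and
# averages over the n industries. The matrix A is invented example data.

def smart_penetration(A, industries, smart_industries):
    """Sp = (sum over i,j of a_ij * In_i) / n, with In_i = 1 for smart rows."""
    n = len(industries)
    total = 0.0
    for i, ind in enumerate(industries):
        if ind in smart_industries:
            total += sum(A[i])  # the whole row of smart-industry inputs
    return total / n

industries = ["TM", "ITM", "KS"]
smart = {"ITM", "KS"}
A = [
    [0.10, 0.05, 0.02],  # inputs of TM into each industry (traditional row)
    [0.04, 0.03, 0.01],  # inputs of ITM (smart row)
    [0.02, 0.02, 0.05],  # inputs of KS (smart row)
]
print(smart_penetration(A, industries, smart))
```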

#### 3.1.3. Smart Impact Path Model

The smart impact path model is a tool to analyze production paths and intervention paths. Smart production paths are measured by two indicators: the number of smart production paths and the number of production-type paths. The smart production paths are the sum of the paths identified in the evaluation of structural paths. Smart production path analysis reveals production triggers: more smart production paths mean a stronger link in the value chain (Equation (5), Figure 5).

**Figure 5.** Concept of analysis for the smart production path and the smart intervention path.

$$Spp = \sum_{n=1}^{m} p_n(si \to oi) \tag{5}$$

where:

*Spp*: smart production path;

*pn*: production path in the industry structural path;

*m*: total number of production paths;

*si*: smart industries;

*oi*: other industries.

$$\begin{cases} p_n = 1 & \text{if } In \in \text{Smart Industries} \\ p_n = 0 & \text{otherwise} \end{cases} \tag{6}$$

There are four types of production-type paths: *direct*, *via*1, *via*2 and *via*3. A *direct* path goes straight from the starting industry to the final industry, with no stop in the middle. A *via*1 path passes through one intermediate industry between the starting industry and the final industry. A *via*2 path passes through two intermediate industries to reach the final industry. A *via*3 path links three intermediate industries from the starting industry to the final industry. The *direct* type is a simple production path, while the *via*3 type is a complex production path. If the smart industry has new value chains and a diversified business structure, *via*2 and *via*3 types will appear more often than *direct* and *via*1 types. The equations for obtaining the production paths are as follows:

$$Spp_{(D)} = \sum_{n=1}^{m} D_n(si \to oi) \tag{7}$$

$$Spp_{(via1)} = \sum_{n=1}^{m} via1_n(si \to oi) \tag{8}$$

$$Spp_{(via2)} = \sum_{n=1}^{m} via2_n(si \to oi) \tag{9}$$

$$Spp_{(via3)} = \sum_{n=1}^{m} via3_n(si \to oi) \tag{10}$$

where:

*Spp*(*D*): smart production path of *direct* type;

*Spp*(*via*1): smart production path of *via*1 type;

*Spp*(*via*2): smart production path of *via*2 type;

*Spp*(*via*3): smart production path of *via*3 type;

*Dn*: number of smart production paths of *direct* type;

*via*1*n*: number of smart production paths of *via*1 type;

*via*2*n*: number of smart production paths of *via*2 type;

*via*3*n*: number of smart production paths of *via*3 type;

*m*: total number of production paths.
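Counting the production-type paths described above amounts to classifying each structural path that starts in a smart industry by its number of intermediate industries (0 = *direct*, 1 = *via*1, 2 = *via*2, 3 = *via*3). A minimal sketch, assuming paths are given as lists of industry codes (the example paths are invented):

```python
# Sketch of Equations (7)-(10): classify smart-triggered structural paths
# by the count of industries between the starting and final industry.
from collections import Counter

def count_path_types(paths, smart_industries):
    labels = {0: "direct", 1: "via1", 2: "via2", 3: "via3"}
    counts = Counter()
    for path in paths:
        if path[0] in smart_industries:  # path triggered by a smart industry
            intermediates = len(path) - 2  # industries between start and end
            counts[labels[intermediates]] += 1
    return counts

paths = [
    ["ITM", "TS"],             # direct
    ["ITS", "TM", "TS"],       # via1
    ["KS", "ITM", "TM", "C"],  # via2
    ["TM", "C"],               # starts in a traditional industry: not counted
]
print(count_path_types(paths, {"ITM", "ITS", "KS"}))
```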

The smart intervention path reveals the smart industry's contribution to linking other industries. For example, in a *via*2-type path (I1-I2-I4-I3), all the paths are summed as ITM/ITS, ITM/KS and ITS/KS. Because the *via*1, *via*2 and *via*3 labels were already used for smart production paths, different names are used here: in smart intervention path analysis, *via*1 is a single intervention path, *via*2 is a double intervention path and *via*3 is a triple intervention path. An increase in the number of smart intervention paths means that smart industries are growing as intermediate goods, semi-finished products or ingredients, and that transactions among industries are very active. This is expressed in Equations (11)–(13).

$$Sip = \sum\_{p=1}^{k} ITM\_p + \sum\_{p=1}^{l} ITS\_p + \sum\_{p=1}^{m} KS\_p \tag{11}$$

where:

*Sip*: single intervention path;

*ITMp*: ITM production path;

*ITSp*: ITS production path;

*KSp*: KS production path;

*k*: number of ITM production paths;

*l*: number of ITS production paths;

*m*: number of KS production paths;

*p*: index over production paths.

The double intervention path comes from calculating Equation (12) below.

$$Dip = \sum_{p=1}^{k} (ITM, ITS)_p + \sum_{p=1}^{l} (ITS, KS)_p + \sum_{p=1}^{m} (ITM, KS)_p \tag{12}$$

where:

*Dip*: double intervention path;

(*ITM*, *ITS*)*p*: production path containing both ITM and ITS;

(*ITS*, *KS*)*p*: production path containing both ITS and KS;

(*ITM*, *KS*)*p*: production path containing both ITM and KS;

*k*: number of ITM and ITS production paths;

*l*: number of ITS and KS production paths;

*m*: number of ITM and KS production paths;

*p*: index over production paths.

The triple intervention path comes from calculating Equation (13) below.

$$Tip = \sum\_{p=1}^{n} \left( ITM, \ ITS, \ KS \right)\_p \tag{13}$$

where:

*Tip*: a triple intervention path;

(*ITM*, *ITS*, *KS*)*p*: obtains an ITM, ITS and KS simultaneously in the production path;

*n*: total number of production paths that obtain ITM, ITS and KS simultaneously.
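Under the same list-of-codes assumption as before, the single, double and triple intervention paths of Equations (11)–(13) can be counted by how many distinct smart industries appear as intermediate nodes in each path. This is an illustrative sketch, not the authors' implementation:

```python
# Sketch of Equations (11)-(13): a production path counts as a single,
# double or triple intervention path according to how many of the smart
# industries (ITM, ITS, KS) appear between its first and last node.

def count_interventions(paths, smart_industries):
    single = double = triple = 0
    for path in paths:
        smart_mid = {ind for ind in path[1:-1] if ind in smart_industries}
        if len(smart_mid) == 1:
            single += 1
        elif len(smart_mid) == 2:
            double += 1
        elif len(smart_mid) == 3:
            triple += 1
    return single, double, triple

paths = [
    ["AM", "ITM", "TS"],       # single intervention (ITM mediates)
    ["TM", "ITM", "KS", "C"],  # double intervention (ITM and KS mediate)
    ["AM", "TM", "C"],         # no smart intermediary
]
print(count_interventions(paths, {"ITM", "ITS", "KS"}))  # (1, 1, 0)
```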

#### 3.1.4. Smart Network Clustering Model

The network clustering analysis used a minimum spanning tree (MST), which extracts the backbone of the directed industrial network from the production inducement coefficients [48,49]. MST is an algorithm that scans the whole network for the minimum-distance links and combines them sequentially. This study assumed that the production inducement coefficients represent the distances between industries: the algorithm looks for the smallest production inducement coefficient among all industries and connects the corresponding pair, and this process is repeated until all industries are connected [49]. The basic structure extracted by this algorithm, the core that accumulates the strongest production inducement links in the industry network, enables the clustering [50,51]. In equation form, this is Equation (14).

$$BN_{(ka)} = \min \sum_{p=1}^{n} net_{p(b_{ij})} \tag{14}$$

where:

*BN*(*ka*): backbone network using MST;

*netp*(*bij*): multiplier coefficient in network path.

The backbone network is the summation of minimized paths, so that the total of the transformed production inducement coefficients becomes the smallest. In this study, we applied the MST algorithm to the production inducement coefficients multiplied by minus one, so that the minimum-weight paths correspond to the largest coefficients, i.e., the strongest links.
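The MST step described above can be sketched with Prim's algorithm over edge weights equal to the negated production inducement coefficients, so that the minimum spanning tree retains the strongest links as the backbone. The symmetric coefficient matrix below is fabricated for illustration:

```python
# Sketch of the backbone extraction: Prim's MST over weights -coeff[i][j]
# keeps the largest-coefficient (strongest) links. Matrix values invented.

def backbone_mst(coeff):
    """Return the MST edges of the network with edge weight -coeff[i][j]."""
    n = len(coeff)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    w = -coeff[i][j]  # negate so strong links become cheap
                    if best is None or w < best[0]:
                        best = (w, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

coeff = [
    [0.0, 0.8, 0.1],
    [0.8, 0.0, 0.6],
    [0.1, 0.6, 0.0],
]
print(backbone_mst(coeff))  # [(0, 1), (1, 2)]
```

The weakest link (0–2, coefficient 0.1) is dropped, leaving the two strongest links as the backbone.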

The industrial network clusters can be found using the backbone network derived above. A network cluster must comprise at least three industrial nodes bundled as one and must include at least two links within it; both conditions must be satisfied to form a network cluster. The boundary of a network cluster is established where the links of nodes are connected in opposite directions (see Figure 6) [51]. We found industry clusters from the MST using these methods. Equation (15) expresses this formally.

**Figure 6.** Example of the smart network clustering.

$$Cluster_{(BN)} \Leftrightarrow N_{(i)} \geq 3 \land L_{(i)} \geq 2 \tag{15}$$

where:

*Cluster*(*BN*): MST cluster in the backbone network;

*N*(*i*) ≥ 3: industry node count of three or more;

*L*(*i*) ≥ 2: industry link count of two or more.

When these two conditions are fulfilled, a network cluster can be formed in the backbone network. The smart network clustering model can analyze how industry clusters are formed and connected, and their level of complexity. The indicator is the number of smart clusters; through it, we can determine whether such clusters exist and how many there are.

#### *3.2. Data Analysis*

This study used input–output tables for the years 1960, 1985 and 2015 from the Bank of Korea [52]. The input–output tables reflect the commodity prices of their respective periods. These years were specifically selected because 1960 is the first year Korea compiled an input–output table, 2015 is the most recent input–output table available and 1985 falls in the middle. The smart city (information city) started in South Korea with government informatization in 1960 [53]. Various smart policies have since been established in South Korea (e.g., the Administrative Computerization Plan (1978), National Infrastructure Network Plan (1987), Cyber Korea (1999), e-Korea (2002), Broadband IT Korea (2003), u-Korea (2006), U-City Law (2008) and the first and second U-City Comprehensive Plans (2009, 2014)).

The input–output table established by the Bank of Korea is published every five years; the latest version available at the time of this study was the 2015 table. Therefore, the data from 1960 to 2015 allow changes in the South Korean smart city industrial ecosystem to be examined. This study analyzed changes in Korean smart city development and used the GDP deflator with 2015 as the base year to compare the annual input–output tables, smoothing commodity price increases and eliminating the nominally increased portion.
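The deflation step can be illustrated as follows; the deflator values are invented, and the formula is the standard constant-price conversion rather than the Bank of Korea's exact procedure:

```python
# Hypothetical example of removing nominal price growth with a GDP
# deflator, as done when comparing the 1960, 1985 and 2015 tables.
# Deflator values below are invented for illustration only.

def to_real_2015(nominal, deflator, base_deflator):
    """Convert a nominal value to constant (base-year 2015) prices."""
    return nominal * base_deflator / deflator

# A nominal value of 500 with a period deflator of 40 (2015 = 100)
# corresponds to 1250 in constant 2015 prices.
print(to_real_2015(nominal=500.0, deflator=40.0, base_deflator=100.0))  # 1250.0
```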

Smart industries can be classified by understanding smart technologies. In this study, smart technologies were defined as sensing, processing, networking and interfacing [4,8,53]. Their characteristics are as follows. Sensing comprises technologies of seeing (e.g., cameras and lenses), hearing (e.g., auditory sensors) and smelling (e.g., olfactory sensors). Processing comprises technologies that process data received from sensors, such as knowledge algorithms (e.g., artificial intelligence). Networking comprises technologies used to transmit data for processing, such as Wi-Fi, 5G and broadband networks. Interfacing comprises technologies for output by display, such as monitors and large electronic displays. From an industrial perspective, these technologies can be classified into three areas. Firstly, sensing and interfacing technologies relate to IT manufacturing industries. Secondly, processing technologies relate to knowledge-based service industries. Thirdly, networking technologies relate to IT service industries. These contents also relate to the smart-x case analysis (see Appendix A, Table A1).

In particular, the OECD broke the traditional dichotomy between the manufacturing and service industries for economic activity by introducing a formal definition of the industrial sector of information and communication technology and knowledge service [54–56]. Although the OECD's definition of industry for ICT and knowledge services is a useful classification, it may not cover the full range of related activities [10,12,57]. However, the industrial classification approach introduced by the OECD is undoubtedly the most appropriate classification method because it can be quantified from an economic point of view and can conduct economic comparison studies between countries [57]. Based on this classification, this study classified relevant smart technology industries into IT manufacturing, IT services and knowledge services [8,10,12,57].

We reclassified industries for the three years of input–output tables to analyze the smart city industry ecosystem; the sub-subcategories that are the minimum unit of each industry were reclassified into 8 and 25 industries (See the Appendix A, Table A2). Analysis data on SSM, SPM and SIM were used for the 8 reclassified industries and SNM was adapted for the 25 reclassified industries. Each model uses different data because each model has different characteristics. For example, in the case of SNM, if we had used the eight reclassified industries, it would have been difficult to find industry clusters.

The eight reclassified industries are: agriculture and mining (AM), traditional manufacturing (TM), IT manufacturing (ITM), construction (C), energy generation and supply (E), IT service (ITS), traditional service (TS) and knowledge service (KS). We reclassified ITM, ITS and KS industries as smart industries and the other AM, TM, C, E and TS industries as traditional industries.

This study reclassified AM based on primary industries such as agriculture, fisheries, mining and quarrying. TM was classified around secondary industries. ITM was classified based on recent technology statistics and a literature search; it has 11 detailed subcategories, such as semiconductors, electronic display equipment, circuit boards, electric devices and computers. C comprises building construction and civil construction. E includes electricity, gas, waterworks and recycled energy. ITS was categorized around wired/wireless communication, communication services and software services. Based on the existing literature, KS includes finance and insurance, professional science and technology, education, health and culture [58,59]. Although the KS industry can include ITS, our study focused on a specific ecosystem analysis of the smart city industry, so we separated the two. TS covers food, accommodation, real estate, business support and other services.

#### **4. Analysis Result: Application and Interpretation of the Smart SPIN Model**

#### *4.1. Result of Application of Smart Spectrum Analysis*

This section analyzes the increased scope of the smart industry through the spectrum model (Table 1). The smart industry accounted for 8.4% (9) of all Korean industries in 1960. Its proportion increased to 10.7% (17) in 1985 and 20.1% (33) in 2015, revealing an increase in the spectrum. Conversely, the traditional industry's share decreased over the same years: 91.7% (99), 89.3% (142) and 79.9% (131).


**Table 1.** Changes in the smart spectrum (unit: no, %).

We can also see notable changes in the number of industries. The smart industry increased by 8 industries between 1960 and 1985 and by 16 from 1985 to 2015. Conversely, the traditional industry increased by 43 industries between 1960 and 1985 and then decreased by 11 from 1985 to 2015. ITS was categorized as the communication industry until 1985; by 2015, however, it comprised information services, software development and supply, and wired/wireless communication services. In 1960, KS was classified into three parts, education services, medical/health services and cultural services; the finance and insurance categories were added in 1985. Professional science and technology services were added later, and KS was further subdivided in 2015. In a nutshell, since the smart spectrum is increasing, the smart industry is expanding.

Smart spectrum analysis showed that the smart industry is becoming increasingly segmented. This means that the smart industry is building a new economic system. It also means that the smart industry is emerging as an important industry in the industrial ecosystem due to the continuous development of information and communication technology.

#### *4.2. Application Results of Smart Penetration Analysis*

The results of the smart penetration analysis reveal the pace at which the smart industry is progressing and the contribution made by the smart industry's inputs to other industries (Table 2). The penetration ratio of the smart industry across all industries was 5.2% in 1960; it rose to 22.8% in 1985 and 27.6% in 2015. Conversely, the traditional industry's penetration ratio was 94.7% in 1960, declining to 77.1% in 1985 and 72.4% in 2015. These ratios show that the penetration of the smart industry has increased while that of the traditional industry has decreased across all Korean industries. To produce a unit of goods, the smart industry penetrates faster, and the traditional industry's penetration ratio decreases, as technology progresses towards 5G, AI, robots and IoT.


**Table 2.** Changes in smart penetration (unit: coefficient, %).

The smart industry's penetration ratio increased by 17.6 percentage points between 1960 and 1985 and by 4.8 percentage points from 1985 to 2015; the traditional industry's ratio fell by the same amounts over the same periods. The smart industry's technical coefficient rose from 0.156% in 1960 to 0.840% in 1985 and 1.192% in 2015. This means that each year every industry needed smart industry input worth USD 19 in 1960, USD 105 in 1985 and USD 142 in 2015 to produce goods worth USD 1000. This input will keep rising as we move into the fourth IR.

The analysis shows that the ITM and KS categories have the largest penetration ratios among the smart industries, with ITM exceeding KS in 1960 and 1985. The average rate of increase across the smart industry is 236.1% for ITM, 183.7% for ITS and 282.7% for KS. In contrast, the traditional industry's TM, E and TS increased much less than KS. In sum, the penetration of the smart industry is rising, meaning that the smart industry has become an important input for growth; it can be stated that the smart industry already penetrates other industries.

The results of this analysis suggest that the city is changing from a physical city of rebar and concrete to a smart city that can see, hear and smell. In other words, smartization is progressing in all areas of industry. In particular, it can be confirmed that these results are closely connected with Korea's smart (or informatization) policies [53].

#### *4.3. Application Results of Smart Impact Path Analysis*

Analyzing the smart impact path tells us how the structure of production is changing and how it impacts other industries. The smart impact path was analyzed from two sides: the smart production path and the intervention path. The smart industry's impact on other industries appears as production paths, and the results are shown in Table 3. The average number of smart production paths is clearly increasing: 50.3 in 1960, 58 in 1985 and 70 in 2015. The average number for traditional industries was 38, 58.6 and 73.8, respectively.


**Table 3.** Changes in smart production paths (unit: no).

Looking at the average increases, the smart industry's ITM increased by 6.8 paths, ITS by 41.3 and KS by 11. In most traditional industries, the increase in production paths was higher than in the smart industry: AM 73, TM 40, C 42.5, E 33.4 and TS 24.8. This shows that the average increase in the number of paths for ITM and KS is lower than for the traditional industry. In contrast to the smart spectrum analysis, the traditional industries have more, and more varied, paths than the smart industry through which to deliver their impact. At the same time, the value chains started by the smart industry are weaker than those of the traditional industries.

Analysis of the smart intervention paths reveals the extent of the smart industry's mediation between industries and its convergence (Table 4). Single intervention paths dominate. In 1960, 56 single intervention paths and two double intervention paths appeared. The number of single intervention paths grew to 133 in 1985 and 218 in 2015. There were 4 double intervention paths in 1985 and 22 in 2015, an increase of about five times, although these numbers remain small compared with the single intervention paths. We carefully checked for triple intervention paths but found none in any of the years studied. Examining the single intervention paths year by year, ITM had the most paths in 1960, but KS has mediated more industries since 1985. From the double intervention path point of view, ITM and ITS converged to connect other industries in 1960, ITM and KS converged in 1985, and ITS and KS converged in 2015.


**Table 4.** Changes in smart intervention paths (unit: no).

These results show that the smart industry creates new value chains on behalf of the existing traditional industry. It complements other industries, eventually substitutes for them and has by now become an indispensable industry. Notably, the smart city industry ecosystem is moving from a simple converging structure to multiple, complex converging structures. Considering that no triple intervention path has appeared, we assume that the smart city industry is still in its infancy.

#### *4.4. Application Results of Smart Network Clustering Analysis*

The analysis of the smart network shows whether clusters were formed around the smart industry. Using the minimum spanning tree algorithm, which generates the backbone network, we look for clusters around the smart industry (Table 5; see Appendix A, Figure A1).


**Table 5.** Changes of smart network clustering (unit: no).

Note: accommodation and food service = AFS, broadcast and publishing = BP, chemical industry = CI, IT manufacturing = ITM, IT service = ITS, light industry = LI, non-metal and metal industry = NMMI, professional, scientific and technical activities = PSTA, real estate service = RES, wholesale and retail trade = WRT.

Three clusters formed in 1960, centered on the light industry and the metal and non-metal industry within TM. Isolated nodes appeared for broadcasting and publishing, and for professional, scientific and technical activities within KS, and for accommodation and food services in the traditional industry. In 1960, KS and ITS linked with the light industry, ITM linked with the metal and non-metal industry, and the medical and human health service linked with the chemical industry. In 1985, three clusters formed, centered on the chemical industry and the metal and non-metal industry; the only isolated node was KS's broadcasting and publishing. On the smart industry front, finance and insurance services linked closely with business services and real estate services in the traditional industry, while KS's professional, scientific and technical activities linked with the light industry, and the smart industry linked with the chemical industry, which has the largest node.

There were five clusters in 2015, all centered on the traditional industry, and no isolated nodes appeared. From the smart-industry-centered view, the chemical industry linked with education services, medical and human health services and ITM in 2015. The light industry is associated with broadcasting, publishing and cultural services; professional, scientific and technical services are associated with broadcasting and publishing; and finance and insurance services are associated with ITS. The analysis shows that no smart cluster appeared around the smart industry in 1960, 1985 or 2015. Based on this, it can be said that the smart industry acts as a supporting industry to the traditional industry: it is not yet significant enough to connect even small clusters, its impact ramifications are small, and it lacks the capacity to form an industry ecosystem by itself.

#### **5. Conclusions**

The purpose of this study is two-fold: first, to develop a tool to analyze changes in the Korean smart city industry ecosystem, the smart SPIN model; and second, to apply that tool to Korean input–output tables from three years to find the interactions among industries.

The findings and conclusions can be summarized as follows. First, the smart spectrum model shows that the number of smart industries has increased, meaning the scope of the smart industry is expanding. Second, the penetration analysis shows that the smart industry has higher technical penetration than traditional industries, meaning that traditional industries use the smart industry as an important base; the smart industry integrates into various traditional industries and increases their value. Third, the smart impact path analysis revealed that the number of production paths is increasing, but the value chain structure started by the smart industry is weaker than that of the traditional industry, reflecting the strength of Korean traditional manufacturing; the smart city industry ecosystem's value chain is not yet strong, and the smart industry substitutes for existing industries to make new value chains. Fourth, smart network clustering showed that the important clusters are centered on the traditional industry, and no cluster appeared around the smart industry; while the smart industry is growing along with the traditional industry, it is still too weak to stand on its own.

We believe this study contributes in the following ways. First, our study defined the term 'smart city' based on the technology factors structuring the smart city. Second, this study proposes a new method to analyze the ecosystem, going beyond existing studies of the smart industry's impact, such as production inducement and labor inducement. Finally, we were able to analyze the smart city industry ecosystem using the smart SPIN model, and the satisfactory results confirm the model's usefulness.

#### **6. Policy Remarks and Future Research**

In the industry ecosystem, a new industry is born on top of an existing industry (the parasite stage), expands to integrate with the existing industry (the conversion stage) and eventually replaces the existing industry (the conception stage). At the moment, the Korean smart city industrial ecosystem is interacting with the existing industry and transforming it into a more intelligent, smarter stage. However, Korean smart city policy sets the existing traditional industry aside and treats the smart city industry as an independent industry, pushed forward by the government as a new growth engine to guide the economic future of Korea. We think such a policy is premature; we should monitor the emerging industries and focus on converging the smart and traditional industries.

This study has several limitations. The smart city industry was classified through smart-x case studies, and this classification has its limits. There are also limitations in the analysis models: the input–output model is a macroscopic analysis method, so there is a limit to analyzing rapid changes in economic conditions. Finally, this study analyzed only three input–output tables; this data sample does not fully reflect the overall characteristics of Korean industry.

The study presents several options for future research. The analysis model can be applied not only in Korea but also in other countries and should produce reasonable output; thus, comparative studies with other countries are needed. However, it is difficult to use the model presented in this study in countries where input–output tables are not compiled, so new models using qualitative indicators need to be studied. We must also encourage continuous research to find the optimal smart city industry classification. The findings of this study can help us understand the path that the smart city, as a new growth engine of national industry policy, will take.

**Author Contributions:** Conceptualization, S.J. and S.L.; Methodology, S.J. and S.L.; Data curation, S.J.; Formal analysis, S.J.; Writing—original draft, S.J.; Writing—review and editing, S.J. and S.L.; Supervision, S.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea grant funded by the Korea government (Ministry of Science and ICT) (No. 2021R1F1A1049301).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The assistance provided by Byung-Ho OH is greatly appreciated.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

**Table A1.** Reclassification of industries using cases of smart-x.


Source: Jo et al., adapted from ref. [10], revised and supplemented. The o means that the smart-x case has been applied to the relevant industry.


**Table A2.** Reclassification of smart city industries (unit: industry no.).

Source: Jo et al., adapted from ref. [10], revised and supplemented.

**Figure A1.** Changes in Smart Network Clustering. (Note: The red nodes are smart industries. The green nodes are traditional industries).


**Table A3.** Abbreviation List.

#### **References**


### *Article* **Towards an Inclusive Walking Community—A Multi-Criteria Digital Evaluation Approach to Facilitate Accessible Journeys**

**Xiaoran Huang 1,2,\*, Marcus White <sup>2</sup> and Nano Langenheim <sup>3</sup>**


**Abstract:** Half the world's population now lives in cities, and this figure is expected to reach 70% by 2050. To ensure future cities offer equity for multiple age groups, it is important to plan for spatially inclusive features such as pedestrian accessibility. This feature is strongly related to many emerging global challenges regarding health, an ageing population, and an inclusive society, and should be carefully considered when designing future cities to meet the mobility requirements of different groups of people, reduce reliance on cars, and encourage greater participation by all residents. Independent travel to public open spaces, particularly green spaces, is widely considered a key factor that affects human health and well-being and is considered a primary motivation for walking. At the same time, unfavourable steepness and restrictive access points to open spaces can limit accessibility and restrict the activities of older adults or people with mobility impairments. This paper introduces a novel open access proximity modelling web application, PedestrianCatch, that simulates pedestrian catchments for user-specified destinations utilising a crowd-source road network and open topographic data. Based on this tool, we offer a multi-criteria evaluation approach that considers travel speed, time, urban topography, and visualisation modes to accommodate various simulation needs for different urban scenarios. Two case studies are conducted to demonstrate the technical feasibility and flexibility using the proposed evaluation approach, and explain how new renewal strategies can be tested when designing a more inclusive neighbourhood. This evaluation tool is immediately relevant to urban designers, health planners, and disability communities, and will be increasingly relevant to the wider community as populations age, while the corresponding analysis approach has a huge potential to contribute to the pre-design and design process for developing more walkable and accessible communities for all.

**Keywords:** inclusive city; digital tool; age-friendly; walkability; accessibility; multi-criteria evaluation

### **1. Introduction**

Urbanisation has been one of the most important drivers of global growth. More than half of the world's population presently lives in cities, a number expected to rise to 70% by 2050 [1]. While urbanisation is advancing the global economy, growing inequality inside cities has the potential to stymie growth [2]. The international community of urban researchers, planners, and designers has recognised the importance of creating more inclusive cities and ensuring that all people benefit from well-planned urbanisation. The World Bank's dual aims of eradicating extreme poverty and achieving shared prosperity put inclusion at the forefront [3]. Similarly, UN-Habitat has proposed a stand-alone goal for cities and urban development in the 2030 Agenda: Sustainable Development Goal 11 (SDG 11), "make cities and human settlements inclusive, safe, resilient and sustainable" [4]. Despite worldwide recognition and commitment, building inclusive cities remains a major challenge in contemporary society. It is critical to recognise that the notion of inclusive cities encompasses a complex web of spatial, social, and economic aspects in order to ensure that tomorrow's cities offer opportunities and improved living conditions for all. Spatial inclusion necessitates affordable access to vital infrastructure and services, while social inclusion emphasises the need to guarantee the equal rights and participation of all, including the most marginalised [5,6]. The spatial and social components of urban inclusion inextricably underpin one another.

**Citation:** Huang, X.; White, M.; Langenheim, N. Towards an Inclusive Walking Community—A Multi-Criteria Digital Evaluation Approach to Facilitate Accessible Journeys. *Buildings* **2022**, *12*, 1191. https://doi.org/10.3390/buildings12081191

Academic Editor: Fahim Ullah

Received: 4 July 2022; Accepted: 4 August 2022; Published: 9 August 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Today, one in three people in the developing world live in residential areas with inadequate facilities and amenities [7]. In developed countries, non-car-based access to essential services for vulnerable and minority demographics can be limited, contributing to significant health, social, and economic issues in these communities [8].

Designing walking- and wheelchair-friendly environments with good proximity (pedestrian accessibility) to key destinations has social equity benefits for older adults, women, and children. Better walking environments promote autonomous mobility for older people and members of the disability community, greatly improving their independence, mental health, and well-being [9]. Walking environments that are perceived as safe also promote women's use of active transport modes [10], the school drop-off trips made by women with young children, which they make up to four times as often as men [11,12], and the unaccompanied school trips of older children that build life-long active transport habits [12].

However, existing approaches to accessibility modelling have significant limitations. Without accounting for topography, accessibility models have been constrained either to overly simplified and inaccurate circular catchment buffers or to sophisticated network buffers generated with geoprocessing routines in geographical information systems, which can be expensive and require a high level of expertise to operate [13]. For urban designers and policy makers, this research gap is exacerbated when promoting inclusive city schemes in which specific micro-level elements such as sidewalk slopes and cul-de-sacs must be considered [14]. To facilitate accessible journeys for all, we argue there is a pressing need for a multi-criteria evaluation approach to strategic urban renewal and transformation in the coming decades that promotes inclusive, active, and healthy urban communities.

This article highlights both the challenges and opportunities of urban accessibility with regard to health, an ageing population, and an inclusive community, demonstrating key features that have been neglected during the era of rapid urbanisation. Summarising the prevailing accessibility modelling strategies and applications, we propose a user-friendly analysis and digital design tool, PedestrianCatch, that is immediately relevant to the urban planning and urban design communities, health planners, and the disability community, and will be increasingly relevant to the broader community as populations age. We use two case studies, a gated community in Chaoyang, Beijing, China, and an urban renewal project in Maribyrnong, Australia, to test the multi-criteria catchment design and analysis tool. The case studies demonstrate the feasibility, flexibility, and relevance of the approach for evaluating multiple urban design scenarios in radically different contexts. The implementation and adoption of the tool by planners addressing the physical inclusivity of the urban form could contribute to more walkable and accessible communities for all.

#### **2. Accessibility Matters: Challenges and Opportunities**

#### *2.1. Accessibility and Health*

Physical inactivity is the fourth largest contributor to the global burden of disease [15] and is a vital consideration when designing future cities. In Australia, overweight and obesity are expected to cost more than AUD 55 billion annually [16] and have surpassed smoking as the top cause of premature death and illness [17]. Apart from considerable discussion and comparison of Australian, European, and US city morphologies in relation to obesity levels [18,19], there is also growing concern about obesity in developing nations experiencing fast urbanisation. For example, China is experiencing a major increase in automobile ownership concurrent with rapid urbanisation, and the change from traditional walkable Chinese urbanism to car-dominated urban forms is directly related to rising obesity levels [20]. Obesity levels are increasing as a result of changing lifestyles (e.g., diet) and, in large part, due to car use in increasingly inaccessible metropolitan areas [21,22].

As cities transform to accommodate population growth and rapid urbanisation [23], whether through urban densification or lateral expansion, walking and cycling access to destinations such as health services, public open spaces, jobs, education, retail, and public transportation is critical for developing less car-dominated cities. There is a strong association between the built environment's urban design qualities and residents' physical activity, with well-connected, accessible street networks supporting walking for transportation and enjoyment [24,25]. Cities that encourage walking have been found to decrease the prevalence of non-communicable diseases such as cardiovascular disease, obesity, and type 2 diabetes [26,27].

One of the most critical elements affecting walking is proximity to services or points of interest (POI). Residents' tendency to walk rather than drive is significantly improved if parks, stores, and, notably, public transportation hubs are located within a short walking distance of their residences [28]. Therefore, we need to design our future cities to be inclusive and to promote physical exercise for all citizens, with innovative urban design strategies centred on proximity and accessibility.

#### *2.2. Accessibility and Ageing Population*

Population ageing has become one of the most important social trends of the 21st century. In almost every country in the world, the proportion of senior citizens is increasing [1]. The share of Australia's population over the age of 65 is expected to nearly double in the next 50 years [2], and there are similar projections for China [29]. Labour and financial markets, demand for goods and services such as housing, transport, and social security, family structures, and intergenerational relations are all substantially affected by this demographic shift, presenting a rising number of social, economic, and public health challenges [30]. Particularly in China, the world's largest and most populous developing economy, pressure on elder care is exacerbated and accelerated by the emergence of the "421 family", an outcome of the one-child policy introduced in the 1980s. A 421 family consists of four grandparents, two parents, and one child, who must support all six older family members in the foreseeable future [31].

Mobility independence is particularly important for the health and well-being of older citizens. Empirical studies have demonstrated a significant link between built environment characteristics, such as proximity to specific destinations, and the promotion of mobility in older individuals [9]. Therefore, it is critical that we offer this ageing population comfortable walking access to shops, good food, public transportation, recreation facilities, and public open space, particularly green spaces such as parks and gardens. Additionally, it is crucial to evaluate topographical barriers to walkability (steep inclines, stairs, and ramps) because, in addition to distance, these factors can significantly limit accessibility and correlate negatively with physical activity in older adults [32].

#### *2.3. Accessibility and Inclusive City*

The 2030 Agenda of the United Nations SDG highlights the importance of including considerations of disability and minority groups in urban developments. Most cities still present major barriers to inclusion and engagement for the world's one billion people living with disabilities, ranging from inaccessible infrastructure and a lack of user-sensitive facilities to limited access to essential public services. In the scope of the new urban agenda, inclusion and accessibility are prerequisites for the development of fully connected and sustainable cities [33].

Individuals with mobility limitations confront many disadvantages when navigating cities and seeking services. The UN Convention on the *Rights of Persons with Disabilities* report states that "disability results from the interaction between persons with impairments and attitudinal and environmental barriers that hinder their full and effective participation in society on an equal basis with others" [34]. Members of the disability community have a right to maximise their autonomy in the community, and this should be enabled to the maximum practical extent [35].

Urban design solutions that help people with mobility impairments regain their independence of movement and access to community services, facilities, and friends could potentially alleviate social isolation, improve well-being, and decrease public healthcare costs.

#### **3. Current Approaches to Pedestrian Accessibility Modelling**

Numerous approaches to spatial network analysis and pedestrian modelling have evolved over the last three decades, with significant contributions by Kansky [36], Pushkarev and Zupan [37], Hillier [38], Batty [39], and Torrens [40], as summarised by Sevtsuk and Mekonnen [41]. Significant improvements in computer hardware mean that sophisticated pedestrian modelling, computationally infeasible ten years ago, is becoming available to urban designers [42].

A critical aspect affecting walking that is important to model, particularly when considering walking for transportation, is proximity to services or "points of interest". People have been shown to be willing to walk approximately five minutes to stores and ten minutes to public transportation [37]. Until recently, modelling access to services such as schools and transport nodes was largely limited to 'Euclidean buffers' (circular catchments), an as-the-crow-flies distance from services, which remain the most common approach [13,43]. Drawing a circle of 800 m radius to represent a 10 min walk at 1.3 m/s [37,44] is still widely used in urban design and planning practice to approximate a pedestrian catchment for a chosen node. These buffers have been criticised as inaccurate: they overestimate catchment areas and give an incomplete account of street networks and barriers such as rivers or railroad tracks [43]. The approach also fails to consider physical environmental aspects that influence walking, such as gradients, perceived safety, and climatic conditions [10], and does not allow 'what if' scenario testing. The development of proprietary GIS software with network-accessibility extensions (ESRI™ ArcMap with the Network Analyst™ plugin) has dramatically improved accessibility catchment modelling [13] through a vector distance-based Service Area approach or 'ped-sheds'. This method can produce more accurate and useful accessibility analysis, but can be prohibitively expensive and requires high-end GIS software and specialist staff [45].
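As a worked illustration of the Euclidean-buffer convention described above, the radius is simply walking speed multiplied by time, and the circle area follows directly. A minimal sketch in Python (the function names are ours, not part of any cited tool):

```python
import math

def euclidean_buffer_radius(speed_m_s, minutes):
    """Radius (m) of the as-the-crow-flies catchment circle: speed x time."""
    return speed_m_s * minutes * 60.0

def euclidean_buffer_area(speed_m_s, minutes):
    """Area (m^2) of the circular catchment, ignoring the street network."""
    return math.pi * euclidean_buffer_radius(speed_m_s, minutes) ** 2

# The conventional 10 min walk at 1.3 m/s gives 780 m,
# usually rounded up to the familiar 800 m radius.
radius = euclidean_buffer_radius(1.3, 10)
```

Because the circle ignores the actual street network, it marks an upper bound; real network catchments are always smaller.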

#### **4. A Multi-Criteria Digital Evaluation Approach**

#### *4.1. Tool Development Aims*

The discussion above indicates that there is an increasing need to improve pedestrian accessibility for the health of the whole population, especially older individuals, and to deepen our knowledge of accessibility for those with mobility limitations. New evaluation and pedestrian modelling tools are required to solve the aforementioned challenges and problems.

The novel approach and techniques discussed in this article intend to provide a tool that meets five criteria:

#### *User-friendly and inclusive*

To develop an open-access, user-friendly interactive web-based application that overcomes the expense and steep learning curves associated with many of the aforementioned pedestrian modelling systems.

#### *Modelling dynamic catchments (spatio-temporal)*

To combine the benefits of pedestrian access mapping with other dynamic factors within precinct catchment regions, such as varying wait times at crossing locations, and to depict pedestrian flow over time.

#### *Designed for those with mobility disabilities (speeds and gradients)*

To allow individuals with mobility impairments, disability communities, and urban planners to model and comprehend accessibility in their neighbourhood, while also offering tools for designing and advocating for more accessible urban programmes. The tool must be capable of accounting for differences in walking speeds across individuals with varying degrees of mobility and of assessing slopes to identify topographical obstacles for those with mobility limitations.

#### *Flexible in terms of data input and approximation (including missing footpath data)*

To make use of government network datasets from sources such as AURIN, as well as open sources for non-academic users and people from other regions of the globe. Where local expertise might help enhance the quality of the data, the tool itself should be able to utilise an editable and repairable (crowd-sourced) data source. Because many cities lack footpath data, the tool should also be capable of approximating pathways where no data are available, while making use of existing data where they are.

#### *Suitable for iterative scenario testing throughout design*

To be able to rapidly test what-if scenarios for both greyfield and greenfield sites to enable researchers, planners, and urban designers to modify neighbourhood walkability to improve access to points of interest and generate comparable metrics for decision-making progress.

#### *4.2. Tool Development and Multi-Criteria Digital Evaluation Approach*

To accomplish these goals, we developed PedestrianCatch (www.PedestrianCatch.com, accessed on 10 July 2022), an online accessibility mapping tool that simulates urban pedestrian catchments focusing on walkability for mobility-challenged individuals [46–48]. Pedestrian access is calculated in the tool using large numbers of intelligent agents to measure the pedestrian catchments for a central node (start marker). The agents make basic decisions in moving away from the central node (e.g., a school), at walking speed, interacting with the streets, traffic, and crossings, measuring and mapping all the possible journeys that can be walked in a specified time (e.g., 10 min). The analysis, an animated isochrone with a catchment area analytics output, is suitable for comparative scenario studies and for stakeholder engagement as it highlights pedestrian access barriers and allows users to propose and rapidly test design options or interventions.
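The agent-based exploration described above behaves, in effect, like a shortest-path traversal with a time budget: every street segment reachable within the specified duration belongs to the catchment. A simplified sketch (our own illustration, not the tool's actual JavaScript implementation) using Dijkstra's algorithm over a toy network:

```python
import heapq

def catchment(graph, start, budget_s):
    """Earliest arrival times (s) for all nodes reachable from `start`
    within `budget_s` seconds; Dijkstra's algorithm with a time cutoff.
    `graph` maps node -> list of (neighbour, travel_time_s) pairs."""
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, dt in graph.get(node, []):
            nt = t + dt
            if nt <= budget_s and nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return best

# toy network with travel times in seconds
toy = {"A": [("B", 200), ("C", 500)], "B": [("D", 300)], "C": [], "D": []}
reachable = catchment(toy, "A", 600)  # 10-minute budget
```

The animated "walkers" and their breadcrumbs are a visual rendering of exactly this frontier expanding over time; the final isochrone is the hull of the reached points.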

#### 4.2.1. Executing the PedestrianCatch Scenario

The PedestrianCatch interface simplifies the process of modelling catchments by providing live, animated feedback. PedestrianCatch is developed in JavaScript as a browser-based client that queries an Enterprise JavaBeans web server for scenario solutions. Behind the scenes, a scenario starts by requesting the OSM (OpenStreetMap) geo-referenced road network street segments, or 'ways', for the chosen area. On return, these paths are processed and transformed into discrete modelling segments. The server is in charge of creating a geo-referenced graph of path segments and modelling pedestrian agents on top of it. Each agent in the simulation is responsible for selecting which paths it should take and verifying whether a path has previously been travelled, while keeping track of its limitations, such as time. The user interface has been designed to be as simple and intuitive as possible without sacrificing essential functionality. When users first visit the website, they are welcomed with straightforward instructions and a Google Maps satellite background (Figure 1). The user then picks a place for a scenario by entering a specific site name in the search box or by navigating to a point on the map and right-clicking. A gold-coloured 'start' place marker is presented at a central default point on the route network, and the pedestrian ways around the specified site are automatically requested, downloaded, and displayed.

**Figure 1.** Screen capture of PedestrianCatch greeting screen showing simple instructions with Google Maps background and graphic user interface.

Following that, the start marker may be moved to a specific point of interest on the street network (e.g., a train station). After entering the maximum duration time in minutes, for example, 5 or 10 min, the scenario can be executed by pressing the 'Start' button. The programme then radiates out virtual pedestrian agents or "walkers" from the specified start point, travelling at a default speed of 1.33 m/s. The walkers are animated as they explore each conceivable route from the starting point to the furthest destination they can reach, leaving clickable 'breadcrumbs' in their footpaths. When these 'breadcrumbs' are clicked, the elapsed time and distance to that location are shown.

#### 4.2.2. Visualisation and Result Analysis

Convex hull isochrones or network buffers (based on a selected simulation mode) are displayed at every two-minute interval during the animated simulation. These visualisation layers may be clicked to check the amount of time elapsed and the area covered. At the end of the scenario animation, a final isochrone or buffer zone is inserted at the appropriate termination time interval. Again, this isochrone or buffer area may be queried to get the time and total area or "catchment" region for the selected destination (Figure 2). A maximum theoretical as-the-crow-flies Euclidean buffer circle is also added, centred on the start point with a radius based on distance (without taking into account the real way network) for the selected speed. It may be clicked to see the distance and area to allow the catchment efficiency or Pedestrian Catchment Area Ratio calculation, as defined by Schlossberg [46].

**Figure 2.** Screen capture of PedestrianCatch example of simulation run on Ormond Station in Glen Eira, Victoria, Australia, using default settings in convex hull display mode.
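The catchment efficiency measure mentioned above, Schlossberg's Pedestrian Catchment Area Ratio, divides the simulated network catchment area by the theoretical Euclidean circle for the same speed and time. A minimal sketch of the calculation (the function name is ours):

```python
import math

def pedestrian_catchment_area_ratio(network_area_m2, speed_m_s, minutes):
    """Schlossberg-style catchment efficiency: simulated network catchment
    area divided by the theoretical as-the-crow-flies circle for the same
    walking speed and duration."""
    radius = speed_m_s * minutes * 60.0
    return network_area_m2 / (math.pi * radius ** 2)

# a 1.0 km^2 simulated catchment for a 10 min walk at 1.33 m/s
ratio = pedestrian_catchment_area_ratio(1_000_000, 1.33, 10)
```

A ratio near 1.0 indicates a highly permeable network; poorly connected or gated layouts score much lower.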

As found in previous studies by White and Badland [45,47,48], an essential feature of accessibility modelling not presently incorporated in other modelling methodologies is the traffic light cycle's impact on the time taken to cross major roads and streets. This functionality was implemented in PedestrianCatch in two ways. First, crossing ways may be explicitly stated in a GeoJSON network dataset file, which can be imported into PedestrianCatch with a specific wait time. Second, PedestrianCatch may automatically offset the centrelines of main roads to construct footpaths on each side, omitting the original centreline and substituting an interpolated crossing with a default wait time of 30 s (Figure 3).

**Figure 3.** Screen capture of PedestrianCatch displaying approximated ways offset to simulate footpaths and crossings around Myer, Melbourne.
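The 30 s default crossing delay described above can be thought of as a fixed time penalty added to the interpolated crossing edges before the catchment search runs. A sketch under that assumption, where a graph maps each node to a list of (neighbour, travel-time-in-seconds) pairs (names and representation are ours):

```python
DEFAULT_CROSSING_WAIT_S = 30.0  # the tool's stated default wait time

def add_crossing_wait(graph, crossing_edges, wait_s=DEFAULT_CROSSING_WAIT_S):
    """Return a copy of the graph with `wait_s` added to the travel time of
    every edge marked as a road crossing (in either direction)."""
    def is_crossing(u, v):
        return (u, v) in crossing_edges or (v, u) in crossing_edges
    return {u: [(v, t + wait_s) if is_crossing(u, v) else (v, t)
                for v, t in nbrs]
            for u, nbrs in graph.items()}
```

Running the catchment search on the penalised graph naturally shrinks the isochrone around signalised crossings.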

Based on previous work by White, Badland et al. [45] and feedback from industry stakeholders, it was clear that visualising pedestrian movement is a key component for understanding accessibility modelling, especially for non-experts. Scenario testing in PedestrianCatch is depicted with animated pedestrian locations that mimic walking movement over time (Figure 4). The animated 2-min convex hull isochrones or buffer zones further help to explain the final resulting catchment (accessible) area.

**Figure 4.** Sequence of screen captures of Melbourne CBD PedestrianCatch simulation animation.

The results of the requested scenarios present various analyses that are displayed to the user. The tool creates a variety of interrogatable points and shapes. Each of the 'breadcrumbs' left by a walker may be clicked to see the distance from the starting location and the time taken to reach that point. Each convex hull polygon or buffer zone may be clicked to check the exact elapsed time and total area.

The data generated may also be exported by clicking the export buttons. The scenario can be exported as a KML file (Keyhole Markup Language), an XML-based file format for displaying geographic data in an Earth browser such as Google Earth or Google Maps. The data from the road network may also be stored as a GeoJSON file.

#### 4.2.3. Towards an Inclusive Community: Designing for People with Mobility Impairments

To address mobility issues for older adults and people with mobility impairments, the tool was designed with a graphical user interface that contains a slider with multiple preset speed values, ranging from 0.5 m/s to 10 m/s. This flexibility means that a user can test scenarios using a "typical" walking speed (1.33 m/s, as used in the old-style Euclidean buffers), speeds based on research into the average speeds of a particular demographic, assisted or unassisted wheelchair speeds, or even cyclist speeds. Individuals can also use their own typical walking speeds based on data captured by pedometer/health smartphone applications such as Google Fit™, Samsung S Health™, or Apple Health™.

To account for topographic obstacles, PedestrianCatch calculates estimated slopes using open-source elevation data in combination with OSM data. To facilitate this computation, the desired OSM paths are divided into straight-line segments of a maximum of 20 m. The elevation data for each segment's endpoints are then obtained to calculate the gradient (rise over run) across the whole network. This gradient is stored as a variable within the way and displayed on-screen as a colour range, from white for flat segments to increasingly red hues for steeper gradients.
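The per-segment computation described above reduces to rise over run on each segment of at most 20 m. A sketch of the arithmetic (constants and names are ours):

```python
import math

MAX_SEGMENT_M = 20.0  # paths are split into straight segments of at most 20 m

def gradient(elev_start_m, elev_end_m, run_m):
    """Signed rise-over-run ratio for one straight path segment."""
    return (elev_end_m - elev_start_m) / run_m

def gradient_angle_deg(ratio):
    """Incline angle in degrees corresponding to a gradient ratio."""
    return math.degrees(math.atan(abs(ratio)))

# a 20 m segment climbing 1 m is a 1:20 gradient, roughly 2.9 degrees
ratio = gradient(10.0, 11.0, 20.0)
```

Note that a 1:14 ratio corresponds to an incline of about 4.1°, which helps relate the two threshold notations the interface accepts.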

The steepest gradient that walkers may traverse can then be specified in the graphical user interface, allowing for the modelling of mobility-impaired accessibility. The gradient may be set as a ratio (for example, 1:14) or as an angle (for example, 5.25°) (Figures 5 and 6). Road segments that surpass the prescribed gradient are highlighted in bright red with a thicker line style and are removed from the visualised scenario, since they are impassable to walkers.

**Figure 5.** Screen capture of PedestrianCatch displaying topographic analysis of gradients with the maximum gradient set to 1:14 (red indicates inaccessible road segments).

**Figure 6.** Screen capture of PedestrianCatch displaying topographic analysis of gradients with the maximum gradient set to 1:20 (red indicates inaccessible road segments).
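Removing impassable segments, as described, amounts to dropping edges whose gradient exceeds the user's threshold before the catchment simulation runs. A sketch in which a graph maps each node to a list of (neighbour, travel-time) pairs and a separate mapping records each edge's absolute rise-over-run ratio (names and representation are ours):

```python
def accessible_subgraph(graph, gradients, max_gradient):
    """Drop edges steeper than `max_gradient` (impassable to these walkers).
    graph: node -> list of (neighbour, travel_time_s) pairs;
    gradients: (u, v) edge -> absolute rise-over-run ratio."""
    def grade(u, v):
        return gradients.get((u, v), gradients.get((v, u), 0.0))
    return {u: [(v, t) for v, t in nbrs if grade(u, v) <= max_gradient]
            for u, nbrs in graph.items()}

# a 1:14 accessibility threshold expressed as a ratio (~4.1 degrees)
MAX_RATIO = 1 / 14
```

Any catchment search run on the filtered graph then yields the reduced, mobility-impaired accessible area.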

A catchment scenario can then be run with the chosen walker speed and topographic gradient threshold. A simulation is executed to determine the maximum-distance convex hull or network buffers for the given walking speed and specified duration. The resulting convex hull can be compared against a secondary 'maximum' convex hull that considers the route network but ignores slopes and crossing delays; a similar technique applies in the network buffer mode. These polygons and buffers may be clicked to provide additional information, such as time, distance, and area (Figure 7).

**Figure 7.** Screen capture of PedestrianCatch showing catchment simulation for Melbourne CBD with accessible buffer zone reduced by the steep gradient.

#### 4.2.4. Testing What-If Scenarios

If users wish to test their own customised route network, they may simply drag and drop their own GeoJSON file into PedestrianCatch. This enables urban planners not only to examine the current urban fabric of a greyfield site for accessibility and impediments, but also to propose and test the effect of prospective urban interventions.

Likewise, a planner or urban designer may be involved in the design of a new urban development on a greenfield site, and PedestrianCatch can be implemented to evaluate multiple design options, providing clients and community members with visual feedback on each option's accessibility as well as metrics suitable for numeric comparative analysis.

It should be noted that OSM maps must not be edited to test *what-if* scenarios as this would corrupt the integrity of the OSM dataset.

#### **5. Case Studies in Beijing and Melbourne**

To test the applicability of this technology, we applied the PedestrianCatch tool and the proposed digital evaluation approach in two different urban contexts where accessibility for ageing and disability populations is seriously challenged. Beijing, the biggest city in northern China and one of the most populous cities in the world, is experiencing critical mobility problems due to a lack of barrier-free facilities and the prevalence of impermeable gated communities; Melbourne, ranked as one of the world's most liveable cities, also suffers from compromised walking infrastructure in suburban locations due to entrenched car-dependent development patterns and the location of much of its open space on the sloping banks of waterways. Despite the differences in these urban contexts, both cities present severe challenges for people with mobility impairments, older adults, and children in accessing essential services such as open space due to topographic conditions. Common scenarios, such as parents pushing prams, children riding bikes, or grandparents walking grandchildren to school, are all negatively impacted by long walking distances or steep gradients. Urban designers therefore have a growing need for tools and accessibility modelling approaches that allow the assessment of street networks and gradients, and that identify both distance and topographical barriers, to design inclusive urban environments for their communities.

#### *5.1. Accessibility Evaluation in JiuLongHuaYuan, Chaoyang, Beijing*

As one of the pilot cities for China's rapid urbanisation, the Beijing metropolitan region has seen gated communities grow rapidly over the past few decades. As of February 2020, there were roughly 10,300 privately secured gated communities [49]. Based on theories from community openness evaluations, current gated compounds in the Beijing urban area may be classified into four categories on a continuum from closed to open: enclosed or fully gated communities, semi-enclosed communities, semi-open communities, and open communities [50]. From the 1950s to the 1990s, enclosed communities with guarded gates and fences were widely built. These communities' block edges are relatively long, ranging from 60 to 400 m; however, the number of gates on each side of a block is frequently limited to one or two [51], and accessibility issues for specific demographics are often overlooked, posing a significant barrier to an inclusive walking community. Compared with enclosed communities, the other three types have smaller blocks with more gates and greater openness, yet they still lack essential pedestrian infrastructure for older adults and people with mobility impairments.

The JiuLong HuaYuan neighbourhood is located at the very core of the Beijing CBD precinct and is considered one of the largest residential blocks in the surrounding area. The compound is primarily residential, accommodating 2333 households in total, but also contains other building types, such as office and commercial buildings. As a typical gated community, JiuLong HuaYuan has relatively low permeability that inhibits pedestrian through-travel to both internal and external public facilities. The longest block side is around 400 m, with merely five main gates along its perimeter. The road near the south entrance has a steep slope.

To thoroughly investigate the site and test a potential renewal scheme, a simulation was conducted using PedestrianCatch. The starting point is positioned near the southern garden, one of the most popular spots for local residents. The first scenario uses default settings with a 1.33 m/s agent travelling speed and does not consider the gradient (topography). This approach is intended to evaluate the pure network arrangement and pedestrian connectivity within 10 min. The result indicates that the total pedestrian accessible area is 1,046,236 m<sup>2</sup> at 10 min and 319,264 m<sup>2</sup> at 6 min using the convex hull method. The visualised catchment also reveals that access to the southwestern area is relatively limited, and that within the first six minutes the eastern regions are more reachable than the west due to the position of the gates and the overall sidewalk layout (Figure 8).

When urban topography is taken into account, this trend is amplified. The convex hull again expands eastward, as the steep infrastructure (coloured red) severely undermines overall accessibility (Figure 9). By also lowering the walking speed to 0.80 m/s (a typical average speed for older adults and people with light mobility impairment), we observe a significant drop in the pedestrian accessible area, to 327,915 m<sup>2</sup> at 10 min and 107,065 m<sup>2</sup> at 6 min (Figure 10), indicating that the current permeability level is poor, preventing through-travel and making adjacent amenities inaccessible to many local residents.

**Figure 8.** Screen capture of PedestrianCatch showing catchment simulation for JiuLong HuaYuan neighbourhood with default settings.

**Figure 9.** Screen capture of PedestrianCatch showing catchment simulation for JiuLong HuaYuan neighbourhood with convex hull reduced by the steep gradient.

**Figure 10.** Screen capture of PedestrianCatch showing catchment simulation for JiuLong HuaYuan neighbourhood with slowed speed setting for people with mobility impairment.

Essential infrastructure renewal and 'ungating' strategies can mitigate this pressing concern. Although fully open community development is often perceived in China as an idealised agenda, given privacy and security concerns and densified urban patterns, Beijing's Jianwai SOHO neighbourhood is considered a relatively successful attempt at "ungating". Designed by Riken Yamamoto and Field Shop, the superblock was split into nine smaller groups, and arterial roads and subsidiary streets were introduced to the site. The mixed road system and compound building types offer more mobility options that contribute to the community's high level of vitality [52–54]. While the full ungating approach applied in Jianwai SOHO cannot be implemented in every compound, this study argues that maximising openness and introducing more accessible infrastructure would greatly benefit the broader community. To test a strategy similar to that applied at Jianwai SOHO and other open communities, we set up a new simulation scenario using a revised road network, connecting internal footpaths, extending cul-de-sacs, and adding new entrances and barrier-free infrastructure. The simulation result shows that the walkable area reaches 1,146,553 m<sup>2</sup> at 10 min and 364,517 m<sup>2</sup> at 6 min (Figure 11); the overall accessible areas are increased by 9.5% and 14% compared with the existing scenario (Figure 8), and by 56% and 91% when the gradient is considered (Figure 9).
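The percentage gains quoted above follow from a simple ratio of scenario areas; as a check (the area values are those reported in the text; rounding is ours):

```python
def improvement_pct(new_area_m2, old_area_m2):
    """Relative catchment gain between two scenarios, in percent."""
    return 100.0 * (new_area_m2 / old_area_m2 - 1.0)

# JiuLong HuaYuan renewal scenario versus the existing default-setting scenario
gain_10min = improvement_pct(1_146_553, 1_046_236)  # ~9.6%, reported as 9.5%
gain_6min = improvement_pct(364_517, 319_264)       # ~14.2%, reported as 14%
```

The same formula applied to the gradient-constrained baseline yields the larger 56% and 91% figures, since the steep-slope scenario starts from a much smaller accessible area.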

#### *5.2. Accessibility Evaluation in Footscray East, Maribyrnong, Melbourne*

The City of Maribyrnong is a small, highly populated precinct on the banks of the Maribyrnong River in Melbourne's inner west. With 40% of residents born outside Australia, the city has the second most ethnically diverse demographic in Victoria [55]. The reorganisation of local manufacturing businesses and the removal of Commonwealth defence facilities from the region have resulted in changes in land use from industrial to mixed-use residential/business.

Unlike Beijing, the gated community strategy is rarely seen in Melbourne's urbanisation process. The proposed case study site is a typical block neighbourhood located in Footscray East, bounded by Footscray Park and Newell's Paddock Urban Nature Park to the north and east (Figure 12). In Footscray, population growth and an ageing demographic, combined with the conversion of industrial property to residential areas, will place growing pressure on the precinct's pedestrian infrastructure over the next few decades (Figure 13). These two parks serve as essential leisure walking infrastructure and open public spaces that local residents frequently visit. However, due to the undulating topography and limited pedestrian paths, free access to these areas remains challenging for people with mobility impairment.

**Figure 11.** Screen capture of PedestrianCatch showing catchment simulation for JiuLong HuaYuan neighbourhood with revised street network and barrier-free infrastructures.

**Figure 12.** Screen capture of PedestrianCatch showing the topography and footpath condition in East Footscray, Melbourne.

**Figure 13.** A steep ramp in Cranwell Garden, of a kind ubiquitous throughout Footscray, makes this place less accessible not only for wheelchair users and seniors, but also for cyclists and leisure walkers.

In this scenario, we set the simulation 'starter' at the crossing of Lynch St and Moore St, the approximate centroid of this block neighbourhood. Due to the topography, major pedestrian obstacles can be found near the primary entrance on Ballarat Road. Ramps and the primary walking routes within the two parks, Footscray Park and Newell's Paddock, are also overly steep. The total walking coverage area is 823,023 m<sup>2</sup> at 10 min, with an ideal walking pattern to the west and south given the permeable network layout in the residential neighbourhood, leaving most of the landscape and waterfront public spaces to the north and east inaccessible (Figure 14). To advocate an inclusive development scheme that accommodates the need for quality pedestrian infrastructure for all, we proposed a new scenario test introducing new internal routes and ramps winding their way up to Ballarat Road. The original network data were downloaded in GeoJSON format and edited on GIS platforms (Figure 15). The result shows the walkable area reaches 1,075,575 m<sup>2</sup> at 10 min, a 31% improvement compared to the previous scenario (Figure 16). Most of the open public spaces along the river can be visited, even considering the reduced walking speed of people with impaired mobility (Figure 17).
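The 31% figure follows directly from the two reported 10 min catchment areas; a one-line check:

```python
# Relative improvement in 10 min walkable area for East Footscray (m^2).
baseline = 823_023   # existing network (Figure 14)
revised = 1_075_575  # with new internal routes and ramps (Figure 16)

improvement = (revised - baseline) / baseline
print(f"{improvement:.1%}")  # 30.7%, reported as a 31% improvement
```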

Through these two case studies, we could argue that the PedestrianCatch tool and proposed multi-criteria digital evaluation approach can be applied to test both existing and what-if scenarios under different urban contexts. This digital evaluation approach with a user-friendly simulation procedure can positively contribute to both pre-design and design processes, assisting urban designers, government officials, and local residents to better identify local accessibility challenges and test multiple design solutions.

**Figure 14.** Screen capture of PedestrianCatch showing catchment simulation for East Footscray neighbourhood with buffer zone reduced by the hilly topography around the gardens.

**Figure 15.** Screen capture of QGIS showing how new zigzag footpaths can be easily added to the existing road network.

**Figure 16.** Screen capture of PedestrianCatch showing catchment simulation for East Footscray neighbourhood with revised road networks.

**Figure 17.** PedestrianCatch catchment simulation for East Footscray neighbourhood with slowed speed setting for people with mobility impairment using revised road networks, providing intuitive visual feedback of how accessible area increased compared to the original condition.

#### **6. Conclusions and Discussion**

This paper has proposed a multi-criteria digital evaluation approach using the PedestrianCatch tool to facilitate accessible journeys and deliver a more inclusive city. Responding to the pressing global need to increase city accessibility for health, ageing populations, and disability inclusiveness, this study develops an urgently needed walk-quality-focused online simulation tool for local governments, built environment professionals, and researchers to make more informed, integrated, and effective planning policy decisions. The online mapping tool can be applied in any urban context and enables the integration of data relating to the road network of specific local scenarios. The tool and the evaluation approach could also facilitate more informed understanding of, and participation in, planning processes for the broader community.

The PedestrianCatch tool has proven effective and flexible, providing a platform for a diverse group of stakeholders to test a variety of urban scenarios for promoting an inclusive neighbourhood, in this case parks and gated communities; however, it could also be applied to scenarios seeking the optimal location for new amenities, aged care facilities, housing for disability communities, or medical facilities, and to assessing the impacts of potential urban interventions on increasing catchments. As walkable proximity to services or points of interest is a key factor in encouraging an inclusive and active community, PedestrianCatch provides a user-friendly tool for assessing accessibility that can be used particularly for improving quality of life for older adults and people with mobility impairments. The unique use of animated agents as indicative walkers to visualise the pedestrian catchment is one of its key strengths as an urban design advocacy tool for a wide-ranging audience.

The ability to import network data in the GeoJSON format allows urban designers and planners to test what-if scenarios for future urban renewal schemes, from new pedestrian linkages and bridges to highway downgrades and the lowering or raising of railway lines. The proposed evaluation approach may also be used to compare alternative street layout choices for future projects. This flexible procedure, coupled with freely available crowd-sourced data in an online platform, enables tailor-made, site-specific analysis to inform the re-design and adaptation of urban infrastructure.
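As a minimal sketch of what such a GeoJSON network edit looks like, the snippet below appends a proposed pedestrian link to a road network held as a FeatureCollection. The feature properties and coordinates are illustrative only, not taken from the actual dataset.

```python
import json

# A tiny stand-in road network in GeoJSON form (illustrative values).
network = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"highway": "residential", "name": "Lynch St"},
            "geometry": {
                "type": "LineString",
                "coordinates": [[144.9030, -37.7940], [144.9050, -37.7940]],
            },
        }
    ],
}

# A proposed barrier-free footpath is just another LineString feature
# (hypothetical coordinates and tags).
new_footpath = {
    "type": "Feature",
    "properties": {"highway": "footway", "note": "proposed accessible ramp"},
    "geometry": {
        "type": "LineString",
        "coordinates": [[144.9050, -37.7940], [144.9055, -37.7930]],
    },
}
network["features"].append(new_footpath)

# Serialise the revised network, ready to load back into a simulation tool.
revised = json.dumps(network)
print(len(network["features"]))  # 2
```

In practice such edits would typically be made interactively in a GIS platform such as QGIS, as described above; the point here is only that the what-if network is plain, easily edited data.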

The two case studies in China and Australia demonstrate the feasibility and flexibility of the proposed evaluation approach in adapting to radically different urban fabrics and conditions. In the Australian case study, mobility-compliant ramps added distance to pedestrian routes but resolved the primary challenge of steep topography by making most of the waterfront accessible to all ages and abilities. Similarly, the ungating method applied in the Beijing case study may represent an additional civic expense and potentially cause unpredicted security problems; however, the benefits gained from enhanced accessibility and permeability would arguably serve not only people with impairments but also the wider population, yielding greater social capital in light of emerging traffic and energy concerns.

Although the PedestrianCatch tool has proven useful in urban design research and practice (with over 20,000 uses worldwide), it is currently limited in the number of network accessibility factors it considers. The tool does, however, provide an extensible platform to build upon, allowing the integration of other crucial physical factors impeding active journeys, such as safety risks, air quality, and human thermal comfort, in our future development. There is also potential for PedestrianCatch to be developed as a smartphone app for both iOS and Android so it can be used by an even broader public user group. A smart-device version of the application suitable for tablets could be used in the field or in a community consultation environment.

The elevation data used for PedestrianCatch's topographic analysis are currently too coarse to pick up detailed mobility barriers such as missing kerb cuts, poorly maintained footpaths, or micro-scale changes in grade and single steps. In future, it may be possible to add detailed survey data or fine-grain LiDAR topographic data for specific study areas. The tool's interface could also be further refined to be more inclusive of people with visual impairment through the development of an Accessibility Mode supporting assistive technologies such as speech recognition software and screen readers.

In conclusion, the findings of this research are immediately significant to the urban planning and urban design community, as well as health planners and the disability community, and will become more relevant to the general public as the population ages. Our digital evaluation approach demonstrates how it may help to create more walkable and accessible communities for everyone and further contribute to the development of a more equal, healthy, and inclusive city.

**Author Contributions:** Conceptualisation, M.W. and X.H.; methodology, M.W. and X.H.; software, M.W. and X.H.; validation, X.H.; formal analysis, X.H.; investigation, X.H.; resources, M.W. and X.H.; data curation, X.H.; writing—original draft preparation, M.W. and X.H.; writing—review and editing, M.W. and N.L.; visualisation, X.H.; supervision, M.W.; project administration, X.H., M.W. and N.L.; and funding acquisition, X.H., M.W. and N.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This project is funded by the Australian Research Council Linkage Project [LP190100089], National key R&D programme "Science and Technology Winter Olympics" key project "Evacuation system and support technology for assisting physically challenged communities" [2020YFF0304900], the Beijing High-level Overseas Talents Support Funding, R&D Programme of Beijing Municipal Education Commission (KM202210009008), the NCUT Young Scholar Development Project, and The University Innovation and Entrepreneurship Training Programme [108051360022XN353] [108051360022XN370].

**Data Availability Statement:** All scenario tests conducted in this research can be reproduced along with accessible data on www.pedestriancatch.com (accessed on 10 July 2022).

**Acknowledgments:** This research builds on prior work funded by the Australian Urban Research Infrastructure Network (AURIN) and the Australian National Data Service (ANDS). Preliminary research has been published in White, M.; Kimm, G. PedCatch—Inclusive pedestrian accessibility modelling using animated service area simulation with crowd-sourced network data. In: Healthy Future Cities; China Architecture and Building Press: Beijing, 2018; pp. 190–202. We also thank Geoff Kimm and Tianyi Yang for their great contribution in developing and maintaining www.pedestriancatch.com (accessed on 10 July 2022).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Impact of Political, Social Safety, and Legal Risks and Host Country Attitude towards Foreigners on Project Performance of China Pakistan Economic Corridor (CPEC)**

**Amer Rajput 1, Ahsen Maqsoom 2, Syed Waqas Ali Shah 1, Fahim Ullah 3, Hafiz Suliman Munawar 4, Muhammad Sami Ur Rehman 5 and Mohammed Albattah 5,\***


**Abstract:** The China Pakistan Economic Corridor (CPEC) project was signed between China and Pakistan in 2013. This mega project connects the two countries to enhance their economic ties and give them access to international markets. The initial investment for the project was \$46 billion, with a tentative duration of fifteen years. As an extensive project in terms of cost and duration, its performance is affected by many factors and risks. This study aims to investigate the effects of political risk (PR), social safety risk (SR), and legal risk (LR) on the project performance (PP) of the CPEC. It further investigates the significance of the host country's attitude towards foreigners (HCA). A research framework consisting of PR, SR, and LR as independent variables, PP as the dependent variable, and HCA as the moderator is formulated and tested in the current study. In this quantitative study, a Likert scale is used to measure the impact of the assessed risks. A questionnaire survey is used to collect data and test the research framework and associated hypotheses. Partial least squares structural equation modeling (PLS-SEM) is used to perform the empirical test for validation of the study, with a dataset of 99 responses. The empirical investigation finds negative relationships of PR, SR, and LR with PP. It is concluded that PR, SR, and LR negatively influence the PP of CPEC. Furthermore, HCA negatively moderates the relationships of PR and LR with the PP of CPEC. In contrast, the relationship between SR and PP becomes positive in the presence of a positive HCA.

**Keywords:** China Pakistan Economic Corridor; host country attitude towards foreigners; legal risk; political risk; project performance; social safety risk

**Citation:** Rajput, A.; Maqsoom, A.; Shah, S.W.A.; Ullah, F.; Munawar, H.S.; Rehman, M.S.U.; Albattah, M. Impact of Political, Social Safety, and Legal Risks and Host Country Attitude towards Foreigners on Project Performance of China Pakistan Economic Corridor (CPEC). *Buildings* **2022**, *12*, 760. https://doi.org/10.3390/buildings12060760

Academic Editor: Pierfrancesco De Paola

Received: 13 April 2022; Accepted: 1 June 2022; Published: 3 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

### **1. Introduction and Background**

Global trade is the backbone of the modern globalized economy. Accordingly, governments of different countries are pursuing the expansion of their business for better economic outcomes [1]. Connectivity is essential for expanding business and economic activities, for which economic corridors are utilized. These corridors provide relations and connections between different economic sectors within the concerned geographies. The fundamental aim of developing corridors is to boost economic activities in different regions worldwide. Various initiatives and technologies are used in this context [2,3]. For example, China proposed the One Belt One Road (OBOR) initiative to achieve the goal of connectivity between China and the countries and people along its routes [4]. According to the Chinese government, OBOR depends on six economic corridors of collaboration between the country and the outside world. These corridors include the New Eurasian Continental Bridge Economic Corridor, the China-Mongolia-Russia Economic Corridor, the China-Central Asia-West Asia Economic Corridor, the China-India-Burma Economic Corridor, the China-Indochina Peninsula Economic Corridor, and the China-Pakistan Economic Corridor (CPEC).

These corridors enable communication, connections, trade, and interaction between different nations. The CPEC is an important part of the overall OBOR project [5]. The commencement of the CPEC project is beneficial for both nations (China and Pakistan). It is considered more valuable for China than for Pakistan because, through CPEC, China gains a shorter trade route to the rest of the world via Gwadar, Pakistan. By comparison, a sea route more than 12,000 km long from China's Shanghai area is otherwise used to access the Arabian Sea. Table 1 shows the cost savings for China when using CPEC compared with shipping via Shanghai.

**Table 1.** Trade distance difference (Shanghai vs. Gwadar).


Source: Pakistan's Potential as a Transit Trade corridor and transportation challenges [6].

The major purpose of China's OBOR project was to connect and link different countries to access Central Asia and Africa for trade purposes [7]. The objective of the MoU signed between China and Pakistan was to enhance regional connectivity and provide a trade corridor between Western China and the Gwadar Port of Pakistan to expedite economic activities [8]. The CPEC project joins the Gwadar port of Pakistan with China's western region of Xinjiang with the help of highways, railways, and pipeline networks. CPEC construction is planned to be completed over the next 15 years, with an initial investment of \$46 billion. CPEC is an extension of an existing project, the Silk Road, and will come into operation within three years. It will boost the economies of China and Pakistan and strengthen China's trade with Central Asia, the Middle East, and Africa.

This mega-project comprises multiple sub-projects, including power generation projects, roads and highways, telecommunications, pipelines, railway networks, construction of Gwadar port, and other supporting infrastructure projects. CPEC projects would produce 17,000 megawatts of power at an estimated cost of \$34 billion, using hydro, coal, wind, and solar power plants. The remaining \$12 billion is allocated to developing the transportation infrastructure, spanning 2700 km. Other activities include establishing communication networks, widening the Karakoram Highway, and modernizing Gwadar airport [7].
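As a quick consistency check on the reported allocations (a sketch using only the figures quoted in the text):

```python
# The two reported CPEC allocations sum to the total initial investment.
power_usd = 34e9      # power generation: 17,000 MW planned
transport_usd = 12e9  # transport infrastructure spanning 2700 km

print(power_usd + transport_usd == 46e9)  # True: matches the $46 billion total
print(power_usd / 17_000)                 # 2,000,000.0 USD per MW of capacity
```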

Previous studies concluded that political (PR), social safety (SR), and legal risks (LR) are often more serious and sensitive in mega construction projects [9–13]. Unfortunately, previous researchers have paid little consideration to exploring the effects of PRs in the international construction industry or international joint ventures (IJVs) [10]. Consistent with previous studies, sociopolitical stability, legal and regulatory aspects, social safety, attitude towards foreigners, perceptions, and other risks are associated with the external threats in the overseas construction market [9,11,14–17]. These risks need to be investigated and mitigated in order to have a smooth foreign direct investment in mega construction projects [18]. As per the recommendations of previous studies, this study investigates the PR, SR, and LR in an international mega construction project, i.e., CPEC. Further, it investigates the role of the host country's attitude towards foreigners (HCA) in this project.

The significance of the CPEC project is very high for Pakistan, China, and the rest of the world. However, due to its lengthy duration, it faces several challenges and risks. These include uncertain situations, such as a lack of proper planning, security, PR, environmental protection risks, supply chain risks, LR, SR, and others [19–21]. Many studies have been conducted on the challenges and risks of similar international projects. However, a holistic study investigating the impacts of PR, SR, LR, and the HCA on project performance (PP) of the international construction project (CPEC in this study) has not been reported to date. The results of the study contribute to the literature by investigating the relationship between the mentioned risks and the HCA in the performance of international construction projects.

Risk management is critical in planning and managing various types of national and international projects [22]. The construction industry is comparatively more vulnerable to risks than other industries due to its inherent complexities [23]. From its inception to completion and operation, the course of a construction project is a multifaceted process [24]. Pakistan and other developing countries suffer from multiple issues and misinterpretations in risk management, including socio-political, legal, and other risk identification and analysis [10,25,26]. Accordingly, research studies focused on addressing such risks must be conducted in developing countries. The current study targets this gap and assesses the effects of PR, SR, LR, and HCA on the PP of CPEC. It can help future international construction projects mitigate these identified risks in IJVs before execution.

Political regimes in Pakistan remain fragile and unstable, as history tells us. In such changing political scenarios, agreements furnished by a previous regime are jeopardized [10]. Because of this uncertainty, Pakistan lacks a friendly business environment. The risks are amplified when foreign workers and investments are involved [27]. Moreover, the strength of the local legal system is also tested in IJVs. An unsound legal system in the host country promotes risks related to improper construction procedures, illegal bid activities, illegal interference, breached contracts, and frequent changes of laws. This study can help international construction firms and countries involved in projects to evaluate the influence of negative risks on international construction projects. This can be applied to the execution of projects such as dams, highway projects, high-rise buildings, and other major global projects performed under IJVs in developing countries such as Pakistan.

Accordingly, this study investigates the moderating effects of the HCA on the relationship of PR, SR, and LR with the PP of CPEC. A questionnaire survey approach is adopted to collect data from 99 respondents from the construction industry in Pakistan. SMART PLS3 software is used for testing the hypotheses based on the survey data. The current study explores the following research questions (RQs):

RQ1: What type of relationship exists between PR and the PP of CPEC?

RQ2: What type of relationship exists between SR and the PP of CPEC?

RQ3: What type of relationship exists between LR and the PP of CPEC?

RQ4: Does the HCA moderate the relationship of PR, SR, and LR with the PP of CPEC?

The research questions are addressed in this paper through a hypothesis development guided by the literature. These hypotheses have been cross-referenced and tested through data collected for this study. The following sections provide insights into the study hypothesis and research framework, followed by methodology, study results and discussion, and conclusions.
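The moderation logic behind RQ4 can be illustrated with a toy regression in which the moderator (HCA) enters as an interaction term with a risk variable. This is a simplified ordinary-least-squares sketch on synthetic data, not the PLS-SEM procedure (SMART PLS3) actually used in the study; all variable values and coefficients below are invented for illustration.

```python
import numpy as np

# Synthetic data: n matches the study's sample size, nothing else does.
rng = np.random.default_rng(0)
n = 99

pr = rng.standard_normal(n)   # political risk (standardised score)
hca = rng.standard_normal(n)  # host country attitude (standardised score)
# PP falls with PR, and a stronger HCA deepens that negative effect
# (a negative interaction), mirroring the hypothesised moderation.
pp = -0.5 * pr + 0.2 * hca - 0.3 * pr * hca + 0.1 * rng.standard_normal(n)

# Regress PP on PR, HCA, and their interaction; the interaction
# coefficient captures the moderation effect.
X = np.column_stack([np.ones(n), pr, hca, pr * hca])
beta, *_ = np.linalg.lstsq(X, pp, rcond=None)
print(beta.round(2))  # close to [0., -0.5, 0.2, -0.3]
```

A significant interaction coefficient (here the last entry of `beta`) is what "HCA moderates the PR–PP relationship" means in regression terms; PLS-SEM estimates the analogous effect within a latent-variable framework.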

#### **2. Theoretical Foundation and Research Hypotheses**

#### *2.1. Risk*

The terms uncertainty, hazard, and threat are commonly used interchangeably with risk [28]. Risk is also defined as a situation that deviates the actual outcomes of a particular event or activity from its predetermined goals and values [29]. Overall, risk is a combination of threats, hazards, and vulnerabilities that can occur when two conditions overlap [22]. Risk has been assigned different definitions by various studies because the concept of risk changes for different individuals according to their perspectives, experience, and attitudes. For example, a designer, engineer, or contractor looks at risk from a construction and innovation perspective [30,31]. Developers and bankers view risk from an economic and financial perspective [32,33]. Similarly, environmentalists, chemical engineers, and doctors look at risk from an environmental and health perspective.

Although many classifications of risks exist, the most general classification of risk, especially construction-related risk, involves financial, economic, commercial, natural, logistic, construction, and technical risks. The variables of risk are grouped into three basic groups: (1) internal, (2) external, and (3) project-specific [34]. The five main and common risks faced by international contractors are PR, government risk, LR, SR, and natural risk. In an international project, risks can be divided into four types: PR, cultural, financial, and natural risk [35].

Risk management has received attention in construction projects worldwide over the last two decades. Different risks have different effects on the project objectives, construction firms, and clients/owners. IJVs face various risks due to the uncertain business environments in developing countries, demanding systematic risk management [10]. CPEC, an IJV, faces multiple risks, including PR, LR, and others. Accordingly, it has been targeted in the current study.

#### *2.2. Political Risk*

PR arises from fluctuations and disruptions in the commercial environment due to political instability. The consequences of PRs are both macro and micro, affecting all types of businesses, including construction. Macro-PRs include revolutions, civil wars, nationwide strikes, protests, riots, and mass expropriations. Micro-PRs include selective expropriations and discrimination. Identifying PRs must not be neglected in IJVs: because these risks are unfamiliar compared to the local environment, their impacts can be particularly large for such projects. Overall, PRs are related to changes and fluctuations in the country's political system [19].

PR is the risk that a host government will suddenly change the "rules of the game" under which businesses operate [36]. This seriously influences projects, resulting in more uncertain investment outcomes. PR is defined as the execution of political power in such a way that it threatens the company's values [37].

PR deals with unsolicited changes and unpredicted consequences for international businesses and projects resulting from political action. Such actions significantly affect overseas projects and contractors. Central Asian countries have a high level of political risk because of cultural, safety, and religious issues. The working situation of an IJV is closely linked with the host country's politics. Therefore, comprehensive knowledge of PR is essential for construction projects and businesses in the global market [38].

PRs must not be ignored in international construction projects. In the last three decades, the world has been affected by political events, such as monetary crises, wars, SARS flu scares, terrorism, and regional depressions, which caused uncertainty in the IJVs involving construction projects [35]. Global construction projects face a high degree of public concern and political issues. Accordingly, such international construction projects are influenced by the social, legal, cultural, religious, and other factors of the host country [10].

Identification of the causes of PRs is very important in IJVs. Unfortunately, prior researchers have paid little consideration to PRs in the international construction industry [10]. Due to the complex nature of the PR concept, it is challenging for both academics and corporate decision-makers to quantify. In addition, very few studies have been undertaken on PR, especially in the context of IJVs. The most prominent factor affecting CPEC performance and the completion of the project is the poor political situation in Pakistan. Most international firms have a limited understanding and analysis of political circumstances in another country. If the political instability of a country is high, investors may postpone or cancel their deals [39]. As a result, IJV-based construction projects in foreign countries are typically riskier than domestic ventures [36]. The current study targets this aspect. Overall, this study assesses the influence of PR on CPEC project performance and presents the following hypothesis.

#### **Hypothesis 1 (H1).** *Political risk negatively affects CPEC project performance.*

#### *2.3. Social Safety Risk*

Construction projects face time and cost overruns that often lead to project termination [40]. Construction projects have four main stages during their lifecycle: inception, planning, execution, and closure. SR refers to the negative effect of businesses or projects on communities or groups. SRs affect the project objectives in all four phases, and many social events can be triggered if these risks are poorly managed [41]. SRs impact the cost and quality of projects and are of high significance in IJVs. Moreover, SRs include security issues that can result in loss of resources and destruction of equipment arising from public response [10].

Construction industries and projects encounter many social risks. For example, highway and railway projects are spread over wide geographical areas and face social issues [42]. These risks negatively affect the project objectives. In the past, Chinese international construction enterprises have been plagued worldwide by risks associated with social safety, such as terrorism, kidnapping, and armed conflicts in countries such as Iraq, Yemen, Mali, India, Pakistan, Sudan, Libya, Syria, and Afghanistan.

In the case of the CPEC project, there are five main SRs: crime, terrorism, violent demonstrations, armed conflicts, and kidnapping or extortion. The CPEC project is executed by different international and national firms, and security is an important factor for international firms in delivering a successful project. An insecure environment around CPEC poses serious threats to the personnel and property of international organizations [40].

A relevant study of social impact assessment in CPEC in 2018 highlighted that most of the sub-projects of CPEC will be carried out in remote and tribal areas [43]. Usually, clusters of the population living on the periphery remain in conflict with the federation. If not addressed formally, this factor will result in time and cost overruns for the projects. Pakistan's tribal areas, known as the Federally Administered Tribal Areas (FATA), in the northwestern zone of Pakistan cover an area of 27,000 sq km along the Afghanistan border. The inhabitants of these tribal areas have great concern for, and sensitivity about, self-rule and independence. If a factional conflict arises in these areas with the government or between the tribes themselves, the consequences can be very serious for foreign organizations undertaking projects in the area. Thus, the host country should pay the utmost attention to this potential risk to encourage investment in IJVs such as CPEC. In the absence of such considerations, the associated SRs may cause project failures and huge losses [44]. Therefore, CPEC can only be fruitful if the security challenges are actively addressed.

Both China and Pakistan are facing security threats within their countries. China has a security threat in its province Xinjiang, whereas Pakistan has a serious security challenge in its tribal areas. In Pakistan, the security challenge is a major factor affecting the PP. China is making five different economic zones in Kashgar that have the potential to create some tensions in Xinjiang, fueling further security concerns. Similar predictions are made about nationalist and militant movements in Baluchistan, Pakistan. As a significant part of CPEC is in Pakistan's territory, security and SR challenges to CPEC in Pakistan are the main concerns about its PP. There are two types of SRs relevant to CPEC in Pakistan: internal and external. The first and most severe internal challenge is the presence of anti-state elements in FATA and the western part of Pakistan. In addition, Pakistan faces security threats from other organized religious and ethnic groups. Further, the bordering tensions with its neighboring countries also add to the SRs as external factors. Therefore, these SRs present serious threats to the CPEC project. Hence, the following hypothesis is formulated in this study:

**Hypothesis 2 (H2).** *Social safety risk negatively affects CPEC project performance.*

#### *2.4. Legal Risk*

Construction projects involve political, social, environmental, technical, economic, cultural, and legal risks. These risks are faced by AEC firms when undertaking international construction projects such as IJVs. The word "legal" refers to all legal expectations, such as employment, taxation, resources, import–export, and other factors related to projects. The associated LRs arise on both sides, i.e., for the host country and the international project firms [45]. These LRs comprise breach of contract and the lack of enforcement of legal judgments in construction IJVs. Therefore, the strength of the legislative system in host countries is important for successfully conducting IJVs. An appropriate legal system can help better understand and manage the project in IJVs, and sophisticated methods backed by a sound legislative system can help manage claims, disputes, conflicts, variations, and other contractual issues [14].

LRs constitute losses incurred by a business due to a lack of awareness or misunderstanding, ambiguity, and carelessness in the compliance with laws related to the business. A study of critical external risks in IJVs in Pakistan highlighted that insufficient legal infrastructure, nationalism, and protectionism are the main LRs [10]. Usually, governments in developing countries formulate laws and by-laws aimed at protecting the interests of local businessmen and companies. This aims to facilitate and increase the position of local companies and vendors. Such kinds of legislation discourage international enterprises from doing business in these countries. Authorities and regulation systems (state laws and regulatory requirements for billing, claims, security/privacy of the international companies), altered contract forms, lack of a legal system, corruption, and nepotism are some other LRs to the construction IJVs in Pakistan [5,6,20,27]. A further lack of an independent judiciary and weak and irregular regime-changing systems in such countries add to the LRs [10]. Based on these discussions, the relevant hypothesis is proposed as follows.

**Hypothesis 3 (H3).** *Legal risk negatively affects CPEC project performance.*

#### *2.5. Host Country Attitude Towards Foreigners*

The influences of risks such as LRs, SRs, PRs, and others on the CPEC PP are measured through the HCA. HCA comprises three major factors, i.e., hostility to foreigners, confiscation or expropriation, and discrimination against foreign companies [40].

The globalization of construction businesses generates opportunities for collaborations of international construction firms and contractors. However, the execution of construction projects has a high level of uncertainty in overseas projects compared to domestic construction [46]. The unfavorable HCA is the most significant variable in IJVs. A positive HCA is beneficial, creates a friendly environment for international construction firms and contractors, and reduces the impact of related risks. In this context, HCA is the most important variable for encouraging Foreign Direct Investment (FDI) [47].

Due to the internationalization of all business sectors, including the construction industry, high levels of competitiveness, uncertainty, and risk have emerged when undertaking overseas projects [10]. Multinational companies and firms participate in construction IJVs across different political, social safety, and legal backgrounds. Other uncertainties arising from the host country's environment, such as economic, cultural, policy, environmental, market, and production risks, can also influence the PP [48]. Thus, the HCA is an important variable for measuring PP and is strongly related to the foreign policies of the host country's government. Adverse approaches and policies by the host country can produce various negative risks, such as confiscation, overseas investment limitations, unfair compensation, land ownership limitations, foreign exchange restrictions, and capital limitations, for IJVs in projects like CPEC [40]. Due to such a negative HCA, international contractors and firms may bear significant losses. Therefore, it is important to determine host country-related threats and opportunities in an international venture like CPEC [10]. Accordingly, the following hypotheses are formulated:

**Hypothesis 4 (H4).** *Host country's attitude towards foreigners moderates the relationship of political risk with CPEC project performance.*

**Hypothesis 5 (H5).** *Host country's attitude towards foreigners moderates the relationship of social safety risk with CPEC project performance.*

**Hypothesis 6 (H6).** *Host country's attitude towards foreigners moderates the relationship of legal risk with CPEC project performance.*

#### *2.6. CPEC Project Performance*

The construction industry is one of the most important global industries, contributing about 10% of Pakistan's Gross Domestic Product (GDP). Therefore, the construction sector needs to evaluate PP, which can affect national economies positively or negatively [49].

PP is defined as the evaluation of performance against demarcated objectives and goals, providing the status of the project and where it is heading. PP is an important factor for the construction industry as it provides information on status and direction to the project team members. Measuring the PP largely depends on project objectives such as time, cost, quality, and client satisfaction [50]. Therefore, many studies have been conducted to explore the impacts of PP in developing countries [49,50].

Poor PP results in time delays and cost overruns [51]. Other problems include poor project quality, low client satisfaction, poor contractual management, wastage and shortage of material, poor financial systems, and changes in site conditions [12,27,51].

Completing the project within the budget is a key objective of construction projects. Xu et al. [52] stated that the difference between budgeted cost and actual cost (the cost variance) is one of the simplest techniques used to evaluate the PP. Similarly, completion of the project within the specified timeframe is essential for the construction industry, as stakeholders and the general public measure a project's success through timely completion. Li et al. [53] noted that comparing the planned schedule against the actual project completion time is among the best techniques to evaluate the PP.
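As a minimal illustration of the variance comparisons described by Xu et al. [52] and Li et al. [53], the following sketch computes cost and schedule variances; the figures and function names are hypothetical, not taken from the study:

```python
def cost_variance(budgeted_cost: float, actual_cost: float) -> float:
    """Budgeted minus actual cost: positive = under budget, negative = overrun."""
    return budgeted_cost - actual_cost

def schedule_variance(planned_days: int, actual_days: int) -> int:
    """Planned minus actual duration: positive = early, negative = delay."""
    return planned_days - actual_days

# Hypothetical project figures (cost in millions, duration in days)
print(cost_variance(120.0, 135.0))   # negative value indicates a cost overrun
print(schedule_variance(300, 340))   # negative value indicates a time overrun
```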

Nevertheless, the PP is affected by various risks. Accordingly, it is important to consider the impacts of risks such as LR, SR, PR, and the associated HCA for evaluating PP. CPEC, being a project of global interest, needs to include consideration of these risks and their impacts on PP. Accordingly, this study uses LR, SR, PR, and HCA to assess the PP of CPEC.

#### *2.7. Research Framework*

The conceptual framework presented in Figure 1 illustrates the three main risks faced by CPEC, namely PR, SR, and LR, which directly or indirectly affect the PP. The HCA is included as a moderating element that can negatively influence the PP. The PR is assessed by four factors, i.e., communication barriers, delivery of improper benefits (such as corruption and bribery), protests organized by Non-Government Organizations (NGOs), and cross-border projects triggering international conflicts (Kashmir Region). These major elements create political instability in Pakistan. There is often conflict between the political parties, NGOs, and other organizations in Pakistan, creating factional conflicts that cause political instability and directly affect projects like CPEC.

As the CPEC project is a joint venture of two countries, social safety is a key benchmark that needs to be achieved to complete the CPEC project. The associated SRs have certain contributing factors: crime, terrorism, violent demonstrations, armed conflicts, and kidnapping or extortion. All these factors directly or indirectly affect the activities involved in CPEC project completion and may hinder its performance.

**Figure 1.** Research framework for studying risk factors associated with CPEC.

Another important risk involved in CPEC performance is the LR, as the two countries have different legislative systems, applicable laws, and resolution mechanisms. Therefore, LR will also directly or indirectly affect the CPEC PP. Both parties need to understand their legal responsibilities and complications to avoid affecting the PP of CPEC. Hence, LR must be identified and mitigated to run the CPEC project activities smoothly.

The moderator used in this framework is the HCA. It focuses on the rules and policies implemented by the host country for foreigners. If the policies are foreigner-friendly, foreign investors are likely to invest in that country; otherwise, they will rethink their commitment. Therefore, there should ideally be no discrimination between local and foreign investors, and rules and regulations should apply to all. Foreign investment should be welcomed as it will increase foreign exchange and lead to the economic development of Pakistan.

The success of projects depends on four major factors: time, cost, quality, and client satisfaction. The project should be completed within the stipulated time, and the cost should not exceed the budgeted cost. Similarly, the quality of the project should be maintained. CPEC is no exception to these measures.

CPEC is a mega-project facing various risks. To successfully complete this project, the risks and associated factors must be identified and mitigated in a timely manner to avoid losses to Pakistan, China, and Central Asia. These factors and the link between them are explained in Figure 1, which shows that the HCA is at the center of all other risks associated with a mega-scale international project such as CPEC.

#### **3. Research Methodology**

The current study is based on a deductive approach and uses the survey method for data collection. SMART PLS3 is used to test the developed hypotheses and perform statistical analysis. Other methodological details are explained subsequently.

#### *3.1. Measurements of Variables*

The variables taken for this study are PR, SR, LR, HCA, and PP, based on a detailed literature review. A close-ended questionnaire was used to measure these variables on a five-point Likert scale from "strongly disagree" to "strongly agree." The questionnaire is divided into two parts. The first part collects the demographic information of the respondents; the second part contains questions about the variables and their scoring on the Likert scale.

#### *3.2. Data Sources and Collection*

Data sources can be classified into primary and secondary sources. Primary data sources provide direct evidence of an event, person, or object; primary data are collected from focus groups, panels, and individuals. Secondary sources use data collected by other researchers and organizations and include census records, company records, information provided by government departments, industry analyses, and research accessible through the internet. This study collects primary data from relevant people, such as contractors, engineers, managers, and suppliers involved in different projects and subprojects under the CPEC.

The population for the current study comprises respondents with engineering backgrounds, including civil, mechanical, and electrical engineers working on different projects conducted under the CPEC. Questionnaires were sent through email and different social media apps (Facebook, LinkedIn, Twitter) for convenience. Though various engineering professionals were contacted, responses were received only from civil, electrical, and mechanical engineers. Overall, data were collected from 112 respondents working on the CPEC. Thirteen responses were removed from the sample due to incompleteness or outlying responses, leaving 99 responses for analysis.

The data characteristics of the study include the demographic information of the respondents, classified by gender, age, qualification, field experience, employment status, and specialization. Most of the respondents are male, as there are fewer female members in the Pakistani construction industry due to social and cultural barriers. The most frequent responses were received from individuals aged 26–30 years (62%).

Similarly, in terms of education and experience, most respondents had a Bachelor's degree (60%). The experience of the majority of the respondents was over five years (58%). In addition, 69% of respondents were working with national or local companies, and 31% of respondents were working with multinational companies and firms. Furthermore, 63% of participants were civil engineers, 9% were electrical engineers, and 28% were mechanical engineers. The demographic information of the respondents is provided in Table 2.

#### *3.3. Sampling Technique*

Convenience sampling, a non-random (non-probability) approach, was used in this study. Participants were selected based on defined basic criteria: convenience sampling relies on geographical proximity, accessibility at a specific point in time, easy accessibility, or willingness to respond to the survey. Data were collected from respondents working in engineering sectors, especially people involved in the construction projects under CPEC. The contributions of this engineering sector are significant in different parts of the CPEC project. Though this sector contains a large population, it is impossible to cover all the engineers due to constraints like project confidentiality, cost and time, resource scarcity, and accessibility due to COVID-19. Using the criteria defined in Ullah et al. [54], a sample of more than 96 respondents, considering a 50/50 split and a 10% sampling error, is sufficient for representing such populations. Accordingly, a total of 99 responses are considered in this study.
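The 50/50 split and 10% sampling error criterion corresponds to the standard Cochran formula for estimating a proportion, n = z²p(1−p)/e², which yields just over 96 respondents at a 95% confidence level. A hedged sketch (the function name is illustrative, not from the study):

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.10) -> int:
    """Minimum sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 for 95%)
    p: expected proportion (0.5 is the conservative 50/50 split)
    e: acceptable sampling error (margin of error)
    """
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(cochran_sample_size())  # 97, i.e., "more than 96 respondents"
```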


**Table 2.** Demographic information of respondents.

#### *3.4. Research Instrument*

Closed-ended questions were generated for the variables, and responses were sought through a questionnaire survey. The key constructs, codes, measures, and relevant references are provided in Table 3. PR is an independent variable having four assessment items, as previously discussed, coded as PR01, 02, 03, and 04. SR is another independent variable having five assessment items, i.e., crime, terrorism, violent demonstrations, armed conflicts, and kidnapping or extortion; these are coded with the acronym SR. LR is an independent variable with the six assessment items previously discussed. HCA is a moderating variable reflecting three items: hostility to foreigners, confiscation or expropriation, and discrimination against foreign companies. Finally, PP is a dependent variable reflecting four items: completion within budget, completion within time, achieving the required quality, and client satisfaction, coded as PP01, 02, 03, and 04 in Table 3.




**Table 3.** Key constructs, codes, measures, and references.

#### *3.5. Data Analysis*

The partial least squares structural equation modeling (PLS-SEM) technique is applied for data analysis in this study, using the SMART PLS3 tool. PLS-SEM delivers robust estimates of the structural model, and many previous studies in construction and project management have applied this technique for pertinent analyses [55].

#### **4. Results and Analyses**

This section presents the results and analyses of the current study.

#### *4.1. Reliability Analysis*

The reliability test ensures that there are no biases in item measurement and confirms the stability and consistency of the variables. Cronbach's alpha is used to analyze the reliability of the variables; values between 0.7 and 1 are considered reliable. Table 4 shows the reliability of the variables used in the current study.


**Table 4.** Reliability analysis of variables.

In the current study, the value of Cronbach's alpha is 0.845 for PR, 0.828 for SR, 0.893 for LR, 0.833 for HCA, and 0.807 for PP; thus, all the variables have an alpha value greater than the threshold of 0.7. This confirms the reliability of the variables used in the current study, so the statistical framework can be applied to these data to produce generalizable results.
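For illustration, Cronbach's alpha can be computed directly from an item-score matrix as the ratio of item variances to the total-score variance. The Likert responses below are hypothetical, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical five-point Likert responses (rows: respondents, columns: items)
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 3))  # above the 0.7 reliability threshold
```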

#### *4.2. PLS Factor Analysis*

PLS factor analysis was performed to measure the factor loadings, average variance extracted, and cross-loadings. The threshold value for construct loadings is 0.7. As shown in Table 5, all the values are greater than this threshold, suggesting that the error variance is less than the variance shared by each item and its principal variable [56]. Thus, the data are suitable for construct loading to measure the variables.


**Table 5.** Construct loadings for the variables used.

#### *4.3. Average Variance Extracted (AVE)*

The average variance extracted (AVE) calculated in this study is shown in Table 6. The threshold value for AVE is 0.5. As shown in Table 6, AVE values for all the variables are more than 0.5, which shows the model is satisfactory [57]. Accordingly, the data are useful for pertinent analyses and discussions.

**Table 6.** Average variance extracted (AVE) of the variables.
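Given the standardized outer loadings of a construct's items, the AVE is simply the mean of their squared loadings, and the 0.5 threshold checks that the construct explains more than half of its items' variance. A minimal sketch with hypothetical loadings:

```python
def average_variance_extracted(loadings):
    """AVE for one construct: mean of the squared standardized outer loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical outer loadings for a four-item construct (not the study's values)
pr_loadings = [0.82, 0.78, 0.85, 0.74]
ave = average_variance_extracted(pr_loadings)
print(round(ave, 3), ave > 0.5)  # AVE above the 0.5 threshold is satisfactory
```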


#### *4.4. Cross-Loadings*

Cross-loadings are shown in Table 7. The cross-loading criteria imply that an indicator's self-loading should be more than all its cross-loadings against other indicators. This is evident from Table 7, where the self-loading for all indicators is greater than the loading against other indicators. This validates the data and associated framework. Table 7 further shows the values for all affiliated variables of each indicator. For example, the four variables assessed for PR are coded PR01, PR02, PR03, and PR04. The relevant value for each variable is highlighted in Table 7 in bold. The same logic is followed for all affiliated variables of the five indicators used in this study.


**Table 7.** Cross-loadings of the variables.

#### *4.5. Combined Effects*

The purpose of the combined-effects analysis is to assess the effect of the independent variables on the dependent variable without involving moderation. A positive and significant association was found between each of the independent variables (PR, SR, LR, and HCA) and PP; no negative associations were observed. Table 8 presents the model summary with all the necessary values. The values are significant at a 95% confidence interval. R² shows the variance in the dependent variable explained by the independent variables; here, R² is 0.359, meaning that 35.9% of the variance in PP is explained by the independent variables. Beta values represent path coefficients. The path coefficient (β = 0.225) and the *t*-value (2.601) of PR are significant: holding all other variables constant, a one-unit change in PR results in a 0.225-unit change in PP.

**Table 8.** Main effects analysis for variables.


With *β* = 0.285 and all other variables held constant, a one-unit change in SR leads to a 0.285-unit change in PP; the values of *t* (3.006) and *p* (0.003) suggest that SR is individually significant in the model. In addition, *β* = 0.229 indicates that a unit change in LR causes a 0.229-unit change in PP, and *t* (2.187) and *p* (0.029) show that LR is significant in the model. The *β* value for HCA and PP is 0.187, the lowest among the independent variables, indicating that a unit increase in HCA (positive) results in a 0.187-unit increase in PP; the values of *t* (2.004) and *p* (0.046) show that the HCA is significant.

#### *4.6. Moderation Analysis*

Table 9 presents the results of the moderation analysis conducted in this study. Moderation was applied using SMART PLS 3: first, the PLS algorithm was run to obtain the path coefficients, and then bootstrapping was applied to obtain the significance levels. R² (0.414) indicates that the independent variables explain approximately 41% of the variance in the dependent variable (i.e., PP). The path coefficient (*β* = −0.180) for the interaction of PR and HCA is significant at a 95% confidence interval, as the *t*-value is 1.991 and the *p*-value is 0.047. This means that a one-unit change in the PR × HCA interaction results in a 0.180-unit decrease in PP.

**Table 9.** Moderation analysis for variables.



On the other hand, the interaction of SR and HCA is insignificant in the model: the *t*-value is 0.271, which is less than the threshold value of 1.96, and the *p*-value is 0.787, which is more than the threshold of 0.05. The interaction of LR and HCA has a path coefficient of 0.183, with *t*- and *p*-values of 2.114 and 0.035, respectively, indicating that the interaction is individually significant in the model. In addition, *β* = 0.183 shows that a one-unit change in the LR × HCA interaction results in a 0.183-unit rise in PP.
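Conceptually, testing moderation amounts to adding a product (interaction) term to the regression of PP on the risk and the moderator. The sketch below simulates hypothetical standardized data with a negative PR × HCA interaction of the kind reported for H4 and recovers it by least squares; the coefficients and data are illustrative assumptions, not the study's values or its PLS-SEM estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 99  # matches the study's usable sample size

# Hypothetical standardized scores for one risk (PR) and the moderator (HCA)
pr = rng.normal(size=n)
hca = rng.normal(size=n)
# Simulated PP with a negative PR x HCA interaction term
pp = 0.20 * pr + 0.19 * hca - 0.18 * pr * hca + rng.normal(scale=0.5, size=n)

# Moderation model: PP = b0 + b1*PR + b2*HCA + b3*(PR x HCA)
X = np.column_stack([np.ones(n), pr, hca, pr * hca])
beta, *_ = np.linalg.lstsq(X, pp, rcond=None)
print(beta)  # beta[3] estimates the interaction (moderation) effect
```

A negative estimate for `beta[3]` is the regression analogue of the negative interaction path coefficient reported for PR × HCA.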

#### **5. Discussion**

China has heavily invested in the CPEC project and aims to make it a success. Accordingly, most of the stakeholders and analysts are only considering the significant opportunities and positive and profitable aspects of the project. The adverse aspects and risks of the project are ignored. However, in the era of circularity and sustainability, this must change. Accordingly, a holistic assessment is needed. Therefore, this study focused on the threats and risk assessments of CPEC projects to measure the PP. Accordingly, risks consisting of LR, SR, PR, and the moderating effect of HCA on CPEC PP were assessed in this study.

In this study, first, it was hypothesized that the three risks (PR, SR, and LR) negatively influence the PP of CPEC. Three hypotheses (H1, H2, and H3) were then developed and tested. A significant relationship was found between PR, SR, LR, and PP. Therefore, hypotheses H1, H2, and H3 were accepted, and it was concluded that the above-mentioned risks negatively affect the PP.

For (H1), the results support the study conducted by Chang, Deng, Zuo and Yuan [37], which also showed a negative association between PR and PP. PR deals with unsolicited changes and unpredicted consequences for international business and projects resulting in political action and significantly affecting overseas projects and contractors. In addition, the working situation of IJVs is closely linked with the host country's politics [37]. Therefore, comprehensive knowledge of PR is essential for construction projects and businesses in the global market and joint ventures in CPEC.

The results of (H2) also support the previous study's findings of a negative association between SR and PP [58]. The construction industry bears positive and negative risks due to the involvement of many individuals and groups, such as contractors, sub-contractors, consultants, distributors, dealers, suppliers, fabricators, different government departments, and the local public of the area. Previous studies confirmed that infrastructure construction projects are spread over a wider geographical area and face social issues due to the large numbers of involved parties. These risks negatively influence project objectives [37]. Thus, the SR needs to be managed for the CPEC project.

The results of (H3) supported that LR is negatively linked to PP. Pan et al. [59] stated that construction projects contain many legal, political, social, environmental, technical, economic, and cultural risks. An appropriate legal system of the country can help better manage the project in IJVs [10]. Accordingly, the same applies to CPEC, and strong legal protection is required to ensure an unhindered PP.

After assessing the three risks, the moderating effect of the HCA was studied by developing three hypotheses (H4, H5, and H6). The results showed that PR (H4) has a negative relationship with PP and is significant in the presence of the moderator (i.e., HCA). Therefore, PR negatively influences the PP of CPEC. This hypothesis posits that the HCA moderates the relationship between PR and PP, and the negative path coefficient of the PR × HCA interaction shows that a high HCA negatively moderates the relationship between PR and PP.

SR (H5) shows a positive relationship with PP but has insignificant results in the presence of the moderator. So, (H5) is rejected, and it is concluded that SR does not negatively influence the PP of CPEC in the presence of HCA. Analysis of the data revealed a positive path coefficient for the interaction of HCA and SR that will positively affect the relationship between SR and PP of CPEC. However, the value is not significant at a 95% confidence interval. Therefore, (H5) is rejected, as no evidence was found that supports it.

Finally, LR (H6) shows a positive relationship with PP and significant results in the presence of HCA as a moderator. The data analysis revealed that the path coefficient for the interaction of LR and HCA is positive and significant at a 95% confidence interval. Notably, the research achieved its goals, with all the established hypotheses except H5 aligning with similar studies.

#### **6. Conclusions, Implications, Limitations, and Future Directions**

The current study concluded that PR, SR, and LR negatively influence the PP of CPEC. The HCA plays an important role in moderating these risks, and the three risks were found to negatively influence the PP of CPEC in the presence of an unfavorable HCA. All hypotheses of this research study except H5 (SR) were accepted, showing the negative influence of the risks on PP. The HCA towards foreign firms and investors is imperative and rests on the legal and political system provided by the host country's government. Therefore, it is recommended that the Pakistani government formulate long-term laws and policies and initiate safety measures to avoid adverse effects on the CPEC project and the associated economic misfortune.

The current study highlights the expected risks in mega-projects like CPEC. Three types of risks are involved in assessing the PP: PR, SR, and LR. To tackle PR, the priorities of mega-projects should be set by government institutions rather than the ruling parties. The technical management should be consistent and continue the planned proceedings of mega-projects regardless of who rules the country. For this purpose, policies need to be developed by all key stakeholders, including policy organizations, parliamentarians, and the government.

The SR associated with unpredicted armed conflicts between Pakistan and India at different border areas, and between China and India over different disputed territories, can also affect the CPEC project. The LR is another major concern for long-term and sustainable investment in CPEC. Due to the law-and-order situation in Baluchistan and threats to local movements of project personnel, the CPEC project may be affected in terms of deadlines and performance. Local hospitality and encouragement are also important for accomplishing CPEC goals. For this to materialize, local government and national-level policies should be developed considering the benefit ratio for the short- and long-term objectives. This will enable developing countries like Pakistan to benefit from international projects like CPEC.

Besides the practical implications suggested above, this research contributes to the literature in terms of investigating the role of the HCA toward the different types of risks and the PP of megaprojects. A very limited number of studies have investigated the influence of PRs on the PP of international construction projects. Thus, this study adds value to the published literature in terms of investigating the PR, SR, and LR influence on PP of CPEC (an international project) as well as exploring the moderating effect of the HCA. These provide research directions to future studies for further exploration of the key contributing factors of these risks on mega-projects such as the CPEC in developing countries.

In terms of the limitations of the study, the sample size is comparatively small and drawn predominantly from local and accessible construction sites. Moreover, the researchers could not reach construction managers in remote areas due to time constraints and the outbreak of COVID-19. Therefore, a study inclusive of samples from across the country is recommended to enhance the understanding of different risks and PP assessment criteria in relation to the CPEC and similar projects. In addition, researchers may conduct the same kind of study on other international projects in developing countries. This will highlight the differences and lead to the generalization of the current results across developing economies. Further, the current study investigated the moderating role of the HCA towards foreigners on the PR, SR, LR, and PP. However, other variables may affect the association between these risks and PP, and they need to be investigated in future studies.

**Author Contributions:** Conceptualization, A.R., A.M. and S.W.A.S.; methodology, A.R., A.M., S.W.A.S. and F.U.; software, A.R., A.M., S.W.A.S. and F.U.; validation, A.R., A.M., S.W.A.S., F.U., M.S.U.R. and M.A.; formal analysis, A.R., A.M. and S.W.A.S.; investigation, A.R., A.M. and S.W.A.S.; resources, A.M., S.W.A.S., F.U., M.S.U.R. and M.A.; data curation, A.M., S.W.A.S. and F.U.; writing original draft preparation, A.R., A.M., S.W.A.S. and F.U.; writing—review and editing, F.U. and H.S.M.; visualization, A.R., A.M. and S.W.A.S.; supervision, A.R. and A.M.; project administration, F.U., M.S.U.R. and M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data are available from the first author and can be shared with anyone upon reasonable request.

**Acknowledgments:** The authors would like to acknowledge the support from the Office of the Associate Provost for Research, United Arab Emirates University.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Identifying and Ranking Landfill Sites for Municipal Solid Waste Management: An Integrated Remote Sensing and GIS Approach**

**Bilal Aslam 1, Ahsen Maqsoom 2, Muhammad Dawar Tahir 2, Fahim Ullah 3, Muhammad Sami Ur Rehman <sup>4</sup> and Mohammed Albattah 4,\***


**Abstract:** Disposal of municipal solid waste (MSW) is a significant global issue that is more evident in developing nations. One of the key methods for disposing of MSW is locating, assessing, and planning landfill sites. Faisalabad is one of the largest industrial cities in Pakistan. It has many sustainability challenges and planning problems, including MSW management. This study uses Faisalabad as a case study area and provides a framework for identifying and ranking landfill sites to address MSW concerns in Faisalabad. The method can be extended and applied to similar industrial cities. The landfill sites were identified using remote sensing (RS) and a geographic information system (GIS). Multiple datasets, including the normalized difference vegetation, water, and built-up indices (NDVI, NDWI, and NDBI) and physical factors that influence landfill site selection, such as water bodies, roads, and population, were used to identify, rank, and select the most suitable site. The target area was divided into nine Thiessen polygons, which were ranked based on their favorability for the development and expansion of landfill sites. Seventy percent of the area was favorable for developing and expanding landfill sites, whereas 30% was deemed unsuitable. Polygon 6, having more vegetation and smaller population and built-up areas, was declared the best region for developing and expanding landfill sites as per the rank mean indices and standard deviations (SD) of the RS and vector data. The current study provides a reliable integrated mechanism based on GIS and RS that can be implemented in similar study areas and expanded to other developing countries. Accordingly, urban planning and city management can be improved, and MSW can be managed with dexterity.

**Keywords:** geographic information systems; landfill site selection; landfill site ranking; remote sensing; solid waste; solid waste management

### **1. Introduction and Background**

With the growth in global populations (particularly in cities), concerns regarding urban health are rising [1,2]. Various waste reduction techniques, such as lean, total quality management, and six sigma, have been presented to reduce and minimize waste [3,4]. Solid waste in the form of trash, garbage, and refuse dumped daily by urban and rural populations is known as municipal solid waste (MSW). Every year, around 1.3 billion tons of MSW are generated worldwide, which is expected to increase to 2.2 billion tons by 2025, with over a third of this MSW left uncollected [5]. The United States, Canada, Australia,

**Citation:** Aslam, B.; Maqsoom, A.; Tahir, M.D.; Ullah, F.; Rehman, M.S.U.; Albattah, M. Identifying and Ranking Landfill Sites for Municipal Solid Waste Management: An Integrated Remote Sensing and GIS Approach. *Buildings* **2022**, *12*, 605. https://doi.org/10.3390/ buildings12050605

Academic Editor: Cinzia Buratti

Received: 9 April 2022 Accepted: 3 May 2022 Published: 6 May 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Germany, South Africa, France, and the United Kingdom are among the highest per capita MSW-generating countries [5]. Expanding global urbanization, inadequate urban waste management, and a lack of resources around the globe contribute to the rise in MSW [6,7]. Every day, 0.74 kg of rubbish is generated per person in the municipality of Phnom Penh, Cambodia alone [8]. The World Bank claims that by 2050 the MSW generation will reach 3.4 billion tons [9]. Around 70% of MSW is dumped in landfills, while 19% of the waste is recycled, and 11% is used for energy generation. Among the world's current population, i.e., 7.6 billion people [10], around 3.5 billion have no access to basic garbage collection services [11].

Solid waste management aims at disposing of garbage in the most environment-friendly manner possible. This is achieved with the assistance of the local people directly impacted by a region's solid waste program [12]. Solid waste is collected from houses, workplaces, small companies, and commercial enterprises. In the EU, this is considered a special waste stream. Such waste, combined with the waste created during construction, renovation, and demolition, is referred to as MSW. Kitchen rubbish, paper and cardboard, yard waste, metal, plastic and rubber, electronic waste, glass, bricks, concrete, inert materials, and miscellaneous garbage are all examples of MSW. MSW is classified in various ways by global municipalities. It contains organic and inorganic components and biodegradable and non-biodegradable components. To minimize the generation of solid waste, various strategies are employed globally. Preventing, reusing, recovering, recycling, and disposing of waste are the most popular approaches to reducing solid waste [13]. The regular storage of solid waste is another strategy utilized to avert potential environmental hazards [14,15].

MSW management techniques differ by municipality, city, state, and nation based on the waste composition. Poor MSW management increases greenhouse gas emissions and has serious consequences for human health and environmental safety [16–18]. Different treatment and recycling processes are used globally for managing MSW. Classified recycling, incineration, landfilling, composting, and anaerobic digestion are some examples of MSW treatment and recycling procedures [19–21]. Most developing nations burn the MSW or gather and dump it at specific locations in the form of landfill sites [22]. For example, in Iran, the bulk of MSW is buried in open pits. Such open dumping poses long-term environmental and human health risks [6,23].

Landfill sites are commonly used for burying non-recyclable garbage across the world. These landfills must be inspected and their compliance assured before being utilized as solid waste dumping sites. In addition, these landfills must meet regulatory, geographical, hydrological, and topographical requirements to manage and reduce environmental, economic, hygienic, and social concerns [1,24,25]. Nonetheless, in several underdeveloped countries, rubbish is dumped into pits rather than buried in the ground. Despite the rapid development of alternative disposal techniques, landfilling, in the forms of open dumping and sanitary landfills, remains the most preferred disposal option in such countries, owing to the lower costs and technical requirements for such dumping in developing economies.

According to the United Nations Environment Programme (UNEP), open dumping and sanitary landfills account for 51% and 31% of waste disposal in Asia, while incineration and recycling account for just 5% and 8% of total waste. In Africa, open dumping and sanitary landfills account for 47% and 29% of total waste [26]. In North America, sanitary landfills account for 91% of garbage disposal [27]. This illustrates that most nations utilize landfills to dispose of their MSW, since it is less expensive than alternative options [28]. Landfills are used as approved locations for MSW dumping, with garbage processing and recyclable material sorting regulated before dumping. Landfilling is a frequently utilized procedure in municipalities worldwide for safely processing and disposing of solid waste [29,30]. Landfilling has long been a popular waste disposal practice in many developing countries [31]. This is because, in such weaker economies, cost is the key factor and environmental considerations are generally lacking. However, this must change in the era of striving for global sustainability and environmental protection. Accordingly, incentives must be provided to relevant stakeholders to conduct resource recovery operations and reduce the environmental burdens of such landfills.

On the other hand, incineration requires significant infrastructure investments, creates extremely high temperatures, and can adversely affect the climate and environment. Similarly, resource recovery processes such as pyrolysis, liquefaction, gasification, anaerobic digestion, and composting have extensive staffing, equipment, and cost requirements. Therefore, landfilling is preferred in developing countries due to its cost-effectiveness and reliance on labor-intensive procedures. Furthermore, a combined landfill may create profits by generating electricity from landfill gas and leachate. Landfilling is a common practice in developing countries; however, as previously discussed, this practice should not be encouraged, and more environment-friendly and sustainable approaches should be adopted. These include using greener materials, encouraging and incentivizing recycling, and other green initiatives aligned with the United Nations' sustainable development goals. Climate change cannot be tackled in the absence of such holistic measures and considerations for the environment. Unmitigated landfilling also goes against the circular economy concept, which is at the forefront of global greening initiatives.

Overall, landfilling is still a prevalent MSW technique but cannot be termed as the best option unless actions are taken to transform the dump into something useful. For example, these landfills can be transformed from "garbage dumps" to "energy powerhouses" by installing integrated technology to generate recycled materials and renewable energy. According to Nabavi-Pelesaraei et al. [32], landfills and treatment facilities for domestic rubbish, hazardous chemicals, radioactive wastes, construction, demolition, and renovation wastes are all located in distinct areas that can serve as energy generation points. In addition, landfill mining reclaims valuable recyclables and combustible landfill gases from landfill sites to help free up landfill areas and promote sustainability [33,34].

Landfills can be divided into different classes based on their usage. Class 1 landfills are used for soil disposal. Class 2 landfills are used for mineral disposal and construction and demolition waste. Class 3 landfills are used for the disposal of MSW. Class 4 landfills are used for the disposal of commercial and industrial trash. Class 5 landfills are used for disposing of hazardous waste. Finally, Class 6 landfills are used for dangerous underground waste disposal [35]. In terms of types of landfills, the most common ones include secure landfills, monofill landfills, reusable landfills, and bioreactor landfills [36]. To stall harmful environmental consequences, wastes are enclosed in secure landfills. Waste that cannot be treated by incineration or composting is dumped into monofill landfills. Reusable landfills enable rubbish to settle for longer periods before digging for the recovery of metals, plastics, and fertilizers.

In terms of control, there are three types of landfills: semi-controlled, open, and sanitary landfills [37]. MSW dumped in an open environment is called an open dump landfill. Most developing countries have open dumps, where MSW is randomly discharged into low-lying open regions. In such poorly managed landfills, scavengers, other birds, mosquitoes, bugs, rodents, and deadly germs find a home, promoting health concerns.

Researchers have investigated various methods for choosing dumping or landfill locations globally. Scholars have used mathematical models to choose dump locations [38]. Based on the analytical hierarchy process (AHP), Lokhande et al. [38] used GIS to locate a trash disposal site. The same has been done in other studies [39–43]. For example, Spigolon et al. [40] determined landfill siting based on optimization, multiple decision analysis, and GIS. Şener et al. [41] selected solid waste disposal sites with GIS and AHP methodology using a case study in the Senirkent–Uluborlu (Isparta) Basin, Turkey. Similarly, Sumathi et al. [42] used a GIS-based approach for optimized siting of municipal solid waste landfills.

A study was conducted in Iran using GIS and multi-criteria decision-making methods (MCDM) [44]. GIS-based multi-criteria decision analysis (MCDA) and evaluation were used for landfill site selection in Ethiopia [45], where the authors used AHP and weighted linear combination models. In the city of Rudbar in Iran, with a harsh morphology and sensitive environment, fuzzy logic spatial modeling has been used for landfill site selection [46]. Wang et al. [47] selected waste disposal sites and highlighted the associated environmental risks. In Javanrud, Iran, trash was disposed of in a landfill chosen using GIS and MCDA [23]. In Syria, GIS-based normalized difference vegetation index (NDVI) and normalized difference snow index (NDSI) techniques have been used to dispose of war trash [48]. GIS and RS have also been used for managing rising environmental problems of waste disposal [49]. In Pakistan, different combinations of satellite-based bio-thermal indicators were used to monitor open dumps [50]. However, a study for identifying landfill sites for MSW has not been reported to date for Pakistan. This presents the gap targeted in the current study. For this purpose, integrated RS and GIS have been used in this study.

According to the literature, most researchers relied on expert judgments regarding numerous factors in their search for the best MSW disposal locations. These opinions were combined with GIS data to locate the landfill sites [1]. The GIS and RS data were used to create a rating system for identifying landfills that ranged from the least to the most acceptable. Similarly, rather than building new facilities, researchers have suggested that growing nations use a Thiessen-polygon-based ranking system to locate appropriate sites for landfill development. The current study builds upon these works and aims to offer a forum for decision-makers to analyze feasible landfill expansion regions in Pakistan. This practice is evident in developed countries like Canada, where GIS and RS data are commonly utilized for making informed landfill decisions [51]. Accordingly, the site of a landfill expansion is selected using factors such as proximity to garbage sources to reap financial advantages from lower waste transportation costs and less severe environmental and health impacts [47].

The current study capitalizes on these relevant works. It is a novel attempt at locating appropriate landfill sites for dumping MSW in Pakistan. The classification is independent of landfill type; the aim is to highlight and rank the sites, which can later be categorized into various types of landfills in follow-up studies. The key objective of the current study is to analyze appropriate landfill locations for MSW disposal using integrated GIS-RS indices. For this purpose, Faisalabad, an industrial city in Pakistan, was used as a case study. Thiessen polygons were utilized for relevant area identification.

#### **2. Study Area**

Faisalabad, an industrial city with an area of 3344.9 sq. km, was selected as the case study for the current research. It is the third-largest city in Pakistan by population, located in the rolling flat plains of northeast Punjab, as shown in Figure 1. It is an industrial center with many textile mills, large agricultural processing plants, and electronic equipment and furniture industries. The city lies 605 feet above sea level, between latitudes 30° and 31.5° north and longitudes 73° and 74° east [52]. The average temperature of Faisalabad ranges between 27 °C and 39 °C. January is the coldest month, with averages of 17 °C and 6 °C, while June is recorded as the hottest month [53]. The average annual rainfall is only about 375 mm (14.8 in), half of which occurs in the monsoon season. The population of the city was around 3.56 million as of 2018 [52], spread over 118 union councils and producing approximately 1600 tons of MSW per day at a rate of 0.45 kg/capita/day. This huge production of MSW must be dumped at proper places (landfill sites). Currently, there is no appropriate system or marked landfill sites for the study area. As a result, most MSW is dumped at random sites near populated areas, creating both health hazards and environmental concerns. The current study addresses this problem by locating and ranking appropriate landfill sites for dumping MSW in Faisalabad.

**Figure 1.** Study area—Faisalabad, Pakistan.

#### **3. Methodology**

The current research adopts a holistic methodology based on multiple steps. The methodology for identifying the landfill sites is shown in the flowchart in Figure 2. It is based upon vector data and RS, rather than expert opinions, for the ranking of parameters. The resulting categories have been proposed based on landfill site suitability. Technology-suggested (GIS-RS) rankings have thus been used in this study instead of expert opinion, with flat surfaces represented by Thiessen polygons. This flexible approach provides a competitive edge for pre- or post-decision making in areas where expert advice is not accessible.

**Figure 2.** Workflow for landfill site assessment.

Furthermore, it is a self-contained approach that uses a mix of vector data and RS to identify and rank landfill areas. The overall steps of the holistic methodology adopted in this study are presented in Figure 2. Accordingly, the three key steps are data collection, analysis, and output generation. In data collection, remotely sensed data from Landsat-8 (OLI + TIRS) were downloaded, consisting of water bodies, roads, air quality, temperature, vegetation, population, and other details. In the data analysis stage, various normalized difference indices were computed and spatially integrated using a GIS platform. Finally, the output stage involves a ranking function and zonal statistics to identify suitable landfill expansion sites in the case study area. The associated datasets, indices, and polygon creation are subsequently discussed.

#### *3.1. Satellite Dataset*

Imagery for the study was obtained from the Landsat-8 satellite using path/row 144/052 with the Operational Land Imager and Thermal Infrared Sensor (OLI + TIRS) on 11/12/2021 at a spatial resolution of 30 m, as shown in Figure 1. The satellite imagery was acquired using LandViewer by EOS (https://eos.com/lv/ accessed on 20 March 2022). The downloaded imageries have a 30 m resolution and cloud cover of <2% for maximum accuracy. Bands 2 (Blue), 3 (Green), 4 (Red), 5 (Near Infrared), and 6 (Shortwave Infrared) of the Landsat images were employed in the investigation. The three bands (2, 3, and 4), when combined, provide a natural color picture that may be used to determine the land use and land cover composition of the research region. Accordingly, this dataset has been utilized for the study area of Faisalabad in this research.

#### *3.2. Creation of Thiessen Polygons*

Thiessen polygons are used for the analysis of proximity and neighborhoods. To produce Thiessen polygons, the distribution of sites within a specified distance is considered the parameter of influence. In this study, a Thiessen polygon mesh was created using nine suburban locations and villages within a 25 km radius of the district's headquarters. These locations are Lyall Pur Town, Chak Jhumra Town, Madina Town, Iqbal Town, Jinnah Town, Jaranwala Town, Tandlian Wala Town, Dijkot, and Summundari Town. Each polygon was numbered from 1 to 9 and given a unique ID. The Thiessen polygons were produced using the ArcMap program, as shown in Figure 3.
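The defining property of a Thiessen (Voronoi) polygon is that every point inside it lies closer to its seed location than to any other seed. A minimal sketch of that nearest-seed assignment is given below; the (x, y) coordinates are hypothetical placeholders for the nine towns, not surveyed positions, and the study itself used ArcMap rather than this script.

```python
# Sketch of Thiessen (Voronoi) polygon assignment: every grid cell is
# assigned the ID of its nearest seed location. Coordinates are
# hypothetical planar positions (km), not real map data.
import numpy as np

# Hypothetical (x, y) coordinates for the nine seed locations.
seeds = np.array([
    [12.0, 40.0], [25.0, 55.0], [18.0, 30.0],
    [30.0, 35.0], [22.0, 45.0], [40.0, 15.0],
    [20.0, 38.0], [15.0, 36.0], [35.0, 28.0],
])

# Regular grid covering the study area.
xs, ys = np.meshgrid(np.linspace(0, 50, 100), np.linspace(0, 60, 100))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Each cell's polygon ID (1..9) is the index of its nearest seed.
dists = np.linalg.norm(grid[:, None, :] - seeds[None, :, :], axis=2)
polygon_id = dists.argmin(axis=1).reshape(xs.shape) + 1
print(np.unique(polygon_id))  # the nine polygon IDs covering the grid
```

ArcMap's Delaunay-based tool produces the same partition analytically (as polygon geometry rather than a raster), which is preferable for large areas; the raster version above is only meant to make the proximity rule concrete.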

**Figure 3.** Thiessen polygons of the study area.

The Delaunay triangulation tool in ArcMap was used in this study to construct the Thiessen polygons, following Richter et al. [54]. This approach verifies that the Delaunay criteria are satisfied; these criteria must be met by the points formed in a triangular irregular network (TIN) [55]. Thiessen polygons are formed by perpendicularly bisecting each of the TIN's edges, ensuring that the TIN's centers become the Thiessen polygons' vertices [55]. Thiessen polygons are typically utilized in investigations involving hydrological factors [56–58]; here, they are combined with RS indices based on Landsat-8, including the NDVI. Multiple formulas are used for the associated calculations, as given below in Equations (1)–(3):

For the normalized difference vegetation index (NDVI), as used in waste management applications [59],

$$(\text{Band } 5 - \text{Band } 4) / (\text{Band } 5 + \text{Band } 4); \tag{1}$$

For normalized difference moisture index (NDMI) [60],

$$(\text{Band } 3 - \text{Band } 5) / (\text{Band } 3 + \text{Band } 5); \tag{2}$$

For normalized difference built-up index (NDBI) [61],

$$(\text{Band } 6 - \text{Band } 5) / (\text{Band } 6 + \text{Band } 5). \tag{3}$$

#### *3.3. Remote Sensing Indices*

Multiple RS were used in this study, as shown in Figure 4. These include NDVI, NDBI, and Normalized Difference Water Index (NDWI). NDVI was used in this study because this metric indicates the density of greenness on the ground surface. Therefore, it is a critical consideration while looking for a good dump location [62]. The Landsat-8 OLI dataset was used to construct the NDVI using Equation (4).

$$\text{NDVI} = (\text{Bnir} - \text{Bred}) / (\text{Bnir} + \text{Bred}) \tag{4}$$

where Bnir is the near-infrared band and Bred is the red band of Landsat-8. The NDVI value spans from −0.401 to 0.831. Barren terrain, open spaces, and rocky places have lower values; grassland and shrub have moderate values; and broadleaf rain forests have higher values. Transient emissions have been reported to cause a decrease in the vegetation index surrounding landfills [63]. As a result, building landfills in places where the NDVI is lower will have a reduced impact on healthy vegetation [64].
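As a hedged illustration of Equation (4), the NDVI can be computed per pixel with NumPy; the 2 × 2 reflectance patches below are made-up values, not Landsat-8 data from the study, and the zero-denominator guard is a practical addition for masked or no-data pixels.

```python
# Sketch of the per-pixel NDVI calculation of Equation (4), with
# illustrative (not real) red and near-infrared reflectance arrays.
import numpy as np

def ndvi(b_nir: np.ndarray, b_red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    b_nir = b_nir.astype(float)
    b_red = b_red.astype(float)
    denom = b_nir + b_red
    # Guard against division by zero on masked / no-data pixels.
    return np.where(denom == 0, 0.0,
                    (b_nir - b_red) / np.where(denom == 0, 1, denom))

# Illustrative 2x2 reflectance patches (Band 5 = NIR, Band 4 = Red).
nir = np.array([[0.5, 0.4], [0.3, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.2]])
print(ndvi(nir, red))  # values lie in [-1, 1]; vegetated pixels approach 1
```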

The NDBI and associated calculations are used to assess the urban built-up size and geographical distribution of the study area. It also provides a comprehensive picture of urban land cover. The Landsat-8 OLI dataset was used to create an NDBI map in this investigation. Shortwave infrared (SWIR) and near-infrared (NIR) bands were employed for pertinent calculations, as given in Equation (5).

$$\text{NDBI} = (\text{Bswir} - \text{Bnir}) / (\text{Bswir} + \text{Bnir}) \tag{5}$$

where Bswir is the shortwave infrared band and refers to Band 7 of Landsat-8, and Bnir refers to Band 5 of Landsat-8. In this study, the computed value of NDBI varies from −0.269 to +0.684. A higher NDBI value implies a significant concentration of built-up area, which should not be considered for developing a sanitary landfill site [65]. Conversely, a lower value suggests a smaller concentration of urban built-up area, which may make landfill placement more acceptable [66].

The NDWI is used to measure the moisture content in plants and soil, which is calculated using Equation (6).

$$\text{NDWI} = (\text{NIR} - \text{SWIR}) / (\text{NIR} + \text{SWIR}) \tag{6}$$

where NIR has wavelengths ranging from 0.841 to 0.876 µm and SWIR has wavelengths ranging from 1.628 to 1.652 µm. Water does not absorb this portion of the electromagnetic spectrum; hence, the index is resistant to atmospheric impacts. Furthermore, when monitoring forests, the NDWI shows a steadier fall in values when approaching a critical anthropogenic load, making it a better predictor of the ecological status of forests than the NDVI.
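Equations (5) and (6) share the normalized-difference form of Equation (4), so a single helper covers all three indices. Below is a minimal sketch with illustrative band values (not the study's data); note that when the same SWIR band is used for both, the NDWI of Equation (6) is the negation of the NDBI of Equation (5) by construction.

```python
# Sketch of a generic normalized-difference helper applied to the NDBI
# (Equation 5) and NDWI (Equation 6); band arrays are illustrative.
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Generic (a - b) / (a + b) index, with a zero-denominator guard."""
    a, b = a.astype(float), b.astype(float)
    denom = a + b
    return np.where(denom == 0, 0.0,
                    (a - b) / np.where(denom == 0, 1, denom))

swir = np.array([0.30, 0.10])
nir = np.array([0.20, 0.40])

ndbi = normalized_difference(swir, nir)  # Equation (5)
ndwi = normalized_difference(nir, swir)  # Equation (6): -NDBI here
print(ndbi, ndwi)
```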

**Figure 4.** Normalized RS Indices and population distribution map of the study area. (**a**) NDBI, (**b**) NDWI, (**c**) NDVI, (**d**) Population.

High NDWI readings (in blue) indicate a high plant water content and a high plant fraction coating. Low plant water content and sparse vegetation cover correspond to low NDWI values (in red). The NDWI rate drops during times of water stress. The presence of moisture in plant cover, determined using the NDWI, is also used for assessing fire danger, which can help tackle forest fires and bushfires. According to a recent study, landfills negatively influence a region's groundwater and surface water supplies [67]. As a result, drier locations may be the better candidates for landfill growth to mitigate the possible negative impacts on local water supplies.

The population distribution map in this study was also obtained through Landsat-8, as shown in Figure 4. The thematic map shows the population of 9 chosen locations, as previously shown in Figure 3. Some have a population of fewer than 0.5 million, while others have more than 2 million. This irregular population distribution dictates the careful selection of landfill sites. Specifically, the areas with higher populations must be avoided for landfills.

#### **4. Results**

The GIS platform's "ranked overlay approach" provides an appropriate solution for examining the RS and vector data. A raster overlay approach was applied to the gathered datasets to rank the attribute values and apply a weightage to each map formed. The final overlay map was created by allocating weightages depending on the significance factor (see Table 1). The highest weightage was assigned to NDBI, followed by NDWI and NDVI. In addition, 26% of the area is declared a protected zone by the government and cannot be used for landfill purposes.
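The ranked overlay step amounts to rescaling each raster onto a common rank scale and summing the layers with their weightages. A minimal sketch follows; the 2 × 2 rasters are toy data, the exact weights are placeholders (the study assigns NDBI the highest weightage, per Table 1), and `rank_scale` is a hypothetical helper, not an ArcGIS tool.

```python
# Sketch of a ranked weighted overlay: each index raster is rescaled to
# ranks 1..5 and combined with weights that sum to 1. Weights and rasters
# are placeholders, not the study's values.
import numpy as np

def rank_scale(raster: np.ndarray, n_ranks: int = 5) -> np.ndarray:
    """Linearly rescale a raster onto integer ranks 1..n_ranks."""
    lo, hi = raster.min(), raster.max()
    scaled = (raster - lo) / (hi - lo) * (n_ranks - 1) + 1
    return np.rint(scaled).astype(int)

def weighted_overlay(layers, weights):
    """Weighted sum of rank-scaled layers; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * rank_scale(r).astype(float)
               for r, w in zip(layers, weights))

ndbi = np.array([[0.1, 0.6], [0.3, 0.2]])
ndwi = np.array([[0.4, 0.1], [0.2, 0.5]])
ndvi = np.array([[0.7, 0.2], [0.5, 0.6]])

# Placeholder weights with NDBI weighted highest, as in the study.
suitability = weighted_overlay([ndbi, ndwi, ndvi], [0.35, 0.33, 0.32])
print(suitability)  # values fall on the 1..5 suitability scale
```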

**Table 1.** Details of Weightage assigned to the Indices.


From Figure 5, 70% of the study area was found to be appropriate for developing new landfill sites or expanding an existing dumpsite. A zonal statistics tool from GIS was used to rank the places by mean values based on the standard deviation (SD); the result is presented as a raster in Figure 5. Accordingly, the study area is classified on a five-point scale for its suitability for landfills, ranging from very good (dark green) to very poor (dark red). As expected, the area on the outskirts of the case study is more suitable for landfills. Specifically, the area in the southeastern suburbs is declared very good for landfill development and expansion. This area constitutes the regions of Jaranwala and Tandlian Wala towns.

#### *4.1. Average Ranked RS Indices and Vector Data*

RS indices and vector datasets of the study area are shown in Figures 6 and 7. With a mean value of 0.95, polygon 6, i.e., Tandlian Wala Town, is selected as the best location for landfill site development and expansion. The order of ranks for polygon 6 in terms of physical factors shows a Waterbodies > Population > Roads pattern. The relative ranked mean indices for this polygon are NDBI > NDWI > NDVI. This indicates that the built-up and moist areas are larger than the vegetated area. However, it must be noted that the built-up areas in this region do not imply a higher population. On the contrary, this polygon has one of the lowest populations, as made evident by Figure 3. The collective sum of the mean indices and factors for polygon 6 is 1.321.

**Figure 5.** Suitability of Landfill sites.

Polygons 7 and 8, i.e., Madina Town and Lyall Pur Town, have been ranked the worst for landfill site development, with the order of ranks for the mean values as NDVI > NDBI > NDWI and NDBI > NDVI > NDWI, respectively. In these polygons, the results suggest that the water bodies are relatively small compared to the vegetation and built-up areas. In terms of physical factors, the distribution is Water Bodies > Population > Roads. The sums of mean indices for polygons 7 and 8 are 0.37 and 0.44, respectively. It must be noted that these two polygons contain the most populated areas, as previously shown in Figure 3. Therefore, it makes sense to avoid these populated areas for landfill site development. Accordingly, the automated GIS tool shows similar ranks for these polygons, and hence they are the less preferred areas for landfill development or expansion.

**Figure 6.** Relative ranked mean of the factors.

**Figure 7.** Relative ranked mean of the Indices.

Table 2 compares all polygons and shows the data for NDVI, NDBI, area, population, roads, and water bodies. In terms of area, Jaranwala Town, with an area of 795.9 sq. km, is the largest polygon, followed by Tandlian Wala Town, whereas Summundari Town has the smallest area. In terms of NDVI, the highest vegetation is recorded for Jaranwala Town (0.52), followed by Dijkot (0.49), whereas the lowest vegetation is observed in Tandlian Wala Town (0.31). For NDWI, the highest values are reported for Jaranwala Town (0.48), followed by Dijkot (0.45), whereas the lowest value is reported for Tandlian Wala Town (0.33). In terms of NDBI, the highest built-up areas are reported in Iqbal Town (0.46), closely followed by Jaranwala Town (0.45), whereas the lowest value is reported for Tandlian Wala Town (0.34). In terms of water bodies, Jaranwala Town (0.22) has the highest value, closely followed by Chak Jhumra and Jinnah Towns (0.21), whereas Tandlian Wala Town (0.14) has the lowest value. As evident from the above discussion, the values for almost all assessment parameters are the lowest for Tandlian Wala Town, making it the best landfill development area for the city of Faisalabad.


**Table 2.** Relative ranked mean values of the indices and factors.

#### *4.2. The Standard Deviation of RS Indices and Vector Data*

After the basic comparisons of the RS indices for the nine study polygons, the SDs of the RS indices (see Figure 8) and the physical factors (see Figure 9) were calculated. The SD depicts the variance across polygons and is also used to support the mean ranked sum maps. From Figures 8 and 9, it is evident that polygon 1, i.e., Dijkot, has the least sum of the SDs of the indices, with a value of 0.27. The associated grading is NDBI > NDWI > NDVI for the RS indices and Population > Roads > Waterbodies for the physical factors.
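The zonal-statistics step (per-polygon mean and SD) reduces to grouping index pixels by polygon ID. A minimal sketch follows, assuming a hypothetical `zones` raster in which each pixel stores its Thiessen polygon number; the toy values are not the study's data.

```python
# Sketch of zonal statistics: per-zone mean and standard deviation of an
# index raster, grouped by a zone-ID raster. Toy data, not the study's.
import numpy as np

def zonal_stats(index: np.ndarray, zones: np.ndarray):
    """Return {zone_id: (mean, std)} for an index raster."""
    stats = {}
    for zone_id in np.unique(zones):
        vals = index[zones == zone_id]
        stats[int(zone_id)] = (float(vals.mean()), float(vals.std()))
    return stats

# Hypothetical zone raster (polygon IDs) and matching NDVI raster.
zones = np.array([[1, 1, 2], [1, 2, 2]])
ndvi = np.array([[0.2, 0.4, 0.6], [0.3, 0.5, 0.7]])
print(zonal_stats(ndvi, zones))
```

In the study this is performed by the GIS zonal statistics tool over nine polygons and several index rasters; the function above only illustrates the underlying grouping.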

On comparing the index values of all polygons, it was noted that polygons 7 and 8 have the highest NDBI and NDVI values. As a result, these polygons were deemed the least favorable for landfill extension based on the average SD and the physical parameters. According to the average ranking, 30% of the study area was deemed unsuitable for landfill growth, because a high SD value indicates less uniformity among the data. It is also noted that five of the nine polygons were suitable for landfill development, whereas the remaining four were deemed unsuitable. These least-favorable polygons have major water bodies and larger populations, making landfill development or expansion unfavorable.

Table 3 provides a comparison of the SDs of all polygons. In terms of NDVI, the highest SD is recorded for Lyall Pur Town (0.21), followed by Iqbal Town (0.16), whereas the lowest SD is observed for Dijkot (0.05). For NDWI, the highest SD values are again reported for Lyall Pur Town (0.16), followed by Jinnah Town (0.11), whereas the lowest value is reported for Dijkot (0.06). In terms of NDBI, the highest SD is reported for Madina Town (0.23), followed by Iqbal Town (0.18), whereas the lowest value is reported for Tandlian Wala Town (0.11). Finally, in terms of water bodies, Lyall Pur Town (0.13) has the highest SD value, closely followed by Iqbal Town (0.12), whereas Dijkot (0.05) has the lowest value.

Based on the above analyses, Figure 10 provides the holistic ranking of the study area polygons, on a scale from very good to very poor. Accordingly, Tandlian Wala Town is declared very good (the best in this case study) for landfill site development and expansion. Dijkot and Jaranwala towns are declared good (second best). Chak Jhumra and Summundari towns are declared fair. Iqbal and Jinnah towns are declared poor sites, and Madina and Lyall Pur towns are declared very poor sites for landfill development and expansion. In summary, three polygons are declared good, two average, and four bad for landfill development and expansion.

**Figure 8.** Standard deviation of the RS indices.

**Figure 9.** Standard deviation of the factors.


**Table 3.** Standard deviation values of the indices and factors.

**Figure 10.** Ranked Thiessen polygons from Zonal Statistics.

As shown in Figure 11a, Tandlian Wala consists of a majority of agricultural land with a smaller population and few water bodies. Thus, it is a very good site for landfill expansion for dealing with MSW. Further, if landfill mining is conducted for MSW in this zone, it would have fewer adverse effects on the environment and human population. Landfills in this polygon will have lower hygienic, economic, environmental, and social costs and meet the hydrological, geographical, topographical, and regulatory requirements in the case study area. Typically, landfill sites should be accessible by road to reduce the financial burden on the economies of developing countries like Pakistan [47]. Figure 11b shows that Madina and Lyallpur towns are densely populated residential areas, making them the least favorable for MSW landfills. The irrigation network, also shown in Figure 11, is denser in Tandlian Wala than in the populated regions of polygons 7 and 8. There is more vegetated and agricultural area in Tandlian Wala than in polygons 7 and 8. Therefore, landfill site development in polygons 7 and 8 would adversely affect the environment. Other relevant studies have not ranked the landfill areas and only defined favorable and unfavorable areas [48,65]. In comparison, this research ranks the entire area for the suitability of landfill sites based on RS and vector data.

**Figure 11.** Irrigation network of the best and worst area. (**a**) water bodies in Tandlian Wala Town (**b**) irrigation network in Madina and Lyallpur towns.

#### **5. Discussion**

Solid waste production rises rapidly with the increasing population [1], an issue faced all over the world. The USA, Australia, and Germany are among the largest producers of MSW [5]. Global urbanization, insufficient urban waste management, and a global shortage of resources all contribute to the growth of MSW [6].

Many studies have been conducted on selecting suitable landfill sites. For example, mathematical models have been used to choose dump locations [38]. Likewise, AHP, RS, and GIS have been used to locate waste disposal sites [40–43], and GIS combined with MCDA has been used to select a landfill for waste disposal [23]. However, no such study identifying MSW landfill sites in Pakistan has been reported.

Integrated RS and GIS have been used in this study. Faisalabad city was selected as the study area and divided into nine Thiessen polygons. These were then ranked based on four datasets: NDVI, NDWI, NDBI, and population. Among these, NDBI carries the highest weight (35%). Average ranked indices and SDs were calculated for all nine polygons, and on this basis the polygons were ranked from most to least suitable for MSW landfill site development and expansion. Polygon 6 is considered the best landfill site; for this polygon, the physical factors ranked as Water bodies > Population > Roads and the indices as NDBI > NDWI > NDVI. Due to its well-established transportation network, smaller population, larger vegetated area, and favorable surrounding environmental factors, it is the most desirable location for a landfill. In contrast, the central and northwest portions (comprising the towns of Iqbal, Jinnah, Madina, and Lyall Pur) are deemed unsuitable for landfill growth due to their dense populations and larger water bodies.
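The weighted zonal ranking described above can be sketched as a simple score per polygon. Only the 35% NDBI weight is stated in the text; the remaining weights, the rank values, and the polygon data below are illustrative assumptions, not the study's actual numbers.

```python
# Hypothetical sketch of the weighted polygon ranking. NDBI's 35% weight
# follows the text; all other weights and rank values are assumptions.

def suitability_score(indices, weights):
    """Combine per-dataset suitability ranks into one weighted score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(indices[name] * w for name, w in weights.items())

# Illustrative weights: NDBI carries the highest weight (35%), as stated.
WEIGHTS = {"NDBI": 0.35, "NDWI": 0.25, "NDVI": 0.20, "population": 0.20}

# Assumed ranks (1 = least suitable ... 5 = most suitable) per polygon.
polygons = {
    6: {"NDBI": 5, "NDWI": 4, "NDVI": 4, "population": 5},  # Tandlian Wala
    8: {"NDBI": 1, "NDWI": 2, "NDVI": 1, "population": 1},  # e.g., Lyall Pur
}

ranked = sorted(polygons,
                key=lambda p: suitability_score(polygons[p], WEIGHTS),
                reverse=True)
print(ranked)  # polygon 6 ranks above polygon 8
```

With these assumed ranks, polygon 6 scores 4.55 against 1.25 for polygon 8, mirroring the very good/very poor split reported in the paper.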

The south of the study area is the most suitable region for landfill expansion, whereas the northwest parts are the least favorable. Overall, the polygons consisting of Tandlian Wala, Dijkot, and Jaranwala towns are declared suitable and preferred for landfill site development and expansion, and the polygons comprising Chak Jhumra and Summundari towns are declared fair. Combined, these polygons constitute 70% of the total area; the remaining 30% of the study area was deemed unsuitable for MSW landfill site development and expansion.

Overall, the current study uses a combination of RS and vector data to locate and assess the best and worst landfill sites. In previous studies, only favorable and unfavorable landfill sites have been reported, but no ranking of landfill sites was conducted for developing countries [40,43,47]. For the sake of comparison, Madi and Srour [48] conducted multiple GIS analyses for landfill site management. However, the ranking using a holistic approach adopted in the current study has not been performed. Similarly, Ali and Ahmad [65] conducted GIS and AHP analyses to investigate the suitability of landfill sites in India but did not perform any rankings. In this context, the current study presents its additional novelty by conducting the first-ever landfill study for Faisalabad, Pakistan, and ranking the sites in a developing country.

The current study has both practical and research implications. Practically, town, city, and regional planners, city governance teams, environmentalists, and policymakers can use the proposed method to mark landfill sites and reduce environmental concerns, helping move towards smarter and more sustainable cities. In terms of research potential, the factors included in this study can be expanded with more indices and physical factors to enhance the proposed method. Furthermore, a similar study conducted in developed countries and compared with developing countries would yield holistic results and add further value to the body of knowledge.

#### **6. Conclusions**

Due to rapidly expanding global urbanization, the associated lack of resources, and inadequate urban waste management, MSW issues and management concerns are on the rise. Over a third of the two billion tons of municipal waste generated worldwide remains uncollected. In most developing nations, MSW is collected and disposed of at certain locations or simply burnt. Landfill sites for solid waste must be inspected against all requirements to reduce economic and environmental costs. In this research, GIS and RS were used to rank the study area, divided into Thiessen polygons, from the most to the least suitable sites for landfill expansion. Landsat-8 data were used to study landfill sites in the Faisalabad region of Pakistan. Nine Thiessen polygons were created and studied using GIS and RS techniques. The NDVI, NDWI, and NDBI indices were used for the rankings, together with physical factors, including water bodies, roads, and population, to reach a holistic ranking of landfill sites in the case study area.

In terms of the assigned weights, four datasets consisting of NDVI, NDWI, NDBI, and population have been used in this study. NDBI has the highest weight (35%) among all indices. This study calculated the average ranked means and SDs of all indices and factors and represented them graphically and numerically. Polygon 6 (Tandlian Wala) is declared the best (very good) zone for landfill site development. The physical factor ranking for this polygon followed the pattern of Waterbodies > Population > Roads. For the indices, the pattern of NDBI > NDWI > NDVI was displayed.

Tandlian Wala (located southwest of the study area) is ranked the most suitable polygon for landfill site development, expansion, and mining. It is the most preferred place for landfill growth due to its well-established transportation network, smaller population, more vegetated land, and suitable surrounding environmental features. In comparison, the middle and northwest areas (consisting of Iqbal, Jinnah, Madina, and Lyall Pur towns) are ranked least suitable for landfill expansion due to their dense populations and larger water bodies, which make conditions for landfill sites unfavorable.

In terms of limitations, the study is not all-inclusive and has room for improvement. First, the seasonal effects and long-term variations of the datasets are not considered in this study; accounting for them would yield better and more accurate results. Accordingly, future studies can map regions based on seasonal products and long-term variations of all datasets. Similarly, the method used in this study is limited to GIS and RS tools. In the future, it is suggested that advanced statistical machine-learning models be combined with the current model to improve overall accuracy.

**Author Contributions:** Conceptualization, B.A., A.M., M.D.T. and F.U.; methodology, B.A., A.M. and M.D.T.; software, B.A., A.M. and M.D.T.; validation, B.A., A.M., M.D.T., F.U., M.S.U.R. and M.A.; formal analysis, B.A., A.M. and M.D.T.; investigation, B.A., A.M. and M.D.T.; resources, B.A., A.M. and F.U.; data curation, B.A., A.M., M.D.T., F.U., M.S.U.R. and M.A.; writing—original draft preparation, B.A., A.M., M.D.T. and F.U.; writing—review and editing, F.U.; visualization, B.A., A.M. and M.D.T.; supervision, B.A., A.M., F.U., M.S.U.R. and M.A.; project administration, B.A., A.M., F.U., M.D.T., M.S.U.R. and M.A.; funding acquisition, F.U., M.S.U.R. and M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data is available with the first author and can be shared upon reasonable request.

**Acknowledgments:** The authors would like to acknowledge the support from the Office of the Associate Provost for Research, United Arab Emirates University.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Circular Economy in the Construction Industry: A Step towards Sustainable Development**

**Maria Ghufran 1, Khurram Iqbal Ahmad Khan 1,\*, Fahim Ullah 2, Abdur Rehman Nasir 1, Ahmad Aziz Al Alahmadi 3, Ali Nasser Alzaed 4 and Mamdooh Alwetaishi 5**


**Abstract:** Construction is a resource-intensive industry where a circular economy (CE) is essential to minimize global impacts and conserve natural resources. A CE achieves long-term sustainability by enabling materials to circulate along the critical supply chains. Accordingly, recent research has proposed a paradigm shift towards CE-based sustainability. However, uncertainties caused by fluctuating raw material prices, scarce materials, increasing demand, consumers' expectations, lack of proper waste infrastructure, and the use of wrong recycling technologies all lead to complexities in the construction industry (CI). This research paper aims to determine the enablers of a CE for sustainable development in the CI. The system dynamics (SD) approach is utilized for modeling and simulation purposes to address the associated process complexity. First, using content analysis of pertinent literature, ten enablers of a CE for sustainable development in CI were identified. Then, causality among these enablers was identified via interviews and questionnaire surveys, leading to the development of the causal loop diagram (CLD) using systems thinking. The CLD for the 10 shortlisted enablers shows five reinforcing loops and one balancing loop. Furthermore, the CLD was used to develop an SD model with two stocks: "Organizational Incentive Schemes" and "Policy Support." An additional stock ("Sustainable Development") was created to determine the combined effect of all stocks. The model was simulated for five years. The findings show that policy support and organizational incentive schemes, among other enablers, are critical in implementing a CE for sustainable development in CI. The outcomes of this study can help CI practitioners to implement a CE in a way that drives innovation, boosts economic growth, and improves competitiveness.

**Keywords:** causal loop diagram; circular economy; construction industry; sustainable development; system dynamics

#### **1. Introduction**

The construction industry (CI) is the world's largest user of natural resources. Traditionally, the CI has utilized, and continues to utilize, a non-sustainable, linear economic model based on the "take, make, dispose" concept [1]. The linear approach does not allow constructed facilities to be dismantled and reused; they therefore become obsolete at the end of their useful life [2]. However, this must change in an era focused on sustainability and global greening initiatives.

**Citation:** Ghufran, M.; Khan, K.I.A.; Ullah, F.; Nasir, A.R.; Al Alahmadi, A.A.; Alzaed, A.N.; Alwetaishi, M. Circular Economy in the Construction Industry: A Step towards Sustainable Development. *Buildings* **2022**, *12*, 1004. https:// doi.org/10.3390/buildings12071004

Academic Editor: Cinzia Buratti

Received: 20 June 2022 Accepted: 11 July 2022 Published: 13 July 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The Circular Economy (CE), which has captured the interest of researchers and practitioners over the last decade, stands in contrast to this ineffective and unsustainable linear economic paradigm [3]. Regardless of the various schools of thought and definitions, the CE aims to keep resources flowing at their best value within system boundaries, ensuring that no new natural resources are needed to manufacture materials and that waste is minimized [3]. Aside from resource circularity in closed-loop systems, the CE focuses on better resource management by rethinking and reducing unnecessary consumption; product fragmentation, intensification of product use, and increased production efficiency are examples of projects incorporating this approach [3]. The CE's foundation is built on better resource management through lower consumption and on replacing the "end-of-life" concept with the reuse, recycling, and recovery of materials and components [4].

The CI faces many problems, including a linear economy, the absence of financial assistance, lack of proper technology, non-supportive infrastructure, and lack of political will towards sustainable development [3]. Uncertainties caused by fluctuating raw material prices, shortages of materials, increasing demand, urbanization, climate change, absence of proper waste management infrastructure, and use of the wrong recycling technologies all lead to complexities in the CI [5]. There is controversy about what causes these problems and how to address them, and such complexity is beyond the scope of any single organization to comprehend and respond to [6]. Implementing CE principles in the CI will lower industry costs; reduce negative environmental effects; make urban areas more livable, productive, and convenient; and help deal with these process complexities [7]. Although the notion of the CE has gained traction in academia, business, and government, its widespread implementation remains limited [8] and is in its nascency in the CI. Regardless of its implementation status, the CE has emerged as one of the most crucial contemporary approaches to addressing sustainability [9]. Accordingly, as in its industrial counterparts, the CE's implementation in the CI must be investigated and facilitated to attain global sustainability targets.

According to Kirchherr et al. [10], the CE aspires to contribute to sustainable development. Suárez-Eiroa et al. [11] claim that the CE operates within sustainable development. However, there has been little research on utilizing CE principles in the built environment holistically and systematically. For instance, Geissdoerfer et al. [12] proposed a framework indicating that the CE assists in realizing sustainability ambitions. Ritzén and Sandström [13] identified barriers to a CE for sustainable development in the CI and categorized them into financial, structural, operational, and technological categories. Schöggl et al. [14] concluded that the most commonly used R-strategies are recycling, remanufacturing, repair, and reuse. Walker et al. [15] provided important strategies for how organizations might use CE practices to attain sustainability.

Sparrevik et al. [16] concluded that assessing environmental performance for the CE in the CI is a recursive structure for its successful implementation. Moreover, recent research has advocated an additional paradigm shift to a sustainability-based CE [17–19]. Bilal et al. [20] proposed that further research on implementing the CE in the CI of developing countries is needed. Accordingly, further empirical research is necessary to address the complexities of implementing CE principles for the sustainable development of the CI [21,22]. In addition, there is a need to quantitatively address the effects of the CE on sustainable development [12,23]. From the perspective of the CI, studies addressing the complexity of implementing the CE for sustainable development are limited [23], which is predominantly evident in developing countries. This presents a larger research gap, requiring a holistic study focused on developing countries to address the complexity of CE implementation in the CI for sustainable development. The current study targets this gap and quantitatively addresses the complexity of implementing a CE for sustainable development in developing countries.

The United Nations' (UN) 17 Sustainable Development Goals (SDGs) aim to make the world a better place for people and the environment by 2030 [24]. By adopting the SDGs in 2015, 193 countries agreed to address the world's most pressing issues [25]. The CE contributes to the achievement of the UN's SDGs [26]. This potential manifests in the many ways in which individuals and corporations engage while enabling the CE. The CE can start with the basics and have a large impact, opening new avenues for collaboration to maintain and develop value through revitalized buildings, meaningful occupations, and enhanced mobility [27]. The SDGs provide a new prism through which global needs and objectives can be transformed into economic alternatives for the CI. The 17 SDGs can be divided into five categories, the 5Ps: people, planet, prosperity, peace, and partnership [28]. These 5Ps necessitate collaboration between several players, including governments, institutions, and corporations. Because the CI is a crucial stakeholder, it is critical that it establishes methods for aligning its business strategies with the SDGs. Since the introduction and approval of the UN 2030 agenda, academics and practitioners from various fields have been working hard to research possible SDG implementation techniques [26]. The CI should be no exception, and uplift through the lens of the UN SDGs is needed while striving to enable the CE in the CI.

Based on the above discussion, construction is inferred to be a resource-intensive industry whose existing economic practice may not be sustainable, and the arising uncertainties in the CI lead to complexities. Researchers exploring CE implementation in developing countries have paid little attention to the SDGs [29,30]. Although studies have provided various implementation strategies, specific research addressing the complexity of implementing the CE for the sustainable development of the CI is minimal, particularly in developing countries [31]. This presents the critical gap targeted in the current study. Accordingly, this study aims to address the complexity arising in implementing a CE for the sustainable development of the CI, with a focus on developing countries. The SD approach, widely used to deal with such complexities, is utilized [32,33]. The current study has the following objectives:


This study uses the systems thinking (ST) approach to determine the causality among key CE enablers. Furthermore, SD is used for model development and simulations. ST is a holistic method focused on the establishment and dynamic interrelationship of constituent parts of a system to address the inherent complexity [32]. SD is a methodology based on ST that addresses process complexity using simulation techniques [33,34]. The results of this study will help CI practitioners implement the CE in a way that drives innovation, boosts economic growth, and improves competitiveness. By implementing CE principles in CI, industry costs would be lowered, negative environmental impacts reduced, inherent complexities tackled, and resilience of urban areas enhanced to make them more livable, productive, and convenient.

The remainder of this paper is structured as follows. Section 2 presents an overview of the existing literature on the CE and sustainable development. Section 3 discusses the study's methodology, and Section 4 provides the findings and discussion. Finally, Section 5 concludes with key takeaways, limitations, and future directions.

#### **2. Literature Review**

#### *2.1. Circular Economy (CE)*

The CE is defined as a system designed to be restorative and regenerative to decouple economic growth from resource utilization [7]. The concept of a CE was first proposed in the twentieth century to emphasize the development of the ecological industry [35]. The CE is considered a solution to diminish the dependency on resource extraction. It is a condition for preserving the current way of life by sustaining the value of resources and keeping them in circulation [36,37]. It is a business strategy that eliminates the "end of life" concept from the production, distribution, and consumption stages [38].

The CE focuses on minimizing waste, reuse, recycling, and recovery of resources. It is a novel approach to achieving sustainable development [39]. CE uses the product's end-of-life as a substantial economic material resource [40]. It encourages disruptive innovations in product-service systems, social and eco-innovations, efficient usage of resources, and sustainable consumption [41]. The transition from a linear economy (LE) to a CE is a real challenge. However, achieving this will help accomplish the long-term sustainability goals [42]. The CE is thought to have aided in transforming the conventional LE into a closed substance economy, which is required and beneficial for developing a sustainable society [12].

CE principles are becoming increasingly popular as a practical option for accomplishing sustainable goals [21,43]. New tools are needed to help practitioners, decision-makers, and governments embrace more CE practices and reap the holistic benefits. Kirchherr et al. [10] discovered that, of 114 descriptions of the CE, 35–40% deal with the waste hierarchy's reduce, reuse, recycle (3R) framework. However, the CE concept does not always address all aspects of sustainability. Specifically, it is silent on the social aspect of sustainability, an emerging criticism of its association with sustainability. Nevertheless, academics, industry leaders, and politicians are interested in exploring the benefits of implementing the CE paradigm to improve the economic system's sustainability [3]. From this, it can be concluded that there is a crucial need to implement the CE for sustainable development in the CI—a key contributor to global economies.

#### *2.2. CE Concept for Sustainable Development in the CI*

The CE concept helps foster sustainable development [16]. The primary issue in sustainable growth with LE is that it pursues continual economic expansion at the cost of environmental degradation, with no clear grasp of whether this enhances social fairness [44]. As a result, global sustainability concerns are on the rise. If the CE is to serve as a model for sustainable development, it must address these concerns. Accordingly, environmental constraints, social equity, and economic prosperity should all be addressed in the CE criteria [45].

According to Xu et al. [28], the CE is the outcome of almost a decade of global economies' endeavors to promote sustainable development. According to Yaduvanshi et al. [46], the CE is widely recognized as a tool for achieving sustainable development. However, it is unclear how a CE concept that excludes a key facet of sustainable development (social consideration) may lead to a model that can be termed sustainable [10]. First, despite its promotion as a concept for sustainable development, it is unclear if the CE can promote economic growth while preserving the natural environment and increasing social fairness for future generations [47,48]. Second, as Merli et al. [49] pointed out, the lack of research on how the CE addresses social wellbeing indicates that this should be a top priority. Third, understanding how the CE will improve social fairness and developing new indicators to effectively evaluate these changes would necessitate a collaborative effort across disciplines. In this context, Mongsawad [50] concluded that greater emphasis must be paid to establishing innovative strategies for changing production and consumption patterns that allow the CE to reduce its reliance on virgin resources while increasing the consumption of secondary materials.

CE principles are incredibly relevant to the CI, with the building sector being a major worldwide consumer of commodities such as energy, materials, and resources [51]. Owing to its high resource intensity, the CI draws the most attention throughout the CE transition [19]. Although the CI will face numerous hurdles in a holistic implementation of a CE, these adjustments are both achievable and essential [17]. A CE in the CI is considered one of the main priorities for national economic development, making its investigation critical and much needed [11].

The CE concept is becoming a well-known solution to some of the world's most urgent crosscutting sustainable development issues [16]. The CE offers an excellent platform for economic growth. It helps to create new, more sustainable jobs while reducing reliance on nonrenewable resources and the production of negative externalities [52]. However, a collective commitment from society is needed for widescale adoption. The CE transfers growth and opportunity from current consumption patterns to a continuous and long-term system [53].

#### *2.3. CE and United Nations' (UN's) Sustainable Development Goals (SDGs)*

The CE aims to achieve innovation in industries and infrastructure, enable economic growth and sustainable cities and communities, tackle climate change, and reduce the harmful end-of-life impacts of materials, all of which are priority areas of the United Nations Sustainable Development Goals (UNSDGs) [54]. Following the Millennium Development Goals (MDGs), the UN set these goals in 2015 to protect the planet, eradicate poverty, and ensure that all people enjoy peace and prosperity by 2030.

The United Nations set 169 targets in 2015 to track progress toward the 17 SDGs [55]. These targets were developed through international and interdisciplinary collaboration, allowing countries to develop their own context-specific tactics. Schroeder et al. [21] concluded that the CE could directly contribute to the achievement of several SDGs, including SDG 6 (clean water and sanitation), SDG 7 (affordable and clean energy), SDG 8 (decent work and economic growth), SDG 12 (responsible consumption and production), and SDG 15 (life on land). According to Howden-Chapman et al. [56], the CE can assist in achieving SDG 6 (universal access to water and sanitation) and SDG 11 (inclusive, safe, resilient, and sustainable cities) by improving housing conditions in informal settlements. SDG 8 (inclusive and sustainable economic growth, employment, and decent work) and SDG 9 (resilient infrastructure, sustainable industry, and innovation) provide opportunities to apply CE solutions, such as improving working conditions in unorganized sectors processing secondary resources or establishing industrial symbiosis networks for resource-efficient industrial development, as in the case of the CI [56,57].

#### *2.4. Implementation Complexities of the CE in the CI*

Uncertainties caused by fluctuating raw material prices, shortage of materials, increasing demand, urbanization, climate change, absence of proper waste infrastructure, and use of wrong recycling technologies all lead to complexities in the CI [33]. Industries perceive the CE as a method to integrate economic, societal, and environmental interests by transforming linear economies (LEs) into circular economies (CEs) to obtain the best product value [58]. Unfortunately, this integration comes at the cost of increased complexity in various ways [59].

The CI's current complexity management techniques focus on reducing complexity locally rather than acknowledging the importance of complexity for viable CE networks [60,61]. Complexity refers to elements that can be influenced directly as a control variable within the system's bounds and reach and items that cannot be controlled [62]. Therefore, the CE focuses on improving systems rather than components [63]. This system perspective and the reality that each player must have this perspective and understanding to contribute effectively to a CE opens new possibilities for collaboration but at the cost of increased complexity [33,64].

Circularity presents a whole new set of opportunities for creating and capturing value, but it comes at the cost of close interconnectedness between stakeholders and systems, increasing management and process complexity [65]. Moreover, the lack of adequate modeling tools makes modeling the CE challenging for researchers and practitioners [66]; for example, no key modeling techniques are established for designing a CE. Regardless, the CE is thought to be one of the finest alternative instruments for resolving the conflict between economic growth and society's long-term development by designing a restorative and regenerative economy [67].

Complexity is beyond the comprehension and response capabilities of any single organization [68], and there is frequently dispute regarding what causes the issues and how to solve them. Employing CE concepts in the CI can cut industrial costs; eliminate negative environmental consequences; make metropolitan areas more livable, productive, and convenient; and assist in dealing with process complications [69]. These complexity issues are addressed in this study using the SD approach, which is widely used to model the behavior of complex systems [70].

#### *2.5. The SD Approach for Handling CE Implementation Complexities in the CI*

The SD methodology was established based on the system's feedback linkages. It is a method of researching and solving complex problems employing computers, focusing on policy analysis and design [34]. An SD technique is employed for CE implementation to link the system variables, which is complicated and may change over time [71]. Models of SD can be used to simulate various situations that dynamically capture complex relationships [72]. It is a useful method for evaluating a complex system in its entirety [34], based on an iterative modeling process [73].

A causal loop diagram (CLD) is created to determine the relationships between variables and to identify the balancing and reinforcing feedback loops in the system [74]. SD models are developed to model and simulate the variables in CLDs. All SD models are composed of three types of variables: stock, flow, and auxiliary. Flows are of two types, physical/material and information, which can interact and respond to one another [75]. Variables, stocks, and flows are the essential elements of a stock–flow diagram, in which the feedback loops from the CLD play a crucial role in the simulation of the model [76]. The SD approach is notable for tracking and interpreting a given system through time, incorporating various ideas, philosophies, and techniques that assist in framing and understanding the management system's behavior [33,77].
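The stock–flow mechanics described above can be sketched numerically: each stock accumulates its net flow over time (here via simple Euler integration). The stock names follow the abstract of this paper; the rate constants, initial values, and time step are purely illustrative assumptions, not the authors' calibrated model.

```python
# Minimal stock-flow sketch of an SD model with the paper's stock names.
# All rates, initial values, and the time step are illustrative assumptions.

def simulate(years=5, dt=0.25, policy_rate=0.3, incentive_rate=0.2):
    """Euler-integrate two reinforcing driver stocks feeding a third stock."""
    policy_support = 1.0           # stock: "Policy Support"
    incentive_schemes = 1.0        # stock: "Organizational Incentive Schemes"
    sustainable_development = 0.0  # stock: combined effect
    for _ in range(int(years / dt)):
        # Flows: reinforcing (exponential-style) growth in each driver...
        policy_support += policy_rate * policy_support * dt
        incentive_schemes += incentive_rate * incentive_schemes * dt
        # ...feeding the combined sustainable-development stock.
        sustainable_development += (policy_support + incentive_schemes) * dt
    return sustainable_development

print(simulate())  # grows monotonically over the 5-year horizon
```

Dedicated SD tools (e.g., Vensim or Stella) perform the same accumulation of flows into stocks; the loop above simply makes the integration explicit.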

In a previous study, the SD model was developed to study the causes of design productivity loss [78]. To maximize employee productivity, the SD model was used to model cost components that needed to be controlled and assess the changes in the supply chain of subcontractors and supervisors. In addition, the SD approach is used to describe the SC selection process, which is based on a complex interconnected structure of several elements that influence the SCs' work quality [79]. The CE is an economic concept that proposes ingenious ways to transform the current linear consumption system into a circular one. SD can help model nascent strategies such as the CE to achieve sustainable development [80].

#### **3. Research Methodology**

To achieve the objectives set in the current study, this research was divided into four stages, as demonstrated in Figure 1. In Stage I, the initial scrutiny of the literature was conducted. After scrutinizing the literature, the research gap was identified, which led to the development of the problem statement. Research objectives were formulated based on the problem statement as presented in the introduction section of the current study.

#### **Figure 1.** Research Methodology Flowchart.

Detailed scrutiny of the literature was performed in Stage II to identify crucial enablers of the CE for sustainable development in the CI. Following recent studies, Science Direct, Scopus, Web of Science, and IEEE Xplore were the four major databases chosen for paper collection [81,82]. Inclusion and exclusion criteria for the relevant studies were used to ensure that the literature evaluation was complete and comprehensive. A total of 31 enablers of the CE in the CI were shortlisted from 35 research articles published from 2010 onwards. These enablers were ranked based on a normalized score from content analysis following the relevant studies. First, the influence of each enabler was rated as high, medium, or low through a careful literature review. Each enabler was then assigned a number (1 for low, 3 for medium, and 5 for high) following [68,83]. The literature score (LS) was calculated using the Relative Importance Index (RII) shown in Equation (1), where W represents the highest frequency, A is the maximum possible score, and N is the number of papers considered for detailed review. The LS was then normalized to obtain the normalized literature score (NLS) by dividing each enabler's LS by the sum of all LSs, as shown in Equation (2). The identified enablers, references, and NLSs are shown in Table 1.

A preliminary survey was also conducted to identify the key enablers from the practitioners' perspective and to shortlist the enablers. The literature score from content analysis and the field score from this survey were combined in a 60/40 ratio to rank the enablers, following [82,84]. The respondents were asked to score each enabler's impact on a scale of 1 to 5, where 1 indicates low, 3 medium, and 5 high impact. This survey obtained 30 responses from developing countries, including Turkey, Iran, Morocco, the UAE, South Africa, and Pakistan, which resulted in a shortlist of 10 enablers.

$$\text{RII} = \frac{\sum W}{A \times N} \tag{1}$$

$$\text{NLS} = \frac{\text{LS}}{\sum \text{LS}} \tag{2}$$
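The scoring pipeline described above can be sketched end to end in Python: per-paper influence codes give the LS via the RII of Equation (1), the LS is normalized to the NLS via Equation (2), and the NLS is combined with a field score in the 60/40 ratio described above. All enabler names and numbers below are hypothetical.

```python
# Sketch of the enabler scoring pipeline (hypothetical data throughout).

A = 5  # maximum possible score per paper (codes are 1 = low, 3 = medium, 5 = high)

def rii(scores):
    return sum(scores) / (A * len(scores))  # Eq. (1): RII = sum(W) / (A * N)

paper_scores = {  # hypothetical influence codes across four reviewed papers each
    "policy support": [5, 5, 3, 5],
    "strict regulations": [3, 3, 5, 1],
    "material circularity": [1, 3, 3, 3],
}
ls = {k: rii(v) for k, v in paper_scores.items()}
total = sum(ls.values())
nls = {k: v / total for k, v in ls.items()}  # Eq. (2): NLS = LS / sum(LS)

field = {"policy support": 0.50, "strict regulations": 0.20,
         "material circularity": 0.30}  # hypothetical normalized field scores
combined = {k: 0.6 * nls[k] + 0.4 * field[k] for k in nls}  # 60/40 weighting
ranked = sorted(combined, key=combined.get, reverse=True)
```

The same weighted-sum structure scales directly to the 31 candidate enablers and 35 papers of the actual review.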


**Table 1.** Enabler's identification via Literature Review.

In Stage III, a comprehensive survey was conducted in which respondents from the developing economies were asked about the existence of interrelationships and polarity among the shortlisted enablers. Data were collected using LinkedIn®, Facebook®, Gmail®, and ResearchGate®. The questionnaire was sent to 200 respondents, and 108 valid responses were obtained, a response rate of 54%. The consistency and reliability of the data were assessed using Cronbach's coefficient alpha, for which the threshold value is 0.7; any value above 0.7 indicates reliable data [108]. Moreover, the RII score values were less than 1, supporting the validity of the data [109].

The Cronbach's alpha value for the data collected was 0.91, suggesting that the data are reliable and consistent. The survey yielded 14 relationships among the enablers, which led to the development of the influence matrix, as illustrated in Section 4.1.
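Cronbach's alpha can be computed directly from the response matrix. The sketch below uses hypothetical 5-point Likert responses (rows = respondents, columns = items) rather than the study's data.

```python
# Sketch of Cronbach's alpha on a small hypothetical response matrix,
# using population variance throughout.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    k = len(responses[0])                        # number of survey items
    items = list(zip(*responses))                # column-wise item scores
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical Likert responses from four respondents on three items:
data = [
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [3, 3, 3],
]
alpha = cronbach_alpha(data)  # values above 0.7 indicate reliable data
```

With consistent responses like these, alpha lands well above the 0.7 threshold, mirroring the 0.91 reported for the actual survey.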

Stella Professional, AnyLogic, Vensim® PLE, and iThink are some of the software packages used to design CLDs and associated SD models. This research utilized Vensim® PLE for CLD and model development based upon the shortlisted enablers' interrelationships, as Vensim® is the most powerful package in terms of computing speed, capabilities, and flexibility [110]. A total of 10 enablers with 14 relationships helped develop the CLD.

Based on the 14 relationships, 15 industry experts were contacted in Stage IV. These experts were CI professionals from developing countries, including Turkey, Iran, India, South Africa, and Pakistan, with relevant experience in the CE domain; each had more than 20 years of experience in the CI. Their input helped quantify the impact of the shortlisted relationships, which further assisted in determining the values of the equations of the SD model. The CLD was converted into the SD model, which was further validated using expert opinion.

The demographic details of the respondents of the detailed survey (Stage III) are shown in Table 2. The respondents include 17 (16%) construction managers, 14 (13%) assistant managers, 10 (9%) project directors, 23 (21%) project managers, 14 (13%) architects, 18 (17%) planning engineers, and 12 (11%) academics. In terms of experience, 2 (1%) had 1 year of experience, 25 (24%) had 2–5 years, 29 (29%) had 5–10 years, 16 (15%) had 11–15 years, 16 (15%) had 16–20 years, and 17 (16%) had more than 20 years of experience.


**Table 2.** Demographic details of respondents.

In relation to educational qualification, 22 (20%) of the respondents were graduate degree holders, 52 (48%) were post-graduate degree holders, and 34 (32%) were Ph.D. degree holders. Organization-wise, 38 (35%) were from government organizations, 16 (15%) from semi-government organizations, and 54 (50%) from private organizations. Respondents were also asked, through a question in the questionnaire, about their understanding of the topic: 5 (4%) had a slight understanding, 23 (21%) moderate, 32 (31%) high, and 48 (44%) exceptional.

Due to the lack of research on developing economies, these countries were identified following [82,111]. The geographical distribution of the respondents is shown in Figure 2. Most respondents were from Pakistan, South Africa, Malaysia, Turkey, Vietnam, Nepal, Uganda, Nigeria, the UAE, Brazil, and India.

**Figure 2.** Geographic distribution of respondents.

#### **4. Results and Discussions**

A total of 10 enablers of the CE for sustainable development in the CI were shortlisted from the preliminary survey. The description of each enabler is given in Table 3.



#### *4.1. Causal Loop Diagram (CLD)*

Figure 3 represents the influence matrix, which shows the influence and polarities among the impacting and impacted enablers. The *x*-axis shows the impacted variable, whereas the *y*-axis represents the impacting variable. A value of +1 in the matrix shows a positive (direct) relationship, and a value of −1 indicates a negative (indirect) relationship between the enablers, following [68,84]. A total of 10 enablers with 14 relationships helped to develop the CLD.

**Figure 3.** Influence Matrix.
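An influence matrix of this kind can be represented as a simple mapping from (impacting, impacted) pairs to polarity. The four entries below are illustrative and do not reproduce the paper's full 14-relationship matrix.

```python
# Sketch of an influence matrix: +1 for a direct (positive) relationship,
# -1 for an indirect (negative) one. Entries are illustrative examples only.

influence = {
    ("policy support", "strict regulations"): +1,
    ("strict regulations", "political priority"): -1,
    ("political priority", "awareness programs"): +1,
    ("awareness programs", "government financial support"): -1,
}

# Flattening into (impacting, impacted, polarity) triples for CLD construction:
relationships = [(src, dst, pol) for (src, dst), pol in influence.items()]
```

Each triple corresponds to one causal arrow in the CLD, with the polarity becoming the arrow's sign.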

Vensim® was used to develop the CLD, which was constructed on the expert opinion of professionals with over 20 years of experience. The CLD comprises six loops, i.e., five reinforcing loops and one balancing loop. The description of each loop is given below, and the consolidated diagram is shown in Figure 4.

**Figure 4.** Causal Loop Diagram.

4.1.1. Balancing Loop B1—Regulatory Performance

The balancing loop (B1) shows that increased strict regulations can lead to decreased political priority, which in turn results in decreased awareness through workshops and education programs, as illustrated in Figure 5. This loop predicts that if strict regulations are implemented, some political parties will not take an interest in their enforcement, due to which awareness, a preliminary step towards the implementation of the CE, would also be decreased. This results in reduced performance of the CI due to the strict implementation of CE principles on one hand and political influence on the other in developing countries.

**Figure 5.** Balancing Loop B1.

#### 4.1.2. Reinforcing Loop R1—Political Intervention

The reinforcing loop (R1) shows that an increase in policy support increases the implementation of strict regulations, leading to a decrease in political priority. This, in turn, decreases awareness through workshops and education programs, which leads to the need for increased government financial support, as shown in Figure 6. Hence, this loop shows that if strict regulations are implemented, some political parties will not take an interest in their enforcement, due to which awareness would also be decreased. Strict rules are critical in the implementation and enforcement of policies, and a strict regulatory framework can help direct sustainable development to the appropriate level. Due to this decrease in awareness, increased government financial support would be required, further leading to an increase in policy support that enhances interest in the CE paradigm [112,113]. R1 presents how CE implementation can foster the CI in developing countries through support from the government (financial) and political parties via policies, strict regulations, and awareness programs.

**Figure 6.** Reinforcing Loop R1.

#### 4.1.3. Reinforcing Loop R2—Social Performance

The reinforcing loop (R2) shows that an increase in government financial support can lead to an increase in policy support, which helps to achieve organizational incentive schemes, as shown in Figure 7. The adoption of sustainable practices is sometimes accomplished by rewarding it through incentive schemes. Hence, this loop shows that strong financial support can lead to policymaking along with many organizational incentive schemes, which will ultimately help enforce the CE policies at the best level [114]. This increases social performance, leading to sustainable development in the CI of developing countries via the inclusion of organizational incentive schemes and support from the government and policies.

**Figure 7.** Reinforcing loop R2.

4.1.4. Reinforcing Loop R3—Economic Performance

The reinforcing loop (R3) shows that an increase in policy support can increase organizational incentive schemes, leading to an increase in innovative and smart technology adoption. Such adoption will ultimately reinforce the policy support, as illustrated in Figure 8. Hence, this loop predicts that if policy support is offered, it can lead to the availability of organizational incentive schemes for the CE's successful implementation. Due to the availability of such incentive schemes, the adoption of innovative and smart technologies would be enabled, resulting in stronger policy support [113,115]. R3 shows an increase in the economic performance of CI in developing countries due to the use of smart technologies, organizational incentive schemes, and policy support.

**Figure 8.** Reinforcing loop R3.

4.1.5. Reinforcing Loop R4—Environmental Performance

The reinforcing loop (R4) shows that an increase in the adoption of innovative and smart technologies can lead to material circularity, due to which there will be an increase in the product life cycle. This increase leads to greater resource durability, in turn leading to a further increase in organizational incentive schemes, as presented in Figure 9. Hence, this loop shows that the use of innovative and smart technologies increases material circularity, leading to an increase in the product life cycle. The increase in the product's life cycle shows that resource durability will increase; in return, more organizational incentive schemes will be provided for the successful implementation of the CE. Product life cycle extension aims to depart from create–use–dispose toward create–use–reuse. This persistent cycle immediately impacts society, ecology, economics, and the environment [116]. R4 presents how environmental performance is enhanced due to product lifecycle extension, resource durability, and material circularity, leading to sustainable development in developing countries.

**Figure 9.** Reinforcing loop R4.

4.1.6. Reinforcing Loop R5—Resource Management

The reinforcing loop (R5) shows that an increase in material circularity will lead to an increase in the product life cycle's extension, leading to increased resource durability. Such increased durability will further increase material circularity, as illustrated in Figure 10. Hence, this loop predicts that if the materials remain in the loop (i.e., are more circulated and reused), an increase in the product's life cycle would be observed, leading to the efficient utilization and durability of the resources [117]. R5 depicts that resource durability, product lifecycle extension, and material circularity lead toward sustainable development of the CI in developing countries.

**Figure 10.** Reinforcing loop R5.

#### *4.2. Loop Analysis*

The magnitude and speed of influence on system outputs serve as a thorough criterion for loop classification. Table 4 summarizes the results for each feedback loop in the developed CLD. It predicts the speed, strength, and nature of the influence of the loop. The five reinforcing loops, R1, R2, R3, R4, and R5, strongly influence the system with a low speed. This indicates that these loops hold great potential due to their critical nature but will take time to occur and will be long-lasting.


**Table 4.** Overall loop analysis results.

On the contrary, B1 is fast, having a balancing effect. This classification serves as a screening mechanism, making it easier to prioritize action points. Consider the loop R3, which is reinforcing in nature: all three variables involved continually support each other. Reinforcing loops have a resonant effect that lasts for a long period, whereas balancing loops have a fading impact that lasts for a short time. The CLD's validity was qualitatively assured and verified through expert opinion. The experts were asked to describe each loop's speed, strength, and nature of influence. The results, as previously mentioned, are shown in Table 4.
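The reinforcing/balancing distinction can also be checked mechanically: the product of the edge polarities around a loop is positive for a reinforcing loop and negative for a balancing one. The edge signs below are illustrative assumptions for R1 and B1, not read off the published diagrams.

```python
# Sketch of loop classification by polarity: a loop is reinforcing if the
# product of its edge polarities is positive, balancing if negative.

from math import prod

def classify_loop(edge_polarities):
    return "reinforcing" if prod(edge_polarities) > 0 else "balancing"

# R1 (political intervention): an even number of negative links cancels out.
r1 = [+1, -1, +1, -1, +1]
# B1 (regulatory performance): an odd number of negative links remains.
b1 = [+1, -1, +1]

print(classify_loop(r1))  # reinforcing
print(classify_loop(b1))  # balancing
```

This sign rule is why adding or removing a single negative link flips a loop between the two categories.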

#### *4.3. System Dynamics Model and Simulations*

Vensim® was used to create the SD model from the CLD, as illustrated in Figure 11. The model is made up of three stocks, "Organizational Incentive Schemes," "Policy Support," and "Sustainable Development," all of which are influenced by inflows and outflows. Organizational incentive schemes and policy support were chosen as stocks because they showed accumulation and were the two enablers with the highest number of interrelationships with the other enablers. As a result, they display the cumulative effect of the variables related to them, affecting sustainable development, a third stock established to reflect the holistic influence of the system. The information gathered in the final survey also aided in formulating the model's equations, given in Equations (3)–(5):

$$\text{Organizational Incentive Schemes} = (\text{outflow of policy support} \times 1) + (\text{resource durability} \times 1) + \text{Organizational Incentive Schemes} \tag{3}$$

$$\text{Policy Support} = (\text{government financial support} \times 0.076) + (\text{innovative and smart technologies} \times 0.067) + \text{Policy Support} \tag{4}$$

$$\text{Sustainable Development} = (\text{outflow of organizational incentive schemes} \times 1) + (\text{outflow of policy support} \times 1) + \text{Sustainable Development} \tag{5}$$
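Under the simplifying assumptions of constant unit auxiliary inputs and the incentive stock serving as its own outflow, the difference-equation form of Equations (3)–(5) can be sketched as follows. The coefficients 0.076 and 0.067 are the survey-derived values from Equation (4); everything else is a placeholder.

```python
# Hedged sketch of the three-stock simulation of Equations (3)-(5); the
# published model runs in Vensim with expert-derived inputs, whereas the
# auxiliary inputs here are fixed at 1.0 for illustration.

def simulate(years=5, gov_support=1.0, smart_tech=1.0, resource_durability=1.0):
    incentives = policy = sustainable = 0.0
    history = []
    for _ in range(years):
        policy_outflow = policy  # prior-period policy support feeds the other stocks
        incentives += policy_outflow * 1 + resource_durability * 1     # Eq. (3)
        policy += gov_support * 0.076 + smart_tech * 0.067             # Eq. (4)
        # Simplifying assumption: the incentives stock stands in for its outflow.
        sustainable += incentives * 1 + policy_outflow * 1             # Eq. (5)
        history.append((incentives, policy, sustainable))
    return history

history = simulate()  # five yearly snapshots of the three stocks
```

Even with these placeholder inputs, all three stocks grow monotonically, matching the qualitative shape of the simulation graphs discussed below.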

The three stocks were simulated separately over 5 years following [68,118]. A five-year horizon was selected to give the implementation enough time to take effect: the CE is a new concept, and this study focuses on developing economies, where implementation will take additional time. The simulation graph is shown in Figure 12, where the *x*-axis represents time. The graph shows that organizational incentive schemes increased gradually and linearly over the 5 years due to various endogenous variables. This was due to the reinforcing loops in which variables such as innovative and smart technologies, material circularity, resource durability, and product life cycle extension positively complemented the organizational incentive schemes. This predicts that reinforcing loops R1, R2, R3, and R4 create social, economic, and environmental performance, along with political intervention, leading to the sustainable development of the CI in developing countries.

**Figure 11.** System Dynamics Model.

The increase in the simulation graph's curve with time in Figure 13 depicts the influence of numerous endogenous variables on policy support. This was due to the reinforcing loops R2, R3, and R4, in which variables such as government financial support, political priority, strict regulations, and awareness through workshops and education programs reinforce each other. The three loops affecting this stock show an increase in the social, economic, and environmental performance of CI, leading to sustainable development due to the enablers of CE in developing countries.

**Figure 12.** Simulation graph for Organizational Incentive Schemes.

The simulation graph in Figure 14 signifies that sustainable development gradually increases over five years due to the increase in organizational incentive schemes and policy support.

The overall simulation results predict that due to the increase in organizational incentive schemes, there would be an increase in policy support, ultimately leading to the sustainable development of the CI in developing countries, as illustrated by the simulation graph in Figure 15. This shows that there is a need to incorporate various enablers, especially organizational incentive schemes and policy support, to achieve sustainable development in the CI. The model therefore indicates that sustainable development will increase over time. The reinforcing loops contribute to the sustainable development of the CI, incorporating the enablers of a CE in developing countries. As a result, the social, environmental, and economic performance of the CI is improved, as illustrated by the graphs.

This will lead the CI to more sustainable choices. As a result, the overall performance of the CI will be improved over time.

#### **Figure 15.** Combined Simulation graph.

#### *4.4. Model Validation*

The confidence in adopting an SD model to help evaluate a specific problem should not be predicated on the model's capacity to tackle the other problems. In this aspect, the model's validity is determined by its intended use. The purpose of the established SD model, as stated above, is to assist in the resolution of complications in implementing a CE for sustainable development in the CI of developing countries.

The first step in verifying an SD model is to validate its structure. Boundary adequacy, structure verification, and parametric verification tests are used to determine the structural validity of an SD model [119]. A boundary adequacy test examines whether all the concepts relevant to solving the problem are endogenous to the model. Furthermore, it assesses whether the model's behavior varies substantially when boundary assumptions are relaxed and whether the policy recommendations change when the model boundary is extended [120]. An examination of the SD model showed that every variable is critical, as all the variables have been documented in the literature. Therefore, the model is validated through this test.

The structure verification test examines whether the model structure corresponds to the model's requisite descriptive knowledge [121]. The produced CLD is based on factors found in the literature, which were subsequently presented to field professionals, who confirmed the impacting interrelationships between the variables. As a result, the model structure is logical and closely resembles the actual industry system. This is consistent with the methods used by Qudrat-Ullah [119].

As per the parameter verification test, the mathematical functions used to connect the variables are centered on feedback from field experts, ensuring empirical and theoretical underpinnings. Hence, the model is verified through this test.

#### **5. Discussions**

In this study, detailed scrutiny of 35 relevant research articles resulted in 31 CE enablers for sustainable development in the CI of developing countries, as shown in Table 1. After content analysis and a preliminary survey, a total of 10 enablers of a CE for sustainable development were shortlisted: government financial support, extension of the product life cycle, organizational incentive schemes, innovative and smart technologies, strict regulations, policy support, awareness through workshops and education programs, resource durability, political priority, and material circularity, as presented in Table 3. Government support plays a crucial role in implementing the CE by providing relevant guidelines. Furthermore, the government's strict regulations are a major enabler of the transition to sustainable development in the CI [122]. Such policy support serves as a powerful enabler of sustainable development. The first step in CE implementation is raising awareness through workshops and educational programs [123]. Political priority is a key enabler since it involves the essential plan that the government will utilize to implement the CE. Incentive programs are arrangements in which a company pays its employees extra money for their best performance; it has been noted that such schemes result in improved performance [124]. The use of smart technology is a critical enabler in the move to circularity, as it allows for openness and the exchange of information regarding reusable components, making it a critical enabler of the CE for the sustainable development of the CI [125]. The product life cycle extension is also important in CE implementation [126].

The CLD developed in this study comprised five reinforcing loops and one balancing loop. Figure 3 represents the polarities, direct or indirect, among the shortlisted enablers. Figure 5 predicts that if strict regulations are implemented, some political parties will not take an interest in their enforcement, which would also decrease awareness. Such decreased awareness decreases the chances of CE implementation for sustainable development in the CI. This is in line with Sachs et al. [122].

Figure 6 shows that if strict regulations are implemented, some political parties will not take an interest in their enforcement, due to which awareness would also be decreased. On the other hand, Figure 7 shows that strong government financial support can lead to policymaking along with many of the incentive schemes, which will ultimately help enforce the CE policies at the national level. This is in line with Walker et al. [15].

Figure 8 predicts that if policy support is offered, it can lead to the availability of organizational incentive schemes for the CE's successful implementation in CI. Figure 9 elaborates that due to the use of innovative and smart technologies, the material would remain in the loop, leading to an increase in the product life cycle. Figure 10 predicts that if the materials remained in the loop, an increase in the life cycle of the product would be observed, leading to the efficient usage of the resources [117].

The model was made up of three stocks: "Organizational Incentive Schemes," "Policy Support," and "Sustainable Development." The simulation graph in Figure 12 shows that organizational incentive schemes gradually increased linearly over five years due to various endogenous variables. The increase in the simulation graph's curve with time in Figure 13 depicts the influence of numerous endogenous variables on policy support. Lastly, the Figure 14 simulation graph signifies that due to the increase in incentive schemes and policy support, sustainable development gradually increases over five years in the CI of developing countries. The overall simulation results predict that due to the increase in incentive schemes, there would be an increase in policy support, ultimately leading to sustainable development, as illustrated by the simulation graph in Figure 15. This shows a need to incorporate various enablers, especially organizational incentive schemes and policy support, to achieve sustainable development.

This study aimed to address the complexity of implementing a CE for the sustainable development of the CI in developing countries. The results displayed how the CE enablers interrelate and help to promote the sustainable development of the CI. If government and policy support continues at a national level, then sustainable development will be fostered in this sector [35]. Considering the construction sector of the developing economies, CE implementation is an immediate need [127]. The CI of such economies usually follows a non-sustainable mechanism and hence does not contribute to the sustainable development of the global CI [23]. When the enablers highlighted in this study are considered, particularly in developing countries, CE implementation can be achieved over time. Such implementation at the national and international levels will increase the productivity and performance of the global CI [123]. Moreover, it will contribute to the economic growth of the host country [7]. The results of this study will help to achieve sustainable development of the CI in developing countries.

#### *5.1. Theoretical Implications*

In terms of theoretical contributions, this is the first study addressing the complexity of implementing a CE for sustainable development from the perspective of the CI using the SD approach in developing countries. This study contributes to the existing literature by identifying the enablers that aid in the sustainable development of the CI. It bridges the research gap articulated by [12,23], who suggested quantitatively addressing the effects of the CE on sustainable development. The study's findings suggest that incentive schemes, policy support, and associated enablers lead to the sustainable development of the CI in developing countries. Researchers from developing countries can further explore these to develop relevant policies and legislation.

#### *5.2. Practical Implications*

From the perspective of managers and practitioners, the study's findings indicate that policy support and incentive schemes, among other enablers, are critical in implementing CE for the sustainable development of CI in developing countries. Through a CE, materials management will be improved through reusing products and materials, encouraging the use of renewable resources, and maintaining sustainable practices. In addition, benefits include reducing pressure on the environment and improving the supply of raw materials, which environmentalists and environmental officers can further investigate. Furthermore, the study's findings will assist CI professionals in incorporating sustainable concepts throughout the production chain, improving this sector's efficiency and performance. This will ultimately help minimize delays, promote long-term relationships, and reduce communication gaps and project complexities. Moreover, the results of this study will enable CI practitioners to implement the CE in a way that drives innovation, boosts economic growth, and improves local and global competitiveness.

#### **6. Conclusions**

The current study explores the key enablers of the CE's implementation for the sustainable development of the CI in developing countries. In addition, it addresses the complexity of implementing the CE for sustainable development using an SD approach. For this purpose, a total of 35 research articles were scrutinized, which resulted in 31 crucial enablers of a CE for the sustainable development of CI. These 31 enablers were then shortlisted to 10 enablers based on combined literature and field survey scores. A detailed survey was further conducted, which helped develop a CLD.

Among the 10 shortlisted enablers, the CLD has five reinforcing loops and one balancing loop. Furthermore, the CLD was used to build an SD model with two stocks named "Organizational Incentive Schemes" and "Policy Support"; an additional stock named "Sustainable Development" was created to determine the combined effect of the system. The model was simulated for five years. The findings show that policy support and incentive schemes, among other enablers, are critical in implementing a CE for the sustainable development of the CI in developing countries. The developed model was tested using boundary adequacy, structure, and parametric verification tests, which revealed that it is logical and closely reflects the industry's actual system.

Overall, the implementation of sustainable development in the CI is influenced by policies in the economic context. It can be accomplished in certain cases by promoting inventive techniques or enforcing restrictions. Policies for capacity building, effective urban planning, asset management, and legislation and regulations are important components of such a system. Whether it is supply chain management or energy efficiency, regulations and policies govern how businesses are run. Policies create an ecosystem where businesses work together to form new alliances and promote long-term sustainability. The same applies to a CE for the sustainable development of the CI in developing countries. Hence, this study fills the gap articulated by Geissdoerfer et al. [12] and Dantas et al. [23].

This study contributes to the body of knowledge by assisting CI practitioners in implementing a CE in the CI in a way that drives innovation, boosts economic growth, and improves competitiveness. The CE seeks to shift the paradigm away from the LE by reducing environmental impact and resource consumption while enhancing efficiency throughout the industry. Implementing CE principles in the CI could lower industry costs; reduce negative environmental impacts; make urban areas more livable, productive, and convenient; and deal with the inherent complexities.

A limitation of this study is that it included respondents only from developing countries. Moreover, this study only considered a limited set of enablers based on the literature review, which may not be exhaustive. A further study involving participants from developed countries would add value to the body of knowledge and, compared with the current study, would help highlight the holistic enablers of a CE for sustainable development in the CI. A follow-up study could focus on the current model's practical implementation and use case studies for validation. Furthermore, a similar study conducted in both developing and developed countries would provide more holistic evidence for comparing the performance of CE implementation in the global CI.

**Author Contributions:** Conceptualization, M.G. and K.I.A.K.; methodology, M.G., K.I.A.K. and F.U.; software, M.G. and K.I.A.K.; validation, M.G., K.I.A.K., F.U. and A.R.N.; formal analysis, M.G., K.I.A.K. and F.U.; investigation, M.G. and K.I.A.K.; resources, F.U., A.R.N., A.A.A.A., A.N.A. and M.A.; data curation, M.G., K.I.A.K., F.U. and A.R.N.; writing—original draft preparation, M.G. and K.I.A.K.; writing—review and editing, K.I.A.K. and F.U.; visualization, M.G. and K.I.A.K.; supervision, K.I.A.K., F.U. and A.R.N.; project administration, M.G., K.I.A.K., F.U., A.R.N., A.A.A.A., A.N.A. and M.A.; funding acquisition, A.A.A.A., A.N.A. and M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Data are available from the first author and can be shared upon reasonable request.

**Acknowledgments:** The authors appreciate Taif University Researchers Supporting Project TURSP 2020/240, Taif University, Taif, Saudi Arabia, for supporting this work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Buildings* Editorial Office E-mail: buildings@mdpi.com www.mdpi.com/journal/buildings
