**Cataloguing of the Defects Existing in Aluminium Window Frames and Their Recurrence According to Pluvio-Climatic Zones**

**Manuel J. Carretero-Ayuso 1, Carlos E. Rodríguez-Jiménez 2,\*, David Bienvenido-Huertas 2 and Juan Moyano 3**


Received: 5 August 2020; Accepted: 6 September 2020; Published: 9 September 2020

**Abstract:** The sustainability of building envelopes is affected by their windows, since these establish the connection/separation between the indoor rooms and the external environment. They can also lead to problems if they do not offer sufficient protection against external agents. The data source in this research is unprecedented, as it is based on records of court sentences. The significant number of cases (1615) provides high representativeness for the functional reality of windows. The methodology that was developed classifies the defects and causes that were found, also analysing their recurrence according to climatological location. In the results, the cases pertaining to water infiltration, air permeability and humidity by condensation stand out. This study provides a vision that categorizes problems related to aluminium windows that may be useful for future interventions by agents participating in the construction process.

**Keywords:** air permeability; watertightness; airtightness; infiltration; aluminium window frames

## **1. Introduction**

Windows are indispensable construction units for building facades, given their implications for basic aspects related to habitability and comfort such as watertightness [1], air permeability [2], lighting [3], etc. At the same time, these openings interrupt the continuity of external walls and amount to numerous singularities in the construction solution that still need to meet all requirements of the envelope, without decreasing the performance of the whole. In fact, when characterising windows, one must note both their own elements (the frame and the glazing) and all the perimeter elements that are part of the opening (external windowsill, lintels, jambs and blinds, as well as the assembly and sealants) that share the same functions. As such, windows constitute one of the most problematic elements in the study of the envelopes of buildings, these being determinant for attaining suitable parameters for their performance [4].

In this way, the interrelation between the sustainability of the property and that of its windows is quite significant [5]. Windows exert an influence on basic aspects of building functionality, both at the level of internal comfort and with regard to maintenance, construction quality and repairs throughout buildings' service lives. This is shown by the direct effects that dysfunctions related to windows have on the increase in the work and maintenance costs of the building—all key factors in their sustainability [6,7]. Equally, in the construction phase for buildings, windows are decisive for both quality and costs, featuring among some of the common non-conformities that arise during execution, and lead to significant deviations from the initial budgets for the works [8]. Windows are also usually a part of renovations that take place during the service lives of buildings and are key to evaluating the perception of the quality of houses, as can be observed in the efficiency indicators established subsequent to repair and rehabilitation works [9].

In studies on window openings, there are numerous lines of enquiry, with a recent focus on energy efficiency, which constitutes one of the critical points of analysis. As for energy savings, windows have different considerations. On the one hand, when aspects related to the air permeability of the thermal envelope are addressed, windows are responsible for most of the volume infiltrated through the envelope [10,11]. On the other hand, one must remember that the transmittance of windows is one of the main points of the facades through which internal heat and cold are lost. This is the case because their thermal resistance is usually clearly lower than that of the opaque parts—due to the characteristic values of the materials usually employed in the frames and in the glazing [12,13], as well as to the probability of the appearance of thermal bridges in the most common construction solutions [14]. In this line of research, there are numerous publications that examine the insulation provided by windows, its impact on building energy demands and interventions for improving it [15,16]. From this energy perspective, one should also note the effect of solar radiation entering the building through the glazing of windows, the consequences of which are not always easy to mitigate [17].

Construction aspects are also relevant to the study of windows, since these influence numerous building defects. The problems related to humidity make up a large portion of the damage related to traditional facades [18]. Based on this assumption, the existence of numerous joints and operable parts leads to windows and the contours of their respective openings being especially vulnerable to rainwater infiltration [19]. Equally, the flows of temperature and moist air that go through windows are reflected in the appearance of humidity by condensation [20]. Additionally, despite its lesser impact, one must also point to the roles played by windows and their perimeter elements in the causes of some problems related to fissuration and to the stability of walls, especially those made with bricks [21].

Given the amplitude of this set of factors related to defects in windows and their environment, inspection and diagnosis activities that can effectively detect problems and dysfunctions in these construction elements become very important. In the last few years, there has been an important development in this field in the methods used for the testing of windows and other elements of the building envelope, largely due to the needs of energy evaluations for buildings [22]. In this way, permeability measurement techniques such as the blower door test [23,24], thermometric tests for the measurement of transmittance [25], infrared thermography [26] and tests of watertightness against the effects of wind-driven rain [27], among others, are increasingly present (both in the scientific literature and in their spread in the industry).

As a result, knowledge of construction defects in windows presents a panorama with multiple variables [28]. In the face of this situation, it is especially useful to have carefully categorised databases that clearly define critical points, so as to avoid the repetition of problems in future construction interventions, actively contributing to their sustainability. It is not very common to find publications that incorporate any type of cataloguing of construction problems. Nevertheless, certain examples are found to be focused on the typologies of defects related to the building in general [29] and on the reasons behind repairs and renovations [30]. Of particular interest to this research is the work focusing on window frames, including studies on the influence of frame components [31,32], on degradation [33] and on modelling the inspection for window defects [34] or on predicting their service lives [35,36].

The current study provides a novel analysis of defects related to windows (as well as their causes), since it was carried out on a large database of judicial complaints filed by users. It is thus intended to provide a classification and grading of defects that were actually complained about by building users, as well as evaluating their recurrence according to climatic location. In this regard, the influence of rainfall, climate and latitude on the appearance of defects was also examined.

## **2. Methodology**

## *2.1. Scope of Study*

The data for this research were obtained from the records of complaints for damages of the civil responsibility insurance company of building engineers in Spain [37]. Each record originated from the observation of a construction defect between the years 2008 and 2017 [38] that was subsequently the target of a judicial complaint and was definitively resolved before its inclusion in this paper.

These records (that were handled directly by the authors) contain not only data on the contract between the insurance company and the insured party but also the sentences of the courts of law, according to the complaints filed by the users of the buildings in which the construction defects appeared.

These sentences, issued after the verification by the courts of law that the defects in question were indeed present, detailed the characteristics of the defects, indicating that the problems in question should be resolved. The initial sentences could be appealed to higher courts a number of times until they were no longer appealable and were considered final. That is the point at which the authors proceeded with including the data as part of this research. A copy of all those judicial documents is held by the insurance company of the participating building engineers. It was necessary, as such, to review thousands of pages in these records to extract the technical data and separate them from administrative or contractual information.

No precedents were found for research carried out by other authors on window defects based on this type of database, nor was a relevant data set with such a large number of cases (1615) found.

## *2.2. Characterisation*

The parameters characterised in the research were the following: 'defects' (d) and 'originating causes' (OC), both for windows (W), the construction unit under study. Windows are classified into two types: normal windows (NW) and bay windows (BW); all of the windows had aluminium frames (this is the material that is most predominant in Spain, whereas other materials, such as wood, are seldom used in window frames).

Table 1 shows the different types of defects and originating causes, as well as the codes assigned to them.


**Table 1.** Types of defects and originating causes that were analysed.

Likewise, the 'construction typology' of the windows with defects was also studied, divided into 'flats', 'houses' and 'other buildings'. Some photographic examples of the defects are shown below (Figure 1).

## *2.3. Factors of Study*

In addition to the parameters indicated above, other concepts were also used in this research that enabled ascribing each one of the defects to a climatological factor. There were three factors, indicated in Table 2. This table also indicates the categories into which each factor is broken down.


**Table 2.** Types of factors and their categories, as included in the research.

The information related to the factors established for each geographical area of the territory in question (Spain) is referred to in this study as 'location strips'. Their classification was carried out based on the indications of the Spanish Meteorology Agency [39]:


Using said factors, a percentage study was carried out for the recurrence of all defects, applying each factor individually, combining two at a time, and combining the three factors together. In this way, one obtains the 'percentage of number of cases' (%NC), which corresponds to the different cases (location strips) when applying the categories into which Factors 1, 2 and 3 are sub-divided.

Subsequently, using these %NC values and simultaneously applying the three factors (a closer approximation of reality), the percentages are sorted to subsequently establish the 'ranks of concentration of defects'.
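As an illustration of this counting procedure, the sketch below (in Python; the records and column names are invented stand-ins, not data from the study) computes the %NC values for each individual factor, each pair of factors and the three factors combined, and sorts them as a first step towards the ranks of concentration:

```python
from itertools import combinations

import pandas as pd

# Each record carries the category of Factors 1-3 for one defect case.
# These rows are invented examples; the real database has 1615 records.
records = pd.DataFrame({
    "rainfall": ["high", "medium", "high", "low"],
    "climate":  ["oceanic", "Mediterranean", "oceanic", "subtropical"],
    "latitude": ["north", "central", "north", "south"],
})
factors = ["rainfall", "climate", "latitude"]

# %NC for each factor individually, two at a time and all three combined.
for r in (1, 2, 3):
    for combo in combinations(factors, r):
        pct_nc = records.groupby(list(combo)).size() / len(records) * 100
        # Sorting the percentages is the basis of the 'ranks of concentration'.
        print(combo, pct_nc.sort_values(ascending=False).round(2).to_dict())
```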

In order to homogenise the values obtained, the following was carried out:

• The number of defect cases in each location strip was divided by the number of homes existing in Spain in that strip, thus obtaining a 'relative frequency' (see Section 3.5.2).

• With the objective of better handling and visualising these last values, the relative frequencies were divided by the largest value, thus obtaining a 'normalised relative frequency' for each pluvio-climatic zone.
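A minimal sketch of these two homogenisation steps, assuming illustrative per-strip case counts and housing stock figures (the numbers are placeholders, not the study's data):

```python
# Placeholder figures: number of defect cases and housing stock per strip.
cases_by_strip = {"high-oceanic-north": 467, "medium-continental-central": 212}
homes_by_strip = {"high-oceanic-north": 6.6e6, "medium-continental-central": 5.1e6}

# Relative frequency: defect cases per home in each location strip.
rel_freq = {s: cases_by_strip[s] / homes_by_strip[s] for s in cases_by_strip}

# Normalised relative frequency: the largest value is set to 100%.
max_rf = max(rel_freq.values())
norm_rel_freq = {s: 100 * v / max_rf for s, v in rel_freq.items()}
print(norm_rel_freq)   # e.g. {'high-oceanic-north': 100.0, ...}
```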

The eleven results by 'location strips' and by three factors were characterised to obtain the 'ranks of concentration of defects' according to the percentages obtained. From the values of these ranks, 'pluvio-climatic zones' were configured in such a way that all those eleven combinations were simplified into just three zones (ZONE A, ZONE B and ZONE C).

## **3. Results**

## *3.1. Results by Type of Element and Defect*

Figure 2a shows the number of cases by type of element. Normal windows amount to 95% of the total (NW = 1538 cases), while bay windows constitute the remaining 5% (BW = 77 cases). In turn, Figure 2b represents the percentage distribution according to the types of defects, showing that 6 out of every 10 cases belong to 'water infiltrations' (dWI = 60.8%). The next defects—'air permeability' (dAP) and 'humidity by condensation' (dHC)—have practically identical percentages.

**Figure 2.** Numbers of cases in the research (**a**) and percentages by types of defects (**b**).

If we break down the number of cases for each type of defect and according to the element in which it occurs (normal window or bay window), we obtain the values that are shown in Figure 3.

**Figure 3.** Number of cases by type of defect and by type of element.

Based on Figures 2 and 3, it is evident that problems related to the performance of windows in terms of hermeticity (permeability to both air and water) represent nearly the totality of the impact on building users. This implies that the construction configuration and the commissioning of windows must, above all, satisfy the requirements of watertightness (absence of infiltrations) and air permeability.

## *3.2. Results by Type of Originating Cause*

As indicated in the methodology section, four different types of originating causes were found in this research. It can be noted that the 'absence/deficiency of sealant' occurs in practically two thirds of cases (ocAS = 66.4%). The second position (ocIC = inadequate construction material and/or placement) and the third position (ocTB = existence of thermal bridges) have quite similar values, differing by less than one percentage point.

Consequently, with the distribution of the originating causes shown in Figure 4, one can observe a majority of issues pertaining to sealing and to the placement during construction. These aspects are directly related to water infiltration and air permeability (dWI and dAP). Complaints related to thermal parameters (ocTB and ocIA) appear to be far less relevant.

**Figure 4.** Percentage of recurrence according to the type of originating cause.

## *3.3. Determination of the Pathology Trinomial Sets*

The term 'pathology trinomial set' will be used to refer to construction interrelations that lead a certain type of originating cause to produce a type of defect in one of the two types of windows studied. The data analysed yield 16 different combinations.

Figure 5 shows the number of cases for each one of the 16 pathology trinomial sets found. For a simpler conceptual association, the colours employed for the originating causes are the same as those employed in Figure 4, and the identifying colours for the defects are those used in Figure 3. In addition, Figure 5 indicates the number of cases for each one of the types of defects and originating causes so that they can be evaluated as part of the whole set of the results found.

The most frequent trinomial set is 'absence/deficiency of sealant' that leads to 'water infiltrations' in 'normal windows' (ocAS-dWI-NW = 779). It is followed in second place by the 'existence of thermal bridges' that leads to 'humidity by condensation' in 'normal windows' (ocTB-dHC-NW = 247). In third place is the 'absence/deficiency of sealant' that leads to 'air permeability' in 'normal windows' (ocAS-dAP-NW = 246). The fourth position is held by 'inadequate construction material and/or placement' that leads to 'water infiltrations' in 'normal windows' (ocIC-dWI-NW = 155).

The sum of cases of these first four pathology trinomial sets (from among the 16 sets) equals 1427 cases, or 93% of all the cases. As such, there is a Pareto relation of 25–93; 25% of the pathology trinomial sets lead to 93% of the cases researched.

Some other aspects can also be highlighted: defect dOC originates only from cause ocIC; cause ocTB only leads to defect dHC; cause ocIA only leads to defect dAP; cause ocAS leads to two defects; and cause ocIC leads to all the types of existing defects.

These pathology trinomial sets confirm that, to reduce most of the problems found during the service life, windows should be constructed with the utmost care for their hermeticity (air permeability and watertightness). It is highly recommended to carry out in situ verifications of these aspects before the commissioning of the building.

**Figure 5.** Number of cases according to the type of pathology trinomial set found.

## *3.4. Results by Construction Typologies*

As indicated, during the process of data collection, the construction typology wherein each of the defects occurred was noted. The highest percentages were found in 'flats' (63.03%). See Figure 6.

**Figure 6.** Percentage of defects according to the construction type.

From this distribution, it can be deduced that the greater height of multi-storey buildings, which increases their exposure to weather, may explain the frequency of complaints in this type of building.

## *3.5. Pluvio-Climatic Study of Defects*

Given that the defects collected in the database are mainly related to environmental parameters (humidity, air infiltration, etc.), their relationship with the factors of the different climatic areas will be of interest.

The analysis that is carried out in this section explores the recurrence of these problems according to the pluvio-climatic location of the cases in a way that establishes a quantitative association between defects and weather conditions.

3.5.1. Determination of the Location Strips

Table 2 establishes the three main factors of the environment: rainfall, climate and latitude, as well as their respective categories. Their details are shown in Table 3, which contains the different combinations of these categories, resulting in 'location strips'.


**Table 3.** Percentages of recurrence of the defects according to each of the factors analysed.

%NC = percentage of the number of cases.

As shown in the upper section of Table 3, the location strip with the highest percentage by rainfall is the 'medium' (54.86%); by climate, it is the 'Mediterranean' (37.15%), and by latitude, it is the 'north' (51.27%).

As indicated in the middle section of Table 3, it can be noted that there is one location strip that shows a higher result than the others by the percentage of recurrence of construction defects: 'medium-Mediterranean' (31.27%). Following it, there is a tie between three location strips: 'high-oceanic', 'high-north' and 'oceanic-north'.

When combining Factors 1, 2 and 3 simultaneously (lower section of the same table), the location strips with the most cases are 'high-oceanic-north' (28.92%) and 'medium-continental-central' (13.13%).

3.5.2. Determination of Zones by Ranks of Normalised Frequencies

Table 4 establishes a quadrant in which 'pluvio-climatic zones' are defined according to the 'rank of concentration of defects'. Thus, according to the percentage of the number of cases (%NC), if it is greater than 15, one will be in ZONE A; if %NC is between 7 and 15, one will be in ZONE B; and if %NC is less than or equal to 7, one will be in ZONE C.
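The zoning rule of Table 4 can be summarised as a simple threshold function; the following is a sketch of that rule, not code from the study:

```python
def pluvio_climatic_zone(pct_nc: float) -> str:
    """Assign a location strip to a zone from its %NC (thresholds of Table 4)."""
    if pct_nc > 15:
        return "ZONE A"
    if pct_nc > 7:           # 7 < %NC <= 15
        return "ZONE B"
    return "ZONE C"          # %NC <= 7

print(pluvio_climatic_zone(28.92))   # 'high-oceanic-north' falls in ZONE A
```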


**Table 4.** Classification by ranks of the pluvio-climatic zones sorted by intensity of recurrence.

The above-mentioned Table 4 also has an intermediate column (relative frequency) that includes the percentages of defects as a function of the numbers of homes in Spain in each of the location strips. Given that the relative frequencies were low, they were normalised: the highest relative frequency was assigned a value of 100%, and the remaining values were expressed relative to it (column named 'normalised relative frequency').

Based on Table 4, it can be noted that the regions of Spain situated to the North, with an oceanic climate and high rainfall, show higher normalised proportions of defects. In other words, windows located in buildings in these areas have a much greater risk of having defects that are complained about judicially. This is explained by the fact that these zones have a very moist environment, in which infiltration defects lead to frequent complaints. The exception is the last row of Table 4 (the '*Medium-Continental-South*' strip), which possesses considerable rainfall and a low percentage. In this case, however, one must note the influence of the location to the South and the continental climate, which produces high temperatures, limiting the effect of humidity.

3.5.3. Individual Analysis According to Each Factor and Type of Defect

An analysis was carried out to determine in which climate location strips each of the four types of defects found in windows occurs with a higher percentage of recurrence.

• Water infiltrations (dWI)

This defect occurs more with medium rainfall (54.28%) than with high (29.12%) or low rainfall (16.60%). It was shown that there is a greater presence in the Mediterranean climate (39.82%) than in the oceanic (29.94%) or continental (26.17%) climates. In addition, this defect occurs quite rarely in the subtropical climate (4.07%). As for latitude, it is more common in the North (48.37%) than in the Central part (28.41%) or in the South (23.22%).

• Air permeability (dAP)

This type of defect occurs more with medium rainfall (61.98%) than with high (23.96%) or low rainfall (14.06%). It was shown to occur with similar frequency in the Mediterranean (37.08%) and continental climates (36.42%), while in third place is the oceanic climate (24.90%), and in last position is the subtropical climate (1.60%). As for latitude, more than half of the time, dAP occurs in the North (54.95%), occurring in the Central part almost one third of the time (31.31%), while the South comes last (13.74%).

• Humidity by condensation (dHC)

This defect occurs with medium rainfall on approximately half of the occasions (49.19%), followed by high (28.34%) and low rainfall (22.47%). Presence in the continental climate (39.82%) was observed in the largest share of cases, followed by the oceanic (29.32%) and the Mediterranean (28.01%) climates. As with previous defects, the subtropical climate was the least recurrent (3.58%). As for latitude, the North clearly remains the location strip with the most cases (56.68%), followed by the Central part (30.29%) and the South (13.03%).

• Oxidation or corrosion (dOC)

There are no cases of this type of defect with low rainfall, while it occurs with high rainfall more than one third of the time (38.46%) and with medium rainfall nearly two thirds of the time (61.54%). This defect also does not occur in the subtropical climate and has quite a low presence in the continental climate (7.69%). It appears with some frequency in the oceanic climate (38.51%) and most often in the Mediterranean climate (53.80%). Lastly, this type of problem does not occur in the strips to the South of the country, and there is not a significant difference between the Central part (46.15%) and the North (53.85%).

The individual analysis by each factor and type of defect confirms the information presented in Table 4, in the sense that the most adverse climatological locations exhibited higher shares of defects. It can be highlighted that, considering only the phenomenon of rainfall, the places with medium rainfall have more cases than those with high rainfall. This may be explained by the better performance of windows in areas where heavy rain leads to more careful construction practices.

3.5.4. Breakdown of Defects in the Strips with Higher %NC

To have a more detailed perspective of how each of the types of defects were distributed in the most problematic location strips, Table 5 was produced. The criterion used was to select the four strips with %NC > 10%.


**Table 5.** Number of cases according to each type of defect in the four most problematic location strips.

If said table is analysed, it can be seen that the distributions for each type of defect are quite similar with respect to the percentage of cases they represent in each of the location strips. Thus, dWI has a range of presence between 54% and 65% (with 60% as the average), dAP has a range of presence between 17% and 25% (with 21% as the average), dHC has a range of presence between 15% and 20% (with 17% as the average) and dOC has a range of presence between 0.5% and 3% (with 1% as the average).

## **4. Discussion**

## *4.1. Reflections on the Database*

Based on Section 2 above, the method used in this research is not a 'survey' and is certainly not an 'experiment'. It is not a 'simulation', and it is also not a 'case study' (it does not focus on a specific case of a building or situation, or on a limited and unique area with previously selected characteristics). The analysis was carried out over the 'general census of cases' (taking into account and studying the global record of existing judicial records) for the entirety of a country, under the principles of generality and simultaneity.

As such, all the cases analysed correspond to the total set existing in Spain in the period of the study—in other words, 100% of all the emerging cases were collected herein (no case was left out). As such, they do not only constitute a more-or-less characteristic sample whose representativeness needs verifying; rather, they encompass the totality of all the defects found between the years 2008 and 2017 (there is no error or uncertainty, as it is not a partial sample). This constitutes another novel contribution of this research, given that it is generally not possible to collect the entirety of relevant data in a nation.

It should be pointed out that the set that was researched is quite homogeneous regarding building age. All the buildings, and all the windows analysed, had been built relatively recently. Spanish legislation envisions a three-year warranty period for this type of construction element, starting from the time the works are completed. As such, if any type of defect exists, owners should file judicial complaints in this period. This is why there is no dispersion or distortion of the data resulting from the possible deterioration of window frames caused by the passage of time.

As can be seen, the methodology did not include a number of other objectives that could also have been of interest but are out of the scope and fundamental ideas of this study. These could include trying to ascertain the possible responsibilities of the different participants. This aspect was explicitly left out of the permission given to the authors for accessing the data source, for which reason this aspect could not be assessed.

In future studies, it would be of interest to try to correlate defects with other climatological aspects. This analysis must be based on the exact location of each building—an aspect that was not possible to obtain in the data collection process, given the limitation that was imposed to maintain the confidentiality and protection of certain information. Should this sensitive information be accessible, one would correlate it with other external parameters not contained in the judicial process. All these parameters should be processed for each of the 1615 cases in question. This gives an idea of the difficulty of obtaining and quantifying all these variables.

## *4.2. Several Considerations*

The knowledge of the most recurrent construction defects in different construction units is very important for trying to minimise them in subsequent technical interventions, be it during the design [41] or execution stages. There are also two additional advantages: possessing a good maintenance strategy during the service life and having a governmental public catalogue of construction defects. In order to achieve this goal, it would be necessary to first compile the most problematic construction points, with the highest numbers of cases, based on (for example) actual judicial records, as presented in this research. Furthermore, national regulations could include specific construction details on how these problematic points could be more effectively addressed. This would contribute to minimising future construction costs and improving maintenance during the use phase—a matter, indeed, that is being given significant attention by a number of publications [42–44].

## *4.3. Teaching These Critical Aspects in Universities*

Students of architecture and engineering should have a greater understanding of building defects, so as to have knowledge on the main critical points and avoid making the same mistakes in the future (both in design and execution stages). This aspect is often not given sufficient attention in university degrees.

As a way of facilitating their understanding, and in order to deliver this knowledge in an eminently visual manner (given that the generation they are a part of often works with visual information), the authors have produced a number of infographics on different construction points (starting from the facade openings where windows are inserted). Based on their teaching experience, a number of encouraging results are being obtained. One example of these infographics is shown in Figure 7.

**Figure 7.** Examples of infographics of the singularities between facades and windows as a learning tool in university classes.

## **5. Conclusions**

The current study considers a database of 1615 cases of judicial complaints filed in Spain regarding defects in windows. In the literature review that was carried out, there were no references to similar work based on this type of source data, the extension of which provides quite a complete view on the reality of the problems that users complain about in these construction units (100% of the cases existing in Spain over 10 years).

By the analysis of said data, it can be noted that there is a clear majority of complaints related to 'water infiltrations' (dWI = 60.8%). Far behind, as to their frequency, and with nearly identical percentages between them, are the defects related to 'air permeability' (dAP = 19.4%) and 'humidity by condensation' (dHC = 19.0%). As for the originating causes that lead to the defects, the 'absence/deficiency of sealant' and 'inadequate construction material and/or placement' together result in 83.4% of the cases, it being thus shown that problems usually focus on aspects related to the placement of windows.

The influence of three climatological factors on the presence of the defects was also studied: rainfall, climate and latitude. As to the influence of the last (latitude), it is clear that there is a greater percentage in location strips situated in the Central part and North of the country (80.75% of cases according to the sum of the %NC values of Rows 1, 2, 3, 4, 6, 8 and 10 of Table 4). From the ranks of the concentrations of the defects analysed, three zones (A, B and C) were created/catalogued, it being noted that there is a greater recurrence in Zone A: locations with high rainfall/oceanic climate/North latitude (in which a relative frequency of 7.05 × 10<sup>−5</sup> is obtained).

The results presented herein can be of great interest for researchers of other countries who wish to know the probability of judicial complaints by building users. The values that were handled are highly representative, since they correspond to the totality of cases complained about in the span of time that was indicated.

This information can also be of great assistance for reducing the impact of low-quality processes [45]. The defects and the originating causes found in this research enable the different agents participating in the construction process to possess specific and highly useful information to minimise errors in the design and execution stages [41]. This will significantly lower conflicts and will make the construction process more sustainable, given that there will be less litigation during the stage of the use of the buildings, and repair costs will be significantly reduced.

**Author Contributions:** Conceptualization, M.J.C.-A.; data curation, M.J.C.-A.; formal analysis, C.E.R.-J. and D.B.-H.; investigation, M.J.C.-A.; methodology, M.J.C.-A.; validation, J.M.; writing—original draft, M.J.C.-A. and C.E.R.-J.; writing—review and editing, D.B.-H. and J.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The current work was carried out within the MUSAAT Foundation's Action Plan, which envisaged carrying out national research on anomalies in buildings [46].

**Conflicts of Interest:** The authors declare no conflict of interest.

## **Nomenclature**


## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Cost of Climate Change: Risk of Building Loss from Typhoon in South Korea**

## **Ji-Myong Kim 1, Seunghyun Son 2, Sungho Lee 3 and Kiyoung Son 4,\***


Received: 30 June 2020; Accepted: 26 August 2020; Published: 31 August 2020

**Abstract:** In recent years, natural disasters and climate abnormalities have increased worldwide. The Fifth Assessment Report (2014) of the Intergovernmental Panel on Climate Change warned of extreme rainfall events, warming and acidification, global mean temperature rises, and average sea level rises. In many countries, changes in weather disaster patterns, such as typhoons and heavy rains, have already led to increased damage to buildings. However, the empirical quantification of typhoon risk and building damage due to climate change is insufficient. The purpose of this study was to quantify the risk of building loss from typhoon pattern change caused by climate change. To this end, the intensity and frequency of typhoons affecting Korea were analyzed to examine typhoon patterns. In addition, typhoon risk was quantified using the Korean typhoon vulnerability function utilized by insurers, reinsurers, and vendors, the major users of catastrophe modeling. Hence, through this study, it is possible to generate various risk management strategies, which can be used by governments when establishing climate change policies and help insurers to improve their business models through climate risk assessment based on reasonable quantitative typhoon damage scenarios.

**Keywords:** climate change; typhoon; catastrophe model; typhoon vulnerability function; risk analysis

## **1. Introduction**

Climate change is expected to have serious consequences in a wide range of areas. It is expected to affect extreme weather events in the short term, as well as generating long-term effects such as disease spread and rising sea levels. Extreme weather events could include heat waves, cold waves, windstorms such as hurricanes, heavy rains, floods, a lack of precipitation, and drought. Many regions have suffered from the fatal effects of recent extreme weather events. Such extreme weather events have, of course, always been part of human history. However, recent extreme weather events have become greater in frequency and intensity than those in the past, and the potential for damage has increased rapidly.

Additionally, the current pattern of tropical cyclones is so different from past patterns that they are called super typhoons or super hurricanes. For instance, Typhoon Haiyan occurred in 2013 and became known as Super Typhoon Yolanda, as it was one of the most extreme tropical cyclones ever recorded at landfall. Its severe rain and winds made it difficult for Southeast Asian nations to recover from the shattering damage of about USD 300 billion [1]. In the United States in 2017, three powerful hurricanes (Hurricanes Harvey, Maria, and Irma) caused tremendous damage. The total damage from these hurricanes was about USD 293 billion, with Harvey causing USD 125 billion in damage, Maria causing USD 90 billion, and Irma USD 77.6 billion worth of damage [2]. In addition, Hurricane Katrina, which occurred in 2005, was one of the most damaging natural disasters in United States history. The heavy rain and strong winds generated by that hurricane caused US Gulf Coast cities to suffer about USD 180 billion in direct and indirect damage [2]. In Europe, economic losses of around EUR 13 billion were incurred in 1999 due to the record rain and winds of the European storms Anatol, Lothar, and Martin [3].

However, despite these historic events and record damage, there are still debates about climate change and tropical cyclone patterns. Even though many studies have argued that climate change has affected tropical cyclones, other research argues that the evidence for this is poor. For instance, though some have asserted that the intensity of tropical cyclones gradually increases as the climate warms up [4–6], others argue that this increase is within the natural range of fluctuation in the frequency or severity of tropical cyclones in long-term climate observations [7]. Depending on the region, long-term climate observations may not be of sufficient duration to determine how climate change affects tropical cyclones, or the effects may not be clear. It is also difficult to predict how future activities will impact climate change. However, there is evidence that the damage caused by extreme weather events, especially tropical cyclones, is increasing every year [8]. Other studies have shown that this trend is damaging more people and assets, and the damage will be even greater given the high coastal population and property density of many cities, and reduced woodlands [9,10]. While these studies do not adequately rule out damage due to increased social vulnerability (e.g., income and population), it is difficult to view this as only an increase in extreme weather events and tropical cyclones. The trend is clear [11]. Therefore, we will analyze the intensity and frequency of typhoons that have affected Korea for a scientific and quantitative examination of the impact of climate change on typhoons. In addition, this study will assess the risk of building loss to quantify the damage caused by changes in typhoon patterns due to climate change.

## **2. Literature Review**

## *2.1. Climate Change and Economic Impact*

The United Nations Intergovernmental Panel on Climate Change (IPCC) warns against climate change in its 5th Assessment Report (AR5). Compared to pre-industrial levels, the report estimates that global average temperatures will rise by more than 1.5 °C in all scenarios by 2100. In addition, warming will continue as greenhouse gas emissions continue, and moreover, it is likely to exceed 2.0 °C in many scenarios. Additionally, the World Bank (2014) has a similar outlook: global warming is inevitable due to greenhouse gases in the Earth's atmosphere, and the temperature will be 1.5 °C higher than before industrialization. Without reasonable steps to reduce greenhouse gas emissions, the planet is expected to warm up by up to 2 °C by the middle of the century and up to 4 °C by the end of the century [12]. Furthermore, Stern (2006) reported that in the absence of measures to reduce emissions, greenhouse gas concentrations would reach twice the pre-industrial levels as early as 2035, raising the Earth's temperature by nearly 2 °C. This warming is expected to change the water cycle around the world, increasing the difference between wet and dry regions. As the heat expands into the deeper oceans, the ocean's circulation pattern will change and continue to warm, and the Earth's glaciers will decrease. Due to the reduced glaciers, the global average sea level is likely to rise more quickly than the rate of rise over the last 40 years (IPCC 2014). As mentioned above, many studies and research papers show that global climate change is certain and will increase, and warn against the side effects of warming [13].

The literature on the economic impact of climate change is as follows. The IPCC Fifth Assessment Report (2014) states that warming above 3 °C will result in a loss of 0.2% to 2.0% of annual GDP (gross domestic product), although estimates of damage vary widely from country to country [14]. The IPCC expects further acceleration of the occurrence of damage if warming exceeds 2 °C, but these effects will be difficult to realize over the next 30 years. Moreover, if warming exceeds 2 °C, negative returns are expected from various portfolios [15]. Dietz and Stern (2014) estimate that when global warming reaches the 4 °C level, annual economic output will decrease by 50% compared to that without warming. They estimated a warming of around 3.5 °C by 2100 [16]. Stern (2006) estimated that over the next two centuries, global warming scenarios between 2.4 and 5.8 °C would result in a mean loss of about 5% (up to 20% in some regions) of global yearly GDP by 2100. These calculations assume that no action is taken on global warming, and the costs are expected to increase by more than 20% of GDP, given the wide range of risks and impacts. In addition, with simple extrapolation, it was estimated that extreme weather could damage 0.5% to 1% of global GDP by the middle of the century [13].

Mendelsohn et al. (2000) studied the potential damage using a global warming scenario (an increase of 2.5 °C by 2100), and estimated that the total market impact cost would not exceed 0.1% of GDP in 2100. Market impact may vary based on latitude. For example, in low latitude countries, warming increases damage. On the other hand, income is expected to increase at higher latitudes. However, if global warming is above 2.0 °C, it is expected that the benefits will decrease and the damage will increase. They also found that damage in a global warming scenario (a 2.0 °C increase by 2060) would be expected to have an aggregate impact of 0.3% damage to GDP in 2060. The study estimated that with warming of 2.0 °C by 2060, most of the damage would occur in agriculture, and the damage would vary widely from country to country [17]. As these studies show, climate change is predicted to have a significant impact on future economic growth and living standards. Losses may vary by region, but the damage is expected to increase globally. In addition, severe weather phenomena are expected to add to the damage.

## *2.2. Climate Change and Losses from Natural Disasters*

The increase in damage caused by natural disasters is closely related to the growth of population and wealth. This is because the world's population is increasing every year, and wealth is also growing. The annual damage caused by natural disasters may be linked to these increases in wealth and population. Therefore, to objectively quantify climate change and the increase in damage from natural disasters, increases in wealth and population must also be considered [18]. To this end, many studies have examined climate change and damage after normalization for population and wealth changes.

Nordhaus (2010) argued that since 1900, losses from hurricanes in the United States have increased significantly according to data revised only for GDP [19]. Changnon (2009) argued that insurance losses from hurricanes in the United States increased between 1952 and 2006 but that the growth was concentrated in the western United States and is believed to be due to recent increases in population and wealth in this region [20]. He also examined a study of insurance losses due to hail in the United States since 1992. The amount of insurance losses due to hail has increased, but this is attributed to increased exposure and vulnerability to hail due to the expansion of urban areas; there was no change in the frequency of major hail storms [9]. Chang et al. (2009) detailed a rise in flood loss through a flood loss survey (since 1971) in six cities in Korea. The cause of the increase in flood damage was found to be related to the increase in population, as well as summer rainfall and deforestation [10]. Schmid et al. (2009) found that there was a clear trend in US hurricane losses. However, this trend appeared after 1970 (it was not seen in the entire record dating back to 1950) and was found only after adjustments for wealth and population [21]. Fengqing et al. (2005) investigated flood damage in the Xinjiang Autonomous Region of China and determined that flood damage had increased since 1987. However, they pointed out that reservoirs and flood control structures, not heavy rains caused by climate change, were responsible for the increase in flood damage [22]. Changnon (2001) reported increased damage according to normalized data due to strong winds, rainfall, lightning, hail, and tornadoes since 1974 in the western United States. Nevertheless, the study also showed increased losses according to normalized data even in areas with reduced thunderstorm activity, suggesting that socioeconomic factors contributed to this trend [23]. Miller et al. (2008) analyzed the loss data for climate disasters around the world after revising them, taking into account wealth and population growth. Their main findings were that since 1970, losses from climate disasters have increased, but this trend does not extend back to 1950. In addition, the authors believe that the increase in losses from climate disasters is due to the hurricane damage in the United States in 2004 and 2005 [24].

As with previous studies, wealth and population are important considerations in the study of the relationship between climate change and losses from natural disasters. This is because increases in population and wealth are important contributors to increased natural disaster damage. It is true that studies of natural disaster damage caused by climate change are difficult due to the close relationship between wealth, population, and natural disaster damage. Therefore, in this study, the vulnerability function was used to exclude the interference of wealth and population, allowing a quantitative study of climate change and losses from natural disasters alone. In addition, the existing studies only judge increases and decreases in aggregate damage amounts from natural disasters, making it difficult to quantitatively study building damage due to climate change. Therefore, this study divided buildings into three groups by occupancy (commercial, industrial, and residential) and considered the risk of building loss due to climate change for each category.

## **3. Framework of Study**

The purpose of this study was to quantitatively examine climate change and the associated changes in typhoons. To achieve this, the study consisted of two parts. First, this study investigated typhoons that have affected Korea and analyzed the intensity and frequency of typhoons and changes in typhoon patterns.

Second, this study quantified the risk of building loss due to changes in typhoon patterns as a result of climate change. To quantify typhoon risk, this study used the Korean typhoon vulnerability function of the major users of catastrophe (CAT) modeling: insurers, reinsurers, and vendors. The buildings were divided into commercial, industrial, and residential types for analysis. As shown in Figure 1, this study was limited to typhoons that affected S. Korea from the 1970s to the 2010s. Therefore, the research scope of this study was limited to S. Korea and reflects the architectural design standards and planning strategies of S. Korea. The results may differ in countries with geographic conditions or architectural design standards and planning strategies different from those of S. Korea.

**Figure 1.** The typhoon that affected S. Korea (1970s to 2010s).

## **4. Typhoon Patterns**

This part will discuss the frequency and severity of typhoons. Since the amount of risk is determined by the product of frequency and severity, both frequency and severity play an important role in risk determination. Therefore, frequency and severity were examined separately for detailed investigation.

Data on typhoons were obtained from the Korea Meteorological Administration (KMA, Seoul, Korea). The KMA was established in 1949 as a Korean government agency providing meteorological services and is responsible for monitoring the weather system and distributing and storing related information. This study collected data on the number of typhoons and the wind speeds that affected Korea between 1973 and 2019 from the KMA.

## *Frequency and Severity of Typhoons*

The frequency of typhoons by year is shown in Figure 2. Korea was affected by an average of 3.3 typhoons annually, with a standard deviation of 1.5. The minimum number of typhoons affecting Korea in a year during the survey period was zero, and the largest number was seven. The linear regression model of typhoon frequency is y = 0.0036 × Year − 3.8831, with an R<sup>2</sup> value of 0.0012. The slope of this model shows that the relationship between the year and the number of typhoons is positive, indicating that the number of typhoons has increased slightly each year. However, the R<sup>2</sup> value is extremely low, showing that the relationship between the year and the number of typhoons is very weak. Hence, it would be difficult to conclude that climate change has a clear impact on the number of typhoons.

**Figure 2.** Number of typhoons by year.

The maximum wind speed of typhoons by year is shown in Figure 3. The wind speed was based on 10 min sustained wind speed. When a typhoon affected Korea, the highest wind speed recorded by 96 meteorological stations was considered as the maximum wind speed. After collecting the maximum wind speed by typhoon, the maximum wind speed was determined by year. The average maximum wind speed was 27.6 m/s, with a standard deviation of 8.4 m/s. The highest maximum wind speed was 51.1 m/s, and the lowest was 17.3 m/s.

**Figure 3.** Maximum wind speed of typhoons by year.

The linear regression model of typhoon severity is y = 0.2364 × Year − 444.18, with an R<sup>2</sup> value of 0.146. The slope of this model shows that the relationship between the year and the maximum wind speed is positive, indicating that the maximum wind speed has increased each year. Furthermore, the R<sup>2</sup> value shows that the year and the maximum wind speed have a weak linear relationship.

Based on the typhoon data from the KMA, this study investigated the frequency and severity (maximum wind speed) of typhoons by year. The frequency of typhoons was found to increase only slightly from year to year, but due to the low R<sup>2</sup> value, the explanatory power is low and the trend is not significant. However, the severity of typhoons appears to have increased year by year. Although the R<sup>2</sup> value is relatively small, it reflects a weak positive relationship and is sufficient to show a potential trend. This suggests that the frequency of typhoons does not increase every year, but the severity does. Thus, it is possible that the risk from typhoons has increased due to increasing severity.
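For reference, trend lines and R<sup>2</sup> values of this kind can be recomputed as follows; the yearly series below is synthetic (generated around the reported mean of 27.6 m/s), since the KMA data themselves are not reproduced here, and the slope and noise level are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1973, 2020)

# Synthetic maximum-wind-speed series centred on the reported mean (27.6 m/s).
max_wind = 27.6 + 0.24 * (years - years.mean()) + rng.normal(0.0, 8.0, years.size)

slope, intercept = np.polyfit(years, max_wind, 1)   # least-squares line y = a*Year + b
pred = slope * years + intercept
ss_res = ((max_wind - pred) ** 2).sum()
ss_tot = ((max_wind - max_wind.mean()) ** 2).sum()
r_squared = 1.0 - ss_res / ss_tot
print(f"y = {slope:.4f} * Year + {intercept:.2f}, R^2 = {r_squared:.3f}")
```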

However, the KMA data used in this study cover about 50 years, from 1973 to 2019. Although this is the period for which the KMA's current national and regional data are available, it is considered too short a period to conclude that the severity of typhoons is gradually increasing. Therefore, it is necessary to keep an eye on the trend through additional data collection.

## **5. Calculation of Increased Typhoon Risk**

This section quantifies the risk of building loss due to changes in typhoon patterns resulting from climate change. This study adopted the Korean typhoon vulnerability function used by insurers, reinsurers, and vendors to quantify typhoon risk. The assessment of vulnerability to typhoons is a significant part of the typhoon risk assessment model, and the vulnerability curve, or vulnerability function, is what is used for this assessment. The vulnerability function quantifies the vulnerability of the building. The vulnerability function for typhoons explains the correlation between the average loss ratio, wind speed, and inventory information of various buildings, and determines the loss scale. The average damage ratio is the total amount of damage incurred by a building due to a typhoon divided by the total cost of the building; it is therefore used as a measure of a building's vulnerability to typhoons. For example, a high average damage ratio indicates that the damage is large due to a high vulnerability to typhoons. The vulnerability function was used because it quantifies the loss ratio for typhoons through various damage indicators such as the inventory information of buildings and wind speed, thus preventing the distortion of damage figures by wealth and population found in previous studies [25].

The catastrophe (CAT) model has been developed and used by a number of initiatives, global administrations, and private interests as a risk assessment tool for scientifically assessing, responding to, or mitigating natural disaster risk. For example, public models include HAZUS Multi-Hazard in the United States, RiskScape in New Zealand, the New Multi-Hazard and Multi-Risk Assessment Method (MATRIX) in Europe, and the Central America Probabilistic Risk Assessment in South America. Vendor models include Risk Management Solutions, Applied Insurance Research, and Risk Quantification and Engineering, which develop and use models for natural disasters and other risks as business models. Primary and reinsurance companies quantify risks from natural disasters by actively using in-house or vendor models. They use the CAT model to manage their portfolios, capital, business preferences, and holding strategies and for capacity monitoring based on the quantified risks of natural disasters [25,26].

The CAT model generally consists of four parts (i.e., a hazard module, an exposure module, a vulnerability module, and a financial module). Each module has an independent function, and the operation of the modules proceeds sequentially. First, the hazard module generates events and calculates local intensity to physically define the events and describe the severity and frequency of natural disasters. Second, the exposure module embodies inventory and geographic information for a building. Third, the vulnerability module provides the loss ratio based on the vulnerability function, which determines the average loss ratio based on wind speed and building inventory information. Lastly, the financial module applies certain insurance factors, such as deductibles and liability limits, to calculate financial losses [25]. For instance, in the hazard module, the severity and frequency of typhoons are defined through simulation according to the characteristics of past typhoons in a specific area. In the exposure module, wind speed is determined according to the inventory characteristics and geographic characteristics of buildings in that specific area. Depending on the determined wind speed, the vulnerability module computes the loss amount through the vulnerability function. Finally, the loss amount is adjusted for insurance conditions in the financial module.
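The sequential operation of the four modules can be sketched schematically as follows; every function, parameter, and coefficient here is an illustrative stand-in, not any vendor's actual model or API:

```python
# Schematic sketch of the four-module CAT chain; all values are assumptions.
def hazard_module(region: str) -> float:
    """Return a simulated local intensity (10 min sustained wind speed, m/s)."""
    return 38.0

def exposure_module(building: dict) -> dict:
    """Return the inventory information used downstream (occupancy, value)."""
    return {"occupancy": building["occupancy"], "value": building["value"]}

def vulnerability_module(wind_speed: float, occupancy: str) -> float:
    """Mean loss ratio by wind speed and occupancy class (assumed curve)."""
    base = {"industrial": 0.030, "commercial": 0.020, "residential": 0.025}
    return base[occupancy] * max(0.0, (wind_speed - 17.0) / 34.0)

def financial_module(ground_up_loss: float, deductible: float, limit: float) -> float:
    """Apply insurance conditions (deductible and liability limit)."""
    return min(max(ground_up_loss - deductible, 0.0), limit)

building = {"occupancy": "residential", "value": 2.0e5}
wind = hazard_module("S. Korea")
exposure = exposure_module(building)
loss_ratio = vulnerability_module(wind, exposure["occupancy"])
loss = financial_module(loss_ratio * exposure["value"], deductible=1e3, limit=1.5e5)
print(f"loss ratio = {loss_ratio:.4f}, insured loss = {loss:,.0f}")
```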

Figure 4 illustrates the vulnerability functions for each model. This study used the vulnerability function of loss ratio for wind speed and occupancy. Occupancy, a representative relative vulnerability factor, was used to reflect the vulnerability of building inventory. Occupancy is used in risk management and risk assessment models among other building inventory information. Occupancy also refers to a similar accounting policy in insurance that categorizes buildings as industrial, residential, and commercial. This classification of buildings according to occupancy refers to building units with similar physical and financial characteristics. This study also adopted the occupancy classification and divided buildings into industrial, residential, and commercial groups.

**Figure 4.** Vulnerability functions for each model.

## *Results of Analysis*

The analysis results are shown in Tables 1–3. To clearly show the increase rate, the five decades from the 1970s, when typhoon data began to be recorded, until the recent 2010s were compared. Additionally, as seen in Equation (1), each decade showed a numerical increase or decrease compared with the 1970s. The period in Equation (1) refers to the 1980s, 1990s, 2000s, or 2010s.

$$\text{Rate of increase per decade (\%)} = \frac{\text{period} - \text{1970s}}{\text{1970s}} \times 100 \tag{1}$$
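As a quick illustration of Equation (1), the following sketch uses placeholder decade averages (chosen so that the resulting rates echo the magnitudes reported below, not taken from the study's data):

```python
# Sketch of Equation (1): percentage change of each decade's mean loss
# relative to the 1970s baseline. The decade values here are placeholders.
decade_means = {"1970s": 0.12, "1980s": 0.25, "1990s": 0.40,
                "2000s": 1.02, "2010s": 0.64}

baseline = decade_means["1970s"]
for decade, value in decade_means.items():
    rate = (value - baseline) / baseline * 100.0
    print(f"{decade}: {rate:+.0f}% vs. 1970s")
```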

**Table 1.** Result (average) summary of each model for industrial buildings.


**Table 2.** Result (average) summary of each model for commercial buildings.


**Table 3.** Result (average) summary of each model for residential buildings.


The differences in the average loss ratios between the models ranged from 0.38% to 6.72%. The average loss ratio of each model is as follows: for the Vendor model, 3.05% for industrial, 1.96% for commercial, and 2.45% for residential buildings; for the Reinsurer model, 0.40% for industrial, 3.80% for commercial, and 0.28% for residential buildings; and for the Insurer model, 0.86% for industrial, 2.95% for commercial, and 0.43% for residential buildings.

The vulnerability function is generally developed from the available or actual losses of the model developer. However, losses may differ owing to the differing capital, business preferences, and portfolios of each developer; these factors produce differences even when the same Korean vulnerability data are used. To compensate, the insurance industry typically compares and validates the results of two or more models. This study therefore improved the reliability of the results by comparing models from three different fields, averaging the results of the three models and comparing the increases and decreases by decade. Table 1 summarizes the results (averages) of each model for industrial buildings. The increase rate has grown gradually over the decades compared with the 1970s, averaging 307% and reaching 434% in the recent 2010s. In the 2000s, it ballooned to 750%. This was due to Typhoons Rusa (2002) and Maemi (2003), which caused the most damage in Korean history, with the highest recorded wind speeds. The coefficient of variation (CV) is used to compare data with different units of measurement; it is the standard deviation divided by the arithmetic mean, and a larger CV indicates a larger relative difference. The CV also steadily increased every decade, indicating that the spread in typhoon intensity (maximum wind speed) widened. Compared with the past, typhoons of more varied intensity now occur, so current typhoons can exhibit more diverse damage categories than past typhoons. These values (i.e., average, increase rate, and CV) show that the losses incurred by buildings are greater than in the 1970s and that typhoons have become more severe than in the past. Table 2 summarizes the results (averages) of each model for commercial buildings. The increase rate has grown progressively over the decades since the 1970s, reaching 455% on average and 634% in the recent 2010s; during the 2000s, it intensified to 1133% due to Typhoons Rusa and Maemi. The CV rose steadily in every period as the maximum wind speed escalated each decade, showing that current typhoons produce more varied intensities than past typhoons. Table 3 summarizes the results (averages) of each model for residential buildings. The increase rate has grown over the decades since the 1970s, reaching 337% on average and 476% in the recent 2010s; during the 2000s, it increased dramatically to 819% owing to Typhoons Rusa and Maemi. The CV again rose each decade as the maximum wind speed rose. These results demonstrate that current typhoons are more severe than past typhoons.
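The CV computation described above is straightforward; a minimal sketch with hypothetical maximum wind speeds:

```python
import statistics

# Coefficient of variation as used above: standard deviation divided by the
# arithmetic mean. The sample wind speeds are illustrative only.
wind_speeds = [32.0, 41.5, 28.0, 55.3, 47.1]  # hypothetical maxima (m/s)
cv = statistics.stdev(wind_speeds) / statistics.mean(wind_speeds)
print(f"CV = {cv:.2f}")  # larger CV -> larger relative spread in intensity
```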

## **6. Discussion**

This study quantitatively examined the changes in tropical cyclones caused by climate change. It investigated the typhoons that affected Korea in the past and analyzed their intensity and frequency, as well as changes in typhoon patterns. The analysis results showed that the frequency of typhoons did not increase, but their severity did. This indicates that the risk from typhoons is growing due to their increasing severity.

To quantify the increased typhoon risk, this study used the vulnerability function of the CAT model, which is not affected by wealth and population. The results showed that the risk from typhoons increased gradually over the 1970s, 1980s, 1990s, 2000s, and 2010s. On average, the risk to industrial buildings increased by 307%, the risk to commercial buildings by 455%, and the risk to residential buildings by 337%. The increase rate for commercial buildings was the largest, which is attributed to the fact that the commercial class comprises more diverse buildings than the other building types. In addition, the 2000s displayed the largest increase rate due to the influence of Typhoons Rusa and Maemi. These two typhoons were the largest ever to strike Korea, but they were included in the analysis because they are generally considered typhoons with 15–30-year return periods. The analysis results show that the risk from typhoons has increased significantly over time.

For this reason, new strategies are required to respond to changing circumstances and increasing risks. The insurance industry is acutely sensitive to such storms; for example, 11 insurers in the US were bankrupted by Hurricane Andrew (1992). Therefore, a review of pricing, policy conditions, and reinsurance for the increased loss risk is essential. In terms of pricing, there is a need to raise current premiums and to revise the current probable maximum loss and limit of liability. Moreover, changes in acquisition, retention, and accumulation management strategies are inevitable under the changed pricing. Policy conditions require a review of the scope of coverage of existing insurance policies. In facultative and treaty reinsurance arrangements, new strategies for excess-of-loss cover and layering of the enlarged risk are needed. Furthermore, it is crucial to calculate premiums through accurate quantification of the weighted risks, which can be performed with appropriate CAT models. On the other hand, increased risk may also be a new opportunity, because it requires active risk transfer from the government or private sector, leading to a boom in insurance coverage. The introduction of CAT bonds is also desirable to hedge losses from catastrophic events in countries such as Korea. CAT bonds are used in developed countries such as the US and Germany to distribute the reinsurance function through the issuance of bonds when the risk for insurance companies exceeds their underwriting capacity.

Governments also need to strengthen architectural design standards and codes to create a sustainable built environment that can withstand extreme weather disasters. Additionally, a separate management guide is required for older buildings constructed under past building codes. Maintaining infrastructure, lifelines, and transportation systems during typhoons is critical to reducing human and property damage and requires a more advanced management system. In storm and flood hazard areas, subscription to storm and flood insurance should be made mandatory so that the risk of loss is actively transferred. This should enable local communities and residents to respond appropriately to changing circumstances.

## **7. Conclusions**

Many studies of historical events and damage continue to debate climate change and its effect on tropical cyclones. Therefore, this study analyzed the intensity and frequency of the typhoons affecting Korea to provide scientific, quantitative evidence of climate-driven changes in typhoons and quantified the resulting risk of building loss. The results show that the severity of typhoons increases year by year, and thus the risk from typhoons also increases year by year. This suggests that climate change is affecting typhoons. Hence, government and industry responses to climate change and risk reduction must be considered.

On the other hand, since this study considered only wind speed and occupancy, the results may vary when additional inventory information is used, and further research using richer building inventory information is needed. Because different results can be obtained for different geographic regions of the Korean Peninsula, regional analyses are also needed; in particular, the results will differ in the southern region, which lies on the main typhoon route. Moreover, this study may not represent other countries, as the research area is limited to Korea. Countries with more coastal development than Korea may be more vulnerable to typhoons, while countries with greater building capacity or maintenance management systems may adapt better to storms. Finally, adaptation policies for climate change and risk reduction were not modeled; damage can be reduced through active adaptation policies and programs by individuals or institutions, and comprehensive research that includes these is needed.

**Author Contributions:** Conceptualization, J.-M.K.; Data curation, J.-M.K.; Funding acquisition, K.S.; Investigation, S.S. and S.L.; Methodology, J.-M.K.; Software, S.S. and S.L.; Validation, J.-M.K. and K.S.; Writing—original draft, J.-M.K.; Writing—review and editing, J.-M.K., K.S., S.S., and S.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the 2020 Research Fund of University of Ulsan.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Development of Design Considerations as a Sustainability Approach for Military Protective Structures: A Case Study of Artillery Fighting Position in South Korea**

**Kukjoo Kim 1,2 and Youngjun Park 1,\***


Received: 29 June 2020; Accepted: 5 August 2020; Published: 11 August 2020

**Abstract:** Republic of Korea (ROK) military installations are scattered across South Korea, but there is a higher concentration of fortifications in the demilitarized zone (DMZ) and along the eastern and western coastlines. These facilities range from relatively small structures, such as individual and artillery fighting positions, to large buildings, such as ammunition depots and command posts. These military installations have concrete members of significant thickness to provide a high degree of protection against bombs and projectiles. The Korean military will carry out the integration and dismantling of these protection facilities over the next ten years through the Army transformation plan. Such large-scale construction projects have an impact on the environment in terms of the carbon footprint, because building construction and operations account for 36% of the world's energy use and 40% of energy-related carbon dioxide (CO2) emissions. It is very important to reduce the concrete materials and reinforcement steel used during protective structure construction near the DMZ, which is now recognized as one of the most well-preserved areas in the world. In this study, new sustainable design considerations that allow the elasto-plastic or plastic design of concrete elements were evaluated using a case study of an artillery fighting position. The new sustainable design considerations were developed on the basis of mission, enemy, terrain and weather, troops and support available, time and civil considerations (METT + TC) within the context of the current battle situation, as well as protection against near misses. This study found that the new sustainable design considerations provide a reasonable degree of protection that permits good construction practices and maximum structural stability with a minimum amount of materials. It was also found that if the new design procedure is used to replace 1000 artillery positions through the Army transformation plan, CO2 emissions can be reduced by 476,582.4 tons and costs by USD 23,829,120.

**Keywords:** degree of protection; impact damage; blast wave; sustainable design consideration; elasto-plastic design; CO2 emission

## **1. Introduction**

## *1.1. Background*

The design of protective structures is an important factor not only for military construction but also for civilian sectors. As the threat of the enemy's weapons of mass destruction increases, protective structure design becomes a common problem for military, civil, and industrial facilities. Currently, little information (including experimental data regarding bombs, projectiles, atomic blasts, etc.) and few design procedures are available to serve as design guidelines for such protective structures. In conventional works, the maximum degree of protection has been defined on the basis of a 00-pound GP bomb detonating at a distance of 00 m (figures withheld due to military secrecy). These design criteria produce a structure that is able to sustain a given loading condition within the limits of elastic strain, which requires a significant amount of concrete. Reducing the amount of concrete used in a construction project is very important in terms of sustainability awareness and green planning [1]. The International Energy Agency and the United Nations (UN) Environment Programme stated that building construction and operations accounted for 36% of the world's energy use and 40% of energy-related carbon dioxide (CO2) emissions in 2017 [2].

More specifically, Pacheco-Torgal et al. described that concrete and reinforcement steel account for about 65% of building greenhouse gas (GHG) emissions, 40% of which is CO2 emissions from concrete [3]. The mean embodied carbon dioxide (ECO2) for buildings is 340 kg-CO2/m2, of which the structure accounts for about 60% [4]. This means that reducing the ECO2 in the structural frame directly reduces GHG emissions. Additionally, in terms of the carbon footprint, it is very important to reduce the concrete materials and reinforcement steel used during construction projects [5–9].

Recently, the Korean military has formulated plans to carry out the integration and dismantling of these protection facilities through the Army transformation plan over the next ten years. As military protective structures are concentrated at the border, these enormous concrete structures adversely affect the environment, particularly in the demilitarized zone (DMZ), which is now recognized as one of the most well-preserved areas in the world.

To identify the appropriate degree of protection, a design process must consider the weapon effects and dynamic factors pertaining to mission, enemy, terrain and weather, troops and support available, time and civil considerations (METT + TC) [10] within the context of the current battle situation. It must be considered that structural members can resist dynamic loads under relatively large plastic deformation; such local overstresses in a member, or even some failures, should not seriously impair the overall structure. Some protective structures, such as artillery fighting positions, are required to provide their protective function only once. If the design process for a protective structure ignores the METT + TC factors, it produces structural members of massive thickness. As a large amount of concrete material is consumed, the construction of these structures has a direct impact on the natural environment. Therefore, it is important to reduce the use of concrete and non-renewable materials during construction works.

In this study, new protective structure design considerations were developed that improve the exploitable resistance of structural members by allowing large plastic deformation. Using a finite element (FE) case study, the new design considerations were evaluated in terms of the amount of concrete saved while still providing an appropriate degree of protection.

#### *1.2. Objectives and Scope*

The primary objective of this study was to develop new sustainable design considerations for protective structures, using the Delphi technique on the basis of METT + TC factors within the context of the current battle situation. Then, after applying the proposed design consideration to the case project, the CO2 emission and cost reduction corresponding to the concrete savings were analyzed. To do this, a three-dimensional FE analysis was conducted to assess the potential performance of the artillery fighting position as a case study in South Korea.

#### **2. Protection against Conventional Weapons**

For the purpose of protection against weapons, the protective structures may be classified into two general groups: those which provide protection against (1) the impact of a weapon's penetration and (2) the blast of a weapon's explosion. Penetration is caused by weapons such as projectiles fired from guns, conventional bombs with a charge-to-weight ratio smaller than 20%, rockets, and guided missiles. Explosion blast is caused by weapons such as high explosive or conventional bombs with a charge-to-weight ratio higher than 20%. For the purpose of structural analysis, a weapon's impact causes severe local damage, while the weapon's blast causes overall damage of relatively less severity [11,12].

When a bomb or projectile strikes a concrete member, an irregularly shaped crater forms, with considerable cracking on the opposite side of the slab. The severity of such cracking decreases as the concrete thickness increases. Because of the inherently low tensile strength of concrete, both faces of the slab tend to rupture, with the reflected shock wave acting on the impact face and the propagated wave on the opposite face. Design information about the weapon system and the protection conditions to be provided is necessary, and in most cases the desired level of protection differs from structure to structure [13–15]. For example, if a building is to be located near the border between nations, within the range of army artillery, the required protection of the exposed walls would be governed by the loading due to an armor-piercing (A.P.) type projectile. In contrast, the design of the roof of the building would consider the loading of an A.P. type bomb released from a carrier plane.

In many cases, owing to the functional importance of a protective building, its size and the thickness of its structural members are increased so as to provide a certain degree of lateral and overhead protection against the blast and fragments of a bomb. In the South Korean Army, a reasonable degree of protection has been developed on the basis of a 00-pound GP bomb detonating at a distance of 00 m. The member thickness resulting from this consideration only permits the induced stresses of structural elements to remain in the elastic range. However, the blast loading on a protective building caused by the detonation of a high explosive bomb depends on the peak pressure and the impulse of the incident and dynamic pressures. For the analysis of structures under dynamic loading, such as blast loading, the inertial force and kinetic energy must be analyzed, as the applied load changes rapidly with time, as shown in Figure 1 [16].

**Figure 1.** Idealized pressure-time curve of a blast wave.
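An idealized pressure–time history of the kind shown in Figure 1 is commonly represented by the Friedlander waveform; the sketch below assumes illustrative values for the peak overpressure `p_s`, positive-phase duration `t_d`, and decay coefficient `b`, and is not the loading used in this study's analysis:

```python
import math

def friedlander(t: float, p_s: float, t_d: float, b: float) -> float:
    """Idealized blast overpressure at time t (Friedlander waveform).

    p_s: peak incident overpressure (kPa), t_d: positive-phase duration (s),
    b: dimensionless decay coefficient. All values below are illustrative.
    """
    if t < 0 or t > t_d:
        return 0.0
    return p_s * (1.0 - t / t_d) * math.exp(-b * t / t_d)

# Tabulate the rapid decay over the positive phase.
for i in range(6):
    t = i * 0.002  # s
    p = friedlander(t, p_s=500.0, t_d=0.010, b=1.5)
    print(f"t = {t:.3f} s: {p:7.1f} kPa")
```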

For design purposes, the effects of the inertial force in the equation of dynamic equilibrium and of the kinetic energy in the equation of energy conservation, both related to the mass of the structure, must be considered. The response of a concrete element in a protective structure can be characterized as ductile or brittle structural behavior. In the ductile mode of response, large inelastic deflections occur in the structural element without complete collapse, while partial failure or total collapse of the element occurs in the brittle mode [17]. If ductile behavior is selected for an element of a protective structure in the design consideration, concrete materials can be saved while maintaining the desired level of protection. The flexural action of a reinforced concrete member is demonstrated by the resistance–deflection curve shown in Figure 2 [16].

**Figure 2.** Resistance-deflection curve for flexural response of concrete elements.

The magnitude of the stresses produced in a protective structure responding in the plastic range cannot be directly related to the strain. The average stress over portions of the plastic range can be determined by relating it to the deflection of the element, defined in terms of the angular rotation at the supports. Therefore, elasto-plastic or plastic design considerations for concrete protective structures must be incorporated [18] into the current Army protective structure design standard to exploit the maximum protective capability of concrete elements and to enable sustainable and economical design approaches.

## **3. Development of New Design Considerations as a Sustainable Approach**

The protection standard of the Republic of Korea (ROK) armed forces comprises four stages, each of which is determined based on comprehensive consideration of the enemy threat and the required protective capability from the standpoint of military operations, and of the intended purposes from the standpoint of the facilities. Once a degree of protection is set, the corresponding protection level of a structure is determined based on blast loads. In general, the protection levels associated with the protection degrees represent the thresholds of the displacement ductility factor and rotation angle proposed by the Unified Facilities Criteria (UFC) 3-340-02 [16]. Table 1 presents the permissible limits of the rotation angle for brittle materials such as concrete at each protection level.


**Table 1.** Design criteria for protective facilities in UFC 3-340-02.

The protection levels in Table 1 are conceptually distinct from the degrees of protection: the protection levels are distinguished by design concept. In the case of the ROK armed forces, when the protection degree of a protective facility is determined, an elastic design corresponding to protection level A is adopted.
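Since the protection levels are expressed through permissible support rotations, the check reduces to simple geometry; a minimal sketch, assuming a member rotating about its supports with maximum midspan deflection, and using an example 2-degree limit rather than the actual values of Table 1:

```python
import math

# Support rotation for a member of clear span L with maximum midspan
# deflection x_m: theta = arctan(x_m / (L / 2)). The 2-degree limit below is
# an assumed example threshold, not the tabulated UFC value.
def support_rotation_deg(x_m_mm: float, span_mm: float) -> float:
    return math.degrees(math.atan(x_m_mm / (span_mm / 2.0)))

theta = support_rotation_deg(x_m_mm=40.0, span_mm=4500.0)
print(f"rotation = {theta:.2f} deg -> {'OK' if theta <= 2.0 else 'exceeds limit'}")
```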

For the design of a protective structure, it is necessary not only to analyze severe dynamic loads comprising the combined impacts of blast waves and fragments, but also to examine the varied and complex battlefield conditions under which projectiles might directly blast and penetrate the structure. However, the protection degrees currently in use in the ROK armed forces are still grounded in the dated concept of protection focused on the thickness of heavy-weight structures.

This study aims to propose guidelines for determining the bullet/explosion-proof degree of protection, applicable to threats ranging from high-tech precision-guided weapons to massed artillery for pinpoint strikes, and to examine guidelines for future revisions of the protection provisions in the standard for defense and military facilities. To achieve these goals, the Delphi technique was used to accurately capture the objective opinions of experts from the government, military, and private sectors. Based on the opinions collected, the guidelines for determining degrees of protection were derived by horizontally and vertically synthesizing keywords extracted from the Korea Army innovation assessments and the innovation school of the Korea Army Research Center for Future and Innovation (KARCFI). To achieve a fair and even distribution, a group of 21 experts (7 civilian experts, 7 government officials, and 7 servicemen) was organized, all of whom were experienced in defense and military facilities.

After organizing the expert group, we conducted several rounds of surveys (the first round with open-ended questionnaires and the second to fourth rounds with closed-ended questionnaires) and, based on them, derived considerations for setting protection degrees. In particular, the Shapiro–Wilk normality test was performed to quantify the agreement among the panelists during the second to fourth rounds. A factor analysis was then performed to identify the common features of the considerations within each factor [19–21], and the result was summarized into the tactical considerations of METT + TC. The innovation school and the assessments of the future battlefield environment led by KARCFI then extracted the essential considerations for setting protection degrees as keywords and combined them horizontally and vertically. Consequently, the considerations identified for the protection standard of military facilities comprise the following six factors: wartime/peacetime mission; omnidirectional threat; stability and resilience of troops; geology and weather; threat detection, alert, reaction, and recovery time; and military–civilian combined factors. The resulting design process checklist, shown in Table 2, avoids excessive design and ensures the desired performance while considering future diversified battlefield environments and weapon systems. The highest requirement for each item in Table 2 is selected as the final degree of protection and protection level. Table 2 shows an example of determining the degree of protection and protection level for artillery positions.





**Table 2.** Design process checklist for determining the degree of protection and protection level.

• indicates applicable, - indicates not applicable.

#### **4. A Case Study for Artillery Fighting Position Using Finite Element Analysis**

#### *4.1. Setting the Protection Degree and Level*

When designing a protective structure, it is necessary to consider the dynamic loads of blast waves and the impacts of fragments caused by the explosion of a high-energy bomb or missile. Regarding such dynamic loads, the characteristics of a weapon as a means of strike need to be closely examined. A case study of artillery positions in the front-line area was performed by applying the design factors of Table 2. Specifically, the standard type of the existing artillery positions and a new artillery position designed with the new design factors were comparatively evaluated through an FE analysis.

The major threat to the new artillery position is not a direct strike by enemy artillery, but rather the blast waves and fragments caused by close explosions. Considering the METT + TC of the artillery troops, the protection level was set to Level C, as fire-and-displace maneuvers are expected under the enemy's counter-artillery fire. Through the analysis, the ultimately desired protection degrees and levels were derived, as shown in Table 2.

## *4.2. Evaluation of Protective Performance through Numerical Analysis*

This study performed an FE analysis to identify the dynamic behavior characteristics of an artillery position under blast waves. It was assumed that 115 kg of TNT, the maximum explosive charge of the enemy's weapons, was detonated 7.6 m away from the artillery fighting position. This distance reflects the accuracy of enemy artillery weapons when unguided munitions are used in artillery battles.

A numerical analysis model was developed using ANSYS AUTODYN®, a program originally developed by Century Dynamics (now part of ANSYS). It is a very useful tool for solving fluid–structure interaction problems through the coupling of Lagrangian and Eulerian solvers in solid mechanics. For the non-linear dynamic analysis of the structure, a reinforced concrete element was constructed using an explicit FE method. A standard artillery fighting position with a wall length and height of 4500 and 700 mm, respectively, was selected as the target structure of the analysis. As for the wall thickness, five cases of 300, 350, 400, 450, and 500 mm were considered.

As for the material properties of the concrete wall, presented in Table 3, the ordinary concrete had a compressive strength of 24 MPa and the reinforcing bars a yield strength of 400 MPa. The minimum reinforcement ratio was 0.00306.


**Table 3.** Material properties used in the finite element (FE) models.

As illustrated in Figure 3, the detonation point was 7.6 m away from the artillery fighting position. The explosive was TNT with a charge of 36.675 kg/m per unit length in the z-direction.

**Figure 3.** FE model developed.

## *4.3. Numerical Analysis Result of Protective Performance*

Figure 4 shows the impacts of the blast waves on the structure over time. As the Mach front was lower than the height of the structure, the structure was subjected to non-uniform pressures.

**Figure 4.** Impact of blast waves on the structure over time.

Figure 5 shows the pressures of blast waves and the displacement of the structure wall over time. Protection degrees and levels desired for the artillery fighting position can be expressed by the maximum displacement and rotation angle of the wall. Table 4 presents protection levels for each case of wall thickness.

As shown in Table 4, the dynamic analysis of the reinforced wall revealed that a sufficient level of protective capacity could be secured even if the wall thickness was reduced to 300 mm. In other words, if the current protective facility design, which reflects only the elastic displacement of the reinforced structure, is replaced by an elasto-plastic design considering the protection levels for each METT + TC factor, as presented in Table 2, protective structures would be more economical and sustainable.

**Table 4.** Maximum displacement and rotation angle according to wall thickness.


**Figure 5.** Blast pressure over time and displacement according to wall height.

## *4.4. CO2 Emission Reduction Effects*

When using the new design considerations proposed in this paper, the effect of concrete savings should be confirmed. Table 5 shows the calculated CO2 emissions for the concrete saved by the new design procedure as a sustainable approach when 1000 artillery positions are replaced through the Army transformation plan. With unit CO2 emissions of 3.152 ton-CO2/ton for the ready-mixed concrete used [22], the CO2 emissions from the artillery position project planned by the Army can be reduced by approximately 476,582.4 tons, which is equivalent to 40% of the project. When the Korean carbon transaction price of USD 50/ton-CO2 [23] is applied, total cost savings of USD 23,829,120 are obtained. Therefore, if the new design considerations proposed in this study are applied to all military protective structure projects, even greater cost savings and reductions in CO2 emissions can be expected.
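The quoted cost figure follows directly from the stated quantities:

```python
# Reproducing the cost figure quoted above: 476,582.4 tons of avoided CO2
# priced at the stated Korean carbon transaction price of USD 50/ton-CO2.
co2_saved_tons = 476_582.4
price_usd_per_ton = 50.0
print(f"USD {co2_saved_tons * price_usd_per_ton:,.0f}")  # USD 23,829,120
```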


**Table 5.** Calculation of CO2 emission reduction effect.

### **5. Conclusions**

Reducing the amount of concrete used in construction projects is very important in terms of sustainability awareness and green planning to reduce carbon emissions and climate change risk globally. Concrete materials and reinforcement steel account for about 65% of building greenhouse gas (GHG) emissions, 40% of which is CO2 emissions from concrete. Therefore, green building planning is very important during construction projects. However, the Korean military's design concept does not take full advantage of the features of reinforced concrete structures, resulting in excessive design. The protection scheme of the ROK armed forces consists of four stages, in which protection degrees are set based on relative protection capabilities against particular weapon systems. Furthermore, the established protection degrees require protection level A, corresponding to the concept of elastic design. Accordingly, no effective protection exploiting the behavior characteristics of structures under given weapon systems is provided. As a result, the degrees of protection currently in use in the ROK armed forces are still grounded in the dated concept of protection focused on the thickness of heavy-weight structures. This study derived the protective design considerations necessary for future protective facilities to avoid excessive design and to secure the desired level of protection performance. In addition, this study conducted a Delphi process by organizing a group of experts from the government, military, and private sectors. The result of the Delphi method was combined with the design considerations for protective facilities derived by the innovation school of KARCFI and innovation consulting. Thus, sustainable design considerations for protective facilities were obtained.

Using the above considerations, an FE analysis was performed to evaluate the protection performance of the standard artillery position widespread in the frontline area. The protection against close explosions was determined based on the METT + TC of the artillery position at each protection degree. The dynamic analysis of the reinforced structure showed that the elasto-plastic design could produce a more sustainable structure.

So far, protective structures have been regarded as heavy-weight structures with thick walls. However, if the new design considerations developed in this study are applied, more economical and sustainable protective facilities can be constructed. In particular, the case study revealed that the wall thickness of thousands of artillery positions in the frontline area and DMZ can be reduced. For instance, if the new design procedure is used to replace 1000 artillery positions through the Army transformation plan, CO2 emissions can be reduced by approximately 476,582.4 tons, which is equivalent to a cost saving of USD 23,829,120. Therefore, if the new design considerations proposed in this study are applied to all military protective structure projects, even greater cost savings and CO2 emission reductions are expected. This confirms that it is possible to provide sustainable protective facilities while satisfying the operational requirements for such artillery positions.

**Author Contributions:** Conceptualization, Y.P. and K.K.; Methodology, K.K.; Software, Y.P.; Validation, Y.P. and K.K.; Formal Analysis, K.K.; Investigation, K.K.; Resources, Y.P.; Data Curation, K.K.; Writing-Original Draft Preparation, K.K.; Writing-Review & Editing, Y.P.; Visualization, K.K.; Supervision, Y.P.; Project Administration, Y.P.; Funding Acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by a grant (18SCIP-B146646-01) from the Korea Agency for Infrastructure Technology Advancement.

**Acknowledgments:** This work was supported by research fund of the Korea Agency for Infrastructure Technology Advancement. The ROKA Nuclear WMD Protection Research Center at Korea Military Academy is gratefully acknowledged for providing the support that made this study possible.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Multidimensional Construction Planning and Agile Organized Project Execution—The 5D-PROMPT Method**

**David Leicht 1,\*, Daniel Castro-Fresno 1, Joaquín Díaz <sup>2</sup> and Christian Baier <sup>2</sup>**


Received: 9 July 2020; Accepted: 3 August 2020; Published: 6 August 2020

**Abstract:** Although tremendous technological and strategic advances have been developed and implemented in the construction sector in recent years, there is substantial room for improvement in the areas of productivity growth, project performance, and schedule reliability. Thus, the present paper seeks to discover why the currently applied scheduling tools and the latest agile-based project organization approaches have not yet achieved their full potential. A missing interlinkage between the project's design, cost, and time aspects within the project design phase and its sparse utilization throughout project execution were indicated as the driving contributors responsible for the slow progress in development. To fundamentally change this situation, an extensive and coherent project organization solution is proposed. The key process of this solution utilizes a 5D Building Information Model comprising tight concatenations between the individual model objects and the corresponding construction cost and time effort values. The key dates of a waterfall-based construction process simulation, set during the project planning phase, provide particular information to create a structure for agile organized project execution. The implementation of information feedback loops allows target/actual comparisons and contributes to continual improvements in future planning. A comparative case study was conducted with auspicious results on improvements in the overall project performance, and schedule and cost reliability.

**Keywords:** 5D building information modeling; agile project organization; schedule/cost reliability

## **1. Introduction**

Large-scale construction projects are multi-faceted systems of complex and dynamic processes, which are constantly subjected to a multitude of internal and external influencing factors [1]. Tight time and budget constraints and increasing technical demands create challenging conditions to keep projects within their envisaged timeframes [2]. Insufficient limitations on design changes in later project stages, due to customer requests, increase the risk of postponements and growing time issues. Furthermore, the applied schedules often do not meet project-specific process requirements, frequently run out of time, and are exceeded or totally disregarded. In this way, projects are considerably delayed, costs run out of control, and the failure of the project becomes foreseeable [2–4].

In contrast to other industrial sectors, the construction industry has struggled to achieve high productivity rates over the past years. The contributing factors are manifold and have inimically affected not just national market conditions but the global construction economy as a whole [5–7]. This trend is evidenced by the productivity ratings (gross value added at constant prices) that were continuously recorded between 1995 and 2019 by the statistics department of the Organization for Economic Cooperation and Development (OECD) (comparable EU/US data are provided by the OECD only between 1995 and 2017). During this time, the average annual productivity growth of the European construction sector was 0.1%, while the average U.S. value was −0.2%. Compared with the total economy, these values indicate an average deviation of 1.6% in the EU and 2.3% in the US [8].

Although the annual productivity values of the U.S. construction sector increased between 2012 and 2019, the factors responsible for the error-triggering liabilities in the construction sector remain. According to the results of many investigations, the tenuous development of the construction industry can be traced back to inefficient working methods. This conclusion is evidenced in Figures 1 and 2, where the efficiency of an economic sector is expressed by the ratio between its aggregated input and output values (based on equivalent and comparable factors (e.g., growth per gross domestic product (GDP), gross value added (GVA), total hours worked, unit labor costs) concerning deviating factors, different activity segments, deflators, and exchange rates) [9].

**Figure 1.** Annual productivity ratings of the EU; total economy vs. construction economy; Figure based on [10].

**Figure 2.** Annual productivity ratings of the US; total economy vs. construction economy; Figure based on [10].

### **2. Literature Review**

An investigation by Kuenzel et al. (2016) indicated that close to 90% of the analyzed construction projects suffered from coordination problems and unsuccessful project management and exceeded their project deadlines [11]. In the same year, Oesterreich et al. (2016) revealed organizational issues in construction projects to be a fundamental cause of project failure [2]. Oppong et al. (2017) identified insufficient stakeholder commitment to the project as an important reason for project failures [1]. Further investigative approaches have shown that planning issues, complications in project organization, and stakeholder disagreements cause projects to exceed their schedules and budgets. Increasing project complexity, constantly changing customer requests, and a wide variety of regulations result in even greater planning and execution efforts [3,12–14]. Moreover, the sophisticated technical requirements and high number of project participants greatly increase the effort required for project management and control. Kim et al. (2018) identified a further issue concerning project workflow interruptions caused by the poor integration of the supply chain into the project execution process. Many planners, contractors, and small and medium-sized enterprises (SMEs) source their information, goods, and services via highly fragmented, unstructured supply chains. Moreover, because just-in-time distribution options are mostly infeasible, the delivery of goods is rarely in line with the project's progression. Thus, the flow of a project is continually disturbed, which leads to significant project time and budget issues that negatively impact the project's outcome and customer satisfaction [3,4,11].

Sambasivan stated in 2007 that the issue of delays and schedule overruns in construction projects can be understood as a global phenomenon, with conclusive evidence in many studies [15]. A paper by Olawale and Sun (2015) evaluated several international investigations concerning exceeded costs and mismanaged time in construction projects; according to this paper, Hoffman et al. determined in 2007 that 72% of 332 public US facility projects were delivered late, and 47% exceeded the project timeline by more than 4 months [16,17]. The German Federal Ministry of Construction analyzed 300 building projects (>EUR 10 m) between 2000 and 2015 and found exceeded costs and mismanaged timelines; only 65% achieved their scheduled targets [18]. According to an investigation by Assaf and Al-Hejji (2006), 59% of 76 evaluated projects in Saudi Arabia were considered delayed [14].

However, the examples are not only negative. Salem et al. presented a construction project case study in 2006 in which the application of specific agile organization and lean construction approaches (applied lean construction elements: Last Planner System; Increased Visualization; Huddle Meetings; First Run Studies; 5S; Fail Safe for Quality) brought the project's progression up to three weeks ahead of schedule [19]. Thomas et al. showed as early as 2002 that a significant reduction in project duration of about 30% is achievable through sustainable project management improvements [20]. Hanna et al. (2010) and Hwang et al. (2011) found advantages in thorough pre-planning, leading to improvements in the quality of work execution, increased productivity values, and a reduction in project duration [21,22].

Nevertheless, the main causes for project delays remain under investigation. Doloi (2012), Braimah (2014), and Larsen (2016), in addition to many others, investigated the significant impediments that directly impacted the project's schedule [13,23,24]. The results of these studies indicated weak design elements; poor project planning, site management, and project control; insufficient contractor experience; contract payment problems; equipment availability; weather/environmental conditions; and material supply issues as the primary causes for project delays [13,23,24]. A study by Gebrehiwet et al. (2017) revealed 52 of the most likely reasons for project delays; ineffective project scheduling ranked number two, behind deficient project planning [25].

This investigation shows the international situation of the construction industry and provides information about the general and fundamental problems in construction project planning and execution [7]. Weaknesses in project design and inadequate schedule and cost management appear to be of particular importance among the root causes of errors. The inevitable consequences of these differences between planned and actual values are unforeseeable and unexpected additional cost and time requirements and thus an increasing risk to project success. In order to further investigate and narrow down the described causes, current project management methods and the most recent solution approaches are examined in the following.

## **3. Current Common Project Management and Scheduling Approaches**

To manage a project's timeframe, certain project management and scheduling approaches—mostly IT-aided and cross-industry applicable—have been implemented in the construction sector during past decades [15]. The core objective of project scheduling is to assign start and end dates to individual or cumulative activities and to indicate when these activities must be finished to be delivered on time [26]. A valuable method to gather and structure the required project execution activities is the Work Breakdown Structure (WBS) method. This method has no time references but provides a general framework for schedule development and enables project management, monitoring, and control tasks [27]. A common and widespread scheduling tool is the *bar chart* or *bar diagram*—also known as a *Gantt chart* or *Gantt diagram*—which graphically represents the connection between planned and actual work performance and whether activities are on schedule, behind schedule, or ahead of schedule [26]. Further common methods include the Critical Path Method (CPM), Line of Balance (LoB), Linear Scheduling Method (LSM), and network diagrams [28–31].
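Of these methods, the CPM admits a compact illustration; the following sketch performs the forward and backward passes on a small invented activity network and flags zero-float (critical) activities:

```python
# Minimal Critical Path Method sketch: forward pass for earliest start/finish,
# backward pass for latest start/finish, float = LS - ES. Activities and
# durations (days) are invented for illustration.
activities = {  # name: (duration, predecessors)
    "A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (1, ["D"]),
}

es, ef = {}, {}
for name, (dur, preds) in activities.items():      # forward pass (insertion
    es[name] = max((ef[p] for p in preds), default=0)  # order is topological)
    ef[name] = es[name] + dur

project_end = max(ef.values())
ls, lf = {}, {}
for name in reversed(list(activities)):            # backward pass
    succs = [s for s, (_, ps) in activities.items() if name in ps]
    lf[name] = min((ls[s] for s in succs), default=project_end)
    ls[name] = lf[name] - activities[name][0]

for name in activities:
    total_float = ls[name] - es[name]
    mark = " <- critical" if total_float == 0 else ""
    print(f"{name}: ES={es[name]} EF={ef[name]} float={total_float}{mark}")
```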

First and foremost, these tools are based on the *waterfall* principle, whose main characteristic is a strictly hierarchical (priority-based) organization according to the ratio between a chronological task order and appropriate task durations. Each task obtains a clearly defined start- and end-date; dependencies on other tasks can then be determined [31]. Waterfall systems operate according to the push principle, which releases tasks, materials, or information into preassigned procedures or scheduling systems [32]. This method is ideally suited for projects with consistent or repetitive proceedings and recognizable long-term interventions due to their regular organization [33]. Due to the predefined structure, additionally required and/or previously unconsidered activities that are added later can rapidly cause postponements and disturb the flow of a project. Tory et al. stated in 2013 that schedules should be dynamic documents, which can frequently be changed and adjusted in accordance with the project's progression and its various requirements [34]. McKay and Wiers (2005) indicated that the amount of dynamic and unpredictable activities and the capacity for compensation should be considered when the scheduling method for a project is determined [31,35]. Thus, a significant disadvantage of the waterfall system is its limited ability to react quickly to fast-changing procedures or ad hoc operations triggered by unpredictable events during a project's life cycle. Despite these disadvantages, waterfall-based scheduling methods are still widespread in the construction industry [34].

This system is contrasted by the agile methodology, which follows a maximally dynamic mode of operation. Project requirements and tasks are gathered and listed in the initial phase. An iterative process—consisting of task planning, execution, and revision steps—defines the project's organizational structure. Intermediary assessments can also be implemented to revise short-term activities [36]. Sanchez et al. (2001) described agility as a cooperative and synergetic strategy that organizes the processing and delivery of customer-specific high-quality goods and services, even in dynamic or unpredictable project environments. Well-structured organization combines the constituent project participants into multi-skilled and cross-functional teams with participating members from both (internal/external) customers and suppliers [37]. The purpose of this method is to streamline project management efforts and to keep flexibility high, even with changes in late project stages [32]. According to Sacks et al. (2010), the basic methodology behind agile management is the lean approach, which was implemented in and adapted to the construction sector to "reduce variation, improve coordination, implement flow, establish pull, and to reduce various forms of waste in construction projects" [38]. The potential deficiencies of agile methods include difficulty in predicting a project's progression and a lack of transparency regarding timescale objectives due to the flexible organization of task execution [7,39]. With the introduction of the Last Planner System (LPS) in 1993/94, the first official agile method was applied to the construction industry [7]. The implementation of master- and phase-schedules has contributed to more organized project execution and has connected production targets with project work structures. Key advantages of the LPS include significant improvements in information exchange and a strengthening of the cooperation between project/site managers and foremen, who gather in monthly and weekly meetings to solve upcoming issues before they become critical. Due to its agile characteristics, the LPS improves schedule reliability and is ideally suited for complex, dynamic, and uncertain project conditions [40]. However, a critical aspect of this method is its limited implementation scope, as the system was developed mainly for project execution duties and is thus primarily applicable to the execution phase of a project. Further, the insufficient establishment of the lean principle of pursuing perfection is a persistent issue, which prevents the continuous improvement and optimization of upcoming projects. Weekly work plans do not provide provisions to conduct any experimentation; thus, the LPS learns from failure rather than from success [38]. Moreover, the knowledge gained through work execution is not stored and organized in databases and cannot be used in further projects [33,34,39]. According to Sacks et al. (2010), the LPS achieves a reduction in variation through the early consideration of upcoming issues but misses the implementation of pull by disregarding important indications (signals) generated by downstream operations. Additionally, the LPS rarely provides a clear evaluation of the actual project status, which may cause imprecise project status indications [38,41]. In order to optimize this kind of project management, the KanBIM method was proposed, which extends the Last Planner System with a 3D Building Information Model that visualizes the construction progress and obstacles in the construction process through Kanban signals and symbols [7,38,42].

The innovation of using virtual 3D CAD/BIM models for the representation of project performance was initially suggested by Songer et al. (2000), who investigated the relationship between workflow modeling and virtual 3D modelling to visualize project performance [43]. Later developments produced the 4D method, which connects the virtual 3D model with time-related activity information [44]. Further common approaches have added an additional dimension, offering the benefits of a virtual 3D CAD model that includes the appropriate project cost elements alongside project-related time information (4D). This method is commonly referred to as the 5D methodology [45–47]. Figure 3 shows an example of the 5D BIM approach, where the allocation of the 3D model objects to the corresponding Bill of Quantity (BoQ) positions as well as schedule activities is conducted manually.

**Figure 3.** The 5D Building Information Model (5D BIM) approach.

To assign work execution tasks to specific project model parts, Sacks et al. proposed the use of *fine-grained* activity information from a 3D model and the creation of work-packages, which can be split into trade-specific tasks that are manageable by individual workers. These packages are represented within this model by Kanban-symbols or as a group of highlighted objects [38]. Each contractor has to develop an individual trade-specific weekly work plan, which is later synchronized with a general (project-wide) weekly work plan. The "*Kanban card type pull flow control signals and Andon alerts*" display the constraints and workflow interruptions within the 3D model [38]. To avoid interruptions of the execution process, daily on-site inspections and adjustments are conducted by a team of trade-leaders and site managers. The actual project performance statuses are displayed live by the 3D models on various screens at the construction site [38].

Although this methodology improves the reliability of task delivery and reduces variability, it drastically enlarges the project management workload, as trade-specific work plans must be developed weekly and trade-related tasks must then be negotiated during synchronization with the project-wide weekly work plans. In this way, tasks with lower priority have negative effects on the decision-making process and encourage undesirable discussions. In this scenario, the highly productive and efficient weekly Last Planner meetings threaten to disappear. Moreover, trade-specific and project-wide evaluation and coordination of constraints could cause unavoidable latencies, which are critical for proper performance status indication and may hinder the flow of the project.

This analysis demonstrates that previously described methods have yielded significant improvements for individual and specific project characteristics. Approaches for repetitive and dynamic process requirements provide helpful possibilities to handle various project execution operations, even with the implementation of fine-grained activity information from 3D models to display the work in progress. Some important factors, however, retain considerable potential for improvements but have been widely disregarded: (a) close cohesion between predesign/design information (3D model objects) and project related time and cost values could be achieved via a tight interdependence between the 3D model objects, the corresponding Bill of Quantity (BoQ) positions (costs), and appropriate execution durations—this is, in the following, referred to as the *5D Building Information Model (5D BIM)*; (b) by using the early division of the 5D BIM into clearly defined project sections (PS), the determination of executing relevant target dates could provide a basic grid, which is necessary to structure the agile-organized project execution; (c) the actual required resources and values, determined during work execution (e.g., the actual required execution durations of specific tasks/actual used resources etc.) could be compared with the planned values. On this basis, continuous improvement strategies could be implemented, thus contributing to sustainable improvements of the planning accuracy of future projects.

## **4. The** *5D-PROMPT* **Method**

Since the current scientific literature does not provide a coherent solution that combines the advantages of the previously described methods and the unexploited possibilities, this paper presents a comprehensive approach to obtain these goals. This new concept is referred to as the *5D-PROMPT* method. Its main objective is the sustainable reduction of both the deviation and variation between as-planned and as-built construction project targets. The key process consists of:


A careful selection of these principles is summarized in a multi-crossed hybrid system that operates throughout all project phases in accordance with the individual process requirements.

To provide a broad understanding of the key improvement aspects of the 5D-PROMPT method and to explain its essential enhancements, a common and widespread construction design and execution process example is introduced first. Its basic structure represents an assumed process of conventional (3D CAD model-based) project planning, tendering, and contracting of subcontractor services, as well as a waterfall-based organization of the work execution. The process is characterized by its appropriate process steps, which are illustrated in Figure 4.

**Figure 4.** Initial situation: currently common construction design and execution process example.

Potential weak points and missing interconnections that are critical to a coherent and interconnected workflow are represented by the numbers 1–3 and are described as follows: (1) the significant issues include the missing interconnection between the individual 3D CAD model objects/elements, the appropriate schedule operations, and the corresponding BoQ positions; in addition, the construction process sequence is not corrected or optimized by simulation, so the project design, project costs, and execution time threaten to drift apart over the course of the project, which impedes project control and is critical to project success; (2) a detailed development of the work execution schedule often takes place immediately before the execution phase starts; moreover, the utilization of a barely flexible waterfall-based scheduling method appears inadequate for the numerous unexpected and unpredictable on-site incidents, and permanent coordination between the work execution schedule and on-site operations is required; (3) significant deviations between the planned and used resource values could be reduced by implementing specific information feedback loops, which report back crucial as-built values/information in accordance with Deming's Plan-Do-Check-Act cycle [7].

Based on the previously presented workflow, Figure 5 demonstrates the key enhancements and general operating principles of the 5D-PROMPT method. This method consists of five main sections: (1) a fully applied 5D BIM planning process; (2) the IT-supported connection of the 3D BIM objects with the associated BoQ positions and schedule activities through linking elements; (3) the early determination of the project duration and project sections (PS) and the definition of target dates for PS deliveries; (4) agile project execution organization according to the predefined target dates; and (5) the implementation of an intermediary information feedback loop for project status indication and target/actual comparison, with the 5D BIM as the *single source of truth* (*SSOT*).

**Figure 5.** Workflow of the 5D-PROMPT method.

## **5. Mode of Operation**

## *5.1. 5D BIM Planning Process*

To take advantage of the 5D BIM-based planning approach, it is crucial to generate the entire project design as a virtual 3D BIM model (Figure 6(➀)). The individual BIM objects are assigned to their corresponding BoQ positions, which contain product characteristics, costs per unit, and quantity information (Figure 6(➁,➂)). The technical implementation of the 5D BIM approach applied within the 5D-PROMPT method uses a link position to interconnect the 3D model objects with the associated BoQ positions as well as schedule activities. It is considered a key element that ensures a close connection between the individual planning elements and contributes to preventing design, project costs, and construction time from drifting apart. This linking technique was developed by RIB Software SE within the software solution iTWO 4.0, which was therefore used to achieve the optimum performance of the 5D-PROMPT method. Time-specific effort values, which provide information about the time needed to execute the required construction tasks (activity duration), are extracted from the BoQ positions and transferred to corresponding activity operations in a waterfall-based schedule application, e.g., a Gantt chart (Figure 6(➃)). Although the 5D-PROMPT method stipulates agile organized construction work execution, a waterfall-based schedule is developed during the initial project planning phase to determine the project/activity durations and provide a theoretical project execution simulation. In this way, specific start/end dates are assigned to each scheduled activity. To achieve an optimized execution workflow, the Critical Path Method, including forward/backward passes, float calculations, and fast-tracking options, is applied as appropriate. The 5D BIM planning approach thus enables a virtual construction process simulation that concurrently tracks project costs and activity durations.
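As a minimal illustration of this linking principle, the following Python sketch connects a model object, a BoQ position, and a schedule activity, and derives the activity duration from the quantity and effort value. All class and field names are hypothetical simplifications; the actual iTWO 4.0 data model is proprietary.

```python
from dataclasses import dataclass

# Hypothetical, simplified data model illustrating the linking principle;
# the actual iTWO 4.0 link-position implementation is proprietary.

@dataclass
class BoQPosition:
    description: str
    unit_cost: float     # cost per unit (e.g., EUR/m2)
    quantity: float      # model-based quantity take-off result (e.g., m2)
    effort_value: float  # crew hours per unit (time-specific effort value)

@dataclass
class LinkPosition:
    """Interconnects a 3D model object with its BoQ position and schedule activity."""
    model_object_id: str
    boq: BoQPosition
    activity_id: str

def activity_duration_days(link: LinkPosition, crew_size: int,
                           hours_per_day: float = 8.0) -> float:
    """Derive the schedule-activity duration from the BoQ effort value."""
    total_hours = link.boq.quantity * link.boq.effort_value
    return total_hours / (crew_size * hours_per_day)

# Invented example: 480 m2 of drywall at 0.25 crew-hours/m2, built by a crew of 3
link = LinkPosition(
    model_object_id="wall_017",
    boq=BoQPosition("Drywall partition", unit_cost=42.0, quantity=480.0,
                    effort_value=0.25),
    activity_id="A-110",
)
print(f"cost:     {link.boq.quantity * link.boq.unit_cost:,.2f} EUR")             # 20,160.00 EUR
print(f"duration: {activity_duration_days(link, crew_size=3):.1f} working days")  # 5.0
```

Because cost and duration are derived from the same linked quantities, a change to a model object propagates to both values, which is exactly the drift prevention the link position is meant to provide.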


**Figure 6.** Technical description: 5D BIM planning approach.

## *5.2. Determination of the Project Duration and Project Sections (PS) and the Definition of Target Dates for PS Deliveries*

To establish the conditions for the intended agile organization of the construction execution, a basic grid should set the direction of the work proceedings. For this purpose, the 5D BIM is split horizontally and vertically into *approximately* equally sized Project Sections (PSs). The corresponding BoQ positions and schedule activities are divided and re-compiled accordingly. After this schedule reorganization, the project start/end dates (Project Frame—PF) and the start/end dates of each PS can be determined. Any other information provided by the waterfall-based schedule has no further use over the remaining course of the project. The effort values applied in the project planning phase are later updated/corrected with the actually used (as-built) values.
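A simple sketch of this sectioning step, under the simplifying assumption of a purely sequential activity list (real schedules overlap activities), could look as follows; the activity names and durations are invented for illustration.

```python
from datetime import date, timedelta

def project_sections(activities, n_sections, project_start):
    """Split an ordered list of (name, duration_days) activities into
    approximately equal-duration Project Sections (PSs) and derive a
    target-date grid for PS deliveries."""
    total = sum(d for _, d in activities)
    target = total / n_sections
    sections, current = [], []
    elapsed, grid = 0.0, []
    for name, dur in activities:
        current.append((name, dur))
        elapsed += dur
        # close a section once its share of the total duration is reached
        if elapsed >= target * (len(sections) + 1) and len(sections) < n_sections - 1:
            sections.append(current)
            grid.append(project_start + timedelta(days=round(elapsed)))
            current = []
    sections.append(current)
    grid.append(project_start + timedelta(days=round(elapsed)))
    return list(zip(sections, grid))

# Invented example: five sequential trades, split into three PSs
acts = [("shell", 40), ("facade", 25), ("MEP", 30), ("drywall", 20), ("finishes", 35)]
for section, due_date in project_sections(acts, 3, date(2021, 3, 1)):
    print([name for name, _ in section], "-> target delivery:", due_date)
```

The resulting date grid corresponds to the PS delivery milestones that anchor the agile execution organization described in the next subsection.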

## *5.3. Agile Project Execution Organization According to Predefined Target Dates*

A core aspect of agile organized work execution is the collaborative competence of the contractors (capacity for teamwork) involved in the execution process. All construction trades should be tendered and contracted/subcontracted at this point of the project, and specific project organization requirements must become an integral part of the contractor/subcontractor agreements. To manage/organize project execution on-site, an agile execution organization board (hereafter referred to as the *PROMPT Administration Board*) is formed in close cooperation between the foremen and the site/project managers. The basic approach of this method is similar to the Last Planner/Scrum/Kanban project organization plans or boards; however, the present method differs in its setup and arrangement [39,48]. The PFs and PSs define the general project guidelines and determine the deadlines for PS/total project delivery. Thereafter, fixed time periods are established to manage/review on-site work execution in a monthly and weekly sequence. The form and concept of the PROMPT Administration Board are explained in Table 1.


**Table 1.** Leading structure for development of the PROMPT Administration Board.



The task organization is graduated from the general to the particular, with appropriate descriptions provided for each level of organization. The setup and formatting of the organizational structure (the PROMPT Administration Board) are carried out during an administration kick-off meeting conducted by the execution team members (foreman, project manager on the contractor side, and project manager on the client side). The workflow and superstructure are outlined in Figure 7. Once the project start/end dates and the milestones for PS delivery are attached to the administration board, the Organizational Units and Task Units can be defined.

**Figure 7.** Setup and superstructure of the PROMPT Administration Board.

Participation in monthly/weekly meetings is compulsory for each of the execution team members. Individual arrangements and general agreements determined during these meetings must be accomplished within the envisaged time frame.

## *5.4. Intermediary Information Feedback Loop Implementation for Project Status Indication and Target/Actual Comparison*

The construction progress is evaluated and updated during the weekly meetings (Report Period) based on the actual completion status values of the planned (trade-specific) targets. The completion status values are 100% (completed), 50% (partly completed), and 0% (pending). The actual status values are displayed as colored markups (green, yellow, grey) in the 5D BIM. Invoices can be issued in accordance with the Billing Periods (monthly). The basis for payment approval is the actual (trade-specific) completion status of the accumulated monthly activity performance.
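A compact sketch of this status-to-markup mapping and of the monthly billing basis might look as follows in Python; the data structures are illustrative and not part of the actual 5D BIM software interface.

```python
# Minimal sketch of the weekly status update and monthly billing logic
# described above; names and structures are illustrative only.

STATUS_COLOR = {100: "green", 50: "yellow", 0: "grey"}  # completed / partly / pending

def markup_color(completion_pct: int) -> str:
    """Map a reported completion status to its 5D BIM markup color."""
    if completion_pct not in STATUS_COLOR:
        raise ValueError("completion status must be 0, 50 or 100")
    return STATUS_COLOR[completion_pct]

def billable_amount(tasks) -> float:
    """Accumulated monthly performance: each task contributes its contract
    value weighted by the reported completion status."""
    return sum(value * pct / 100 for value, pct in tasks)

# tasks as (contract value in EUR, completion status in %)
month = [(12_000, 100), (8_500, 50), (4_000, 0)]
print(billable_amount(month))  # 16250.0 EUR approved for invoicing
```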

To achieve sustainable improvement of the planning accuracy, substantial deviations between the planned and actual execution durations are identified and evaluated by the execution team members and reported back to the project planning department to implement a sustainable correction process for time-based calculation matters. Furthermore, deviations between the planned and applied resources can be determined immediately. Thus, the planning accuracy for future projects could be improved considerably.

## **6. The Comparative Case Study**

An initial implementation and performance test of the 5D-PROMPT method was conducted on a real construction project within a comparative case study. To obtain a preliminary assessment of the workability and expected benefits of the new approach (and to make it comparable to previous methods), one construction project was carried out according to conventional planning and execution methods, as described in the process chart in Figure 4 (hereafter referred to as Project A). During the same time period, an equivalent construction project was carried out according to the 5D-PROMPT method, as described in Figure 5 (hereafter referred to as Project B).

The performance values of both methods were determined using dedicated Key Performance Indicators (KPIs; listed in Table 2), measured during/after the execution phases of both projects and subsequently evaluated by a multi-criteria analysis. The required KPIs were selected based on the findings of the literature study and an analysis of the current state of project management approaches.


**Table 2.** Definition of Key Performance Indicators (criteria); reference units, weighting factors, and determination to target a higher or lower performance value.



\* Criteria used to determine the comparability of Project A and Project B. \*\* b = beneficial—higher performance value is desired; n-b = non-beneficial—lower performance value is desired.

Project A and Project B were carried out by the general contractor Heinrich Schmid GmbH & Co. KG during the years 2018/19 in southern Germany. Twenty-one trades were planned and executed by the project stakeholders over a period of 11 months for each project. As a direct comparison of the two construction projects would have led to inaccuracies due to a lack of absolute equivalence, a reference value was introduced as a benchmark for data collection. In this study, it is assumed that the process sequence and implementation of Project A are generally known. Therefore, the following section only describes the procedure for conducting Project B. However, to provide a better understanding, the basic process steps of both projects are compared in Table 3.

**Table 3.** Overall process steps of Project A compared to Project B.


As previously described, the design of Project B was based on a 3D BIM. This model was provided by Contelos GmbH, a virtual-modeling company. The construction software developer RIB Software SE provided the software tool iTWO Baseline, which was used to create the required interconnections between the individual model objects, the corresponding BoQ positions, and the scheduling activities. Furthermore, the model split operations, the construction process simulation/optimization, the tendering and contracting of subcontractor services, and the 3D model-based execution status representations were carried out with this software. To minimize the risk of method failure, conventional 2D CAD project plans were also created in addition to the 3D BIM and kept available for execution.

Because the 5D-PROMPT method was newly introduced to the project stakeholders, continuous team coaching and individual training were required to prepare the participating members to run the project. To ensure a solid foundation, all project-related contracts had to comply with the agile project execution requirements. The modeling company was required to elaborate the 3D BIM in conformance with the modeling guidelines of the applied 3D modeling software (Revit) and to make use of a harmonized project-wide BIM attribution. A substantial measure was the awarding and contracting of all execution trades, which had to be completed before work execution started. Next, the *execution organization team* was formed, consisting of foremen, crew leaders, site managers, and the project leader. This team performed the setup of the PROMPT Administration Board using the time-regulated information (project start and end dates as well as milestone dates) determined by the waterfall-based construction execution simulation. Moreover, monthly/weekly time periods for the Organizational Units (OUs) and Task Units (TUs) were added to the board to establish a static structure for the work execution organization.

After work execution started, the team determined, confirmed, and evaluated both upcoming and finished work execution tasks during the monthly/weekly meetings. Daily work performance monitoring ensured proper target/actual provisions and provided information about the required execution durations. Actual building statuses were represented by highlighted objects in the 3D BIM. Deviations between the planned and required time and cost values were evaluated and gathered in a database to make future project planning more precise and reliable. The core process of the case study is represented and described in Table 4.

**Table 4.** Integrated software application and practical 5D-PROMPT workflow implementation.

The second step was the development of the individual BoQs, including model-based quantity take-off (QTO) and cost calculations based on resource-specific effort values. The BoQ positions were interlinked closely with the individual model objects.



## **7. Results and Multi-Criteria Analysis**

To make the two projects comparable, the reference value for the collection of data was set to 1000 m<sup>2</sup> GFA (Gross Floor Area). All measured values were related to this factor. The planned duration of both projects, including planning, tendering, contracting, and execution, was 11 months. To increase the comparability of the two projects, both were divided into construction sections of approximately equal size. These sections were required to optimally allocate the scope of work and served as reference points for determining the project's progress status.

Project B was completed within the planned construction period. Each construction section was also completed on time, so each predetermined milestone was passed on time. Deviations caused by the estimated values determined during the project planning phase were compensated by the agile construction process organization. Moreover, the implementation of the enhanced 5D BIM approach contributed to a considerable reduction of the deviations between the as-planned and as-built values. Project A exceeded the planned end date by about four weeks. Its project milestones were passed, on average, 3–5 days later than planned. This resulted in a shift of the entire schedule and caused the final deadline to be significantly exceeded. At first glance, Project B matched the budget; however, additional and unexpected extra costs for the necessary tablet/computer hardware and servers, including maintenance, hotline, and update services, were incurred due to the implementation of the new methodology. These costs amounted to a total of EUR 22,825 net per 1000 m<sup>2</sup> per year for Project B, resulting in an actual cost overrun of 6.04% for this project. The cost overrun of Project A was 16.2% in total for the reference value of 1000 m<sup>2</sup>. This was caused by supplements due to inadequate planning, rework and defect management, and the extended construction period (extra costs for the provision of site equipment and staff).

To obtain a preliminary assessment of the influence of the new method in terms of its enhancements to project performance, accuracy in project planning, and schedule and cost reliability, a multi-criteria analysis was conducted. Based on the two alternatives, Project A (*iA*) and Project B (*iB*), and the predefined analysis criteria (*jn*) previously described by the KPIs (see Table 2), an evaluation matrix was developed. Here, the cell variables represent the project-based performance value (*Xij*) of each criterion.

To avoid assessment issues and to achieve comparable analysis results, the differing measurement units of the criteria to be compared were unified, and linguistic terms of classification were assigned to a number-based performance value scale. Furthermore, normalization of the analysis matrix was required to obtain only numerical values without any units.

The allocation of each criterion into "beneficial—higher performance value is desired" (e.g., commitment of involved stakeholders) and "non-beneficial—lower performance value is desired" (e.g., exceeding the final project deadline) was conducted next, and the normalized performance value (*X′ij*) of each cell had to be calculated. For this purpose, the following formulas were applied according to the criterion classification, "beneficial" or "non-beneficial". The minimum and maximum performance values were derived from the lowest and highest performance values of each criterion:

$$X'\_{ij}(\text{beneficial}) = \frac{X\_{ij}}{\text{Max}\left(X\_{ij}\right)}; \qquad X'\_{ij}(\text{non-beneficial}) = \frac{\text{Min}\left(X\_{ij}\right)}{X\_{ij}}.$$

Since the analysis matrix was normalized, a weighting factor (*wj*) was assigned to the normalized performance values (*X′ij*) of each criterion, with $\sum\_{j=1}^{n} w\_j = 1$ (see Table 2), to classify its influence on schedule and cost reliability, accuracy in project planning, and project performance. Each normalized performance value was multiplied by its assigned weighting factor to obtain the weighted normalized analysis matrix.

To calculate the absolute performance scores of Project A and Project B, the weighted normalized performance values (*X′ij*) of each project were summed. The entire calculation of the performance score per alternative (*iA* and *iB*; Project A and Project B) can be described by the following formula:

$$\text{performance score}\_{(i)} = \sum\_{j \in \text{beneficial}} \frac{X\_{ij}}{\text{Max}\left(X\_{ij}\right)} \cdot w\_j + \sum\_{j \in \text{non-beneficial}} \frac{\text{Min}\left(X\_{ij}\right)}{X\_{ij}} \cdot w\_j.$$
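Putting the normalization, weighting, and summation steps together, a small Python sketch of this multi-criteria evaluation could look as follows. The criteria values and weights below are illustrative stand-ins; the actual scores (0.38 and 0.12) were computed from the full KPI set of Table 2.

```python
# Minimal sketch of the multi-criteria evaluation described above.

def normalize(values, beneficial):
    """Normalize one criterion's values X_ij across all alternatives:
    beneficial -> X_ij / max(X_ij); non-beneficial -> min(X_ij) / X_ij."""
    if beneficial:
        m = max(values)
        return [x / m for x in values]
    m = min(values)
    return [m / x for x in values]

def performance_scores(criteria, n_alternatives):
    """criteria: list of (values_per_alternative, weight, beneficial_flag);
    the weights must sum to 1."""
    assert abs(sum(w for _, w, _ in criteria) - 1.0) < 1e-9
    scores = [0.0] * n_alternatives
    for values, weight, beneficial in criteria:
        for i, x_norm in enumerate(normalize(values, beneficial)):
            scores[i] += weight * x_norm  # weighted normalized matrix, summed per row
    return scores

# Illustrative three-criterion example for (Project A, Project B):
criteria = [
    ([4.0, 5.0], 0.30, True),     # stakeholder commitment (Likert grade) -- beneficial
    ([16.2, 6.04], 0.40, False),  # cost overrun in % -- non-beneficial
    ([23.0, 4.0], 0.30, False),   # defects per 1000 m2 (invented counts) -- non-beneficial
]
print(performance_scores(criteria, n_alternatives=2))  # [0.441..., 1.0]
```

With only a few criteria in which one alternative dominates, the better project approaches a score of 1.0; the much lower scores reported below reflect the broader, partly offsetting KPI set of the actual analysis.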

After the collection of all measurement data related to the respective Key Performance Indicators and the evaluation of all values, a total performance score of 0.38 was calculated for Project B. For Project A, the score was 0.12. To evaluate the significance of these values, the ranking scale shown in Table 6 was used. This scoring model is generally applied to assess alternatives based on several quantitative and qualitative criteria, objectives, or conditions. It is used to analyze a set of complex alternatives in order to rank the elements of the set according to the analysis preferences based on a multidimensional target system. The order is represented by the performance value of each alternative. The evaluation numbers follow a five-fold scale (in this case, 0.05 to 0.55), where a higher evaluation number stands for a superior evaluation (Table 6) [50–52]. As a result, the project planning and execution of Project B according to the 5D-PROMPT method could generally be rated as "good". In direct comparison, Project A could only be rated as "bad" according to the conventional project planning and execution method. This indicates a considerable improvement of construction planning and execution under the 5D-PROMPT method and suggests an immense enhancement of overall project performance. Further results and differences between the two project organization methods are listed and explained in the following section.

**Table 5.** Evaluation Matrix—Determination of the performance value (*Xij*), where *Xij* = performance value of the *i*-th alternative over the *j*-th criterion.


**Table 6.** Ranking scale [47].


The comparison also shows that the number of construction defects during the construction phase of Project B was 82.6% lower than that of Project A. The costs for supplements and changes in planning due to the voice of the customer were also approximately 71.5% lower than those in the comparable project. The commitment of the participants to the newly introduced method in Project B was very high (Grade 5—*strongly agree*, from the Likert scale analysis [49]), but the commitment of the project execution team to the conventional method was also rated high (Grade 4—*agree* [49]). The number of hindrance notifications and notices of default was three for Project A, while Project B was completely unaffected by these measures.

There was no complete stop of construction work in either project, nor was an officially decreed construction stop imposed in either project, and no workers were injured or killed during the construction process. The costs of implementing the new methodology in Project B related to the following services: (1) introduction and training of the project organization method, (2) software teaching and training, (3) process consulting, (4) development of project-relevant master data, and (5) operational project support. These services were calculated proportionately according to the reference value of 1000 m<sup>2</sup> per year and amounted to EUR 38,500 net in total. These costs include, as described above, the considerable additional costs for the extra required hardware equipment and software-related services.

## **8. Discussion and Conclusions**

The productivity development of the American and European construction sectors mentioned at the outset indicates international structural weaknesses and considerable performance deficiencies over the past twenty years. The introductory graphs (Figures 1 and 2) represent only a portion of the international situation. Further statistics could be included, but a deeper insight into international conditions was garnered through the literature study. The literature provides a relatively far-reaching overview of the current problems of the construction industry but often refers to different project variants that cannot be compared directly or can only be compared with difficulty. Although some studies included in the literature review of this article show improvements in project organization methods due to advanced innovations, in many cases, these developments fall short of their goals, as they often relate only to the work execution phase and do not sufficiently account for essential criteria and possible improvements during the project design phase. At this point, the 5D-PROMPT method is intended to meet the requirements of transferring essential information from the design phase to the execution phase (e.g., the current planning status of the BIM as a single source of truth; the project timeframe and key milestones tested and optimized by simulation). In addition, the knowledge gained from the construction phase has to be reintegrated into the construction planning of future projects. The project organization method referred to as the *common* method in this article should only be understood as an example and is only one of countless possible variants.

The comparative case study was conducted to obtain a preliminary overview of possible improvements in project performance, accuracy in project planning, and schedule and cost reliability through the application of the 5D-PROMPT method. This was not a representative study, as only one sample per criterion/alternative could be collected when comparing only two projects. Nevertheless, a range of different Key Performance Indicators (criteria) was measured, which lent the study results a certain significance. For the comparability of both projects, it was of particular importance that they were appropriately similar (Δ < 20%) and were carried out by the same executing company at the same time. The PROMPT Administration Board was used as an analog planning board within the case study. To avoid interface problems and a possible loss of information in future investigations, a digital planning board, fully integrated into the overall process, will be required. Automated functions based on a self-learning system should also be developed in future work.

A main obstacle that emerged during the implementation of the project under the 5D-PROMPT method (Project B) involved defining the Task Unit (TU) content and the corresponding model objects. A general trade-by-trade definition was considered unusable, since different trade specifications involve various completion characteristics in terms of (a) the standardization of target unit measurements, (b) increasing deviations over long project execution periods, and (c) the tracking of multiple correlated design changes. These issues were solved by the classification and alignment of trade-compliant model sections, which are measurable, delimitable, and directly extractable from the 5D BIM. A further common problem (typical for agile organized projects with missing references for project target dates, deadlines, or time limits) was handled by properly determined project start/end dates and Project Section (PS)-related milestones. Thus, the course and frame of the project followed a clear structure and could be traced by the participating stakeholders. To solve consequential coordination difficulties, the foremen of adjacent trades took part in the agile organization meetings and contributed to solving forthcoming issues.

The results of the case study indicated a considerable improvement regarding the objectives pursued. However, since only one project was counted as a sample, this study was relatively limited. Therefore, no statistical evaluation of the results was achievable. Nevertheless, due to the large number of different criteria examined, it was possible to carry out a multi-criteria analysis, which provided a preliminary impression of the effectiveness of this method. The weighting of the multi-criteria analysis was particularly focused on the criteria that affect exceedances of construction time and costs, as well as disruption-free construction processes. One lesson learned concerns the oversized storage capacity of the server used for Project B. In future projects, this capacity could be reduced considerably, thereby decreasing the direct project costs per 1000 m<sup>2</sup>.

Generally, the technical implementation and feasibility of the proposed method was proven to be beneficial, and possible technical improvements have already been derived. To obtain a reliable evaluation of the effects of the entire 5D-PROMPT method, a series of projects must be carried out and examined in accordance with this method in future investigations.

## **9. Future Perspective**

At the present stage, the 5D-PROMPT method indicates promising improvements in project organization and schedule reliability. Moreover, the combination of waterfall-, agile-, and lean-based process organization had a positive influence on project performance. However, to evaluate the real advancements in terms of schedule reliability, project performance, and planning precision, further research and empirical analyses are required. The appropriate project size that best fits the application of the 5D-PROMPT method should also be examined in detail.

**Author Contributions:** Conceptualization, D.L. and C.B.; Data curation, D.L.; Formal analysis, J.D. and C.B.; Investigation, D.L.; Methodology, D.L. and C.B.; Resources, J.D.; Software, J.D.; Supervision, D.C.-F. and J.D.; Validation, D.C.-F.; Visualization, C.B.; Writing—original draft, D.L. and C.B.; Writing—review & editing, D.C.-F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by Heinrich Schmid GmbH&Co.KG regarding the cooperative conduction of two construction projects used for the case-study; RIB Software SE, who provided the BIM software platform iTWO Baseline and the site management software OnSite, and Contelos GmbH for the provision of the 3D Building Information Model.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Special-Length-Priority Algorithm to Minimize Reinforcing Bar-Cutting Waste for Sustainable Construction**

## **Dongho Lee, Seunghyun Son, Doyeong Kim and Sunkuk Kim \***

Department of Architectural Engineering, Kyung Hee University, Yongin-si, Gyeonggi-do 17104, Korea; DUL1212@khu.ac.kr (D.L.); seunghyun@khu.ac.kr (S.S.); dream1968@khu.ac.kr (D.K.) **\*** Correspondence: kimskuk@khu.ac.kr; Tel.: +82-31-201-2922

Received: 30 June 2020; Accepted: 21 July 2020; Published: 23 July 2020

**Abstract:** Reinforcing bars (rebar), which have the most embodied carbon dioxide (CO2) per unit weight in built environments, generate a significant amount of cutting waste during the construction phase. Excessive cutting waste not only increases the construction cost but also contributes to a significant amount of CO2 emissions. The objective of this paper is to propose a special-length-priority cutting waste minimization (CWM) algorithm for rebar, for sustainable construction. In the proposed algorithms, the minimization method by special and stock lengths was applied. The minimization by special length was performed first, and then the combination by stock length was performed for the remaining rebar. As a result of verifying the proposed algorithms through a case application, it was confirmed that the quantity of rebar was reduced by 6.04% compared with the actual quantity used. In the case building, a CO2 emissions reduction of 406.6 ton-CO2 and a cost savings of USD 119,306 were confirmed. When the results of this paper are applied in practice, they will be used as a tool for sustainable construction management as well as for construction cost reduction.

**Keywords:** rebar work; cutting waste; minimization; sustainable construction; CO2 emission; cutting stock problem

## **1. Introduction**

Building construction and operations accounted for 36% of global final energy use and nearly 40% of energy-related carbon dioxide (CO2) emissions in 2017 [1]. Concrete and reinforcing steel contribute about 65% of building greenhouse gases (GHG), 40% of which are CO2 emissions generated by concrete [2]. Clark and Bradley described that the mean embodied carbon dioxide (ECO2) for office buildings is 340 kg-CO2/m2, of which the structure accounts for approximately 60% [3]. In their research report, they suggest 95 kg-ECO2/ton for C25/30 concrete and 872 kg-ECO2/ton for reinforcing bar (rebar). This suggests that reducing the ECO2 in the structural frame directly produces a GHG reduction. In addition, in terms of the carbon footprint, efforts to reduce the rebar, which has an ECO2 of about 9.2 times that of concrete per unit weight [4], are very important.

In general, rebar cutting waste is estimated in the planning stage to be 3–5% [5–8], but more than 5% occurs in the actual construction stage [5,7–13]. This is because there is a lack of optimization technology on construction sites [14]. In order to solve this problem, many studies have been conducted to minimize rebar cutting waste [5–25]. Most studies use a stock length, also called a standard or market length, to make combinations that minimize cutting waste [8–16,19–25]. In other words, they combine the rebar indicated in the structural drawings using stock lengths held in the rebar shop or plant in order to minimize cutting waste. If rebar ordered in special lengths is used in the rebar combinations, cutting waste can be further reduced [5,7,14,17]. Rebar combinations using both special lengths and stock lengths can further reduce cutting waste and CO2 emissions. However, research on the use of special lengths for cutting waste minimization (CWM) in the construction industry is lacking. The study of Porwal and Hewage introduces the concept of special length combination [7], but the constraints for minimization by special length (MSpL) are not clearly described. In addition, several studies have suggested the concept of MSpL but lack a detailed explanation of algorithm operation [7,14,17]. Additionally, from the viewpoint of sustainable construction, the effect of reducing CO2 emissions through minimization algorithms has not been suggested. Therefore, the objective of this paper is to propose a special-length-priority CWM algorithm for rebar, for sustainable construction.

The study proceeds as shown in Figure 1. First, we describe the originality of this paper and the lessons obtained after reviewing the references on CWM and the cutting stock problem (CSP). Then, we introduce the CWM algorithms, the core content of this paper. We describe in detail the concept of stock and special lengths, the CWM process and algorithms, and MSpL. Next, after applying the proposed algorithms to the case project, we analyze the rebar savings details. In addition, we confirm the CO2 emission and cost reduction effects associated with the rebar quantity reduction. Finally, we discuss the problems, lessons learned, and opportunities for further studies, and we describe the results of this present study.

**Figure 1.** Research process and methodology.

## **2. Literature Review of CWM Problems**

Research on CWM began with studies to solve the CSP. The CSP was first mentioned by Kantorovich in 1939 and first published in *Management Science* in 1960 [26]. The problem consists of determining the best way of cutting a set of large objects into smaller items [27]. In operations research, the CSP is the problem of cutting standard-sized pieces of stock material, such as rebar, paper rolls, or sheet metal, into pieces of specified sizes while minimizing the material wasted [28]. Kantorovich provides two examples in his paper to make the CSP easier to understand [26]. Since then, many scholars have conducted research to obtain solutions to the CSP using linear programming [29–42] and genetic or heuristic approaches [43–48].

In the case of rebar CWM, studies using linear programming and/or heuristic algorithms have been conducted [5,7,8,10,12–17,19–25]. In most cases, however, research has aimed at minimizing scrap or cutting waste using stock lengths, and the opportunity to further reduce cutting waste using special lengths has been lost. From the previous studies [5,14], we have confirmed that MSpL reduces cutting waste more than minimization by stock length (MStL). There have been several studies on MSpL [5,7,14,17], but various conditions required in practice have not been reflected in their algorithms, and the application process of MSpL reflecting these conditions was not specifically introduced. In the case of MSpL, variables such as the minimum order quantity, the rebar lengths available for special order, the minimum loss rate, and the minimum and maximum combination lengths should be considered in practice. However, in most studies, these conditions were not reflected or sufficiently explained. Porwal and Hewage [7] proposed an algorithm for minimization by market and special lengths using rebar data extracted from building information modeling (BIM), but detailed descriptions of constraints such as the rebar loss rate and minimum order quantity are not clearly given. In other papers [5,14,17], the MSpL concept was introduced, but detailed application processes were not described.

In this paper, we propose algorithms that perform minimization by stock length on the rebar that is left after MSpL. However, since many scholars are familiar with MStL, MStL is first introduced, and MSpL is discussed later in the manuscript.

## **3. Cutting Waste Minimization Algorithms**

## *3.1. Definition of Stock and Special Lengths*

Based on the examination of the studies to date, the CWM methods for rebar are largely divided into two types: minimization by stock length (MStL) [5,7,8,10,12–17,20–25] and MSpL [5,7,14,17]. In these two methods, the target loss rate and minimum quantity can be added as constraints [5,14,17]. Figure 2 shows examples of combinations using stock lengths and special lengths. In the case of cutting pattern 1 in Figure 2a, two reinforcing bars are combined using a stock length of 12 m, and 0.6 m of cutting waste or loss occurs, which corresponds to a loss rate of 5%. In the case of cutting pattern *i*, three reinforcing bars are combined and 0.3 m of cutting waste is generated, which corresponds to a loss rate of 2.5%. If a special length of 11.7 m is used instead, as shown in Figure 2b, the same cases yield 0.3 m of cutting waste (a 2.6% loss rate) and zero loss, respectively. As these examples show, using special lengths generally reduces cutting waste more than using stock lengths.

**Figure 2.** Combination examples of stock and special lengths: (**a**) Combination cases of stock lengths; (**b**) Combination cases of special lengths.

For reference, "special length" means the length determined by the customer's order, not the rebar length sold on the market. For example, stock or market length means the length determined by the producer in regular interval values such as 9, 10, 11, and 12 m, whereas special length includes irregular values such as 8.4, 9.7, and 10.1 m. Although there are differences by country, stock lengths of 7, 8, and up to 12 m are common in many countries, and when ordering rebar with special lengths, conditions for length, minimum quantity, and delivery time must be satisfied. In the case of Korea, orders must be made in 0.1-m intervals with a minimum quantity of 50 tons and a delivery time of two months or more. For example, rebar with a diameter of 25 mm and a length of 8.4 m can be obtained by special order in a quantity of 60 tons and a delivery time of two months.

## *3.2. Cutting Waste Minimization Process*

As mentioned earlier, rebar combination by special length provides an opportunity to reduce cutting waste or trim loss more than by stock length. Therefore, unlike the previous studies, which performed CWM by stock length only, the CWM algorithms proposed in this paper perform an MStL on the rebar that is left after performing an MSpL, as shown in Figure 3.

**Figure 3.** Cutting waste minimization process.

Figure 3 is described briefly as follows: (1) Read the rebar cutting list from the BIM [7] or computerized IPD system [17]; (2) Input options such as minimum and maximum lengths of rebar to be ordered, target loss rate, and minimum rebar quantity to be combined; (3) Execute the MSpL that satisfies the input options; (4) If the desired solution is not derived, decide whether to perform the MSpL again after mitigating options or perform an MStL; (5) If the desired solution cannot be derived from the MStL, decide whether to perform the optimization again by changing options. Otherwise, the process is terminated after analyzing the cutting waste and CO2 emissions.

## *3.3. Cutting Waste Minimization Algorithm*

In general, CWM is performed for stock lengths using the objective function shown in Equation (1), as introduced in several studies [5,14,17]. This is a mathematical formulation that minimizes the difference between the length of the cutting pattern (*li*), obtained by combining multiple rebars, and the stock length (*Lsti*). In this case, the constraints of Equations (2) to (4) must be satisfied. For reference, in Equation (2), *li* corresponds to the demand length in Figure 2a, and *r1*, *r2*, ..., *rn* correspond to rebar 1, rebar 2, ..., rebar n. Equation (3) is not necessary if a single stock length is used, but it must be satisfied if several stock lengths are used. In the case of construction sites, the conditions of Equation (3) are generally valid because rebar of multiple market lengths can be purchased.

$$\text{Minimize } f(X\_i) = \sum\_{i=1}^{N} (\text{Lst}\_i n\_i - l\_i n\_i) / \text{Lst}\_i n\_i \tag{1}$$

$$\text{Subject to } l\_i \le Lst\_i, \ l\_i = r\_1 + r\_2 + \dots + r\_n \tag{2}$$

$$L\_{\rm min} \le Lst\_i \le L\_{\rm max} \tag{3}$$

$$1 < n\_i \text{, integer, } i = 1, \, 2, \ldots, \, N \tag{4}$$

Here,

*Lsti* = Stock length of cutting pattern i (m)

*li* = Length of cutting pattern i obtained by combining multiple rebars, demand lengths (m)

*ni* = Number of rebar combinations with the same cutting pattern i

*Lmin* = Minimum length of rebar to be ordered (m)

*Lmax* = Maximum length of rebar to be ordered (m).

So far, most rebar optimization studies in the construction sector have been conducted for MStL. This is because materials such as rebar, structural steel, pipes, and timber are supplied by the manufacturer in market lengths. If the target loss rate is added as a constraint to the CWM, Equation (5) should be used. In this case, the combination is executed only when the loss rate (ε) caused by the cutting pattern is less than or equal to the target loss rate (ε*t*). When MSpL is performed, the loss rate can be further reduced but many algorithms have focused on MStL because of the complexity of the optimization algorithms.

$$
\varepsilon = \frac{Lst\_i - l\_i}{Lst\_i} \le \varepsilon\_t \tag{5}
$$

Here,

ε*t* = Target cutting waste or loss rate (%)

ε = Cutting waste or loss rate (%).
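To make Equations (1)–(5) concrete, the following Python sketch picks, for a given combined demand length, the shortest stock length that fits and accepts the cutting pattern only if Equation (5) holds. The list of market lengths and the 2% default target loss rate are illustrative assumptions, not values from the case project.

```python
STOCK_LENGTHS_M = [8.0, 9.0, 10.0, 11.0, 12.0]  # illustrative market lengths

def best_stock_length(l_i: float):
    """Return the shortest stock length that can hold the combined demand
    length l_i (constraint of Equation (2)), or None if none fits."""
    candidates = [L for L in STOCK_LENGTHS_M if L >= l_i]
    return min(candidates) if candidates else None

def loss_rate(l_i: float, L: float) -> float:
    """Equation (5): epsilon = (Lst_i - l_i) / Lst_i."""
    return (L - l_i) / L

def accept_pattern(l_i: float, target_loss: float = 0.02):
    """Accept a cutting pattern only if its loss rate meets the target."""
    L = best_stock_length(l_i)
    if L is None:
        return None
    eps = loss_rate(l_i, L)
    return (L, eps) if eps <= target_loss else None

# Figure 2a's pattern i: three rebars combining to 11.7 m against a 12-m bar
print(accept_pattern(11.7))        # None -- the 2.5% loss exceeds the 2% target
print(accept_pattern(11.7, 0.03))  # (12.0, 0.025...) -- accepted at a 3% target
```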

## *3.4. Minimization by Special Length*

The mathematical formulation of CWM by special length is described in Equations (6)–(11) and is similar to those of previous studies [5,7,14]. Special lengths (*Lspi*) that satisfy constraints such as the target loss (scrap or waste) rate (ε*t*) and the minimum quantity for a special order (*Qso*) must be searched for. In this case, the special length must be within the range of the minimum (*Lmin*) and maximum (*Lmax*) lengths for which special orders are possible.

$$\text{Minimize } f(X\_i) = \sum\_{i=1}^{N} (\text{Lsp}\_i n\_i - l\_i n\_i) / \text{Lsp}\_i n\_i \tag{6}$$

$$\text{Subject to } l\_i \le Lsp\_i, \ l\_i = r\_1 + r\_2 + \dots + r\_n \tag{7}$$

$$L\_{\rm min} \le Lsp\_i \le L\_{\rm max} \tag{8}$$

$$
\varepsilon = \frac{Lsp\_i - l\_i}{Lsp\_i} \le \varepsilon\_t \tag{9}
$$

$$Q\_{so} \le Q\_{total} \tag{10}$$

$$1 < n\_i \text{, integer, } i = 1, 2, \dots, N \tag{11}$$

Here,

*li* = Length of cutting pattern i obtained by combining multiple rebars, demand lengths (m)

*Lspi* = Special length of cutting pattern i that satisfies the target loss rate (m)

*Lmin* = Minimum length of rebar to be ordered (m)

*Lmax* = Maximum length of rebar to be ordered (m)

*ni* = Number of rebar combinations with the same cutting pattern i

*ri* = Length of combined rebar (m)

ε = Cutting waste or loss rate (%)

ε*t* = Target cutting waste or loss rate (%)

*Qtotal* = Total combined rebar quantity (ton)

*Qso* = Minimum rebar quantity to be special ordered (ton).

For example, a special length between 8 and 12 m is searched for on a 0.1-m grid such that the loss rate is less than 2% and the total quantity (*Qtotal*) of the same length exceeds 50 tons. The MSpL that satisfies these conditions proceeds with the process shown in Figure 4. The minimization process in that figure is described in pseudocode, as follows:


If a target loss rate is not entered, the combination that satisfies the condition of *Qso* with special-length priority is executed by default.


$$Q\_{total} = w \sum\_{i=1}^{N} Lsp\_i\, n\_i \tag{12}$$

Here, *w* = unit weight of combined rebar per meter (ton/m).

(5) If *Qso* ≤ *Qtotal* is not satisfied, MSpL is repeated while *Lspi* is decreased by 0.1 m until *Lspi* ≤ *Lmin* is satisfied. If a solution that satisfies the constraints is not found in the process so far, it should be decided whether to perform the minimization again after alleviating the combination conditions. Otherwise, MStL must be subsequently performed.

**Figure 4.** Minimization process by special length.

If *Qso* ≤ *Qtotal* is satisfied, the quantity of special length is determined and MStL is executed for the remaining rebar. Since not all of the rebar can be combined by special length, minimization with stock lengths is performed for the remaining bars.
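A simplified, self-contained reading of this special-length-priority search (Figure 4, Equations (6)–(12)) is sketched below in Python. It uses a plain first-fit packing heuristic and should be read as an illustration of the control flow, not as the authors' production algorithm; the parameter defaults mirror the example values given above, and the demand data in the demo are invented.

```python
from collections import Counter

def first_fit(lengths, L, eps_t):
    """Greedily pack demand lengths into bars of length L; a bar becomes a
    valid cutting pattern only if its loss rate satisfies Equation (9)."""
    bins, patterns, leftover = [], Counter(), []
    for l in lengths:
        for b in bins:
            if sum(b) + l <= L:        # Equation (7): l_i <= Lsp_i
                b.append(l)
                break
        else:
            bins.append([l])
    for b in bins:
        if (L - sum(b)) / L <= eps_t:  # Equation (9)
            patterns[tuple(b)] += 1
        else:
            leftover.extend(b)         # falls through to MStL
    return patterns, leftover

def mspl(demands_m, w_ton_per_m, L_min=6.0, L_max=10.0, eps_t=0.02, Q_so=50.0):
    """Scan candidate special lengths on the 0.1-m grid (Equation (8)) and keep
    the feasible candidate (Q_total >= Q_so, Equation (10)) with the lowest loss."""
    best, Lsp = None, L_max
    while Lsp >= L_min - 1e-9:
        patterns, leftover = first_fit(sorted(demands_m, reverse=True), Lsp, eps_t)
        n_bars = sum(patterns.values())
        Q_total = w_ton_per_m * Lsp * n_bars                   # Equation (12)
        if n_bars and Q_total >= Q_so:
            used = sum(sum(p) * c for p, c in patterns.items())
            loss = (Lsp * n_bars - used) / (Lsp * n_bars)      # objective, Equation (6)
            if best is None or loss < best[0]:
                best = (loss, Lsp, patterns, leftover)
        Lsp = round(Lsp - 0.1, 1)
    return best  # None -> relax the options or perform MStL only

# Small invented demo (Q_so lowered so the toy data can reach the minimum quantity):
demands = [7.4] * 150 + [5.5, 3.7] * 20     # D25 bars, ~0.00398 t/m
loss, Lsp, patterns, leftover = mspl(demands, w_ton_per_m=0.00398, Q_so=4.0)
print(f"special length {Lsp} m, loss rate {loss:.2%}, {len(leftover)} bars to MStL")
# -> special length 7.4 m, loss rate 0.00%, 20 bars to MStL
```

In this toy run, the 7.4-m bars and pairs of 3.7-m bars are covered at zero loss by a 7.4-m special length, while the 5.5-m bars fall through to minimization by stock length, mirroring the two-stage process of Figure 3.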

## **4. Verification of CWM Algorithms**

## *4.1. Brief Description of the Case Project*

The effectiveness of the CWM algorithms by stock and special lengths described so far should be verified through a case application. To this end, the case project shown in Table 1 was selected in this study. This is a commercial building project constructed in Seoul, Korea, with a total floor area of 66,644 m<sup>2</sup>, three basement levels, and 20 floors above ground. The site area of the case project is not large enough for rebar work on site. Moreover, considering quality, time, and safety, the rebar was supplied from the plant.


**Table 1.** Description of the case project.

Reviewing the structure of the case building, the underground structure is steel reinforced concrete (SRC) and the superstructure is reinforced concrete (RC). In addition, the first and second floors are designed as a column-and-beam structure, and as shown in Figure 5, from the 3rd floor to the 20th floor, it is designed as a flat slab structure. That is, the case building includes three types of structures. For effective verification of the proposed CWM algorithms, as shown in Figure 5, the case application is performed on the flat slab structure from the 3rd floor to the 20th floor, which is the largest part of the building.

**Figure 5.** Structural frame of the case building.

The flat slab structure of the case project is composed of columns, slabs, and drop panels. Therefore, as shown in Figures 5 and 6, the top of each column is reinforced by drop panels. Figure 6a is the sectional detail of the drop panel that is most frequently applied to the case structure, and below the drop panel, 11-D16s are reinforced at 300-mm intervals as shown in Figure 6b. In the case of the slab part, D13 is installed at 300-mm intervals in both directions on the upper and lower sides as shown in Figure 6c. Furthermore, at the top of the drop panel, D16 is additionally reinforced at 300-mm intervals over the width of the column strip. As shown in Figure 6a, the slab thickness is 250 mm, and the drop panel thickness is 450 mm (200 mm thicker than the slab). For reference, the cross-section of deformed bars is variously marked in many countries as Y, H, D, etc. In the case of this paper, it is denoted by D (Deformed bar), which is commonly used in Korea.

**Figure 6.** Detail of drop panel reinforcement: (**a**) Sectional detail of drop panel; (**b**) Section A-A; (**c**) Detail at the slab part of drop panel.

For the columns of the case project, as shown in Figures 5 and 7, all of them, including C3, have four sections with different reinforcements, such as 900 × 1200, 800 × 1000, and 600 × 1000 mm, from F3 to F20. This is because the design was optimized according to the change in the load condition of each floor. As shown in Figure 7, the main bars are designed to have 26, 16, and 14 deformed bars with a diameter of 25 mm, gradually decreasing in number. Additionally, the sizes and combinations of tie bars and hoops designed for buckling vary from 5-D10 at F3 to F10, to 2-D10 at F14 to F20, as shown in Figure 7. For reference, the 5-D10 tie bars and hoops consist of five deformed tie bars and one hoop with a diameter of 10 mm. The columns shown in Figures 5 and 7 are connected by mechanical couplers, so there is no splice lapping. Therefore, according to the cross-sectional change, the rebar that is installed continuously in the upper and lower columns is connected by couplers, but the rest of it is anchored to the upper column. The case building requires less rebar than its structural scale would suggest because, in order to reduce the cross-sections of the structural members, super-high-tensile deformed (SHD) bars with a yield strength of 500 MPa were used for D10 and D13, and ultra-high-tensile deformed (UHD) bars with a yield strength of 600 MPa were used for D16, D19, and D25.


**Figure 7.** Rebar details of column C3.

## *4.2. Application of CWM Algorithms*

In this study, for the verification of the proposed algorithm, rebar combinations were performed on structural frames from F3 to F20. The rebar cutting list generated from the bar-bending schedule was used for rebar information. At the case site, various diameters of rebar were used. For example, in the case of the column in Figure 7, 25-mm-diameter rebar was used for the main bar, and 10-mm-diameter rebar was used for the hoop. Tables 2 and 3 show the combination results of the CWM algorithms for the main bars of all of the columns.


**Table 2.** Combination report for special lengths of 25-mm diameter rebar.

**Table 3.** Combination report for stock lengths of 25-mm-diameter rebar.


Table 2 shows the results of minimization by special length (MSpL) according to Equations (6) to (11); the final loss rate, i.e., the cutting waste rate, was calculated to be 0.58%. A detailed description of Table 2 is as follows: (a) Combination is performed on the 25-mm rebar in the bar-cutting list file named "proj101\_bcl.dat"; (b) the minimum quantity per special length is 50 tons, the combination is performed for rebar with a minimum length of 6.0 m and a maximum length of 10.0 m, and the maximum loss rate is specified as 3.0%; (c) cutting pattern S1 has the same combined and order lengths of 7.4 m, so the loss rate is zero for the order quantity of 176.2 tons; (d) the combined and order lengths of cutting pattern S2 are equal at 9.2 m, so the loss rate is zero for the order quantity of 42.8 tons; (e) finally, in cutting pattern S3, the combined length is 8.53 m, but in order to satisfy the minimum order weight of 50 tons, 18.82 tons must be ordered with a length of 9.2 m, in which case the loss rate increases to 7.85%. However, as shown in Table 2, the quantity of S3 is relatively small compared with S1 and S2. Therefore, the loss rate of the final MSpL is 0.58%, which corresponds to 1.37 tons of cutting waste.

For reference, in the case structural frame, 25-mm-diameter rebar was used for the columns only. Additionally, there were not many cutting patterns because many main bars had the same length. In other words, as shown in Figure 5, the lengths of the main rebars of all of the columns on the same floor were the same, and the length changed according to the change in floor height. The total number of main rebars in the case frame was 15,734, distributed over five lengths.

Table 3 shows the results of MStL by Equations (1) to (4); the final loss rate was calculated as 1.58%. In Table 3, the cutting patterns are combined into one because MStL is performed on the rebar remaining after combination by special length. For cutting pattern N1, the combined length is 8.86 m and the stock length is 9.0 m. The combined and stock weights are 1.65 and 1.68 tons, respectively, and the loss rate is 1.58%. For reference, in the case of MStL, the combination conditions are the same as for MSpL, but the minimum weight is not specified. This is because it is assumed that there is a sufficient quantity in stock.

Comparing the results of Tables 2 and 3, the minimization results by special length have a lower loss rate than by stock length. This is because combination by special length is performed to further reduce the loss rate.

Table 4 shows the results of applying CWM algorithms to all types of rebar used in the case structural frame in Figures 5–7. Five diameters of rebar were used, and a total loss rate of 0.96% was calculated. The total quantity of rebar required for construction is 1,807.45 tons, and the quantity to be supplied in special and stock lengths is 1,824.75 tons. The loss rate is different for each diameter of rebar depending on the design characteristics of the structural member in which each rebar is used. The details are as follows.


**Table 4.** Combination report by rebar size.

D10, D13, and D16 are mostly rebars that are repeatedly used in various structural members such as slabs, staircase walls, stairs, hoops, and drop panels. In those applications, many rebars of the same length are placed in the same type of structural members. In addition to these characteristics, small-diameter rebar left after cutting for primary use can be used for various other purposes. For example, they can be used as diagonal bars for crack reinforcement, reinforcement of openings, etc. Therefore, the cutting waste rate is lower than that of large-diameter rebar. Moreover, the combined weight was sufficient to cover the various cutting patterns, so MSpL and MStL using the CWM algorithms were performed smoothly.

In the case of D19, MStL was performed because there was no combination that met the minimum weight for a special order (50 tons). As a result, the cutting waste rate increased. Lastly, in the case of D25, most of the rebar was combined with the MSpL algorithm, as described in Tables 2 and 3, so the cutting waste rate was the lowest.

## *4.3. Comparison of Actual and Optimized Rebar Quantities*

In order to verify the effectiveness of CWM algorithms, actual and optimized rebar quantities must be compared. As shown in Table 5, the actual quantity of rebar in the case structural frame is 1,942.05 tons and the quantity optimized by the CWM algorithms is 1,824.75 tons. As a result, 117.30 tons were saved, which is 6.04% of the actual quantity. It can be seen from this table that the reduction rate differs significantly for each diameter of rebar.


**Table 5.** Comparison of actual and optimized rebar quantities.

As described above, small-diameter rebar has more secondary uses after cutting. Therefore, it is common that less cutting waste is generated with small-diameter rebar. However, in Table 5, the 10.27% reduction rate for D10 is higher than that of D13 and D16, which means that the loss rate of the actual quantity of this rebar was higher than the optimized one; a high quantity reduction rate after optimization by the CWM algorithms indicates a high loss rate in the actual rebar quantity. The reason is presumed to be a problem with the rebar work management. In addition, Table 5 confirms a relatively small loss rate for D13 and D16. This is because there are many rebars with the same lengths repeatedly used in structural members such as slabs, staircase walls, and drop panels.

In the case of D19, it is confirmed that the cutting waste is increased because the quantity required for the work is relatively small and there are not many rebars of the same length placed repeatedly. However, it is confirmed that a cutting waste reduction of 12.39% can be obtained using the CWM algorithms proposed in this study. Lastly, in the case of D25, it is used for the main rebar of the columns, and it is relatively easy to manage the rebar to reduce cutting waste because there are not many changes in length. Therefore, it is confirmed that the quantity reduction of this rebar by optimization is 3.75%, which is smaller than that of the other types of rebar, as shown in Table 5. For reference, when the reduced rebar quantity of 117.3 tons is converted into money, it is about USD 98,976 including material, cutting and bending, and placement costs.

## *4.4. CO2 Emission Reduction Effects*

When using the CWM algorithms proposed in this study, the contribution to sustainable construction should be confirmed. For this, Table 6 shows the quantitative calculation of the CO2 emissions for the rebar saved by the algorithms. Substituting 3.466 ton-CO2/ton [49], the unit CO2 emission factor for high-tensile deformed bar published by the Korea Institute of Construction Technology (KICT), the CO2 emissions from the actual rebar work and from the optimized result are calculated to be 6,731.15 and 6,324.58 tons, respectively. For reference, the LCI DB (Life Cycle Inventory Database) varies by country, and this study cited the data presented in the research report of the government-funded research institute, KICT. In addition, because the LCI DB for the SHD and UHD bars used in the case project has not been officially provided, the unit CO2 emission data for high-tensile deformed bar are cited in this paper.

**Table 6.** Calculation of CO2 emission reduction effect.


As shown in Table 6, when the CWM algorithms are applied, the case project achieves a CO2 emission reduction of 406.60 tons, which is equivalent to 6.04% of the structure's emissions. As mentioned in the introduction, in the case of buildings, the structure accounts for about 65% of building GHGs [3]. Considering this reference, there is a CO2 emission reduction of 3.93% based on the whole building. From a carbon footprint point of view, the embodied CO2 per unit weight or volume of rebar is about 9.02 times that of concrete [4]; therefore, the CO2 emission reduction produced by the CWM algorithms has a great effect on sustainable construction.

It is necessary to confirm the cost reduction effect by converting the CO2 emission reduction into the carbon price. To this end, cost savings of USD 20,330 can be confirmed when applying the Korean carbon transaction price of USD 50/ton-CO2 [50] announced by the Carbon Disclosure Project (CDP). When this amount is added to the previously calculated savings of USD 98,976 in construction cost, total savings of USD 119,306 are confirmed. As with the LCI DB, the annual price of carbon traded under the CDP varies by country. According to CDP data, in the case of Korea, the price was USD 64/ton-CO2 in 2016 and USD 50/ton-CO2 in 2017, i.e., USD 14/ton-CO2 lower than the previous year.
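The savings arithmetic above can be reproduced in a few lines of Python; all input values are taken directly from Table 5, the KICT emission factor, and the CDP carbon price quoted in the text.

```python
saved_rebar_ton = 117.30             # Table 5: actual minus optimized rebar quantity
co2_factor = 3.466                   # ton-CO2 per ton of rebar (KICT)
carbon_price_usd = 50.0              # USD per ton-CO2 (CDP, Korea, 2017)
construction_savings_usd = 98_976.0  # material, cutting/bending, placement costs

co2_saved = round(saved_rebar_ton * co2_factor, 1)  # 406.6 ton-CO2
carbon_savings_usd = co2_saved * carbon_price_usd   # 20,330 USD
total_usd = construction_savings_usd + carbon_savings_usd
print(co2_saved, carbon_savings_usd, total_usd)     # 406.6, 20330.0, 119306.0
```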

The proposed algorithms were applied to the 3rd to 20th floors, designed as a flat slab structure, which is part of the case project. The amount of rebar used in the entire structural frame of the case project was found to be 3444.06 tons. Therefore, if the CWM algorithms proposed in this study are applied to the entire structural frame, greater CO2 emission and cost reductions are expected.

#### **5. Discussion**

In this study, as shown in Figure 3, we proposed an algorithm that performs MStL on the rebar that remains after special length minimization. With this algorithm, cutting waste or trim loss is further reduced because, as illustrated in Figure 2, the special length is combined at 0.1-m intervals, unlike the stock length, which is generally combined at 1-m intervals. This is also confirmed by the results shown in Tables 2 and 3.
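To make the granularity argument concrete, the sketch below packs a small hypothetical cut list into one purchasable length by first fit and compares the trim loss of the best special length (searched in 0.1-m steps) against an 8-m stock length. The cut list and length range are invented for illustration; the actual CWM algorithms in this study are more elaborate.

```python
# Why 0.1-m special-length granularity beats 1-m stock granularity: a toy
# first-fit packing experiment. All lengths here are hypothetical.

def trim_loss(cuts, bar_len):
    """Pack cut lengths (m) into bars of bar_len (m) by first fit; return loss rate."""
    bars = []                                  # remaining capacity of each opened bar
    for c in sorted(cuts, reverse=True):
        for i, rest in enumerate(bars):
            if rest >= c:
                bars[i] -= c
                break
        else:
            bars.append(bar_len - c)           # no bar fits: open a new one
    return sum(bars) / (len(bars) * bar_len)   # trim loss / purchased length

cuts = [3.7, 3.7, 2.4, 2.4, 1.8]               # hypothetical cut list (m)
best = min((l / 10 for l in range(60, 121)),   # special lengths 6.0-12.0 m in 0.1-m steps
           key=lambda length: trim_loss(cuts, length))
print(round(trim_loss(cuts, 8.0), 3), best, round(trim_loss(cuts, best), 3))
# -> 0.125 for the 8-m stock length vs about 0.054 for the 7.4-m special length
```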

Through this study, we confirmed that additional in-depth studies are needed on two issues. First, it is possible to combine all of the rebar for one project at the same time, but the results may not be practical. For example, rebar placed on the 1st and 20th floors can be combined; in this case, the inventory management cost may be high because there is a significant time difference between the use of rebar on the 1st floor and the use of the remaining rebar on the 20th floor. Therefore, a combination condition restricting rebar to that used within a certain time window must be added, for example, combining only rebar scheduled for use within two weeks, as sketched below. So far, most papers related to CSP do not consider the time factor. If the required rebar information can be obtained automatically from the BIM [7] or from an integrated project delivery system [17] linked to the schedule, this problem can be easily solved.
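A minimal sketch of such a time constraint follows, assuming schedule-linked demand records; the field names and dates are hypothetical stand-ins for data that would come from BIM or an integrated project delivery system.

```python
# Combine rebar only within the same two-week window. Demand records and the
# project start date are hypothetical; real data would come from BIM/IPD links.

from collections import defaultdict
from datetime import date, timedelta

WINDOW = timedelta(weeks=2)
PROJECT_START = date(2020, 3, 2)                 # assumed schedule origin

demands = [                                      # (diameter, length in m, use date)
    ("D19", 4.3, date(2020, 3, 5)),
    ("D19", 3.8, date(2020, 3, 10)),
    ("D19", 4.3, date(2020, 5, 20)),             # a much later floor: keep separate
]

groups = defaultdict(list)
for dia, length, when in demands:
    bucket = (when - PROJECT_START) // WINDOW    # index of the two-week window
    groups[(dia, bucket)].append(length)         # only same-window rebar is combined

for key, lengths in sorted(groups.items()):
    print(key, lengths)                          # each group is optimized separately
```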

Second, existing CSP-related studies, including this paper, use the original rebar information generated after the structural design. In this case, there are different amounts of rebar of various lengths, and numerous combinations are repeated to search for solutions. Additionally, as mentioned above, rebar scattered across various locations is combined. Besides producing cutting patterns that are difficult to apply in practice, this approach cannot reduce cutting waste rates below a certain level [5]. However, it was confirmed that near-zero cutting waste could be achieved by realigning the rebar in the drawings created after the structural design to special lengths using heuristic algorithms. It was also confirmed that heuristic algorithms would be more efficient than mathematical algorithms in performing rebar realignment. Therefore, further studies on rebar alignment algorithms for near-zero cutting waste should be performed for sustainable construction.

During the case study, it was confirmed that significant efforts have been made from the structural design stage to increase the productivity of rebar work and reduce the rebar loss rate. For example, it is common for some Korean companies to use 500- or 600-MPa super- or ultra-tensile bars instead of 400-MPa high-tensile deformed bar, and to use couplers to connect rebar more than 20 mm in diameter. The goal of near-zero cutting waste for sustainable construction is expected to be achieved if heuristic rebar alignment algorithms are applied along with these efforts.

#### **6. Conclusions**

Efforts to reduce carbon emissions and climate change risk are being carried out globally and across all industries. In particular, rebar, which has the highest embodied CO2 (ECO2) per unit weight among built-environment materials, generates a significant amount of cutting waste in the construction phase, adding not only to the cost of rebar work but also to a considerable amount of CO2 emissions. To solve this issue, we proposed rebar CWM algorithms for sustainable construction. The effectiveness of the proposed algorithms was verified through a case project, and the following results were obtained.

First, in the case of the optimization of D25 rebar, the cutting waste rate for special lengths was 0.58%, whereas that for stock lengths was 1.58%. This confirmed the assumption that combination by special lengths reduces the loss rate more than combination by stock lengths.

Furthermore, although the actual quantity of rebar put into the case project was 1942.05 tons, the quantity optimized by the proposed algorithms was 1824.75 tons, which represented a quantity reduction of 117.3 tons. This corresponds to 6.04% of the actual quantity and a savings of USD 98,976 in construction costs.

In addition, the proposed optimization algorithms reduced CO2 emissions by 406.6 ton-CO2 compared with the actual emissions. This corresponds to a CO2 emission reduction of 3.93% for the whole building, given that the structure accounts for about 65% of building GHGs [3]. It also represents savings of USD 20,330 based on the carbon trade price in Korea, for a total savings of USD 119,306 when the reduction in construction costs is included. The quantity of rebar used in the entire building of the case project, including the flat slab structure on the 3rd to 20th floors, was found to be 3444.06 tons. If the proposed algorithms had been applied to the entire building, further CO2 and cost savings would have been expected.

These results confirmed that the proposed CWM algorithms work as an effective tool for sustainable construction. During this study, it was observed that near-zero cutting waste could be achieved by realigning the rebar in the structural drawings to special lengths. In other words, repositioning rebar of a certain length while satisfying the structural design criteria might significantly reduce cutting waste. To do this efficiently, new heuristic algorithms, rather than the mathematical algorithms proposed in this study, should be developed in future work.

**Author Contributions:** Conceptualization, S.K.; methodology, S.K.; validation, D.L., S.S. and D.K.; formal analysis, D.L., S.S. and S.K.; investigation, D.K.; data curation, D.K.; writing—original draft preparation, D.L. and S.K.; writing—review and editing, D.L. and S.K.; visualization, S.S.; supervision, S.K.; project administration, S.K.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MOE) (no. 2017R1D1A1B04033761).

**Acknowledgments:** The authors thank SK E&C for providing the rebar data of the case project to verify the CWM algorithms.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Analysis of Musculoskeletal Disorders and Muscle Stresses on Construction Workers' Awkward Postures Using Simulation**

## **Shraddha Palikhe 1, Mi Yirong 2, Byoung Yoon Choi <sup>2</sup> and Dong-Eun Lee 2,\***


Received: 15 May 2020; Accepted: 8 July 2020; Published: 15 July 2020

**Abstract:** Negligence regarding musculoskeletal disorders (MSDs) at construction sites results in high rates of muscle injuries. This paper identifies the MSDs affecting each part of a worker's body, categorizes the awkward postures of each body part, estimates muscle stresses, and establishes benchmarks using anthropometry and hand force data. MSDs and their corresponding frequencies were identified by administering the Nordic Musculoskeletal Questionnaire (NMQ) survey, which solicits responses regarding construction workers' awkward postures. Musculoskeletal stresses were estimated using the three-dimensional static strength prediction program (3D SSPP) biomechanical software. New benchmarks were established for existing preventive measures using the anthropometry and hand force data. Whether workers suffer muscle pain in awkward postures may be predicted using the compression force magnitude, strength capability, and body balance. The model was verified by comparing its outputs with the survey analysis results. The study is of value to practitioners because it provides a means to understand the contemporary scenario of MSDs and to establish a practical benchmark based on the physical capability of workers. It is relevant to researchers because it digitally predicts MSDs and facilitates experimentation with different dimensions, thereby contributing to construction productivity improvement. Test cases validate the prediction method.

**Keywords:** musculoskeletal disorders; construction workers; muscle stress; standard Nordic questionnaire; awkward posture; simulation

## **1. Introduction**

Construction is ranked as the most hazardous industry with respect to musculoskeletal disorders and injuries. MSDs are caused by sudden exertion or prolonged exposure to physical factors (i.e., high force, repetitive motion, awkward body posture, and vibration) and affect the muscles, nerves, tendons, joints, cartilage, and supporting structures of the upper and lower limbs, neck, lower back, etc. [1]. MSDs are attributed to handling heavy materials manually, excessive and repetitive use of hand tools, repetitive screwing motions, reinforcing work involving difficult postures, and so on [2]. When the working posture differs from the neutral posture, in which the body is aligned and balanced while placing minimal stress on the muscles, tendons, nerves, and bones, the stress on the body parts (i.e., the muscles, tendons, joints, arms, hands, and shoulders) increases, resulting in awkward postures and/or movements that negatively affect the safety and health of the workers as well as their productivity. The percentages of construction workers in Korea exposed to musculoskeletal hazards while carrying heavy loads, standing for long periods, and maintaining tiring and painful positions are about 72%, 83.8%, and 67.9%, respectively [3]. Herein, we identify the factors that either affect ergonomic interventions or reduce MSDs in construction workers (i.e., masons, pavers, and electricians) [4].

Existing studies provide ergonomic analysis methods that employ motion sensing and assessment tools [5] to alleviate MSDs or to implement preventive measures. However, the correlation between anthropometry and the magnitude of hand forces has not been well explored. A new ergonomic analysis method that identifies the correlation between these two would be beneficial to a construction administrator for estimating, for example, the compression on the lower back, and for establishing a benchmark of the workload imposed on a worker using BMI and the hand forces exerted in diverse working postures (i.e., pushing forward, lifting, stooping, and kneeling). Such estimations may contribute to securing labor safety and health by efficiently identifying workers who are competent, from an ergonomic viewpoint, for a given work task. The three dimensions of environment, society, and economy are frequently used to model how sustainability can be incorporated into an organization's mission, goals, and practices. However, the issues involved in the social dimension of sustainability (i.e., labor relations, diversity, worker benefits/compensation, human rights, the organization of work, etc.) have often been overlooked, resulting in negative impacts (i.e., hazards to workers and tension between goals). Many worker issues exist within the concept of sustainability. The proposed posture simulation and benchmarking approach addresses these social issues by promoting labor welfare, safety, and health.

The research was conducted in five steps. First, the performance of existing ergonomic modeling and analysis methods for the construction industry was investigated through a literature review to identify new research contributions. Second, a set of Nordic Musculoskeletal Questionnaire surveys was administered to workers from various construction trades to identify the MSD issues of each trade. The ergonomic data, including the MSDs affecting various body parts of workers, were collected from workers engaged in bare-hand manual operations at four Korean construction sites. Third, the new ergonomic model that establishes a benchmark of the workload imposed on construction workers engaged in diverse working postures was implemented in an automated tool by mapping the survey findings into the three-dimensional static strength prediction program (3D SSPP) software. Fourth, the model performance was demonstrated using a set of working postures (i.e., pushing forward, lifting, stooping, and kneeling). The validity and effectiveness of the model were verified by performing a series of case studies, in each of which the common awkward body postures were identified and the static strength and compressive forces attributed to each awkward posture were estimated using 3D SSPP. It was confirmed that the model established the benchmark using hand force and body mass index (BMI). Finally, the research contributions and limitations were examined. The material in this paper is organized in the same order. Indeed, the findings will help construction administrators understand the MSD issues experienced by workers employed in a specific operation and will provide clues to identify those tasks that can be semi-automated or fully automated for better benefits.

## **2. Current State of Musculoskeletal Disorder Studies**

MSD is the highest contributor to global disability, accounting for 16% of all years lived with disability; lower back pain is the single leading cause (Global Burden of Disease, 2017). In South Korea, the percentage of workers aged 50 years or older was 25% in 2010 and is expected to exceed 33% in 2020 [6]. Workers suffering from MSDs include aged construction craftsmen exposed to severe vibrations, construction and mining technicians, and construction finishing workers (61.3%, 47.8%, and 46%, respectively). The prevalence of chronic MSDs and the degradation of body parts attributed to aging may lead to decreased physical labor capability. The frequencies of back pain, upper extremity pain, lower extremity pain, and fatigue are chronically high among construction workers, at about 30.7%, 61.3%, 49.2%, and 35.6%, respectively [3]. Existing studies claim that, compared to young workers, aged workers are more likely to suffer from musculoskeletal symptoms.

Meanwhile, existing ergonomics analysis techniques may be classified into self-reporting, manual observation, direct sensing measurement, and vision-based analysis. Self-reporting is a data collection process that involves conducting interviews and web-based questionnaires [7]. Manual observation tools facilitate measuring and/or evaluating MSD by hybridizing body position and movement-tracking tools (e.g., assessment of repetitive tasks, the Ovako working posture analysis system, posture activity tools and handling, rapid upper limb assessment, and rapid entire body assessment (REBA)) [8]. However, they lend themselves neither to precise posture measurement nor to the recording of delicate movement patterns such as is possible in time-lapse video observation. In direct sensing measurement, various sensors are attached to the body parts of workers; these approaches outperform the previous two in terms of measurement accuracy. The accuracy may be augmented by hybridizing the measurement method with Microsoft Kinect cameras for efficient real-time motion analysis [9,10]. Vision-based analysis allows for precise motion tracking along with biomechanical parameter measurements that use devices such as tapes and goniometers, microelectromechanical systems (MEMS), electrodes for electromyography (EMG), and magnetoresistive sensors. Although this method facilitates measuring joint angles, including the angle of the neck, it is cumbersome because construction workers must wear devices while working. Although it outperforms the other methods, methods based on such analysis are still far from ideal [11].

Existing studies have identified that construction workers suffer from physical fatigue and muscle pain when exposed to excessive energy consumption, resulting in human error, unsafe actions, productivity loss, etc. In addition, a few studies claim that practical methods that assess MSD risks to all parts of the body are necessary in construction, proposing new technologies that facilitate the identification of preventive measures involving appropriate body posture [12]. However, these have not yet reached the maturity required for practical implementation. MSD remains a substantial concern with considerable personal and societal burdens [13]. Indeed, it would be beneficial to elucidate MSD issues for construction practitioners by hybridizing posture simulation and survey methods. This may contribute to identifying human MSDs along with a benchmarking approach that supports the development of preventive MSD tools for construction personnel.

For construction workers, the anthropometric traits and hand forces to which different body parts (i.e., the neck, shoulder, fingers, knee, and wrist) are subjected depend on the task type (i.e., overhead work, ground-floor-level work, and manual material-handling work) [14]. Several musculoskeletal injury prevention measures (e.g., site-specific ergonomics programs, engineering controls, mechanical devices, and exercise programs) have been enforced to reduce the burden of manual-lifting hazards. The "best practices", which do not cause pain and/or discomfort in the back and wrist, were identified to increase productivity [2,15]. These measures encourage the development of initiatives that analyze ergonomic hazards and implement site-specific mitigation strategies and practices. It would be beneficial to reengineer these improvement techniques and adapt them to the dynamic conditions of work-related MSDs. Existing studies provide methods to identify the body postures of workers and suggest corresponding preventive measures. However, these studies did not deal with tracking transitory motion changes at an appropriate level of detail or with modeling MSDs in a working environment. Therefore, construction safety still incurs considerable personal and socioeconomic burdens. A new simulation modeling, analysis, and control tool that effectively handles the MSD issues faced by workers involved in a construction operation would be beneficial. A simulation model formulated based on worker survey results may contribute to construction safety and health by establishing a benchmark for actioning MSD-prevention measures.

#### **3. Materials and Methods**

### *3.1. Research Method*

The research method map is shown in Figure 1. The research consisted of a series of "processes" and "outputs," indicated by numbers (1) to (5). For each "process," a set of criteria (or standard means) was used to identify the variables and develop a simulation model. First, the variables involved in the MSDs of workers (i.e., anthropometry and hand forces) were identified through comprehensive literature reviews. Second, the variables that influenced the MSD symptoms of workers were confirmed by surveying workers actively engaged in construction tasks. Anthropometry and hand force data for awkward postures were obtained from the NMQ survey. The outputs show that three motions (i.e., pushing forward, lifting, and kneeling) among the awkward postures manifested in specific tasks deserve special attention. The justification for using these variables for MSDs was confirmed via the survey. Third, the data of these variables were used as the input parameters for simulation using 3D SSPP (Ver. 2017), an easy-to-use model that considers all variables together. This model estimates physical demands by considering input postures in a specified window frame, predicting the changes in physical demands as workers shift from one posture to another, capturing and saving pictures of each awkward posture, creating a digital twin of virtual workers using the photos, duplicating postures, and calculating the lower back compression and body balance. In addition, the anthropometric data, the hand load measured for each construction task, and the loads obtained from workers' experience were mapped onto the modeled virtual workers. Fourth, the validity of the survey output was confirmed by comparing it with the simulation output data from a series of simulation experiments. Preventive measures to reduce MSDs were discussed using the static strength obtained for each posture. Body balance was assessed by computing the center-of-pressure (COP) and evaluating the location of the COP projected onto the floor while taking into account the limits of the functional stability region using 3D SSPP. Fifth, the benchmark was established according to BMI and corresponding hand forces by changing the magnitude of the hand forces while keeping the body weight constant. In addition, the model was tested under several different sets of variables to estimate lower back compression, the percentage of accurate predictions, and body balance. Finally, the contributions and limitations of the model and suggestions for its improvement were discussed.

**Figure 1.** Research method.

## *3.2. Administering the Nordic Musculoskeletal Questionnaire (NMQ)*

The NMQ provides a structured and standardized interview method considering the lower back, neck, and shoulder, studying general complaints from an epidemiological perspective. Its validity and reliability are well accepted in the field of MSD study [16]. The standard questionnaire consists of two parts. One is a general questionnaire of 40 forced-choice items that identify the body parts suffering from musculoskeletal problems; the other is a supplemental questionnaire that considers in depth the problems of lower back, neck, and shoulder pain [16]. In this study, the NMQ survey was designed and administered to 120 male workers on four high-rise condominium building construction projects in Korea. The participants, who were actively engaged in various manual construction operations, were identified based on their trade (i.e., carpenters, masons, and ironworkers), task (i.e., ceiling work, material handling, and ground-floor-level helping), and role (i.e., craftsman, journeyman, and helper). The average age of these workers was 48.46 years. Questionnaires were prepared and provided to these workers in envelopes. Of the 120 envelopes, 28 were returned in either an incomplete or an invalid form; the rate of valid responses for the 120 envelopes was thus 76.66%. In order to obtain accurate data, the objective of the study was clearly explained to the participants before they responded to the survey. After obtaining informed consent from the participants, each criterion, which included a moderate (non-extreme) level of self-reported physical activity, was collected on a daily basis. By adopting the standard NMQ survey process [17], valid anthropometry data along with answers to all questions were obtained. In addition, the extensiveness and precision of the survey were confirmed by descriptive statistics, which included the MSD pain prevalence data for 12 months, the frequency of pain over the total working days/weeks, and the distribution of MSDs on each body part.

## *3.3. Biomechanical Assessment Using Three-Dimensional Static Strength Prediction Program (3D SSPP)*

The biomechanical assessment software 3D SSPP (Version 8.0) was developed at the University of Michigan, USA, and is well accepted as an effective tool for handling the relationship between various lifting motions and lower back pain [18,19]. It was used to validate the NMQ survey output analysis in this study. The software not only quantifies the physical demands attributed to a task (from posture data, force parameters, and male/female anthropometry) but also predicts the static strength requirements for lifting, pushing, and stooping tasks. It provides the percentage of workers whose strength is adequate for a designated task, and the spinal compression forces based on the National Institute of Occupational Safety and Health (NIOSH) guidelines.

#### **4. Results**

## *4.1. Descriptive Analysis of the NMQ Survey Outputs*

Of the 92 participants who provided valid questionnaire responses, 87% complained of MSD symptoms attributed to their construction work (Tables 1–3).


**Table 1.** Height, weight, and body mass index (BMI) of respondents.


**Table 2.** Tenure and working days.


**Table 3.** Relation between age groups and MSD pain.

The age distribution of the respondents was as follows: 7% were 18 to 30 years old, 30% were 30 to 50 years old, and 63% were 50 to 65 years old. Workers' age appeared to be a significant contributor to MSD symptoms. The distribution of workers according to their work experience was as follows: 54% with less than 10 years' tenure, 29% with 11 to 25 years' tenure, and 16% with more than 25 years' tenure. The average body weight and BMI of the population were 75 kg and 24.8, respectively, which indicates that the construction workers had a normal BMI. The longest duration of MSD pain during the past 12 months was more than a month for the lower back, followed by the neck and shoulder (Table 4).
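As a quick consistency check on these averages, BMI is weight divided by height squared; the sketch below, using the 174-cm mean height reported in Section 4.3, reproduces the 24.8 average. The helper is illustrative only and operates on the reported means, not on individual survey records.

```python
# BMI = weight (kg) / height (m)^2, checked against the reported averages.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(75, 1.74), 1))   # -> 24.8, consistent with the 75-kg mean weight
                                 #    and the 174-cm mean height in Section 4.3
```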

**Table 4.** Duration of MSD pain in the past 12 months.


## *4.2. Reconfirming Awkward Postures Contributing to MSDs*

Existing research indicates that pushing, lifting, and kneeling are major awkward postures contributing to MSDs. The NMQ survey found that nearly 43%, 38%, 16%, and 16% of the studied population suffered from pain in the shoulder, lower back, neck, and knee, respectively (Table 5). The main awkward postures (e.g., lifting, pulling, and kneeling) obtained by surveying construction workers' muscle pain were chosen for biomechanical simulation to validate the survey results. Since the amounts of pain reported for the upper back, hip, and wrist were nominal, these postures were not included in the simulation. In addition, the main motions contributing to each MSD were identified. Shoulder pain was the most prominent MSD complaint during daily working hours. It was mainly attributed to bending and/or twisting of the body; working in a bent or twisted posture for long hours daily may increase this MSD symptom significantly. It was found that leg squatting while performing tasks on the ground or floor was an awkward motion that acutely involved the knee. In addition, the most common awkward postures at construction job sites were pushing forward (posture 1), lifting (posture 2), and kneeling (posture 3).


**Table 5.** MSD profile among respondents.

## *4.3. Simulation Modeling and Analysis Using 3DSSPP*

Anthropometry data obtained from the survey at the aforementioned four Korean construction sites provided the posture details and input parameters for the workers, including the average height (174 cm), weight (165 lb), and left- and right-hand forces. These data, listed in Table 6, were used as the input for the simulation model.

**Table 6.** Anthropometry data of each posture.


## 4.3.1. Analyzing Motion of Pushing Forward

The pushing forward motion shown in Figure 2a did not cause a severe risk of injury to the lower back because it demanded a lumbar disc compression force (L4/L5) that was less than the NIOSH back compression action limit of 770 lb (3400 N). This criterion for the compression force on a disc of the spine is recommended by NIOSH: the disc compression force when lifting objects in manual material handling should be less than 3400 N (Waters et al. 1993). While pushing forward against a force of 295 lb (1338 N), the worker did not bend his torso. Therefore, high flexion of the back was not needed to push forward an object weighing up to 9 kg (20 lb) in the simulation experiment, as shown in Figure 2b. The heavier the object that the worker pushes and/or the greater the extent to which the worker bends his/her torso, the higher the compression force. The simulation output analysis confirmed that only 35% and 62% of the surveyed workers could perform the wrist-joint and knee-joint postures, respectively, manifested in the pushing forward motion. Further, the other joints fell within the critical zone, indicating the influence of the pushing forward motion on body balance (see Table 7).
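The screening logic used throughout Sections 4.3.1 to 4.3.3 reduces to comparing a predicted L4/L5 compression with the NIOSH action limit. The following is a minimal sketch of that comparison, with the compression values taken from the 3D SSPP outputs quoted in the text; the helper function is ours, not part of 3D SSPP.

```python
# Compare predicted L4/L5 compression (lb) against the NIOSH action limit
# of 770 lb (3400 N). Compression values are those quoted in the text.

LB_TO_N = 4.448                          # pounds-force to newtons

def exceeds_action_limit(compression_lb, limit_n=3400.0):
    return compression_lb * LB_TO_N > limit_n

print(exceeds_action_limit(295))         # False: pushing forward stays below the limit
print(exceeds_action_limit(859))         # True: the worst lifting posture (3821 N,
                                         #       Section 4.3.2) exceeds the limit
```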

**Table 7.** Simulation output analysis of pushing forward.


**Figure 2.** Pushing forward motion (**a**), and limb angles in pushing forward motion (**b**) obtained by the three-dimensional static strength prediction program (3D SSPP).

The change in the location of the center of gravity of the worker's body within the functional stability region while releasing the pushing forward posture was projected onto the body balance graph by capturing 30 window frames per second, as shown in Figure 3. The virtual manikin retained static balance when the hand forces were decreased. The manikin could bend further forward if its center of gravity was located further backward from its base of support. Thus, it may be beneficial either to assign a stronger worker or to decrease the hand force according to the BMI of the worker in order to avoid falling accidents.

**Figure 3.** Center of gravity of body in pushing forward motion.

## 4.3.2. Analyzing Lifting Motion

Four different postures for lifting a 25-lb box, which may cause severe injury to the lower back, were modeled, as shown in Figure 4a–d. The compression force (L4/L5), i.e., 3821 N (859 lb), exceeded the NIOSH back compression action limit of 3400 N. Since the worker bent his torso, these postures required high flexion to move an object (of 25 lb weight in the simulation experiment) forward. It was confirmed that the other joints fell within the critical yellow zone.

**Figure 4.** Body balance in lifting postures based on center-of-pressure (COP)—acceptable (**a**), acceptable (**b**), critical (**c**), and unacceptable (**d**).

These postures may cause severe low back injuries since the compression force (L4/L5) exceeds the NIOSH back compression action limit of 3400 N (Table 8).


**Table 8.** Simulation output analysis of lifting.

Note: A = acceptable, C = critical, U = unacceptable.

The maximum and minimum compressive forces exerted while performing the lifting motion were 859 lb (3821 N) and 343 lb (1525 N), respectively. When the worker did not bend his/her torso, the postures did not require high flexion to move forward the object weighing 12 kg (25 lb) used in the simulation experiment; the compression force increased if the worker bent his/her torso to lift a heavier object. Only 77%, 87%, and 70% of the population could perform the corresponding postures of the wrist, knee, and shoulder joints, respectively. A posture may have static balance, fall within the yellow zone, or tend to cause a fall. The change in the location of the center of gravity while releasing the lifting posture was projected onto the body balance graph, as shown in Figure 5. Body balance was categorized by 3D SSPP as acceptable, critical, or unacceptable when the COP was within, on the boundary of, or outside the functional stability region, respectively, as depicted in Figure 4a–d. The virtual manikin retained static balance when the hand force was decreased from 25 lb to 15 lb. The further backward the center of gravity of the manikin was located from its base of support, the farther the manikin bent. Thus, it is beneficial either to assign a stronger worker or to decrease the hand force according to the worker's BMI to avoid a falling accident.
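The three-way balance categorization described here can be sketched as a simple point-in-region test. The rectangular stability region, tolerance, and COP coordinates below are invented for illustration; in 3D SSPP, the functional stability region is derived from the manikin's base of support.

```python
# Categorize body balance from the COP location relative to the functional
# stability region: inside -> acceptable, near the boundary -> critical,
# outside -> unacceptable. Region geometry and tolerance are assumed here.

def balance_category(cop_xy, region, tol=0.01):
    """cop_xy: (x, y) in m; region: (xmin, xmax, ymin, ymax) in m."""
    x, y = cop_xy
    xmin, xmax, ymin, ymax = region
    inside = xmin < x < xmax and ymin < y < ymax
    near_edge = min(abs(x - xmin), abs(x - xmax),
                    abs(y - ymin), abs(y - ymax)) <= tol
    if inside and not near_edge:
        return "acceptable"
    if inside or near_edge:
        return "critical"
    return "unacceptable"

region = (-0.10, 0.10, -0.05, 0.25)       # assumed base-of-support footprint (m)
for cop in [(0.0, 0.10), (0.095, 0.10), (0.20, 0.10)]:
    print(balance_category(cop, region))  # -> acceptable, critical, unacceptable
```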

**Figure 5.** Reinforcing postures for (**a**) 25 fps and (**b**) 20 fps.

## 4.3.3. Analyzing Kneeling Posture

Two different postures for reinforcing rebar, at 25 frames per second and 20 frames per second, were modeled, as shown in Figure 5a,b, respectively. The compression force (L4/L5), i.e., 742.6 lb (3303 N), was within the NIOSH back compression action limit of 3400 N (see Table 9), although with only a small margin against lower back injury. Since the worker must bend his torso, these postures require high flexion to move an object weighing 9 kg (20 lb) forward in the simulation experiment. Only 74%, 72%, and 84% of the population could perform the holding posture with respect to the knee, ankle, and torso joints, respectively. Further, only 70% and 52% of the population could perform the reinforcing posture with respect to the wrist and knee joints, respectively. It was confirmed that the other joints engaged in the holding posture were unacceptable, whereas those engaged in the reinforcing posture were acceptable.


**Table 9.** Simulation output analysis of kneeling.

Note: U = unacceptable, A = acceptable.

Two different postures, involving stooping (bending at the waist) with a hand tool at 25 frames per second and squatting down to reinforce rebars at 20 frames per second, were modeled, as shown in Figure 6a,b. The center of gravity of the body while performing these postures was located away from the support, leading to a tendency to fall. When the hand force was decreased, the center of gravity of the virtual manikin remained within the base and the manikin maintained static balance. Indeed, either decreasing the hand force or maintaining a constant hand force will be a good preventive measure against falling accidents for a given BMI.

**Figure 6.** Stooping postures with hand tool for (**a**) 25 fps and (**b**) reinforcing rebar for 20 fps.

## *4.4. Tradeoff between BMI and Magnitude of Force*

While decreasing the hand force in the pushing forward, lifting, and kneeling postures, the lower back compression, body balance, and percentage of strength capability were obtained. These values are listed in Table 10. The body balance in the lifting posture was critically unacceptable, but it became stable as the hand force decreased. The lower back compression decreased from 942 lb to 752 lb, which is less than the standard level (770 lb), as the hand force decreased. The percentage of strength capability increased remarkably to more than 90% for all the body joints, including the knee, shoulder, wrist, and hip, as seen from the data in Table 10. The benchmark provided admissible evidence that a Korean worker with an average weight of 75 kg can carry loads of 16 lb, 19 lb, and 16 lb when performing tasks involving pushing forward, lifting, and kneeling, respectively. When the hand force applied to the manikin (i.e., the virtual worker) weighing 75 kg was greater than these loads, the body tended to be unbalanced in those postures.
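Conceptually, the benchmark is found by stepping the hand force down until the simulated posture is both balanced and below the compression limit. The sketch below mimics that loop with a stub in place of a 3D SSPP run; the stub's linear compression model is purely an assumption, calibrated only so that the lifting case lands on the 19-lb threshold and the 752-lb compression reported in Table 10.

```python
# Benchmarking loop: reduce the hand force until balance is acceptable and the
# L4/L5 compression is under the 770-lb action limit. evaluate() is a stub
# standing in for a 3D SSPP run; its linear model is an assumption.

def evaluate(posture, weight_kg, hand_force_lb):
    """Stub for a 3D SSPP evaluation of a 75-kg manikin (hypothetical model)."""
    compression_lb = 334 + 22 * hand_force_lb            # assumed linear response
    balanced = hand_force_lb <= {"push": 16, "lift": 19, "kneel": 16}[posture]
    return compression_lb, balanced                      # thresholds from Table 10

def benchmark(posture, weight_kg=75.0, start_lb=25.0):
    force = start_lb
    while force > 0:
        compression, balanced = evaluate(posture, weight_kg, force)
        if balanced and compression <= 770:              # NIOSH action limit (lb)
            return force                                 # max admissible hand force
        force -= 1.0                                     # step the load down
    return 0.0

print(benchmark("lift"))   # -> 19.0 lb, with compression 752 lb as in Table 10
```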


**Table 10.** Tradeoff between BMI and the magnitude of forces in different postures.

Note: P-1 = posture 1, P-2 = posture 2, P-3 = posture 3, C = critical, A = acceptable, U = unacceptable.

#### **5. Discussion**

The method combining the NMQ survey, biomechanical analysis, and benchmarking approach facilitates quantitative control of MSDs arising from the muscle stress of construction workers. It encourages informed decision making when recruiting appropriate workers, considering their physical merits (i.e., muscle strength) for a specific job function in a construction operation. It fills a gap by providing a computational method for handling the MSDs of different body parts, which existing methods had not adequately described, to assess the health risks of construction workers through simulation of their postures. It may replicate specific tasks, revealing the MSD issues of construction workers that could be addressed using semi-automatic or fully automatic tools. The biomechanical analysis outputs for the unacceptable and awkward postures that impose high risk (Tables 5–7) provide field employment managers with a tool to identify the tradeoff between BMI and the magnitude of the hand forces in order to execute preventive measures (i.e., exercise programs and engineering controls) [20]. Few studies provide insight into reducing work-related musculoskeletal injuries given the existing preventive measures. This lack of research may be attributed to the fact that analyzing work tasks at a job site is complex because of various factors (e.g., organizational, human, and task factors). The new hybrid method allows an elaborate analysis of the work postures associated with construction tasks by considering job-specific risks attributed to process, motion, and posture. Note that the method identifies potential MSDs associated with the awkward postures of a worker performing a job function while controlling other job-site variables.

The limitations of the method are related to the following biomechanical issues. First, it is desirable that sophisticated postures be considered jointly by accommodating 3D motion analysis in a future version of the method. A worker's muscle strength may not be determined by the biomechanical simulation model alone. However, the model may provide a control tool for the MSD safety and health of workers by validating the physical demands (e.g., lower back compressive strength, percentage of strength capability, and body balance) obtained by expert group surveys of the construction community. Second, the motions of construction workers are momentary and transitory and often involve multiple tasks at a time. A controlled experiment on construction workers is therefore not feasible, because it is not easy to have many workers perform identical motions at a construction job site. It would be worthwhile to perform controlled experiments at a simulated construction job site to generalize the outputs obtained by the method. Third, it would be beneficial to track each motion activated by a participant performing a specific task, enabling a more elaborate evaluation of MSDs. For instance, biomechanical human simulation may effectively predict the relation between the muscle strength required for a construction task and the workplace dimension by using the identified awkward postures. Fourth, extensive controlled experiments with different exercise protocols (i.e., the type of working layout, frequency of postures, intensity of motion, and duration of posture) may contribute to identifying unknown variables that influence the relationship between the two variables and to securing the validity of the method and its corresponding data.

## **6. Conclusions**

The main contribution of this study is that the hybrid method lends itself to scientific fact-finding. A set of benchmarks was established using the model by manipulating the BMI and hand forces of the workers. The method provides a means not only to understand the contemporary scenario of MSDs in construction workers but also to establish a practical benchmark based on the physical capability of workers, which is helpful to construction managers during recruitment. It confirms that the 87% of respondents suffering from MSDs shared three common awkward postures. Further, the simulation output analysis provided admissible evidence that the muscle stress involved in lower back compression exceeds the tolerance. The body of a worker suffering from back pain may be unstable while performing a work task, and awkward postures in which the body balance is proportional to the loads aggravate the situation. Indeed, decreasing the hand forces makes the posture stable, thereby reducing MSDs. It will be beneficial to incorporate these findings into computer-based predictions to secure the effectiveness and validity of biomechanical human simulations. The current version of the developed method handles static postures, not dynamic movements. It is desirable to extend the method to assess real-time work processes in order to identify the dynamics of real practices in a future study. The new method promotes a multi-paradigm computing approach and may contribute to the advancement of health assessment and monitoring for the next generation of construction workers.

**Author Contributions:** Conceptualization, S.P. and D.-E.L.; Data curation, M.Y. and B.Y.C.; Formal analysis, S.P., M.Y., B.Y.C. and D.-E.L.; Funding acquisition, D.-E.L.; Investigation, S.P. and M.Y.; Methodology, S.P.; Project administration, D.-E.L.; Resources, S.P.; Software, S.P. and B.Y.C.; Supervision, D.-E.L.; Validation, D.-E.L.; Visualization, S.P. and B.Y.C. Writing—original draft, S.P.; Writing—review & editing, D.-E.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2018R1A5A1025137) and (NRF-2019R1I1A1A01062006).

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Evaluation of Space Service Quality for Facilitating Efficient Operations in a Mass Rapid Transit Station**

## **I-Chen Wu \* and Yi-Chun Lin**

Department of Civil Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan; x791220@gmail.com

**\*** Correspondence: kwu@nkust.edu.tw; Tel.: +886-7381-4526 (ext. 15238)

Received: 29 May 2020; Accepted: 29 June 2020; Published: 30 June 2020

**Abstract:** In an urban public transport system, mass rapid transit (MRT) stations play an important role in the concentration and deconcentration of passengers. Spatial conflicts and unclear routes may lead to crowding in MRT stations and reduce their operational efficiency. For this reason, this study proposes a space service quality evaluation method based on agent-based simulation, employing spatial information from building information modeling (BIM) systems as boundary constraints. Passengers and trains are simulated as interacting agents with complex behaviors in a limited space. This method comprehensively assesses congestion, noise, and air quality to determine the service quality in different spaces. Moreover, the results are visualized in different ways to support decision making about space planning. Finally, this research demonstrates and verifies the functions of the proposed system with an actual MRT station. Such simulation results can be used as a reference for management personnel to adjust space/route plans to increase passenger satisfaction, environment quality, and operational efficiency in the operation stage of an MRT station. The evaluation method establishes valid and reliable measures of service performance and passenger satisfaction as well as other performance outcomes.

**Keywords:** building information modeling; agent-based simulation; space service quality; efficient operation

## **1. Introduction**

Mass rapid transit (MRT) stations play an important role in hosting and distributing passengers through an urban transport system. However, station space is a limited resource, and passengers move through or temporarily halt in this limited space. Many studies have examined how to effectively configure and use space. For example, Bahrehmand et al. [1] present an interactive layout solver that can assist designers in layout planning by recommending personalized space arrangements based on architectural guidelines and user preferences. Guo and Li [2] present a method for the automatic generation of a spatial architectural layout from a user-specified architectural program. However, the quality of an open public space may be significantly negatively associated with psychological distress. Therefore, emphasis must be placed on space planning and service quality in building such spaces [3]. The quality of space planning for MRT stations will affect passengers' evaluations of the space service quality: poor space planning may, to a large extent, negatively impact passengers' perceptions of it. In addition to the space in the infrastructure, time and user experience are additional factors to consider in space and route planning to improve the overall levels of service quality and passenger satisfaction. To enhance space planning and the degree of user satisfaction with station services, Li et al. [4] employed a scientific method to assess a building's space performance while emphasizing its influence on environmental quality and passenger satisfaction. They also developed an evaluation tool to continuously monitor the overall sustainable performance in the operation stage. Wang et al. [5] used a questionnaire survey to understand the overall level of satisfaction with the interior environment of a flight terminal, and the outcomes can be used to assist in the future design and planning of airports and their operations. Tomé et al. [6] stated that buildings are complex dynamic systems composed of sub-systems and components in continuous interaction with human behavior; therefore, information needs to be obtained from records to understand passenger concentrations and levels of space usage. The above research studies belong to the post-occupancy evaluation (POE) method. The next question to consider is the cost and operational impact of reconstruction caused by future use problems arising from poor design. Hayek et al. [7] and te Brömmelstroet et al. [8] both argued that planning integration should occur in the early stages of design to avoid imperfections or conflict problems in the public transport system. However, planning is often based on the instinct and experience of decision-makers, who may lack the ability to interpret modeling results. Therefore, it is a challenge to provide models and evaluation results that are easy to understand, so that they can assist planning personnel in the decision-making process and achieve a reasonable balance between planning design and evaluation. Evaluation methods can be divided into two types: non-parametric and parametric. Data envelopment analysis (DEA) is a non-parametric method in operations research and economics for the estimation of production frontiers [9,10].

The other type is the parametric evaluation method, as Indraprastha and Shinozaki [11] present in their computational model for analyzing and assessing the quality of architectural space, which uses visual distance combined with viewing angle to obtain the spatial quality. Understanding and evaluating space quality at the design stage can assist in making modifications at the pre-construction stage. Zawidzki et al. [12] propose a framework wherein the architectural functional layout is optimized for the following objectives: functionality (defined by users), insolation (calculated according to geographical conditions), outside view attractiveness (assessed on-site), and external noise (measured on-site). Although mathematical or optimization methods can be used to evaluate the quality of a space design, these research approaches ignore the influence of human interaction and grouping in a confined space. The agent-based modeling (ABM) [13,14] technique is widely adopted to simulate real social conditions and human psychological reactions to determine the problems that may arise. A multi-agent system consists of multiple independent agents interacting with each other; their interactions can result in various complex and interesting behaviors, making ABM a method for modeling real-life situations. Research has supported the validity of ABM in modeling human behaviors. Lee and Malkawi [15] utilized ABM to predict passenger behaviors, demonstrating that passenger behaviors impact both comfort and energy management activities. Osman [16] applied ABM to predict infrastructure asset management activities in order to study the effects of the social-psychological behaviors of users on how they spend time on infrastructure services. Langevin et al. [17] developed and validated an ABM of occupant behaviors using data from a one-year-long field study in a medium-sized, air-conditioned office building. Building information modeling (BIM) [18,19] is another popular technique in the construction industry, where it has been applied in the design and planning phases as well as the operation and maintenance phases. BIM can provide a visual modeling environment to assist in space planning, thus reducing discrepancies between design and actual construction outcomes. BIM models consist of comprehensive engineering attributes and spatial information. In view of these merits, in this study, BIM and ABM are applied to the evaluation of space service quality, providing dynamic visualization of the interactions between passengers and space. In simulating the actual conditions, the effects of the spatial topology and human perception factors on the service quality of a space are considered. The results can not only assist planning personnel in adjusting current spatial designs but also provide feedback for the planning of station space and routes and for the analysis of alternative options at the design stage. This method can reduce the labor and other financial costs associated with making design changes to improve passenger satisfaction levels.

## **2. Evaluation Method for Space Service Quality**

Human beings have long endeavored to create indoor environments in which they can feel comfortable. Durmisevic and Sariyildiz [20] pointed out the key features that influence underground space design, including accessibility and the nearest surroundings, orientation and wayfinding, spatial proportion, contact with the outside world, natural and artificial lighting, materials and colors, noise level, and air quality, among others. In addition to hardware equipment and environmental factors, the other features are predominantly related to subjective feelings. In determining the impact of an indoor space on the comfort of the human body, the most common criterion is the indoor environment quality (IEQ), a benchmark for residential quality performance that includes four items: thermal comfort, air quality, noise level, and lighting level [21]. Some researchers have shown that indoor environmental quality factors can affect occupant satisfaction [22,23]. Among them, thermal comfort and lighting level can be improved by adjusting the hardware equipment to improve the quality of space services. However, noise and air quality are more difficult to improve, mainly because the relevant measurements result from the interactions and states of crowds of people and are therefore difficult to quantify. Therefore, this study considers the impact of human interaction and grouping on space and proposes a novel method for evaluating space service quality to facilitate the efficient operation of an MRT station. This method employs BIM technology combined with agent-based simulation to simulate the behavioral patterns of passengers at MRT stations. Through dynamic modeling of possible scenarios, spaces crowded by large volumes of passengers and the associated noise and air quality issues are examined to understand the service quality at MRT stations. Figure 1 illustrates a flowchart of the evaluation method for space service quality. A major feature of this method is the reuse of BIM models, which reduces the time and cost associated with the data preparation required for simulation. The BIM model contains geometric shapes, spatial locations, and boundaries, which are important simulation constraints. Subsequently, this research establishes agent-based models based on the BIM and the passenger and train movement conditions. The actual conditions are simulated by setting the relevant influencing factors and behavioral patterns. The system can simulate the status of space usage, usage level, air quality, noise, etc. Moreover, 2D and 3D visualizations and statistical charts are used to present the simulation results. Finally, the simulation results of the space service quality measurement can be exported into Excel spreadsheets to assist planning personnel in evaluation and decision making in the design and operation stages of the building's life cycle. In this way, the potential impact of future activities on service quality can be predicted and managed in advance.

**Figure 1.** Flowchart of the evaluation method for space service quality.

## **3. Data Preparation**

For space and pedestrian circulation planning, the dynamic simulation of virtual display methods needs to involve the timing between moving and stationary objects, the space configuration, and user participation characteristics. This research uses the AnyLogic simulation software for agent-based simulation. Although AnyLogic supports the 3D object formats VRML (Virtual Reality Modeling Language) and X3D (Extensible 3D), the VRML format has too many restrictions, and no version newer than VRML 97 exists. Therefore, this study employs X3D as the model conversion target. Since X3D is based on XML, it can be verified or modified using XML-related editing tools; it is extensible, highly readable, supports cross-platform interaction, and is one of the current unified exchange formats for 3D data. In addition, to improve the sustainability of the conversion program, the BIM model information exchange format called the Industry Foundation Classes (IFC), a data format released by BuildingSMART, is used as the standard format for converting all BIM models in the process. It can be used by different modeling software, such as Bentley AECOsim, Autodesk Revit, ArchiCAD, and Tekla, and this conversion mechanism can export files in the X3D 3.0 format for subsequent analysis and simulation. Thus, the consistent format conversion of the data model in the data preparation stage is a problem that must be solved. This study develops a BIM model data capture and format conversion tool, as shown in Figure 2. The user can directly retrieve the floor plan and compartment data of the BIM model, convert them into the X3D virtual reality file format, and directly import them into AnyLogic for conversion into the active space boundary conditions of the system. This study reuses the BIM model established in the design planning stage to ensure the accuracy of the simulation and to avoid the labor cost of rebuilding the model. Moreover, the data on pedestrian circulation were collected from historical data of the operation stage. In addition to traffic at the exits and entrances of the station, passengers board and alight from MRT trains at the platforms. Thus, this research collected the train schedules to obtain train capacities at different times and to consider the overall pedestrian circulation.
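One small part of the conversion that Figure 2 illustrates is the coordinate remapping between the two formats. The sketch below shows the general idea under common conventions: IFC geometry is Z-up and frequently modeled in millimetres, while X3D scenes are Y-up and in metres. The helper is our own illustration, not the authors' conversion tool, and real IFC files may declare different units.

```python
# Map an IFC-style point (Z-up, assumed mm) to an X3D-style point (Y-up, m).
# The axis swap (x, z, -y) keeps the coordinate frame right-handed.

def ifc_to_x3d(point_mm):
    x, y, z = point_mm
    return (x / 1000.0, z / 1000.0, -y / 1000.0)

print(ifc_to_x3d((12000.0, 8000.0, 3500.0)))   # -> (12.0, 3.5, -8.0)
```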

**Figure 2.** Concept of X3D format and coordinate conversion.

## **4. Agent-Based Simulation**

Space quality, similar to service quality, is subject to user perceptions. Crowded spaces or spaces with noise and/or bad air quality directly and negatively affect people's perception of the service quality. Agent-based modeling can deal with the continual temporal and spatial states of events. Agents can make decisions regarding space boundaries, destinations, entrances, exits, and route disturbances, as well as identify potential problems. A multi-agent system consists of multiple independent agents interacting with each other, and a multi-agent simulation can be applied to society, biological bodies, mechanical processes, human beings, or any movable object. The social force model proposed by Helbing et al. [24] can be used to promote or influence agents' physical and psychological states, generating distances between the agents and resulting in socio-psychological, physical, and reaction forces. This model is widely applied for the simulation of the behavioral patterns of agents. Therefore, in addition to using BIM to understand the walking behaviors of passengers in an MRT station space, this study uses an agent-based method to simulate congestion, noise, and air quality according to the number of passengers, determine the extent of their effects, and evaluate the service quality. The results can serve as a reference and a basis for decision making in the planning process.
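For readers unfamiliar with the social force model [24], the sketch below shows its core update for one pedestrian agent: a driving term that relaxes the agent toward its desired velocity, plus an exponential repulsion from nearby pedestrians. The parameter values are illustrative assumptions, not those used in the AnyLogic implementation.

```python
# Minimal 2D social force update (Helbing et al. [24]): driving term plus
# pairwise exponential repulsion. Parameters are illustrative assumptions.

import numpy as np

TAU = 0.5          # relaxation time (s)
A, B = 2.0, 0.3    # repulsion strength (m/s^2) and range (m), assumed
V_DESIRED = 1.34   # desired walking speed (m/s)

def social_force(pos, vel, goal, others):
    """Acceleration on one agent from the driving and repulsive terms."""
    e = (goal - pos) / np.linalg.norm(goal - pos)       # unit vector toward goal
    drive = (V_DESIRED * e - vel) / TAU                 # relax toward desired velocity
    repulse = np.zeros(2)
    for q in others:                                    # repulsion from each neighbor
        d = pos - q
        dist = np.linalg.norm(d)
        repulse += A * np.exp(-dist / B) * d / dist
    return drive + repulse

pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(social_force(pos, vel, np.array([10.0, 0.0]), [np.array([1.0, 0.5])]))
# -> the net acceleration applied to the agent at this simulation step
```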

## *4.1. Modeling for Congestion*

MRT stations serve male and female passengers of different ages. Passengers' speeds, groupings, behaviors, etc. produce different interactions within the space. Factors such as the location of entrances and exits and the placement of equipment affect the passengers' circulation within the space; they also influence the behavior of other agents, which is reflected in the results of subsequent decisions. Understanding the relationship between passengers and MRT station spaces allows obstacles to be reduced and circulation speed to be increased. Therefore, this study uses BIM technology to capture the boundary conditions of the MRT station model, integrates virtual agent roles to simulate the flow of people, and reflects the behavioral states and judgment logic of passengers in different environments and spaces. Through the establishment of influencing factors, the simulation results can be presented as dynamic 2D and 3D visualizations without static assumptions. Based on the analysis of specific situations and actual conditions, the parameter settings and agent simulation make it possible to analyze whether the existing space can cope with varying crowd sizes. In addition, through the relevant settings, the possible behaviors of various agents in different spaces, the flow of people, and the state of congestion are evaluated, and the resulting data are provided to improve service quality within the space.

Pedestrian agents behave intelligently under the social force model. Passengers are simulated through continuous calculation and judgment at every step they take; this study also adds spatial parameters to the agent calculation equation, making the agents more reliable in the simulated state. These parameters include the simulated walking targets, walking velocity, walking distance, walking speed, passenger influence range, obstacle avoidance, and other factors affecting pedestrian agents between spaces and obstacles. In addition, because the agent system must first generate an agent type during construction, this type generates the agent character objects according to the parameters and state settings defined in the study, ensuring that the agent character objects are independent of each other. Different state behaviors are additionally set in the agent type, resulting in different behavioral rules for different pedestrian agents. In this study, pedestrian agents are distinguished by age into adults, the elderly, and children, and they are set in groups or pairs so that the pedestrian agents not only walk independently but may also move as groups. However, in situations such as queuing, waiting, ascending and descending stairs, and boarding, the states must be changed to produce separate passage or use conditions. This study represents the above conditions, based on the basic behavior of pedestrian agents in the simple behavioral state, as flow charts, as shown in Figure 3.
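The pedestrian logic itself runs inside AnyLogic and is not listed in the paper; as a rough illustration of the underlying mechanics, the following is a minimal sketch of the textbook social force model of Helbing et al. [24], with illustrative, uncalibrated parameter values.

```python
# Minimal social force model sketch (after Helbing et al.): each agent is
# driven toward its target at a desired speed and repelled by nearby agents.
# Parameter values (TAU, A, B, RADIUS) are illustrative, not calibrated.
import numpy as np

TAU = 0.5        # relaxation time toward desired velocity [s]
A, B = 2.0, 0.3  # repulsion strength and range
RADIUS = 0.25    # body radius [m]

def step(pos, vel, goals, desired_speed, dt=0.1):
    """pos, vel, goals: float arrays of shape (n, 2); returns updated state."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        # Driving force: relax toward the desired velocity
        direction = goals[i] - pos[i]
        direction /= np.linalg.norm(direction) + 1e-9
        force[i] += (desired_speed[i] * direction - vel[i]) / TAU
        # Repulsive force from every other agent
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff) + 1e-9
            force[i] += A * np.exp((2 * RADIUS - dist) / B) * diff / dist
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel
```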

At the platform level of the MRT station, a track area outside the passenger use area caters to the trains. Therefore, the simulation must consider the trains' entry and exit statuses and the passengers' boarding behaviors in the same space; if they are not simulated simultaneously, the changes in the trains' spatial demands for different passengers cannot be met. However, trains and passengers are objects with different characteristics, so separate agent types must be established for them. Moreover, because the train travels on a track, it has only two service states—inbound and outbound—so there are no roaming or collision problems. The number of train cars is set to 8 according to the number of platform doors, the length of each car is 23.5 m, and the train's arrival and travel times are controlled by parameters in the simulation. The basic traveling speed of the train is 20 m/s, and the train is set to accelerate and decelerate when entering and leaving the station. The transition mode of the cyclic state is shown in Figure 4. In this study, the train agent is connected to the process in the initial state of the train and, as shown in the flowchart, the positions of the inbound and outbound tracks, as well as the length and running speed of the cars, are set. Before entering the model, we establish that no train is on the track, and we set the speeds of entry and departure as well as the stops for passengers. In addition, we set the boarding time of passengers as a delayed state so that passengers and cars are simulated together, and then we set a fixed cyclic state after the train leaves, changing the cycle time according to demand. This serves as the train agent simulation process.
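The statechart of Figure 4 is defined graphically in AnyLogic; a minimal textual sketch of the same cyclic logic might look as follows. The state names and transition function are assumptions, while the car count, car length, and cruise speed are the values stated above.

```python
# Minimal sketch of the cyclic train-agent logic described above. State
# names are assumptions; car count, car length, and cruise speed follow
# the values stated in the text.
from enum import Enum, auto

CARS, CAR_LENGTH, CRUISE_SPEED = 8, 23.5, 20.0  # [-], [m], [m/s]

class TrainState(Enum):
    APPROACHING = auto()  # decelerating into the platform
    DWELLING = auto()     # doors open; passengers board/alight (delayed state)
    DEPARTING = auto()    # accelerating out of the station
    CYCLING = auto()      # fixed off-stage cycle until the next arrival

def next_state(state: TrainState) -> TrainState:
    transitions = {
        TrainState.APPROACHING: TrainState.DWELLING,
        TrainState.DWELLING: TrainState.DEPARTING,
        TrainState.DEPARTING: TrainState.CYCLING,
        TrainState.CYCLING: TrainState.APPROACHING,
    }
    return transitions[state]
```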

**Figure 3.** Pedestrian agent basic settings and behavior flow chart.

**Figure 4.** Train agent basic settings and state transition diagram.

Simulation of the congestion conditions requires knowing the number of people entering the model, the number of users of each space, train arrival times, the number of passengers brought in by each train, the hourly passenger volume, the area of the space, etc. The walking routes of agents are recorded and used in the simulation to derive the results.

## *4.2. Modeling for Noise*

Balaras et al. [25] studied the indoor environment quality of Greek airports in 2003. The study showed that noise is a major problem, with a dissatisfaction rate of 78%. This finding reflects that noise is one of the main factors affecting the quality of space service. Sound is a perception of human hearing, and noise pollution in a space causes discomfort. Passengers walking and talking in public environments produce basic sounds, which can have a superimposing effect in the space. From an acoustic point of view, the human ear detects sound through rapid pressure changes in the air transmitting it. Therefore, the noise in this study is calculated in terms of the Sound Pressure Level (SPL) in decibels (dB) [26,27]. It is defined as the common logarithm of the ratio of the effective value of the measured sound pressure *p*(*e*) to the reference sound pressure *p*(*ref*), multiplied by 20, as given by Equation (1). The general value of the reference sound pressure *p*(*ref*) in air is 2 × 10<sup>−5</sup> Pa.

$$SPL = 20\log\_{10}\left(\frac{p(e)}{p(ref)}\right) \tag{1}$$

In the MRT space, passengers will create other basic sounds, such as speaking, phone calls, or footsteps, which affect the environment. In this study, the SPL is added to the pedestrian agent's self-behavior with a random parameter number such that the passenger gains a decibel value of sound when entering the space. At the same time, to evaluate the total noise amount in each space, the total SPL generated by the cluster is calculated according to Equation (2).

$$SPL\_{(total)} = 10\log\_{10}\sum\_{i=1}^{n} 10^{\frac{SPL\_i}{10}} \tag{2}$$
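As a quick numerical illustration of Equations (1) and (2) (a sketch only; the in-model implementation is AnyLogic's and is not listed in the paper):

```python
# Sketch of Equations (1) and (2): SPL of a single source and the total
# SPL of n superimposed sources.
import math

P_REF = 2e-5  # reference sound pressure in air [Pa]

def spl(p_e):
    """Equation (1): sound pressure level of one source, in dB."""
    return 20 * math.log10(p_e / P_REF)

def spl_total(levels):
    """Equation (2): total SPL of several sources given in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# Two 70 dB passengers together yield about 73 dB, not 140 dB:
print(round(spl_total([70, 70]), 1))  # 73.0
```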

This study constructs a noise model based on the above description. Passengers talking to each other, footsteps, and phone sounds are added to pedestrian agent behaviors as variables, and each agent randomly generates only one type of sound. Pedestrian agents walk in different spaces according to specific behaviors and routes; the number of people and the different sound states in each space are shown in Figure 5. Regarding the parameter settings that affect the space, the total SPL obtained is taken as the basis for evaluating the decibel level of the sound generated by each passenger, and other sounds that increase the decibel level are also considered. The impact score due to noise in each space is then calculated to facilitate the subsequent measurement of space service quality. The parameter settings required for the simulation are shown in Figure 6.

**Figure 5.** Related settings and methods of Pedestrian agent voice influence.

**Figure 6.** Relevant settings and methods of Space agent noise impact.

## *4.3. Modeling for Air Quality*

Air quality is one of the main factors affecting the space environment. A concentration of carbon dioxide of 1000 ppm or higher in an indoor environment will cause dizziness and tiredness in people and affect their work mood. If the content of carbon dioxide is too high, it will harm the human body, causing hypoxia, numbness in the hands and feet, and loss of consciousness, or even difficulty in breathing, coma, and possibly suffocation. Therefore, this study considers air quality for space service quality and uses the concentration of carbon dioxide as the main simulation subject. To calculate the carbon dioxide equivalent produced by each passenger every minute during the simulation, the amount of air inhaled in each breath, the number of breaths, the amount of ventilation per minute per person, and the space area are set as variables in this study. Based on the simulation time for the method, the carbon dioxide content exhaled per minute per person can be calculated as shown in Equation (3).

$$C\_{p} = (N\_{breath} \times V\_{breath}) \times R\_{CO\_2} \tag{3}$$

where

*Cp*: The concentration of carbon dioxide produced per person per minute

*Nbreath*: Number of breaths per minute

*Vbreath*: Volume of each breath

*RCO*<sup>2</sup> : The proportion of carbon dioxide in the air

Since the space has been set as an agent type, the spatial parameters can be referenced as variables within the pedestrian agents for the calculation. Owing to the movement of air and passengers, there is no fixed result, and it is necessary to focus on the causal feedback relationship between the overall simulation process and a large number of variables. To understand the mutual influence of the movement of people on the carbon dioxide level in each space, from the perspective of system dynamics, the carbon dioxide in each space is treated as a level that starts from an initial value and accumulates or decreases as time passes. During the simulation, the interaction between the carbon dioxide level and passenger behavior feeds back information that changes the carbon dioxide volume and the behavior impact rate.

In the planning process of the carbon dioxide model, we must first understand the parameter settings for the carbon dioxide air exchange required by the pedestrian simulation, as shown in Figure 7. This allows determining the amount of carbon dioxide generated by each passenger in the space. Next, the passengers are randomly generated in the space, and the carbon dioxide emissions are continuously calculated. The emissions are then fed back to the space agent to calculate the overall carbon dioxide concentration. In addition to the carbon dioxide produced by passengers, each space has an original carbon dioxide value generated by environmental equipment, which must also be included in the calculation. Furthermore, considering the poor natural ventilation of an MRT station, most of the air conditioning systems use forced ventilation to improve space ventilation efficiency. Therefore, this study also takes the ventilation rate into account to obtain the total carbon dioxide concentration accumulated in the space, as shown in Figure 8. Air quality in the space deteriorates as the number of passengers increases. Therefore, the number of passengers in each space is obtained through simulation, and the current carbon dioxide concentration in each space is calculated with Equation (4).

$$C\_{space} = \sum\_{i=1}^{n} C\_{p} + C\_{o} - R\_{v} \times T \tag{4}$$

where

*Cspace*: Carbon dioxide concentration in the space

*n*: Number of persons in the space

*Cp*: The amount of carbon dioxide exhaled by each person (as in Equation (3))

*Co*: Original CO2 content in the space (ppm)

*Rv*: Ventilation rate (ppm/minute)

*T*: Time (minute)
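A minimal sketch of Equations (3) and (4) follows. Note that, as written, Equation (4) combines per-person emissions with a ppm baseline and a ppm/minute ventilation rate; the sketch follows the equations literally, and a production model would normalize emissions by the space volume. The sample parameter values are assumptions.

```python
# Sketch of Equations (3) and (4). The unit handling follows the equations
# as written in the text; a production model would convert per-person
# emissions to ppm via the space volume.
def c_person(n_breath, v_breath, r_co2):
    """Equation (3): CO2 produced per person per minute."""
    return (n_breath * v_breath) * r_co2

def c_space(per_person, c_original, vent_rate, minutes):
    """Equation (4): CO2 in a space after `minutes` of simulation."""
    return sum(per_person) + c_original - vent_rate * minutes

# Illustrative values (assumptions): 16 breaths/min, 0.5 L/breath, 4% CO2
cp = c_person(16, 0.5, 0.04)
print(c_space([cp] * 30, c_original=400.0, vent_rate=2.0, minutes=10))
```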

**Figure 7.** Basic settings for CO2 emissions by Pedestrian agents.

**Figure 8.** Relevant parameter settings of total CO2 produced by Space agents during the simulation.

## *4.4. Space Service Quality Measurement*

In addition to congestion, noise and air quality are important factors influencing the evaluation of space service quality. Overcrowding will lead to greater noise and air pollution. These three factors are simulated separately in this research, and the results are combined to derive the final score for overall space quality.

Table 1 indicates the influence score (Qc) for congestion conditions [28]. In the color scheme, blue corresponds to a score of 1, indicating a sparse density of less than 0.31 persons/m<sup>2</sup>; the non-congested condition of 0.32–0.72 persons/m<sup>2</sup> is represented by green, corresponding to a score of 2; a score of 3 denotes a normal condition of 0.72–1.08 persons/m<sup>2</sup>, shown in yellow; orange corresponds to a score of 4, representing a slightly congested condition of 1.09–2.5 persons/m<sup>2</sup>; and red, with the highest score of 5, denotes a highly congested condition with a density greater than 2.5 persons/m<sup>2</sup>.

**Table 1.** Color scheme and influence score for congestion conditions.

| Color | Density (persons/m<sup>2</sup>) | Condition | Score (Qc) |
|---|---|---|---|
| Blue | <0.31 | Sparse | 1 |
| Green | 0.32–0.72 | Non-congested | 2 |
| Yellow | 0.72–1.08 | Normal | 3 |
| Orange | 1.09–2.5 | Slightly congested | 4 |
| Red | >2.5 | Highly congested | 5 |
When a passenger enters the space, the system calculates that person's sound, and the result is used to analyze the effect of entering agents on the noise level. Referring to a WHO research report [29], the system defines the influence scores (Qn) for noise levels and their effects as shown in Table 2: noise below 40 dB is scored 0, while noise above 120 dB is scored 5. The effect of noise on space service quality is measured accordingly.


**Table 2.** Noise levels and effects.

Based on the ASHRAE standard [30], the system defines five levels corresponding to different colors and scores, as shown in Table 3: "good" is scored 1 point and represented by green, "unhealthy" is scored 3 points and represented by red, and "hazardous" is scored 5 points and represented by brown. A higher score implies lower quality.

**Table 3.** Color scheme and influence score for air quality.


This method of space service quality measurement is illustrated in Figure 9. The highest score for the overall space service quality is 15. A higher score indicates poorer space service quality. The scores are provided to relevant parties for modification and adjustment to achieve high-quality planning of space service.
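Only the congestion bands of Table 1 are fully specified in the text, so a sketch of the score combination can encode them directly, taking the noise and air-quality scores as already-computed inputs:

```python
# Sketch of the space service quality score: the congestion score Qc uses
# the density bands of Table 1; Qn and Qa are assumed to be already scored
# from Tables 2 and 3 (their full bands are not reproduced in the text).
def congestion_score(density):
    """Density in persons/m^2, mapped to Qc per Table 1."""
    if density < 0.31:
        return 1  # blue: sparse
    if density <= 0.72:
        return 2  # green: non-congested
    if density <= 1.08:
        return 3  # yellow: normal
    if density <= 2.5:
        return 4  # orange: slightly congested
    return 5      # red: highly congested

def space_quality(density, qn, qa):
    """Total score (max 15); higher means poorer service quality."""
    return congestion_score(density) + qn + qa

print(space_quality(1.2, qn=3, qa=2))  # 4 + 3 + 2 = 9
```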

**Figure 9.** Measurement of space service quality.

## **5. Demonstration**

In this study, 3800 people are imported into the simulation system to represent the peak traffic period of the Daan Park metro station, and possible scenarios are set. For example, considering passengers entering and exiting the station at entrances, cashing out, purchasing tickets, checking tickets, boarding the trains, and even moving from location to location allows for more realism in the simulation, thus enabling potential problems and difficulties to be evaluated and observed. This allows management and decision-makers to produce more accurate judgments and analyses before the actual implementation. Before setting the congestion state, the space configuration and planning, as well as the pedestrian agent process and logic settings, must be completed, and the corresponding train agent and boarding behavior agent must be set to capture the state of congestion in the simulation. The space configuration planning, such as that of the platform level in this study, is divided into pedestrian agent needs and train agent needs, for which the track area, waiting area, and other area configurations are completed, as shown in Figures 10 and 11.

**Figure 10.** Planned platform space configuration of the Daan Park Station.


**Figure 11.** The area and volume of the space configuration in the Daan Park Station.

This study presents all the data in the main editor of the software system after the space, passengers, trains, and congestion density are set. One of the simulation results is shown in Figure 12. This system simulates different floor spaces separately. The simulation results of the Hallway indicate that the ATM location, ticket machine location, entrance and exit locations, and changes in pedestrian circulation greatly affect the degree of space usage. In addition, the circulation chaos caused by the device locations increases the level of crowding in the space. Moreover, the sizes of the entrances and exits are a factor in congestion. If the equipment locations were set according to the circulation requirements, the practical function and quality of the space of this MRT station would be greatly improved.

**Figure 12.** Simulated density distribution of space congestion on the platform.

In terms of noise, this study adds sound factors to the pedestrian agents' self-behavior, so the passengers themselves carry sound parameters. Different volume levels correspond to different sound parameters, and the range of influence of a sound varies with its decibel level. During the simulation, the user can clearly see the decibel level emitted by each person, and the influence range of each agent's own noise is visualized, as shown in Figure 13. In reality, the sounds of passengers in a space superimpose. Therefore, in this study, the total number of people in the space is simulated, and the total sound pressure value is then calculated from all decibel values; the results for each space are obtained through Equation (2) and are shown in Figure 14. The results clearly indicate the locations and distributions of places where noise gathers.

**Figure 13.** Extent of the sound volume generated by the agents.

**Figure 14.** Visual representation of noise agents.

The system also calculates the decibel level of each space after the simulation is performed and presents the calculation results in the form of a bar chart, as shown in Figure 15. These results provide managers with an understanding of how the decibel levels change in the spaces in the simulation.

**Figure 15.** Changes in decibel levels in spaces in the simulation.

To simulate the air quality of the space, the space is set as an agent type, and the carbon dioxide concentration is used as the indicator. Following the parameters and settings required for the congestion simulation and the settings for carbon dioxide in the air, including the carbon dioxide concentration, space area, and number of people, the air quality-related parameters are connected through the space environment agent. The number of passengers in each space during the simulation is obtained, and the current carbon dioxide concentration in each space is calculated according to Equation (4). During the simulation, the user can click any space to select it and obtain the current number of users in the space and the current carbon dioxide concentration. The actual simulation results are shown in Figure 16.

In the system in this study, the impact of the carbon dioxide emitted by agents entering each space during the simulation process is presented as a line graph, as shown in Figure 17. The system records the data changes every 15 s so that planners can visually understand the current status of the space; the data vary across time periods and simulation times. Users can follow the changes in carbon dioxide concentration from the data recorded in this graph and then return to the model to understand the relationship between the state of the space and the change in air quality. This enables planning personnel to change the design as well as the circulation arrangements.

**Figure 17.** Changes in carbon dioxide concentration in the simulation.

The highest overall space service quality score is 15, based on the congestion state score of Table 1, the noise and decibel impact evaluation of Table 2, and the carbon dioxide concentration evaluation of Table 3. The three evaluation scores are summed to obtain the overall score, with higher scores indicating lower space service quality. This study presents the scores obtained for each space as statistical bar graphs, as shown in Figure 18, and then provides them to the relevant units for modification and adjustment to achieve high-quality space service planning.

**Figure 18.** Space service quality measurement results.

The system can also export the space service quality measurement results for each space to an Excel spreadsheet, which can be used by personnel in related fields for subsequent decision-making, as shown in Figure 19.


**Figure 19.** Excel report of space service quality measurement.

When space planners or decision-makers receive this information, they can accordingly adjust or change spaces with low service quality. For example, BreakSpace2, which has a space service quality score of 9, originally has an area of 50.68 m<sup>2</sup>. If its area is increased to 75 m<sup>2</sup>, the original passenger flow, noise, and carbon dioxide settings still apply to the simulation after the modified model is imported. A larger space can accommodate more people, which implies that the noise and carbon dioxide concentration increase in absolute terms; nevertheless, the overall service quality score after the simulation is significantly reduced to 4, because the changed space conditions affect pedestrian circulation, which in turn affects the quality scores of adjacent spaces.

## **6. Conclusions**

To study the current space usage, this research used Daan Park Station as a case study to simulate streams of people entering and exiting the station from trains or from the outside. It also proposed combining building information modeling technology and an agent-based model to simulate the interaction of agents in the space. A study of the published literature revealed that, in addition to space planning and route interruptions, factors that can lead to a low quality of space service include noise and air quality. Therefore, this research set these factors as agents, including passengers, space, noise, and air quality. The results on space service quality were presented in 2D and 3D visualizations. Possible scenarios were visualized to provide solutions for the space design of an MRT station and route planning. Different colors were used to show and distinguish the space usage so as to provide decision-makers with a better understanding of the actual space usage and service quality at MRT stations through visual presentation. Simple equations were used to combine simulation results for the derivation of the space service quality score.

In the present study, the three influencing factors were simulated comprehensively. We expect to integrate various relevant factors and provide various infographics and dashboards in the future to present results that bear a better resemblance to reality. Good visualization results will be used as a bridge to facilitate communication with other relevant parties so that planning personnel can make space adjustments and other modifications. We would also like to provide these results as feedback for the space designs of MRT stations and routes and the analysis of alternative options with the aim of reducing the labor and costs associated with design variations.

**Author Contributions:** The work described in this article is the collaborative development of all authors. I.-C.W. devised the project, the main conceptual ideas, and proof outline. Y.-C.L. carried out the implementation. I.-C.W. took the lead in writing the manuscript. Both authors provided critical feedback and helped shape the research, analysis, and manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Taiwan Ministry of Science and Technology, grant number 103-2221-E-151-021-MY3.

**Acknowledgments:** The authors are grateful for the support of the CeIT Laboratory, BIM Research Center, and University.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Natural Hazard Influence Model of Maintenance and Repair Cost for Sustainable Accommodation Facilities**

## **Sang-Guk Yum <sup>1</sup>, Ji-Myong Kim <sup>2</sup> and Kiyoung Son <sup>3,\*</sup>**


Received: 28 May 2020; Accepted: 17 June 2020; Published: 18 June 2020

**Abstract:** To optimally maintain buildings and other built infrastructure, the costs of managing them during their entire existence—that is, lifecycle costs—must be taken into account. However, due to technological improvements, developers now build more high-rise and high-performance buildings, meaning that new approaches to estimating lifecycle costs are needed. Meanwhile, an accelerating process of industrialization around the world means that global warming is also accelerating, and the damage caused by natural disasters due to climate change is increasing. However, the costs of losses related to such hazards are rarely incorporated into lifecycle-cost estimation techniques. Accordingly, this study explored the relationship between, on the one hand, some known parameters of natural disasters, such as earthquakes, high winds, and/or flooding, and on the other hand, the data on exceptional maintenance costs, represented by gross loss costs, generated by a large international hotel chain from 2007 to 2017. The regression model used revealed a correlation between heavy rain and insurance-claim payouts. This and other results can usefully inform safety and design guidelines for policymakers, both in disaster management and real estate, as well as in insurance companies.

**Keywords:** natural disaster; risk management; accommodations; operations and maintenance; lifecycle cost; disaster management

## **1. Introduction**

As the sizes and heights of buildings continue to increase, new approaches for estimating and managing their lifecycle costs have become necessary [1,2]. Construction's impacts on development, society, the environment, and the economy should all be considered as fundamental to considerations of long-term building sustainability. Accordingly, an increasing number of studies are being conducted on buildings' social impacts, including numbers of fatalities during disasters; environmental ones such as CO2 emissions during deconstruction/demolition; and economic ones such as natural-disaster-related repair costs [3–6]. According to the Intergovernmental Panel on Climate Change [7], average global temperatures have been rising, making natural disasters such as typhoons both more frequent and less predictable. It is therefore very important to assess the maintenance and repair costs that have been associated with such natural disasters in the past as a means of anticipating such costs going forward. Due not only to the increased likelihood of various types of damage related to global warming but also to public demand for greater urban-system resilience, effective estimation of such future costs should comprehensively incorporate those factors that may require structural repair or complete replacement [8]. For this study, hotel facilities were chosen because the hotel business is perhaps uniquely vulnerable to the negative consequences of both poorly maintained facilities and natural disasters [9]. Yet, despite the profound impact that the cosmetic appearance of a building can have on hotel revenues, and despite the long lifespans and considerable age of many hotel buildings, their natural-disaster management tends to be passive rather than proactive, unsystematic, and poorly funded relative to their overall budgets [10,11].

## *1.1. Research Background*

Building maintenance costs are increasing due to the greater frequency of natural disasters and the generally greater heights of new commercial buildings [8,12]. To address this challenge, effective management should take into account the specific features of every building, along with a comprehensive range of factors that might cause that building's functionality to deteriorate. In recent decades, many property managers have applied asset-management techniques to more efficiently deal with the maintenance costs incurred in the operating stage, which account for the highest proportion of any building's total lifecycle costs [13], the other stages being planning, feasibility studies, basic design, execution design, construction, and demolition. However, asset management can be difficult for many entities to implement since, as well as being building-specific, it requires information compatibility across all stages of the building's lifecycle, massive quantities of maintenance materials, and long-term investment. A considerable body of asset-management research is devoted to mitigating these drawbacks, but such studies rarely consider the relationship between natural disasters and maintenance costs. The present paper addresses this gap in the literature, using 11 years of data on an international hotel chain's insurance-claim payouts.

## *1.2. Research Objective*

A hotel chain was chosen as this study's research case because an insurance company was willing to provide the researchers with gross loss data on that chain's claim payouts. First, this paper contains a review of the prior research on asset management as it relates to building lifecycles. Second, it features hotel property-loss data on the 2007–2017 period, including the gross loss, loss factor, and date of loss, to explore the relationships between natural-hazard factors and operation and management costs. Finally, this paper includes a regression analysis of the data collected to identify the correlations among maintenance costs, damage, and the incidence and intensity of earthquakes, high winds, and flooding.

## **2. Literature Review**

Facility management comprises professional methodologies aimed at ensuring the functionality of properties and built environments (International Facility Management Association, 1992, 2015). Its techniques, which include lifecycle-cost estimation, address safety and durability, as well as economic considerations [14]. Lee and Jung's [15] comparison of facility-management practices in the United States, Canada, Australia, and South Korea suggests that, although all four countries focused on the operation and management stage, which occupies, on average, 85% of the building lifecycle [13], only Australia emphasized a lifecycle-cost approach to managing costs. Specifically, Lee and Jung [15] conducted a high-volume review of the existing literature on applied facility management and categorized this discipline's functions into 19 types, covering property, service, space, communication, energy, environment, equipment, moves, quality, security, costs, documents, human resources, materials, outsourcing, regulations, schedules, technology, and general management.

Yu et al. [16] proposed a methodology for developing facility-management functions and their computerization. Foster [17] emphasized the importance of operation and management costs, especially energy costs, which account for 25% of all operation costs, but which many U.S. federal buildings were found to neglect. The same author also advocated the establishment of a strategy to reduce energy costs in the operation and management stage. Williams et al. [18] investigated the potential usage of building information modeling (BIM) in facility management using a survey and interviews to explore the gaps between real-world applications and common perceptions. They found that, although there was still a need to improve and educate facility-management professionals about real-world utilization of BIM, this approach to facility management should be adopted due to its usefulness not only for information exchange but also for collaboration among construction stakeholders.

Lifecycle costs have also been recognized as an important basis for the improvement of structural resilience, and in that context, the American Society for Testing and Materials (ASTM) [19,20] developed a standard for the quantification and specification of costs at each stage of a building's lifecycle to produce more accurate lifecycle-cost estimates. Several researchers have also developed methodologies for improving lifecycle-cost assessments, with some focusing on costs during the early design process of buildings and other types of infrastructure. For example, matrix-based frameworks for choosing cost-efficient materials were investigated by Pettang et al. [21], whose estimates of projects' construction costs included labor, materials, and operation and management costs, in an effort to support decision making by construction stakeholders in a range of material scenarios. Later, Günaydın and Doğan [22] proposed a novel cost-estimation approach based on artificial neural networks (ANNs) but, again, focused on the early stage of building construction. Another approach to creating an accurate construction-cost estimation model, developed by Kim et al. [23], was built around three different methods—statistical analysis (regression modeling), ANNs, and case-based reasoning—and established that it could effectively manage construction projects' costs in their early stages. However, their approach did not consider long-term operation and management costs despite the fact that they account for 85% of lifecycle costs [7].

The effects of aging on buildings were investigated by Rahman et al. [24], who proposed a decision framework for simultaneously evaluating various criteria, including resilience, energy, cost-effectiveness, durability, and environment. They concluded that multiple aspects of building performance should be considered during the operation and management stage, and the materials should be altered accordingly. Another perspective on lifecycle costs focused on energy consumption has arisen amid the development of advanced building materials and technologies with the potential to make buildings more energy-efficient, which in turn, would likely reduce costs and lessen environmental impacts [12]. For example, Hasan [25] used lifecycle-cost assessment to optimize wall thickness for purposes of insulation; Kneifel [26] investigated the effects of energy-efficient design on commercial buildings' lifecycle costs, energy consumption, and carbon emissions; Morrissey and Horne [27] studied the interrelationships of new buildings' thermal properties, initial construction costs, and whole-life energy costs; and Gluch and Baumann [28] applied the concept of lifecycle costs to a proposed framework for eco-friendly decision making.

In addition to investigating tools and techniques for the effective management for high-performance buildings, such as increasing their energy efficiency as discussed above, a comprehensive lifecycle assessment still needs to take into account the repair costs arising from natural hazards if overall asset management is to be effective. Prior research has utilized building characteristics such as height, area, and price as variables for the extent of damage to properties [29,30]. According to those studies, building height had a clear statistical relationship with financial losses caused by natural hazards, notably windstorms.

As well as the relationships between building features such as height and losses from natural hazards, some researchers have emphasized the importance to lifecycle costs of damage by such hazards, despite the inherent randomness with which such events strike, both by building type and geographically. As noted by Chang and Shinozuka [31] in connection with the 1994 Northridge (USA) and 1995 Kobe (Japan) earthquakes, it is tempting to neglect the potential for damage by natural disasters when estimating the lifecycle costs of infrastructure systems due to these many uncertainties. However, through a case study of pipeline systems, they demonstrated the value of extending traditional lifecycle-cost assessment to include potential repair costs and related user costs arising from earthquake damage. Similarly, Wei et al. [32] argued that the potential cost of damage from earthquakes should be added to lifecycle costs when evaluating long-term building performance. Nevertheless, it remains very difficult to estimate property losses caused by natural hazards, as both their frequency and intensity are inherently random and uncertain [12].

Due to global warming, unexpected natural disasters have been increasing in frequency, driving up lifecycle costs during buildings' operation and management stage. In addition, high-performance and high-rise buildings—which are increasingly prevalent due to accelerating urbanization and population growth—have special vulnerabilities to natural hazards, as shown in previous studies on this aspect of lifecycle cost [12,30]. The present paper tackles these problems directly, by proposing a lifecycle-cost assessment method that covers not only expected costs such as routine repairs, but also the exceptional ones associated with natural hazards across buildings' entire lifecycles.

## **3. Research Methods**

## *3.1. Case-Study Approach and Research Process*

In this study, we investigated the relationship between natural disasters and the operation and management costs of a hotel chain that is currently one of the largest of its kind in the world, comprising more than two dozen brands and over 5000 properties around the globe. Despite their geographic dispersal, these properties are similar in terms of construction quality, construction type, and exterior design, among other characteristics. Therefore, their guidelines and methods for estimating lifecycle costs, including operation and management costs, should also be similar. This research relied on the data on 725 incidents of gross losses from natural hazards that this hotel chain incurred from early 2007 to late 2017. The most prevalent type of damage was water-related—a category including floods, overflow, and water-supply facility failures—which comprised 44% of all damage by the number of reported incidents. The second most prevalent was wind-related, including but not limited to hurricanes and typhoons, which comprised 17% of all damage. Other natural disaster-related damage, including but not limited to earthquakes, hail, and wildfires, together made up an additional 1%. Prominent among the non-disaster-related incidents, which accounted for the remaining 38% of all damage, were HVAC failures (13%), fires (6%), and extreme cold (2%).

First, the characteristics of the particular natural disasters that affected the hotel chain's properties were categorized as independent variables. Second, the gross loss data were categorized according to the natural hazards that caused them. In this step, claim-payout amounts served as the dependent variable as a proxy for operation and management costs, while the causes of damage were utilized as the independent variables. Third, a regression analysis was conducted to establish the relationships of the independent and dependent variables. Those variables are described below, along with this study's data-collection procedures and statistical analysis methods.

## *3.2. Dependent Variable*

Losses from individual events that caused damage during the period of interest ranged up to \$57,445,698, with the smallest single payout being just \$37.

## *3.3. Independent Variables*

Although lifecycle-cost assessment can be utilized to design buildings to cope with natural hazards, the expected costs of natural-hazard damage related specifically to building loading are often minimized, which could cause problems [33–35].

According to Harvard's Joint Center for Housing Studies [36], repair costs related to all types of natural disasters made up 8.2% of all improvement expenditures by homeowners in the United States in 2013. At USA \$15.8 billion, these hazard repairs were also among the most costly of the 54 categories of expenditure in the same study.

Ayyub [37] emphasized that, among all types of natural disasters, earthquakes were the most severe from the point of view of damage to buildings and infrastructure systems while also having substantial effects on society, the economy, and the environment. Wei et al. [32] noted that many researchers have sought to reduce structures' seismic response, but relatively few have focused on the costs of earthquake-related damage over the course of a building's lifecycle. The intensity of earthquakes is usually represented as peak ground acceleration (PGA) [12].

Hurricanes, for their part, can also be very damaging to buildings and infrastructure systems. Their severity can be characterized according to their maximum wind speed radius, forward-motion speed, and sustained maximum wind speed [38–40]. Among these, sustained maximum wind speed and maximum wind speed radius are the main factors utilized to estimate hurricane damage [38,40,41].

However, some research has emphasized the importance of rainfall in the accurate evaluation of the extent of hurricane damage [42,43]. Recently, damage from flooding has also been on the rise, in part due to the effects of urbanization, including the reshaping of river systems [44,45]. Brody et al. [42] highlighted the importance of effective flood control, given that water systems can easily overflow when heavy rain occurs, magnifying flood damage. Therefore, altitude and distance from such systems are useful indicators of water-related hazards that were adopted for this study.

## *3.4. Data Collection and Statistical Analysis*

In this study, wind was measured by wind speed and earthquakes by PGA, while flooding was measured as a combination of precipitation, the distance from water systems, and the difference in altitude from the nearest water system (Table 1). Data on the first three of these independent variables (wind speed, PGA, and precipitation) were provided by the National Oceanic and Atmospheric Administration (NOAA), while the latter two were computed using Google Maps. Data on the dependent variable were provided by the insurers of the hotel chain that participated in this research. To establish correlations between the dependent and independent variables, the ordinary least squares regression method was used.



## **4. Results**

## *4.1. Descriptive Statistics*

Table 2 presents the descriptive statistics of the variables, with N standing for the number of data points (i.e., insurance-claim payouts corresponding to at least one of the types of natural disaster considered in this study).


**Table 2.** Descriptive statistics of variables.

## *4.2. Multiple Regression Analysis*

Before the multiple linear regression analysis, a normality test of the dependent variable was conducted to verify whether it followed a normal distribution. The result showed that the significance level of 0.000 was smaller than 0.05, which means that the dependent variable did not follow a normal distribution. Therefore, the dependent variable was log-transformed as follows:

$$\text{Transformed gross loss} = \log\left(\text{Gross loss (\$)}\right) \tag{1}$$

As seen in Table 3, the normality test on the transformed gross loss yielded a significance value of 0.232, greater than 0.05, showing that the transformed dependent variable was normally distributed. The histogram and *Q-Q* plot in Figure 1 confirm that the transformed gross loss followed a normal distribution.
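The specific normality test used is not named in the paper; the following sketch reproduces the same workflow with the Shapiro–Wilk test as one common choice and synthetic data standing in for the confidential claim payouts.

```python
# Sketch of the normality check and log transform. The specific test used
# in the paper is not named; Shapiro-Wilk is one common choice. The data
# below are synthetic stand-ins for the confidential claim payouts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gross_loss = rng.lognormal(mean=10, sigma=2, size=500)  # skewed, like claims

_, p_raw = stats.shapiro(gross_loss)
transformed = np.log10(gross_loss)       # Equation (1): log of gross loss
_, p_log = stats.shapiro(transformed)

print(f"raw p = {p_raw:.3f}, log-transformed p = {p_log:.3f}")
# p < 0.05 -> reject normality; the transformed variable should pass
```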

**Figure 1.** Histogram and *Q*-*Q* plot, transformed gross loss.

**Table 3.** Normality-test results, transformed gross loss.


The results of our multiple regression analysis for the gross loss connected with natural disasters are shown in Figures 2 and 3 and Table 4. The histogram and *P-P* plot in Figure 2 indicate that the residuals of the regression model were normally distributed. The scatter residual plot of the regression model in Figure 3, meanwhile, shows that the variance of the residuals was constant, confirming homoscedasticity. In addition, the significance level of 0.000 in Table 4, being smaller than 0.05, indicates that the regression model was statistically significant and confirms that the relation of the dependent variable to the independent variables was linear. The regression model's R<sup>2</sup> value was 0.342, meaning that it can explain 34.2% of the variation in the dependent variable. The *p* values indicated that precipitation and the distance from water systems were correlated with the dependent variable, but that the other three independent variables were not. The variance inflation factor (VIF) values of this study's variables ranged from 1.002 to 1.114, which means there was no significant multicollinearity among them.
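The regression was presumably fitted in a standard statistics package; as a hedged sketch of the same OLS-plus-VIF check in Python with statsmodels (the variable names follow Section 3.4, and the data frame is a placeholder for the confidential data set):

```python
# Sketch of the OLS regression and VIF check. Variable names follow the
# five independent variables of Section 3.4; `df` is a placeholder for
# the (confidential) assembled data set.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["wind_speed", "pga", "precipitation",
              "dist_from_water", "altitude_diff"]
# df = pd.read_csv("claims.csv")  # placeholder: log_gross_loss + predictors

def fit_and_check(df: pd.DataFrame):
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["log_gross_loss"], X).fit()
    vifs = {name: variance_inflation_factor(X.values, i)
            for i, name in enumerate(X.columns) if name != "const"}
    return model.summary(), vifs
```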

**Figure 2.** Histogram and *P*-*P* residual plot, regression model.


**Figure 3.** Scatter residual plot, regression model.

**Table 4.** Regression analysis: final results. VIF: variance inflation factor.


*Note.* \* denotes *p*-value which was smaller than 0.05.

A beta coefficient (standard coefficient) was utilized to compare the independent variables, with the highest absolute value being recognized as having the strongest effect on the dependent variable. Table 4 shows that precipitation (0.168) and the distance from water systems (0.074) had higher beta-coefficient values than the other independent variables did.

## **5. Discussion**

The research method proposed in this paper offers an opportunity to incorporate loss costs arising from natural hazards into the lifecycle cost, specifically by relating operation and management costs to prior natural disasters. Total worldwide gross property damage caused by high winds, flooding, and/or earthquakes during such events cost the insurer of the hotel chain that participated in this study around U.S. \$200 million in the 2007–2017 period. The correlation between this gross loss and the full set of chosen variables was confirmed as significant.

Among this set, however, the significance of this correlation was accounted for by just two loss factors, precipitation and distance from water systems, both of which had *p* values < 0.05 (0.000 for precipitation and 0.045 for distance from water systems). The regression's adjusted R<sup>2</sup> value (0.342) indicated that 34.2% of the variation in the dependent variable, gross loss, can be explained by these two loss factors, while the remaining 65.8% was due to loss factors that were not covered by this research. Thus, through statistical analysis, we discovered that the adopted natural-hazard variables had an important relation to the hotel chain's gross loss.

The findings above reinforce those of previous studies [34,35], which suggest that heavy rainfall and built environments—construction activities or flood-control facilities—are the significant factors in losses arising from natural hazards. Precipitation and distance from water systems are commonly related to flooding damage and, in this case, indicated that heavy rain is likely to cause considerable damage to the hotel chain's properties. An unexpectedly high volume of rain can seep into existing cracks in buildings, leading to severe damage to their interiors, including furniture, partition walls, and other internal structural components. Usually, hotels' basements are used to store essential equipment, but when heavy rain occurs to the point that water systems overflow, basements are very susceptible to inundation.

The identified correlation between two types of natural hazards and gross loss is potentially useful to professionals and policymakers concerned with hotel operations and catastrophe management, as this finding goes some way in addressing the absence of disaster losses in operation and management costs in traditional lifecycle-cost estimation. The present study's findings should also enable insurance companies to modify their business models and/or premium prices based on natural-hazard loss factors and estimates of maximum loss, risk exposure, the probabilities of certain events occurring, and so forth. Likewise, construction companies building hotel facilities may wish to reassess their designs, materials, building features, and safety guidelines from the point of view of vulnerability to precipitation and distance from water systems. In short, the present study confirms that facility management will be greatly enhanced if due consideration is given to natural disasters as important factors in lifecycle costs, especially when—as is the case here—actual gross loss costs can be used in lifecycle-cost estimation.

## **6. Conclusions**

The purpose of this study was to explore the relationship between natural hazards and the costs arising from them, both in the broad context of lifecycle-cost estimation and in the narrower one of operation and management. It demonstrated the value of incorporating the most damaging types of frequent natural disasters as lifecycle-cost variables through quantitative analysis of the actual gross losses suffered. Even though this research was limited to properties belonging to a single hotel chain, the global nature of that chain increases the likelihood that its findings may be generalizable to other such chains and other types of property portfolios.

Nevertheless, future research should incorporate more resources related to building features such as area, height, material, and price, as well as type-of-damage data (e.g., at a minimum, whether the damage is structural vs. non-structural), as part of the ongoing quest for optimally effective means of managing lifecycle costs. Additionally, to broaden the concept of buildings' long-term sustainability, environmental risks such as proximity to land areas with mountain slopes altered by human activity should be taken into account, since such changes can increase the chance of avalanches [44,46]. Additional loss factors such as the radius and forward-movement speeds of hurricanes, basin areas, and vegetation types should also be considered, especially in light of the present work's relatively low adjusted R<sup>2</sup>.

A balanced consideration of social, economic, and environmental impacts is necessary if a complete picture of buildings' long-term sustainability, and thus their lifecycle cost, is to be achieved. Accordingly, future research should give due consideration to energy consumption, CO2 emissions, and other environmentally relevant factors during construction and end-of-life demolition, as well as costs such as water, lighting, garbage disposal, and mechanical, electrical, and plumbing (MEP) systems during the operation and management phase. It should also be noted that the present research did not account for variations in the climate or local economies of the locations of the hotel's different properties, which may mean that its approach cannot be generalized to all locations. Thus, future research should give greater consideration to geographic variation in climate, local economies, and construction/repair costs to ensure that the proposed approach to lifecycle-cost estimation can be applied accurately in a full range of global contexts. In such future projects, artificial neural networks (ANNs) would be a useful tool for investigating complex non-linear relationships among research variables, for identifying independent variables, and for optimizing the process through training and testing phases; this would be valuable not only for initial-stage cost estimation but also during other phases of construction and other aspects of construction-project management.

Researchers could also use BIM in such research by including natural disasters as n-D, followed by 4D modeling that includes construction scheduling in the 3D model. The insurance industry is already using catastrophe-modeling techniques to predict damage from natural hazards, estimate maximum losses, and adjust premium prices. Similarly, facility-management companies looking to manage their properties more effectively by reducing unexpected costs could use the results of the present research as a basis for hazard mapping or hazard-prediction modeling at regional and national levels, combining n-D modeling or catastrophe modeling as mentioned above, because such models can estimate the value of unexpected potential loss from natural disasters. Moreover, fragility or vulnerability curves including building information such as building history, number of floors, locations, and building codes corresponding to wind speed and/or distance from water systems could be usefully included in future research on risk assessment for hotel properties.

**Author Contributions:** Conceptualization, S.-G.Y.; Data curation, S.-G.Y. and J.-M.K.; Funding acquisition, J.-M.K.; Investigation, S.-G.Y.; Methodology, S.-G.Y.; Project administration, K.S.; Software, J.-M.K.; Supervision, K.S.; Validation, S.-G.Y. and J.-M.K.; Resources, J.-M.K.; Writing—original draft, J.-M.K.; Writing—review & editing, S.-G.Y. and K.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1F1A1058800).

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **System Dynamics Model for the Improvement Planning of School Building Conditions**

## **Suhyun Kang <sup>1</sup>, Sangyong Kim <sup>1</sup>, Seungho Kim <sup>2</sup> and Dongeun Lee <sup>3,\*</sup>**


Received: 5 April 2020; Accepted: 13 May 2020; Published: 21 May 2020

**Abstract:** As the number of aged infrastructures increases every year, a systematic and effective asset management strategy is required. One of the most common analysis methods for preparing an asset management strategy is life cycle cost analysis (LCCA). Most LCCA-related studies have focused on traffic and energy; however, few studies have focused on school buildings. Therefore, an approach should be developed to increase the investment efficiency of performance improvements for school buildings. Planning and securing budgets for the performance improvement of school buildings is a complex task that involves various factors, such as current conditions, deterioration behavior, and maintenance effects. Accordingly, this study proposes a system dynamics (SD) model for the performance improvement of school buildings. The SD model is used to support efficient decision-making through policy effect analysis, from a macro-perspective, for the performance improvement of school buildings.

**Keywords:** school buildings; system dynamics; deterioration; rehabilitation; lifecycle cost analysis; budget allocation

## **1. Introduction**

Recently, due to the rapid increase in deteriorated social infrastructure, the significance of long-term planning for sustainability and performance improvement has been noted. In the United States, the facility deterioration problem was noted in the 1980s, and in 2017, the average condition grade of infrastructure was confirmed to be "D+". In particular, according to the '2017 Infrastructure Report Card' published by the American Society of Civil Engineers (ASCE), the required restoration cost is approximately KRW 1,120 trillion. In Japan, by 2033, 63% of roads and bridges, 62% of river management facilities, and 58% of harbors and wharves will be more than 50 years old. Thus, in major advanced countries, social infrastructure has deteriorated rapidly owing to the lack of appropriate measures and investments [1]. Furthermore, because of climate change and the frequent occurrence of natural disasters (i.e., earthquakes and storms) worldwide, many human lives are lost in disasters such as the collapse of deteriorated bridges [2]. Therefore, the life extension and performance improvement of existing deteriorated social infrastructure are important for ensuring people's safety from such disasters and catastrophes.

Systematic and effective asset management strategies are required to solve these problems. One of the most common analysis methods for preparing an asset management strategy is life cycle cost analysis (LCCA). Most LCCA-related studies have focused on traffic [3], pavement [4] and energy [5,6]; however, few studies have focused on school buildings, which accommodate a population of over 50 million people on a daily basis.

In recent years, many studies have discussed the performance improvement of school buildings; however, the governments of various countries are experiencing difficulties in planning and financing because of the lack of comprehensive data regarding school buildings [1]. Until recently, the maintenance of most school buildings was conducted using a breakdown maintenance method instead of a preventive maintenance method. This method has led to the rapid deterioration of school buildings because the appropriate maintenance period was missed. Currently, the governments of advanced countries are hurriedly allocating excessive budgets; however, because executing these budgets within a financial year is impossible, part of the budget is customarily carried over to the following year, every year. This phenomenon appears to result from short-term, emergency responses instead of investments based on mid-/long-term planning for the performance improvement of school buildings. Therefore, an approach should be developed to increase the investment efficiency for the performance improvement of school buildings. Planning and securing budgets for the performance improvement of social infrastructures is a complex task that involves various factors, such as current conditions, deterioration behavior, and maintenance effect [7]. Given these complexities, it is advantageous to predict policy effects by simulation, so that policy-makers can plan changes in policy direction in advance.

In this study, a system dynamics (SD) model is proposed to support efficient decision-making through policy effect analysis, from a macro-perspective for the performance improvement of school buildings. The SD model performs LCCA simulation based on performance improvement scenarios, to predict the deterioration pattern of school buildings and respond to it. Based on the simulation results, this study evaluates the long-term effects of rehabilitation policy on the performance grades of school buildings. Moreover, this study identifies an effective policy scenario that can achieve performance improvement.

## **2. Literature Review**

Common methods of analyzing the complexity of asset management of social infrastructure include agent-based simulation (ABS) and SD. ABS is a micro-simulation method that can model interactions between agents; this method is used in various fields related to social infrastructures [8–12]. Echaveguren, Chamorro, and De Solminihac [13] modeled the interaction among agents (state, private and public) related to road infrastructure management systems, and analyzed the effects of the decisions made by agents regarding maintenance plans. Mallory, Crapper, and Holm [14] developed agent-based models (ABM) for fecal sludge (FS) recycling and proved the efficacy of the model by using case studies. Zechman [15] developed an ABM for a water distribution system, and analyzed the interaction of systems. However, ABS has limitations for modeling strategies, because simulation results can differ considerably with small changes in the interaction rules, and the level of detail required for the input factors is high.

On the other hand, SD is a macro-simulation method that can decipher the behaviors of complex systems [16]. In general, SD is used for modeling problems, such as performance measurement related to a social system, and estimating the effects of strategies and alternatives, as well as those of various social policies [17]. SD describes the interrelationships between the factors that drive growth and change patterns in complex systems, and emphasizes the causal relationships and feedback among individual components in a system [18]. Therefore, all causal relationships are recognized as circular relationships, without distinguishing between independent and dependent variables. This method focuses on the dynamic trends of change among variables over time, instead of obtaining accurate values for each variable. Furthermore, SD is helpful when decision-makers examine the behavior of complex systems and evaluate long-term policy effects [16]. Therefore, many studies have applied the SD modeling approach to determine the asset management strategy of social infrastructures in various fields.

Rehan [19] applied the SD modeling approach to develop asset management strategies for water and wastewater systems, and demonstrated the advantages of the SD model for modeling the interactions between physical, social, and financial systems. Mohammadifardi [20] also verified the applicability and efficacy of the SD model for wastewater collection (WWC). Hong, Frangmin, and Rongbei [21] developed an SD model related to highway maintenance issues, and proved that the SD method is effective during decision-making for a long-term plan by using case simulations and analyses. Soetjipto, Adi, and Anwar [22] developed a bridge deterioration model, and used it to simulate the possibility of bridge failure and detect the components that cause bridge failures. The SD model has also been used to analyze environmental pollution and energy problems, such as CO2 emissions in the transport industry [23,24]. Sing, Love, and Liu [25] proved that adopting the SD modeling approach is useful for dealing with the long-term rehabilitation policy of existing building stock, related to the sustainability of a city. Wang and Yuan [26] used an SD simulation to determine an optimal measure for effective risk management in infrastructure projects.

Therefore, various studies have shown that the SD modeling approach is an effective tool for exploring asset management strategies and the policy effects of social infrastructures. This approach is also used in various types of social infrastructures. However, studies are yet to use the SD modeling approach to investigate the performance improvement of school buildings. Therefore, this study proposes a model for the performance improvement of school buildings via the SD method.

## **3. Research Methodology**

The overall study procedure is shown in Figure 1. A literature review is conducted to determine the conventional modeling methods of asset management, and to find a suitable model for this study. A decision-making model is then developed for the performance improvement of school buildings. The SD method is applied as the modeling method, and the model is developed by considering the correlations among the deterioration, rehabilitation and finance models. The SD model used in this study is developed in the following sequence: (1) define the problem, (2) create a causal loop diagram (CLD), (3) create a stock and flow diagram (SFD), and (4) verify the model. The effectiveness of the completed SD model as a decision-making tool is then demonstrated through case studies using data from school buildings. Moreover, suggestions for long-term planning and financing are provided for the future performance improvement of school buildings, based on the test results of various policy scenarios.

**Figure 1.** Research procedure.

### **4. Causal Loop Diagram of School Building Rehabilitation Management**

During the first stage of SD model development, the causal relationships between key variables are determined to define the problem and compose the CLD for the school building rehabilitation system. A dynamic hypothesis is developed to explain the dynamic behavior of key variables in the structure. Establishing a dynamic hypothesis requires an understanding of the overall system, which is typically gained through a literature review, expert group discussion, and surveys. This study derived the key variables and the dynamic hypothesis from a literature review. As a result, the SD model proposed in this study considers three major functions (asset deterioration, rehabilitation action, and total repair cost) for the macro-analysis of the rehabilitation system. Based on the literature review [7,19,27], nine variables composing the three major functions were derived. Figure 2 presents the CLD showing the causal relationships between the nine variables.

**Figure 2.** Causal loop diagram of the school building rehabilitation network management.

The CLD shown in Figure 2 consists of arrows, "+" or "−" signs, and feedback loops. A causal relationship between variables is expressed by attaching a "+" or "−" sign to an arrow. The "+" link indicates that two variables (var1 and var2) change in the same direction in the model; in other words, if the independent variable increases, the dependent variable also increases [Equation (1)].

$$\frac{\Delta \text{Var2}}{\Delta \text{Var1}} > 0 \tag{1}$$

The "−" link indicates that two variables (var1 and var2) are changing in different directions in the model. In other words, if the independent variable increases, the dependent variable decreases [Equation (2)].

$$\frac{\Delta \text{Var2}}{\Delta \text{Var1}} < 0 \tag{2}$$

The arrows of Figure 2 form feedback loops, each with its own characteristics. There are two types of loop: (1) the reinforcing loop, and (2) the balancing loop. The CLD presented in this study consists of two reinforcing loops (R1 and R2) and one balancing loop (B1). Each feedback loop shows the dynamic behaviors of deterioration, rehabilitation, and rehabilitation finance (expenses and budget) for school buildings.

## *4.1. Feedback Loop in School Building Deterioration*

The deterioration loop (R1) shows the representative physical deterioration process of social infrastructure. The two variables ("school building condition grade" and "school building deterioration") that form this loop are connected by the "−" link, indicating that the "school building condition grade" negatively affects the "school building deterioration" and is, in turn, affected by it. If the "school building condition grade" decreases (e.g., on the scale of A–E, where A is the optimal condition and E is the poorest), the deterioration increases; and if the deterioration increases, the "school building condition grade" decreases. Furthermore, the "deterioration rate of school building" is connected with the "school building deterioration" by the "+" link, so if the "deterioration rate of school building" increases, the "school building deterioration" increases. A combination of these links produces a reinforcing loop (R1), as depicted in Figure 2. A reinforcing loop amplifies behavior toward one extreme, causing exponential growth or decline. Therefore, the deterioration loop (R1) establishes a cycle wherein the condition deterioration of the school building accelerates as time elapses. A similar dynamic behavior has been reported in many asset management studies and related references [7,19].

## *4.2. Feedback Loop in Rehabilitation*

The rehabilitation loop (R2) shows the rehabilitation process of the school building. Because the deterioration loop (R1) causes the exponential deterioration of the school building, the "school building condition grade" decreases and the "rehabilitation action" increases; the relationship between these two variables is therefore a "−" link. In the real world, monetary payments are required to perform maintenance and repair tasks during rehabilitation. Therefore, "rehabilitation cost" has a positive relationship ("+" link) with "rehabilitation action": if the "rehabilitation action" increases, the "rehabilitation cost" also increases. On the other hand, "rehabilitation cost" and "repair works" have a negative relationship ("−" link), because repair works can be performed only when sufficient rehabilitation budgets are supplied. Therefore, the rehabilitation loop (R2) shows the rehabilitation process of the school building, and a decrease in "repair works" indirectly indicates a decrease in the "school building condition grade".

## *4.3. Feedback Loop in Rehabilitation Finance*

The finance loop (B1) shows the budgeting process. If the "school building condition grade" decreases, the demands for condition improvement from users (students, teachers, staff, and local residents) and from the managers of school facilities increase. If the need for the school building's condition improvement is noted, the government can secure a budget for rehabilitation. The secured budget is then allocated appropriately, based on the policies and plans. According to the final budget, the maintenance and repair tasks are performed, thereby improving the condition grade of the school building. This combination of links generates a balancing loop (B1), as described in Figure 2, and the finance loop (B1) mitigates the condition grade decline caused by the deterioration loop (R1) and the rehabilitation loop (R2).

Thus, the "rehabilitation cost" in loop R2 and the "rehabilitation budget allocation" directly affect the maintenance and repair tasks, and thereby the "school building condition grade".

## **5. Stock and Flow Modeling for System Dynamics Simulation**

After understanding the overall feedback loops through the CLD, they must be converted into an SFD to perform computer simulations. SD models are built with a diagram-based notation; in the Vensim software, an SFD consists of four elements: Stock, Flow, Valve, and Cloud (Figure 3a).

**Figure 3.** Stock and flow diagram.

Stock is a variable that accumulates or integrates the state of the system over time. Flow is a variable that changes the value of the stock variable, and consists of inflow and outflow. Valve is a variable that controls the rate of inflow and outflow, and Cloud marks the boundary point where flows enter and exit the system. The relationship of stock and flow can be expressed using Equation (3), which gives the value of the stock variable at the simulation time [16]. In this equation, t0 is the initial time, t is the current time, and Stock(t0) is the initial value of the stock. Inflow and outflow refer to flow coming into, and going out from, the stock at an arbitrary instant (s) between the initial time (t0) and the current time (t). Equation (4) gives the rate of change of the stock over time [16].

$$\text{Stock}(t) = \int_{t_0}^{t} [\text{Inflow}(s) - \text{Outflow}(s)] \, ds + \text{Stock}(t_0) \tag{3}$$

$$\frac{d(\text{Stock})}{dt} = \text{Inflow}(t) - \text{Outflow}(t) \tag{4}$$

The relationship of stock and flow expressed above is illustrated in Figure 3b.
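To make the stock-flow relationship concrete, the short Python sketch below integrates Equation (4) with a simple Euler step, much as SD tools do internally; the inflow and outflow functions here are hypothetical placeholders, not variables from the model of this study.

```python
# A minimal sketch of Equations (3) and (4): a stock updated by Euler
# integration. The inflow and outflow functions are hypothetical.

def simulate_stock(stock0, inflow, outflow, t0=0.0, t_end=50.0, dt=1.0):
    """Integrate d(Stock)/dt = Inflow(t) - Outflow(t) from t0 to t_end."""
    stock, t = stock0, t0
    history = [(t, stock)]
    while t < t_end:
        stock += (inflow(t) - outflow(t)) * dt  # Equation (4), Euler step
        t += dt
        history.append((t, stock))
    return history

# Example: constant inflow of 3 units/year, outflow growing with time.
trace = simulate_stock(100.0, inflow=lambda t: 3.0, outflow=lambda t: 0.05 * t)
print(trace[-1])  # stock value at t = 50
```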

## *5.1. School Building Deterioration Sector*

The school building deterioration model in this study is developed with the goal of simulating the overall deterioration pattern. Most assets are managed based on their condition, and deterioration models that use condition data have been developed with various methods. Based on the results of the literature review, deterioration models are primarily classified into three categories: deterministic, stochastic, and artificial intelligence [28,29] (Figure 4).

**Figure 4.** Classification of deterioration models.

Among the categories of deterioration models, the Markov chain is a stochastic method for predicting the future condition state of assets in a social infrastructure management system, and it is the most frequently used [30–33]. Therefore, this study predicts the deterioration pattern of school buildings by using the Markov chain.

The Markov chain indicates a case wherein the probability of reaching a specific state for a stochastic variable depends only on the state of the preceding time point. This study classifies the physical condition of school buildings using grades A–E, according to the condition evaluation criteria provided by the Ministry of Education in South Korea (Table 1).


**Table 1.** Physical condition grade of school buildings.

The deterioration model is developed based on the assumption that a school building deteriorates only to the next condition state from a specific condition state (e.g., from condition A to B, and B to C). The five stock variables (A–E) shown in Figure 5 indicate the number of school buildings in each condition grade. A transition probability variable, derived from case study data, serves as an auxiliary variable for each flow variable. Moreover, to induce a pattern that is similar to the actual deterioration behavior of assets, the stock variable has a feedback relationship that affects the flow variable. This can be expressed as shown in Equation (5) (X indicates the condition grade, and X-1 refers to the condition grade one step lower than condition X).

$$\text{Deterioration X to X-1} = X \times \text{Transition Probability X to X-1} \tag{5}$$

**Figure 5.** System dynamics (SD) model of deterioration and simulation graph.

Moreover, to identify the condition grade of overall school buildings based on time, Equation (6) was applied, based on the grade score shown in Table 1.

$$\text{School building condition grade} = \frac{A \times 5 + B \times 4 + C \times 3 + D \times 2 + E \times 1}{\text{Total number of school buildings}} \tag{6}$$

To test the completed deterioration model, data regarding the safety inspection and condition assessment of school buildings for the winters of 2014–2018, from the Education Office in Daegu metropolitan city, were used in this study. The condition grades of the 214 school buildings were: grade A = 55, grade B = 67, grade C = 79, grade D = 10, and grade E = 3 buildings. The transition probability was set as: A to B = 0.45, B to C = 0.1, C to D = 0.09, and D to E = 0.15 (the transition probabilities applied in the case studies are described in detail in Section 6).

The result of testing the deterioration model using the case study data is shown in the graph on the right side of Figure 5. As time elapses, the number of school buildings with the condition grades A, B, C, and D decreases, and the number of school buildings with the condition grade E increases. The curve illustrating the comprehensive condition of school buildings based on these dynamic changes has an initial value of 3.75, which is close to the grade B. However, after 50 years, the value deteriorates to 1.09, thus the school building condition grade deteriorates to grade E. Therefore, this study verified the validity of the deterioration SD model as a tool that predicts deterioration patterns, using the number of assets by grade.
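For readers who wish to trace this deterioration pattern numerically, the following Python sketch steps the five grade stocks forward using Equations (5) and (6) with the initial counts and transition probabilities quoted above; it is a minimal illustration of the model structure, not the Vensim model itself.

```python
# A minimal sketch of the deterioration sector (Equations (5) and (6)),
# using the case study values quoted in the text: 214 buildings split
# A=55, B=67, C=79, D=10, E=3, with yearly transition probabilities
# A->B=0.45, B->C=0.10, C->D=0.09, D->E=0.15.

stocks = {"A": 55.0, "B": 67.0, "C": 79.0, "D": 10.0, "E": 3.0}
tp = {("A", "B"): 0.45, ("B", "C"): 0.10, ("C", "D"): 0.09, ("D", "E"): 0.15}
scores = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
total = sum(stocks.values())

def condition_grade(s):
    # Equation (6): grade-score weighted average over all buildings.
    return sum(s[g] * scores[g] for g in s) / total

print(0, round(condition_grade(stocks), 2))  # initial value, ~3.75
for year in range(1, 51):
    # Equation (5): flow from each grade X to the next lower grade X-1.
    flows = {pair: stocks[pair[0]] * p for pair, p in tp.items()}
    for (src, dst), f in flows.items():
        stocks[src] -= f
        stocks[dst] += f
print(50, round(condition_grade(stocks), 2))  # deteriorates toward grade E
```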

## *5.2. Rehabilitation Sector*

The rehabilitation model shows the rehabilitation action based on the condition grades of school buildings. The model proposed in this study assumes that schools in the three grades C, D, and E, which do not indicate a good condition, will be repaired to the best grade, A. Based on this assumption, the rehabilitation action was integrated with the deterioration model, as shown in Figure 6.

**Figure 6.** SD model of rehabilitation and simulation graph.

The dynamic flow of flow variables (e.g., Repair C Grade), that expresses the rehabilitation action in Figure 6, is pointed toward an improved condition state (grade A) from a specific condition state (grades C, D, or E). The value of the rehabilitation flow variable is determined based on an auxiliary variable (e.g., % Repair C Grade). This variable shows the proportion of repairing from condition grade X to grade A in the number of school buildings of a specific condition grade. The flow variable showing the rehabilitation action of the SD model is calculated by Equation (7).

$$\text{Repair X Grade} = X \times \%\,\text{Repair X Grade} \tag{7}$$

Equation (7) determines the number of buildings repaired from each condition grade X (C, D, or E). Stock A—the number of school buildings that secured grade A—increases through the rehabilitation action. This is expressed using Equation (8).

$$\begin{aligned} \text{Stock A}(t) = \int_{t_0}^{t} [&\text{Repair C Grade}(s) + \text{Repair D Grade}(s) + \text{Repair E Grade}(s) \\ &- \text{Deterioration A to B}(s)] \, ds + \text{Stock A}(t_0) \end{aligned} \tag{8}$$

Equation (8) indicates the inflow into Stock A, which refers to the number of school buildings that have been improved from the grades C, D, and E, during an arbitrary time period between the initial time t0 and the current time t. Equation (8) also indicates the outflow to Stock B caused by deterioration as time elapses.

The auxiliary variables—% Repair X Grade—were set to 5% to test the model including the rehabilitation action. For the case in which rehabilitation actions are performed every year on the school buildings in grades C, D, and E, the result of simulating the condition grade changes over 50 years is shown in the graph on the right side of Figure 6. A comparison of the number of school buildings in each grade, shown on the right sides of Figures 5 and 6, indicates that the curves of grades A, B, C, and D in Figure 5 decrease rapidly, whereas those in Figure 6 maintain specific levels for approximately 25 years.
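The following Python sketch extends the deterioration step above with the rehabilitation flows of Equations (7) and (8), repairing 5% of the buildings in grades C, D, and E back to grade A each year; it illustrates the structure of the model rather than reproducing the Vensim implementation.

```python
# A minimal sketch of the rehabilitation sector (Equations (7) and (8)):
# every year, 5% of the buildings in grades C, D, and E are repaired back
# to grade A, then the deterioration flows of Equation (5) are applied.

stocks = {"A": 55.0, "B": 67.0, "C": 79.0, "D": 10.0, "E": 3.0}
tp = {("A", "B"): 0.45, ("B", "C"): 0.10, ("C", "D"): 0.09, ("D", "E"): 0.15}
repair_rate = {"C": 0.05, "D": 0.05, "E": 0.05}  # "% Repair X Grade"

for year in range(50):
    # Equation (7): Repair X Grade = X * % Repair X Grade
    repairs = {g: stocks[g] * r for g, r in repair_rate.items()}
    # Deterioration flows, as in Equation (5).
    decay = {pair: stocks[pair[0]] * p for pair, p in tp.items()}
    # Equation (8): Stock A gains all repairs and loses deterioration A->B.
    for g, f in repairs.items():
        stocks[g] -= f
        stocks["A"] += f
    for (src, dst), f in decay.items():
        stocks[src] -= f
        stocks[dst] += f

print({g: round(v, 1) for g, v in stocks.items()})  # counts after 50 years
```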

Table 2 shows the yearly simulation results for the condition grade values of all school buildings over 50 years, for the deterioration (Det.) and rehabilitation (Reh.) models. At the end of the 50 year period, the condition grade value in the deterioration model was 1.09, which is critical, whereas the value improved to 2.75 in the rehabilitation model, indicating a poor condition. The results of repairing 5% of the school buildings in grades C, D, and E to grade A every year are shown in Table 2.

**Table 2.** Comparison of simulation results of school building physical condition grade between deterioration model and rehabilitation model.


Moreover, Table 3 compares the differences in the deterioration model (Det.) and rehabilitation model (Reh.), based on their respective grades for the same simulation results. Although the initial value (0 year) was identical, the number of school buildings for each grade indicated a significant difference between the two models as time elapsed. Based on the 50 year period, the Det. model showed that most school buildings deteriorated to grade E, whereas the number of school buildings was evenly distributed in the Reh. model. Therefore, it is proven that the rehabilitation SD model in Figure 6 can quantitatively analyze the effects of school buildings' deterioration and rehabilitation action on the increase or decrease of the physical conditions of all school buildings.

**Table 3.** Comparison of simulation results between deterioration model and rehabilitation model.


## *5.3. Finance Sector*

One of the crucial tasks in a maintenance and rehabilitation (M&R) plan is efficiently distributing a limited fund to achieve an optimal outcome. This section aims to propose an SD model that has added a cost model to the deterioration and rehabilitation models for efficient budget allocation.

Figure 7 is an integrated SFD in which the cost model is added to the model of Figure 6. To fulfill the rehabilitation action of a deteriorated school building, maintenance costs are required, which are provided from a limited budget. The "Available Rehabilitation Policy Budget" variable refers to the total budget that can be used for the rehabilitation action in the model. The "Allocated Budget to Repair X Grade" variable refers to the budget allocated from the limited total budget to repair the school buildings belonging to the respective grade X (C, D, or E) [Equation (9)].

$$\text{Allocated Budget to Repair X Grade} = \text{Available Rehabilitation Policy Budget} \times \%\,\text{Budget to Repair X Grade} \tag{9}$$

**Figure 7.** SD model for rehabilitation cost and budgeting analysis.

The value of this variable is determined by a variable "% Budget to Repair X Grade", which shows the percentage of budget allocated to each grade X. Moreover, unlike the rehabilitation model, the cost model shows that the number of school buildings restored is limited according to the allocated budget. This is determined through Equation (10).

$$\begin{aligned} \text{Repair X Grade} = \text{IF THEN ELSE}(&X \times \%\,\text{Repair X Grade} \times \$\,\text{Repair X Grade} \\ &< \text{Allocated Budget to Repair X Grade}, \\ &X \times \%\,\text{Repair X Grade},\ 0) \end{aligned} \tag{10}$$

Equation (10) uses the IF THEN ELSE({cond}, {ontrue}, {onfalse}) function, a built-in function of Vensim, to derive different values based on the condition. The variable "$ Repair X Grade" shows the repair cost required to rehabilitate a school building of each grade. If the cost of repairing from a condition grade X (C, D, or E) to grade A (X × "% Repair X Grade" × "$ Repair X Grade") is less than the limited budget ("Allocated Budget to Repair X Grade"), the rehabilitation action is conducted; otherwise, it is stopped. Moreover, the life cycle cost up to time t is given by Equation (11).

$$\begin{aligned} \text{LCC}(t) = \int_{0}^{t} [&\$\,\text{Repair C Grade} \times \text{Repair C Grade}(s) + \$\,\text{Repair D Grade} \times \text{Repair D Grade}(s) \\ &+ \$\,\text{Repair E Grade} \times \text{Repair E Grade}(s)] \, ds \end{aligned} \tag{11}$$

The completed integrated SD model is used as a model for determining the life cycle cost analysis and future outcome prediction, to improve the performance of school buildings via use of a case study. This study performs a scenario analysis, to investigate the effects of budget allocation by grade on the total outcome and cost, based on simulations considering various values for the "% Budget to Repair X Grade" variable.
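As a minimal illustration of the finance sector, the Python sketch below reproduces the logic of Equations (9) and (10) and accumulates the life cycle cost of Equation (11) for a single year; the budget, unit repair costs, and allocation percentages are hypothetical placeholders, not the case study figures.

```python
# A minimal sketch of the finance sector (Equations (9)-(11)). All monetary
# figures below are hypothetical stand-ins.

def if_then_else(cond, on_true, on_false):
    """Mirrors Vensim's IF THEN ELSE({cond}, {ontrue}, {onfalse}) built-in."""
    return on_true if cond else on_false

available_budget = 1000.0                       # "Available Rehabilitation Policy Budget"
pct_budget = {"C": 0.5, "D": 0.25, "E": 0.25}   # "% Budget to Repair X Grade"
unit_cost = {"C": 10.0, "D": 25.0, "E": 40.0}   # "$ Repair X Grade"
pct_repair = {"C": 0.05, "D": 0.05, "E": 0.05}  # "% Repair X Grade"
stocks = {"C": 79.0, "D": 10.0, "E": 3.0}       # buildings per grade

lcc = 0.0
for g in stocks:
    # Equation (9): allocate a share of the total budget to grade g.
    allocated = available_budget * pct_budget[g]
    planned_cost = stocks[g] * pct_repair[g] * unit_cost[g]
    # Equation (10): repair only if the planned cost fits the allocation.
    repaired = if_then_else(planned_cost < allocated,
                            stocks[g] * pct_repair[g], 0.0)
    # Equation (11): accumulate the life cycle cost of the repairs done.
    lcc += repaired * unit_cost[g]
    print(f"grade {g}: repaired {repaired:.2f} buildings")
print(f"LCC this year: {lcc:.1f}")
```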

## **6. Application of the Developed SD Model**

This section performs the budget allocation scenario analysis, to predict the deterioration behavior of school buildings and improve performance by simulating the SD model proposed in Section 5. This study used the safety inspection and condition assessment data provided by the Ministry of Education in South Korea, covering 214 school buildings in the metropolitan city of Daegu. Based on these data, this study acquired the 5 year (2014–2018) condition assessment and maintenance cost records of school buildings, categorized by condition grade. At present, the Ministry of Education in South Korea designates only grades D and E, among the five condition grades A–E, as disaster-prone buildings, and conducts performance improvement primarily for these buildings. However, the SD simulation sets the rehabilitation scenarios by also considering buildings in grade C, for preventive maintenance. Finally, the simulation analysis is performed by applying the deterioration rate variable (transition probability matrix, TPM), derived using the Markov chain stochastic process, to the integrated SD model (Figure 7).

#### *Markov Approach*

A Markov chain is a discrete-time stochastic process in which the conditional probability of a specific future event depends only on the current condition and is unrelated to past conditions [34]. Because five condition states exist in the case study data, the transition probability from one condition state to another is expressed in a 5 × 5 matrix; the simplified transition probability matrix (TPM) is shown in Equation (12).

$$\text{TPM} = \begin{bmatrix} 0.88 & 0.12 & 0 & 0 & 0 \\ 0 & 0.96 & 0.04 & 0 & 0 \\ 0 & 0 & 0.91 & 0.09 & 0 \\ 0 & 0 & 0 & 0.86 & 0.14 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{12}$$

Each element (Pij) of the TPM shows the probability (P) of transitioning from a state "i" to another state "j". For example, '0.88' indicates the probability of transitioning from state A to state A (the probability of a school building remaining in state A), and '0.12' refers to the probability of transitioning from state A to state B. It is assumed that the condition state of a school building shifts only from one condition state to the next. Suppose the initial condition state is CS0; then the distribution of the condition state after n years, CSn, is given by Equation (13).

$$\text{CS}\_{\text{n}} = \text{CS}\_{0} \times \text{TPM}^{\text{n}} \tag{13}$$

Equation (13) shows that a future state (CSn) can be estimated when the TPM and the initial state (CS0) are known.
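The following Python sketch evaluates Equation (13) with the TPM of Equation (12) and the case study's initial grade counts; numpy's matrix power makes the n-year projection a one-liner.

```python
# A minimal sketch of Equation (13), CS_n = CS_0 * TPM^n, using the
# transition probability matrix of Equation (12) and the case study's
# initial grade counts (A=55, B=67, C=79, D=10, E=3).
import numpy as np

TPM = np.array([
    [0.88, 0.12, 0.00, 0.00, 0.00],
    [0.00, 0.96, 0.04, 0.00, 0.00],
    [0.00, 0.00, 0.91, 0.09, 0.00],
    [0.00, 0.00, 0.00, 0.86, 0.14],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
cs0 = np.array([55, 67, 79, 10, 3], dtype=float)  # grades A-E at year 0

n = 10  # years ahead
csn = cs0 @ np.linalg.matrix_power(TPM, n)  # Equation (13)
print(np.round(csn, 1))  # expected number of buildings per grade after n years
```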

#### **7. SD Model Simulation Results of Scenario Analysis**

The developed SD model (Figure 7) analyzes the effect of budget allocation strategy scenarios to determine cost-effective rehabilitation actions for school buildings. Table 4 shows the budget allocation proportions of grades C, D, and E, based on the average annual educational environment improvement budget provided for Daegu city. The results of simulating 10 scenarios using the Vensim software are shown in Figure 8.


**Table 4.** Budget allocation scenarios.

**Figure 8.** SD model simulation analysis results for budget allocation scenarios.

Figure 8 shows the scenario analysis results for the condition grades (based on Table 1) and the total life cycle cost (TLCC) of all school buildings. S2 and S3 can reduce the TLCC in the long term; however, because the condition grade of school buildings declines gradually to 1.75 (grade E: critical), they can be regarded as the worst-case scenarios. A TLCC of approximately KRW 20 billion is expected for S1, S4, S5, S7, S8, and S10, and a TLCC of approximately KRW 9 billion is expected for S6 and S9. Therefore, S9, S10, and S4 are selected as scenarios with good performance improvement effects relative to the cost. In these scenarios, substantial budgets were allocated to school buildings in condition grade C. In the case of S10, wherein the condition grade was the highest, 50% of the total budget was allocated to condition grade C and the remaining 50% was equally allocated to condition grades D and E. This result shows that, when repair is performed primarily on buildings in condition grade C, the rehabilitation cost required is less than that for buildings in grades D and E, and in the long term, a preventive maintenance effect can be obtained. Therefore, the 10 scenario analyses show that budget allocation based on condition grade has a crucial impact on the overall school building performance.
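A scenario analysis of this kind can be sketched by running the yearly deterioration and budget-limited repair steps once per allocation split, as below; the scenario splits, annual budget, and unit costs are hypothetical stand-ins for the values of Table 4, so the printed numbers differ from those of Figure 8.

```python
# A minimal sketch of the Section 7 scenario analysis: each scenario splits
# a fixed annual budget across grades C, D, and E, and the 50 year run
# reports the final condition grade and total life cycle cost (TLCC).
# Budget, unit costs, and splits are hypothetical; the transition
# probabilities are the off-diagonals of Equation (12).

tp = {("A", "B"): 0.12, ("B", "C"): 0.04, ("C", "D"): 0.09, ("D", "E"): 0.14}
unit_cost = {"C": 10.0, "D": 25.0, "E": 40.0}   # hypothetical "$ Repair X Grade"
annual_budget = 500.0                            # hypothetical yearly budget
pct_repair = {"C": 0.05, "D": 0.05, "E": 0.05}
scores = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

scenarios = {                                    # hypothetical splits, not Table 4
    "S-a": {"C": 0.50, "D": 0.25, "E": 0.25},
    "S-b": {"C": 0.00, "D": 0.50, "E": 0.50},
    "S-c": {"C": 1.00, "D": 0.00, "E": 0.00},
}

for name, split in scenarios.items():
    stocks = {"A": 55.0, "B": 67.0, "C": 79.0, "D": 10.0, "E": 3.0}
    total, tlcc = sum(stocks.values()), 0.0
    for year in range(50):
        # Budget-limited repairs, as in Equations (9) and (10).
        for g in ("C", "D", "E"):
            planned = stocks[g] * pct_repair[g]
            if planned * unit_cost[g] < annual_budget * split[g]:
                stocks[g] -= planned
                stocks["A"] += planned
                tlcc += planned * unit_cost[g]
        # Deterioration step, as in Equation (5).
        flows = {pair: stocks[pair[0]] * p for pair, p in tp.items()}
        for (src, dst), f in flows.items():
            stocks[src] -= f
            stocks[dst] += f
    grade = sum(stocks[g] * scores[g] for g in stocks) / total
    print(f"{name}: final grade {grade:.2f}, TLCC {tlcc:.0f}")
```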

## **8. Discussion and Conclusions**

This study proposed an integrated SD model for the rehabilitation policy analysis of school buildings. To validate the SD model, 10 rehabilitation budget allocation scenarios were analyzed, based on the simulations. The results show that the integrated SD model can support strategic decision-making, by identifying the school building condition grades and TLCC behavior for each scenario in the long-term perspective. According to the scenario analysis, the rehabilitation action of preventive maintenance that primarily repairs the buildings in condition grade C showed the best performance improvement effect relative to the cost.

The Ministry of Education in South Korea currently performs post-event maintenance management, repairing buildings only after performance deterioration occurs (grades D and E). However, a preventive maintenance method should be adopted to reduce the deterioration speed of school buildings. The costs calculated with the SD simulation can be used for the long-term planning of rehabilitation action, by estimating the cost that will be required to repair deteriorated school buildings over the next 50 years. However, the proposed SD model has several limitations. The case study data available for this study were limited, and more accurate deterioration modeling will become possible if the model is supplemented with an optimal method for estimating an accurate TPM from limited data. Moreover, the budget of the Ministry of Education in South Korea, the subject of the case study, cannot be invested continuously in the performance improvement of school buildings because of other educational policies, such as free school meals and the New University for Regional Innovation (NURI) project. Therefore, if the proposed SD model is expanded to consider the effects of other educational policies, the crucial performance improvement budget can be estimated from a long-term perspective.

**Author Contributions:** Conceptualization, S.K. (Suhyun Kang), S.K. (Sangyong Kim), S.K. (Seungho Kim), and D.L.; data curation, D.L.; formal analysis and investigation, S.K. (Suhyun Kang), S.K. (Sangyong Kim); methodology, S.K. (Suhyun Kang), S.K. (Sangyong Kim), S.K. (Seungho Kim); resources, D.L.; software, S.K. (Suhyun Kang) and S.K. (Seungho Kim); supervision, S.K. (Sangyong Kim) and D.L.; validation, S.K. (Suhyun Kang), S.K. (Sangyong Kim), S.K. (Seungho Kim), and D.L.; visualization, S.K. (Suhyun Kang); writing—original draft, S.K. (Suhyun Kang), S.K. (Seungho Kim); writing—review and editing, S.K. (Sangyong Kim) and D.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2018R1A5A1025137).

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**

1. ASCE. *2017 Infrastructure Report Card*; ASCE: Reston, VA, USA, 2017.


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Sustainable Application of Hybrid Point Cloud and BIM Method for Tracking Construction Progress**

## **Seungho Kim 1, Sangyong Kim <sup>2</sup> and Dong-Eun Lee 3,\***


Received: 18 April 2020; Accepted: 14 May 2020; Published: 18 May 2020

**Abstract:** Compared to the past, the complexity of construction-project progress has increased as structures have become larger and taller. This has resulted in many unexpected problems that occur with increasing frequency, such as various uncertainties and risk factors. Recently, research has been conducted to solve these problems by integrating automated data-collection tools into construction-project-progress measurement, and most of the methods use spatial sensing technology. Thus, this study reviewed the representative technologies applied to construction-project-progress data collection and identified the unique characteristics of each technology. The basic principle of the progress tracking proposed in this study is its execution through the point cloud and the attributes of BIM, studied in five stages: (1) acquisition of construction completion data using a point cloud, (2) production of a completed 3D model, (3) interworking of an as-planned BIM model and an as-built model, (4) construction progress tracking via the overlap of the two 3D models, and (5) verification by comparison with actual data. The results confirm that construction progress tracking through the point cloud faces no major technical limitations, and that progress data of fairly high efficiency and accuracy can be collected.

**Keywords:** building information modeling; drone; LIDAR; point cloud; progress tracking

## **1. Introduction**

Data related to the progress of construction projects are very useful both to determine whether timelines are being kept and to assess the quality of the work done, and these data are essential to improving the productivity of construction management [1,2]. However, the progress of construction projects is currently tracked in various ways, such as scheduling, utilizing construction methods, expenditure management, and resource/quality management, and it is difficult to accurately track and record all of those activities [3].

The information required to measure the progress of a construction project can be classified in two categories. The first one is information related to the plan and design and can be acquired at the end of the design phase. The second one is information related to the current construction progress. The latter type cannot be easily collected, and continuously changes. Unfortunately, for most construction project sites, data acquisition depends on the manual recording of information on paper, and the use of photos and documents causes many constraints in time and space. Automation is considered to be the most economical solution to these data acquisition-related problems [4,5].

The goal of this study is to improve the efficiency and accuracy with which progress data are acquired, as this is important to the overall management of the progress of each construction project. To achieve this goal, the study considers recent trends in construction projects and proposes an alternative process for solving the problems related to data collection on project sites. This study conducts verification procedures on various buildings to confirm the validity of the proposed measures, as well as to identify methods of post-processing the acquired data. The contents of each of the performance stages presented in this study are shown in Figure 1.

**Figure 1.** Construction progress tracking procedure.

#### **2. Existing Studies on Automated Progress Data Acquisition**

Conventional processes employed to acquire data related to the progress of construction projects are inefficient, both in terms of time and cost, and this has led to many studies in the field of automation technologies [6–9]. Various mobile IT devices were initially proposed as a way to automate data acquisition because they can transmit information via the Internet. Initial representative studies involved the development of various automated methods to perform field data acquisition using data acquisition technologies (DATs) such as radio frequency identification (RFID), global positioning systems (GPSs), bar codes, time-lapse cameras, and ultra-wideband (UWB) [5,10–13]. The above-mentioned studies generally indicated that the introduction of mobile-based IT could enhance the efficiency with which data are collected from project sites in real time. Nevertheless, in practice, the proposed methods had technical limitations and were therefore not commonly applied to construction projects. Typical problems include the cost of purchasing equipment and software for construction projects and the cost of upgrading hardware for maintenance. Furthermore, these methods have not yet moved beyond the conceptual stage in terms of automation, and the usefulness of the information collected has been poorer than that of information collected through other techniques [14].

Photogrammetry is a technique that involves the development of a point cloud model from digital photos in order to acquire data about the progress of a construction project. El-Omari and Moselhi [15] conducted one of the representative studies on photogrammetry, estimating the amount of work done over a certain time based on images captured over the corresponding time interval. However, as progress data need to be stored over time, sufficient memory space for data storage must be secured.

Video-based measurement collects progress data by filming the construction project site. This method is effective because sequential video frame data can be extracted [16]. Studies on construction projects that have utilized video-based measurement to acquire data have focused on civil engineering projects such as roads and bridges, with the damage detection and safety assessment of facilities and the detection of mobile equipment as the main areas of focus [17–19]. In particular, video-based measurements are affected by many factors, including temperature changes of objects, focus, the data-capture range, and camera resolution.

In 3D laser scanning, laser lights are emitted onto an object, and the distance to the object is calculated using the return time of travel of the light. This method is widely used in the engineering field [16]. Representative studies in this area have examined monitoring methods for the process and interference of mechanical, electrical, and plumbing (MEP) by comparing two 3D models, or by utilizing other methods to create 3D models using actual progress data acquired by LIDAR [20,21]. However, as data acquisition using LIDAR requires the emission of laser lights, if an object has a high reflectance, the efficiency decreases [22]. Besides, the high cost and limited applicability of LIDAR in complex indoor environments are obstacles to its popularity.

Augmented reality (AR) is a combination of various technologies, where virtual images from a computer are added to a real environment [23]. BIM is the representative software used for AR, and it is also applicable to visualization, simulation, information modeling, and safety testing [24,25]. The advantages of AR are that the construction progress and potential defects can be easily determined during the decision-making process, and if necessary, corrections can be made. While AR has been adopted by a large number of studies, there are still many problems related to user convenience, noise, and data interference filtering. Accordingly, practical methods of solving those problems need to be developed. Table 1 shows the characteristics of the data acquisition technologies and is based on elements that should be considered for technical use in measuring the progress in a construction project.


**Table 1.** Comparison of data acquisition technologies.

In recent years, studies have been conducted to verify the progress by comparing as-built 3D models collected through LIDAR with those produced during the design phase [20,26]. Representative studies in this area have examined the visualization of process rate monitoring through a 4D simulation model conducted in combination with modeling based on laser scanning.

Han and Golparvar-Fard [27] developed a progressive model via laser scanning and studied the construction progress through the BIM. Patraucean et al. [28] also conducted research on the modeling method for the progressive status of a project through the BIM. Meanwhile, Adan et al. [29] focused on the recognition of objects. After segmenting the point clouds corresponding to the walls of a building, a set of candidate objects was detected independently in the color and geometric spaces, and an original consensus procedure integrated both results to infer recognition. In addition, the recognized object was positioned and inserted in the as-is semantically rich 3D model, or BIM model. Wang et al. [30] developed a technique to automatically estimate the dimensions of precast concrete bridge deck panels and create as-built building information modeling (BIM) models to store the real dimensions of the panels. Bueno et al. [31] presented a novel automatic coarse registration method that is an adaptation of as-is 3D point clouds with 3D BIM models. Rebolj et al. [32] proposed methodology including three parameters (minimum local density, minimum local accuracy, and level of scatter) to measure the quality of point cloud data for construction progress tracking. While a recent study investigated the relationship between the quality of point cloud data and the successful identification of building elements, research is still lacking that can identify the required point cloud data quality for each specific application.

Therefore, Wang et al. [33] suggested three main future research directions within the scan-to-BIM framework. First, the information requirements for different BIM applications should be identified, and the quantitative relationships between the modelling accuracy or point cloud quality and the reliability of as-is BIM for its intended use should be investigated. Second, scan planning techniques should be further studied for cases in which an as-designed BIM does not exist and for UAV-mounted laser scanning. Third, as-is BIM reconstruction techniques should be improved with regard to accuracy, applicability, and level of automation. Puri and Turkan [34] mentioned that future work should focus on multiple larger construction projects that contain elements with complex geometrical shapes.

## **3. Point Cloud-Based Progress Data Acquisition**

## *3.1. LIDAR-Based Point Cloud Data*

Image scanning is a technique that involves optically reading images and converting them into data, information and objects; LIDAR is a representative device that supports image scanning [35]. LIDAR emits a laser beam to an object at specific intervals, and expresses the shape of the object as a set of 3D coordinates by using the direction of the reflected laser and the distance measured [36].

Points that are obtained in this way have 3D X, Y, and Z coordinates, including geoinformation, and each constituent point is formed where the laser of the LIDAR is reflected from the object. Accordingly, although no geometric information of the object is given explicitly, the surface coordinates of the object are included, from which the length, height, and other similar attributes of the object can be derived (Figure 2). Consequently, a point cloud that includes the geoinformation of an object can provide high-resolution data without distortion using a 3D mesh model. More information can be acquired by modeling the points obtained from each scanning task into a shape.

**Figure 2.** Geometric information acquired by LIDAR.
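As a minimal illustration of how attributes such as length and height can be derived from the coordinates alone, the Python sketch below estimates an object's dimensions from the axis-aligned extents of its points; the sample points are synthetic stand-ins for a real scan.

```python
# A minimal sketch of deriving object dimensions from point cloud
# coordinates: each point is an (X, Y, Z) triple, so the length, width,
# and height of a scanned object can be estimated from the axis-aligned
# extents of its points. The sample points are synthetic.
import numpy as np

points = np.array([
    [0.02, 0.01, 0.00],
    [3.98, 0.03, 0.01],
    [3.97, 2.49, 2.98],
    [0.01, 2.51, 3.02],
])  # metres; in practice, millions of points from a LIDAR scan

extents = points.max(axis=0) - points.min(axis=0)
length, width, height = extents
print(f"length {length:.2f} m, width {width:.2f} m, height {height:.2f} m")
```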

In each scanning iteration, LIDAR can scan only the objects in its direct line of sight. If there is another object between the LIDAR and the target, no scanning data are acquired for the occluded region; where a laser beam does not reach a point from the measurement position, information about that point cannot be determined. In addition, as shown in Figure 3, LIDAR emits its light source radially and thus generates a shadow area. In other words, even if a projection plane is created vertical to the scanning direction of the LIDAR, there may be an overlap such as the one shown in the dotted line inside the circle. To prevent such a phenomenon from occurring behind the object to be measured, all information pertaining to the appearance of the object needs to be scanned, which means that an object should be scanned at least twice.

**Figure 3.** Example of shadow area due to LIDAR scanning.

LIDAR is classified mainly as contact LIDAR and noncontact LIDAR. Contact scanning is a measurement method that attaches a contact sensor called a touch probe to an object; the coordinate measuring machine (CMM) is a representative device. However, because the sensor directly touches the surface of the object, easily deformed materials may be damaged, making the measurement either impossible or time consuming [35].

The first principle of noncontact scanning is that 3D coordinates are formed by timing the return of a laser beam emitted to and reflected from an object surface on the basis of the time-of-flight (TOF) measurement [37]. As this method does not require the sensor to contact the surface of the object, wide areas can be measured at much faster speeds [38]. The TOF measurement installs a measurement device on an axis of rotation and rotates it by a certain angle for horizontal scanning. Meanwhile, for vertical scanning, the laser reflection mirror inside the measurement device is moved by a certain angle. The second principle of noncontact scanning is laser-based triangulation, as illustrated in Figure 4. The reflected light from a target object on which a line-shaped laser beam is irradiated, is measured at a specific cell of a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS). In other words, this method restores points obtained by scanning a target object into a 3D plane or figure.

**Figure 4.** Laser measurement by triangulation.

The distance between the laser oscillator and the optic sensor is specified, and the oscillation angle is also given. Thus, in a triangle consisting of the laser oscillator, the optic sensor, and the target object, the lengths of two sides can be obtained from the remaining side and two angles. A larger number of points can be measured within a given time period when compared with the TOF method; however, rotation is needed to scan the whole area. Other methods that are used to acquire 3D shapes of objects are shape from shading (SFS) and the structured light system (SLS). SFS restores the 3D shape of an object by illuminating it with light and then measuring the intensity of the reflected light source. SLS identifies the outer shape of an object by projecting a light source with a regular pattern onto a target object and using the shape of the reflected pattern.
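The triangulation described above can be made concrete with the law of sines: given the oscillator-sensor baseline and the two angles at its ends, the distances to the target follow directly, as in the Python sketch below (the numeric values are illustrative only).

```python
# A minimal sketch of the triangulation principle in Figure 4: with the
# oscillator-sensor baseline and the two angles at its ends known, the
# law of sines gives the distances from each end to the target point.
import math

baseline = 0.20                      # m, laser oscillator to optic sensor
angle_laser = math.radians(70.0)     # angle at the oscillator end
angle_sensor = math.radians(80.0)    # angle at the sensor end
angle_target = math.pi - angle_laser - angle_sensor  # angles sum to pi

# Law of sines: each side divided by the sine of its opposite angle is constant.
dist_from_sensor = baseline * math.sin(angle_laser) / math.sin(angle_target)
dist_from_laser = baseline * math.sin(angle_sensor) / math.sin(angle_target)
print(f"target is {dist_from_laser:.3f} m from the laser, "
      f"{dist_from_sensor:.3f} m from the sensor")
```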

The point cloud data of each scene, obtained by scanning a target object, need to be combined into a single coordinate system to measure the object dimensions, analyze points with nonuniform curvatures, and model shapes. The alignment target is the criterion for the alignment process. Generally, the data of a single scanned scene consist of numerous points, and several hundreds of millions or billions of points remain after the alignment process. Accordingly, it takes a long time to align data accurately. Depending on users' demands, however, scanning or alignment time may be prioritized over alignment accuracy, and this time may vary according to the alignment method employed. Table 2 presents the characteristics of alignment methods for point cloud data.


**Table 2.** Alignment methods for point clouds and their characteristics.

Cloud-to-cloud alignment does not require any specific target but utilizes particular points in a point cloud. After selecting the model space of the two stations to be aligned, a particular point is picked in the same place, and individual points are selected in a multi-pick mode and are aligned. Here, a station is a scanning position, that is, a location at which the laser scanner is set up. When selecting the feature points of the scanned scenes obtained at each station, the scenes need to be maximally magnified so that an identical point can be selected and picked, and a fixed point with a nonreflecting material should be chosen. In addition, accurate picking is required because it affects the alignment quality and error rate. When stations are aligned, at least three pairs of identical points are needed between each scan, and the three points need to form as large a triangle as possible to minimize the alignment error. In the case where three or more stations are to be aligned, this alignment process should be repeated.
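As a minimal sketch of what alignment software computes from such point pairs, the following Python code estimates the rigid rotation and translation between two stations from three matched points using the Kabsch/SVD method; the point pairs are synthetic, and the sketch is an illustration of the principle rather than any particular software's implementation.

```python
# A minimal sketch of rigid alignment from three matched point pairs,
# via the Kabsch/SVD method. The point pairs below are synthetic: dst is
# src rotated 90 degrees about Z and shifted by (1, 1, 0).
import numpy as np

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.5]])
dst = np.array([[1.0, 1.0, 0.0], [1.0, 2.0, 0.0], [-1.0, 1.0, 0.5]])

# Center both point sets, then solve for the optimal rotation.
src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # optimal rotation
t = dst.mean(axis=0) - R @ src.mean(axis=0)   # optimal translation

aligned = (R @ src.T).T + t
print(np.allclose(aligned, dst, atol=1e-6))   # True if the pairs are consistent
```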

Target-to-target alignment utilizes targets to align two scanned scenes and combine them into a single scene. Targets are installed beforehand on the plane or bottom, wall, and edge of a target object, and the central points or edges of the targets are used for alignment. Targets must be firmly installed. In the case where a shadow area is included in a scene captured by an installed LIDAR, the object needs to be scanned in a different direction so that at least three common points can be recognized between the two scan datasets; in this way, accurate alignment is possible. If a target is installed on ground that may be inclined or uneven, care is required because the alignment software program may not recognize the target. Besides, as the apparent size of a target varies according to the scanning points, the alignment program may not recognize the target. Accordingly, if the target object is far from the scanner, the target needs to be larger.

Visual alignment is a manual alignment performed by a user who imports two scanning stations to be aligned into the same space. With this method, after two scanning stations are aligned in the X-axis and Y-axis from the user's perspective, they are also aligned by being moved on the Z-axis and rotated. Visual alignment is most effective for the same or similar features and is also easy for beginners to master.

Cloud-to-cloud alignment and target-to-target alignment, which identify the coordinates of each point and are basically manual operations, are representative methods for the geometric modeling of scanning data. However, if the scanned object is complex, a lot of time and alignment work is required. In such a case, a specific reverse engineering program is usually implemented to automatically extract and align the parts desired by the user. Nevertheless, automatic extraction by reverse engineering software has limitations in terms of the shapes it can reconstruct, and shapes are often wrongly recognized, which results in inaccurate data alignment. For this reason, the user needs to confirm the result of the automatic alignment produced by the reverse engineering software and manually remove the wrongly extracted parts. In other words, manual modeling is necessary.

## *3.2. Drone-Based Point Cloud Data*

Drone-based photogrammetry can acquire data on large buildings and terrains. As this method is applicable to large areas, it is recognized as an alternative or supplementary approach to conventional measuring devices [39]. With this advantage, drone-based photogrammetry has been used for measurement tasks in diverse fields such as building construction, civil work, cultural property management, disaster prevention, and agriculture [40–42]. However, this method produces different outputs depending on the weather and the brightness of the photos. Besides, it is difficult to obtain close-up images, and a large relative error tends to occur depending on the skill of the workers and the performance of the equipment. In recent times, numerous studies have aimed to mitigate these disadvantages of drone-based photogrammetry in several ways; the majority focus on verifying the accuracy of data and enhancing it to a suitable level. In particular, a marker is used for point matching in order to reduce the error range of drone-based scanning [37]. As shown in Figure 5, drone-based photogrammetry can extract point clouds by implementing various software programs such as Pix4D and Context Capture (Bentley), and it can also capture hardly accessible sites at high altitudes. Thus, this method is being more widely used for data acquisition while monitoring, managing, and inspecting facilities.

**Figure 5.** Geometric information obtained by a drone.

## **4. Verification of Accuracy of Point Cloud Data**

## *4.1. Selection of Target Object and Identification of Recognition Rate*

This study acquired point cloud data and verified the accuracy of the data obtained using LIDAR, which can scan both the exterior and interior of buildings, and using a drone, which could capture areas inaccessible to managers. This study also examined a method of acquiring usable data through post-processing, and finally determined the accuracy and error of the data acquisition according to building shapes.

In this study, three buildings were selected to acquire point cloud data, which were obtained from the framework of those buildings, that is, from columns, girders, beams, and slabs. Building A consisted of two stories and a rooftop. The framework of this building included 12 columns, 20 girders, 23 beams, and 17 slabs. Building B also comprised two stories and a rooftop. The framework of this building included 25 columns, 36 girders, 40 beams, and 62 slabs. Building C consisted of five stories and a rooftop. For Building C, after point cloud data were acquired by using a drone, ultimate data were obtained by image matching. The base data employed for accuracy verification were acquired by comparing the data that were generated by aligning point clouds with measurements. Table 3 presents details of Buildings A, B, and C, where point cloud data were collected for accuracy verification.


**Table 3.** Target buildings used to acquire point cloud data.

This study adopted visual alignment, in which the data were visually aligned and rotated along the X, Y, and Z axes of the same space. After the 3D point cloud data of Building A were completely aligned, the recognition rate of data acquisition was determined for the members of the framework, namely the columns, girders, beams, and slabs. When Building A was scanned, the framework had already been completed but the finishing work had not yet started; data for the members of the framework could therefore be easily acquired, and the scanning conditions were similar to those of real construction sites. Fifty rounds of scanning were carried out, with a total duration of 6 h. To prevent incomplete alignment, the scanning intervals were set with an overlap of at least 50–60%. As a result, the recognition rate of the members was 100%, and the point cloud data could be reliably acquired using LIDAR.

Building B was no longer under construction but had already been completed. However, as the finishing work had not yet been conducted, the members of the framework could be identified, providing conditions similar to those of real construction sites. Thirty rounds of scanning were carried out, with a total duration of 3 h. The LIDAR scanning of this building was performed with the acquisition density set to "medium," and the scanning intervals were again set with an overlap of at least 50–60% to prevent incomplete alignment. Even at medium acquisition density, the recognition rate of the members was 100%; thus, point cloud data acquisition using LIDAR was found to be reliable.

Building A was selected to verify the accuracy of object recognition. However, this building had a rooftop that could not be scanned using terrestrial LIDAR; therefore, a drone was needed to obtain aerial photos from which the overall external building shape could be aligned. This study utilized a rotary-wing drone for the point cloud acquisition experiment. The aerial shots obtained using the drone should be as accurate as possible to minimize alignment errors. However, drone-based data acquisition has limitations related to battery life, safety, and GPS technology, which make it almost impossible to acquire data of the quality required by the user. Where accurate engineering data are needed, the quality of the scanned data must therefore be examined against an appropriate criterion before the data are applied. For Buildings A and C, 209 and 134 photos, respectively, were obtained by operating the drone, and point cloud alignment was then carried out using those photos.

### *4.2. Determining Error of Aligned Data*

The error of the point cloud data was determined by comparing the measurement data of Buildings A and B with the LIDAR-based alignment models of the scanned data. For the measurement data, the real dimensions of each building were measured using a measuring device; for the alignment model data, the distances between point clouds were measured using a software program. The error was determined by comparing the dimensions of the external width, the distance between columns, and the column height, which corresponded to the width, length, and height of the building, respectively. Table 4 presents the errors obtained by comparing measurements and LIDAR scanning results for Buildings A and B. In the case of Building A, the average error values were 0.011 m, 0.012 m, and 0.019 m for the external width, the distance between columns, and the column height, respectively. In the case of Building B, the corresponding average error values were 0.012 m, 0.011 m, and 0.012 m.


**Table 4.** Errors of LIDAR-based point cloud data.

According to the BIM guide for 3D imaging published by the General Services Administration (GSA) of the USA, the error must be at most 51 mm for urban design projects and at most 13 mm for architectural designs; otherwise, practical accuracy cannot be maintained. In this study, the errors for each item, identified by comparative measurements, were between 11 mm and 19 mm. This result is remarkably close to the 13 mm recommended by the GSA for applying point cloud data to architectural designs. The distance between the two end points of a target member in the scanned data was measured by mouse picking, a method that implies an unavoidable error; the above errors therefore indicate that very accurate data were acquired. Consequently, based on the cases of Buildings A and B, the LIDAR-based measurement and alignment of this study are shown to be accurate.
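As a minimal illustration of this tolerance check, the sketch below compares measured and scanned dimensions against the two GSA limits cited above; the dimension values, function, and variable names are illustrative, not taken from the paper's data set.

```python
# Minimal sketch: compare tape measurements with distances picked from the
# point cloud and check the mean error against the GSA 3D-imaging tolerances.
GSA_ARCHITECTURAL_M = 0.013  # max error for architectural designs (13 mm)
GSA_URBAN_M = 0.051          # max error for urban design projects (51 mm)

def mean_abs_error(measured_m, scanned_m):
    """Average absolute deviation between measured and scanned dimensions."""
    return sum(abs(m - s) for m, s in zip(measured_m, scanned_m)) / len(measured_m)

# Hypothetical dimensions (width, column spacing, column height) in metres.
measured = [12.450, 6.000, 3.300]
scanned  = [12.461, 6.012, 3.319]

err = mean_abs_error(measured, scanned)
print(f"mean error: {err * 1000:.1f} mm")
print("meets architectural tolerance:", err <= GSA_ARCHITECTURAL_M)
print("meets urban tolerance:", err <= GSA_URBAN_M)
```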

Errors present in the point cloud data obtained using the drone were compared in the same way as in the LIDAR-based error verification. The target was Building A. As the drone could capture only the external building shape, the measurement data of external members were compared with the drone-based alignment model of the scanned data. Table 5 presents the errors between the measurements and the drone-based alignment data for Building A. The average errors for the width, length, and height of the building were 0.378 m, 0.358 m (distance between columns), and 0.072 m (column height), respectively. These values far exceed the 13 mm recommended by the GSA for applying point cloud data to architectural designs. Such a large gap is attributable to the following intrinsic characteristics of drones. First, because the drone captures a target while flying, it is difficult to acquire accurate data. Second, images obtained by a drone need to be converted to point clouds and then imported into a software program that can measure distance, and these steps introduce significant errors. Accordingly, this study used the drone-based point cloud data only for the parts for which data could not be acquired using LIDAR.

**Table 5.** Errors of drone-based point cloud data.


## **5. 3D Modeling of Point Cloud Data**

#### *5.1. Creation of 3D Model of Point Cloud Data for Target Object*

Upon verification of the accuracy of the point cloud data acquired by drone- and LIDAR-based scanning, it was shown that the data obtained by LIDAR scanning had a higher accuracy than those acquired by the drone. However, progress data acquisition is likely to involve inaccessible areas such as rooftops, and acquiring such data may entail risk. In such cases, the application of LIDAR may be restricted, resulting in uncertain parts in the alignment of the whole point cloud. In this regard, by combining the datasets acquired by the drone and the LIDAR, the loss of data can be prevented, thus improving the accuracy of progress data for construction sites. As shown in Figure 6, this study combined the two types of point cloud data to improve the accuracy of the progress data. The combination process can be summarized as follows.


**Figure 6.** Combination of drone-based data and LIDAR-based data.

## *5.2. 3D Polygon Mesh Modeling*

Delaunay triangulation (DT) and the Voronoi diagram (VD) are the basic concepts underlying 3D polygon mesh modeling. DT is a division in which points on a plane are connected into triangles such that the minimum interior angle is maximized, and the circumcircle of any triangle contains no point other than the three vertices of that triangle. In other words, of the possible triangulations, it is the division in which each triangle is as close as possible to an equilateral triangle. Meanwhile, a VD divides the plane into polygons, one containing each generating point: each pair of adjacent generating points is connected by a line segment, and the perpendicular bisector of this segment is drawn; these perpendicular bisectors form the polygon edges that partition the plane. DT and VD are in a dual relationship, and if one is known, the other can immediately be obtained.
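This duality can be illustrated in a few lines of Python with SciPy on a synthetic 2D point set (the paper's own meshing was performed with commercial tools, as described below):

```python
# Illustration of the Delaunay/Voronoi duality described above, using SciPy
# on a small random 2D point set.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
points = rng.random((20, 2))          # 20 random points in the unit square

tri = Delaunay(points)                # triangles indexed into `points`
vor = Voronoi(points)                 # one Voronoi region per generating point

print("Delaunay triangles:", len(tri.simplices))
print("Voronoi regions:  ", len(vor.point_region))

# Duality check: each Voronoi vertex is the circumcentre of a Delaunay
# triangle, so their counts match for points in general position.
print("Voronoi vertices == Delaunay triangles:",
      len(vor.vertices) == len(tri.simplices))
```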

Figure 7a shows the VD and the DT of the same set of points. The VD is created by sequentially linking the centers of the circumcircles of the DT triangles that share a generating point as a common vertex, and by linking points between adjacent VD areas, the DT can be generated for these points. For 3D stereoscopic modeling from 3D point clouds, the use of DT allows a polygon mesh to be obtained from a collection of points on the surface. Triangulation in 3D is called tetrahedralization or tetrahedrization [43]. A tetrahedralization is the partitioning of the input domain into a collection of tetrahedra that meet only at shared faces (vertices, edges, or triangles). Polygons are typically ideal for accurately representing the results of measurements, providing an optimal surface description. However, the results of tetrahedralization are much more complicated than those of a 2D triangulation. Therefore, this study utilized commercial modeling software packages: the Leica Cyclone platform was used for 3D point cloud data visualization and processing, and the Leica 3D Reshaper platform was used for polygon mesh model generation (Figure 7b,c).

**Figure 7.** (**a**) Voronoi diagram and Delaunay triangulation of the same point set; (**b**,**c**) polygon mesh model generation.

Mixed point cloud data can be configured into a 3D model using a modeling process. This process generates a polygon from the outline of the point cloud. After the polygon model of each member is generated, the final 3D model is completed through an editing process. However, this modeling method cannot reflect all the details of the acquired data. Construction projects usually include the installation of formwork and the casting of concrete, which may cause errors or bent surfaces that were not originally planned. Manual 3D modeling sets the surfaces of each member and allocates heights in the form of straight lines; accordingly, detailed errors such as a small slope or bends on a target surface cannot be modeled. Nevertheless, such errors can be detected by comparison with the actual plan. Figure 8 illustrates a representative process of 3D modeling for a completely aligned point cloud.

**Figure 8.** Three-dimensional modeling process for point cloud.

## *5.3. Determination of Errors in the Created 3D Model*

The acquisition of accurate data is the most essential part of reverse engineering using the progress data acquired from a construction site. In the process proposed in this study, data acquisition is based on the 3D shapes of buildings; accordingly, errors must be identified for the 3D shape of a target building. This study therefore verified the shape of the created 3D model by comparing its volume with the actual data, using the amount of concrete poured during construction as the actual data. Table 6 presents the locations, dates, and volumes (m<sup>3</sup>) of concrete poured for Building A. Concrete was poured six times, and the total volume was 522 m<sup>3</sup>.



In the case of Building A, the initial data acquired by LIDAR were limited to the above-ground part; backfilled parts, such as the sub-slab concrete and foundation, were excluded. In other words, the 3D model was generated from the point cloud data of the above-ground part only. Accordingly, the volume of poured concrete was compared against 468 m<sup>3</sup>, which included the PIT, 1F, 2F, protective concrete, and the rooftop.

As with other commercial software programs, the software used here automatically creates a 3D model once a point cloud is imported, enabling the length, area, and volume of each object to be determined. The volume of the 3D model of Building A was measured to be 479 m<sup>3</sup>. When the actual data were compared with the model volume, the difference was 11 m<sup>3</sup>, corresponding to less than 3% of the actual value. Thus, the 3D model showed relatively little error against the volume based on the actual data, demonstrating high accuracy.
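This volume check reduces to a one-line calculation; the sketch below simply restates the figures given in the text:

```python
# Volume check from the text: model volume vs. above-ground poured concrete.
poured_above_ground_m3 = 468.0   # PIT, 1F, 2F, protective concrete, rooftop
model_volume_m3 = 479.0          # measured on the point cloud 3D model

diff = abs(model_volume_m3 - poured_above_ground_m3)
print(f"difference: {diff:.0f} m^3 ({diff / poured_above_ground_m3:.1%})")
# -> difference: 11 m^3 (2.4%), consistent with the <3% figure in the text.
```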

## *5.4. Visualization of Construction Progress*

In order to track the progress of a project, the current status should be compared with the planned status. This study examined an overlap-based method of comparing the BIM model, which provides the as-planned data of a project, with the point cloud-based 3D model, which shows the as-built data. For the comparison, the point cloud-based 3D model needs to be imported into the BIM model. However, the two types of 3D models were implemented on different software bases, so file conversion is required to import the data. Figure 9 shows the process involved in comparing the two models.

**Figure 9.** Comparison of the BIM model and the point cloud-based model.

### **6. Conclusions**

This study proposed methods that can be used to track the progress of construction projects, and each of the proposed methods was verified. With respect to data acquisition, the drone- and LIDAR-based point cloud data acquisition methods were examined, and the accuracy of the data was verified with respect to their application to actual construction projects. LIDAR-based point cloud data had errors of roughly 11–19 mm, indicating a high accuracy level, whereas the drone-based data showed considerably lower accuracy. Because the progress data are based on the 3D shapes of buildings, errors in the 3D shapes were also examined. In the case of Building A, the 3D model based on point cloud data differed from the actual data by 11 m<sup>3</sup>, a difference of less than 3%, thereby demonstrating a low error rate.

In order to track the progress of a project, the current status should be compared with the planned status. The proposed overlapping method for the BIM model and the point cloud-based 3D model enabled the actual progress to be visualized and compared with the corresponding plan. It is therefore expected to allow project managers to track project progress more easily and to identify the precise status when progress has not proceeded as planned. This offers the advantage that progress management can be carried out through the establishment of future construction plans and the review of schedules. Various reports and related data based on visualized three-dimensional models are also expected to be of great help to project participants and stakeholders. All additional accumulated data could likewise serve as a basis for the maintenance phase after the end of the project or for similar projects in the future.

Based on the results obtained, the data acquisition method proposed in this study appears to be very efficient and can enable project managers to assess progress and comprehensively manage projects. In particular, as decisions can be made quickly based on rapid information delivery, workers' errors and the accompanying need for reconstruction can be prevented, leading to reductions in time and cost overruns. However, this study showed that errors and omissions in the alignment of point cloud data caused poor-quality data alignment. The representative causes were the reflectivity of the laser on object surfaces, the scanning distance, and the atmospheric environment. The path of the laser was also problematic. If it is possible to omit a specific section, or to use an independent section that does not need to be aligned with other sections, the problem may be trivial; however, if the section is an essential one that interfaces with other sections, the problem must be resolved.

**Author Contributions:** In this paper, S.K. (Seungho Kim) collected the data and wrote the paper. S.K. (Sangyong Kim) analyzed the data and conceived the methodology. D.-E.L. developed the ideas and designed the research framework. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2018R1A5A1025137).

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Simple and Sustainable Prediction Method of Liquefaction-Induced Settlement at Pohang Using an Artificial Neural Network**

## **Sung-Sik Park 1, Peter D. Ogunjinmi 1, Seung-Wook Woo <sup>1</sup> and Dong-Eun Lee 2,\***


Received: 8 April 2020; Accepted: 9 May 2020; Published: 13 May 2020

**Abstract:** Conventionally, liquefaction-induced settlements have been predicted through numerical or analytical methods. In this study, a machine learning approach for predicting the liquefaction-induced settlement at Pohang was investigated. In particular, we examined the potential of an artificial neural network (ANN) algorithm to predict the earthquake-induced settlement at Pohang on the basis of standard penetration test (SPT) data. The performance of two ANN models for settlement prediction was studied and compared in terms of the R<sup>2</sup> correlation. Model 1 (input parameters: unit weight, corrected SPT blow count, and cyclic stress ratio (CSR)) showed higher prediction accuracy than model 2 (input parameters: depth of the soil layer, corrected SPT blow count, and the CSR), and the difference in the *R*<sup>2</sup> correlation between the models was about 0.12. Subsequently, an optimal ANN model was used to develop a simple predictive model equation, which was implemented using a matrix formulation. Finally, the liquefaction-induced settlement chart based on the predictive model equation was proposed, and the applicability of the chart was verified by comparing it with the interferometric synthetic aperture radar (InSAR) image.

**Keywords:** settlement; artificial neural network; liquefaction

### **1. Introduction**

The Pohang earthquake (*M*<sup>w</sup> = 5.4) that struck the Heunghae Basin, around Pohang City, on 15 November 2017 had a damaging effect, leading to liquefaction and lateral spreading. Since the event, several attempts have been made to study the post-earthquake damage [1–5]. However, little attention has been paid to the settlement resulting from the liquefaction. This study sought to predict the liquefaction-induced settlement at Pohang by applying a machine learning algorithm to standard penetration test (SPT) data, and proposes a liquefaction settlement chart based on the results. Before a structure is constructed on the ground, the design is performed based on ground investigation results, and many sites, including Pohang, hold a large amount of SPT data; the SPT is a common method of obtaining such ground investigation data.

Assessing liquefaction-induced settlements is a major challenge in geotechnical earthquake engineering, since a variety of phenomena such as re-sedimentation or reconsolidation (volumetric strain) of the liquefied soil, ground loss due to venting of liquefied soil (i.e., sand boils or ejecta), lateral spreading under zero volume change, soil-structure interaction ratcheting, and bearing capacity failure are associated with them [6]. For numerical analysis, earthquake-induced liquefaction in the free field can be interpreted as a 1D phenomenon occurring along a vertical soil column, in which earthquake-induced cyclic shear and compressive forces increase the pore pressure and thereby cause a reduction in the transient stiffness and strength of the soil. After liquefaction, reconsolidation occurs in the soil owing to the dissipation of the excess pore pressure (Δ*u*) by means of water flow, and it results in the vertical settlement of the ground surface [7].

Tang et al. [8] classified the significant parameters controlling seismic soil liquefaction into seismic parameters, site conditions, and soil parameters. Out of 22 influence factors, they identified 12 as being significant, and they were the magnitude, epicentral distance, duration, fines content, particle size, grain composition, relative density, drainage condition, degree of consolidation, thickness of the sand layer, depth of the sand layer, and groundwater table. Over the years, researchers have considered some of these significant influence factors for predicting earthquake-induced liquefaction and its effects through machine learning techniques [9,10].

Therefore, simple artificial neural network (ANN) models were adopted to predict liquefaction-induced settlement on the basis of the SPT database from the Korea Geotechnical Information DB system [11] and the Pohang earthquake. In the following sections, the research methodology and findings are presented.

## **2. Motivation and Study Objective**

Liquefaction-induced settlement is often calculated by considering numerous parameters and following several complex analytical and numerical procedures. However, obtaining such parameters in the field may not be practicable in most cases, as some of the required data may not be available. Hence, there is a need for an alternative, simple settlement prediction procedure that requires only a few parameters readily obtained from a field observations database. The objective of this study is to fill this gap by presenting a tool that uses previously obtained SPT data to predict the liquefaction-induced settlement that may occur during an earthquake in the field.

## **3. Methodology**

The database used in this study was collected from the Korea Geotechnical Information DB system [11] and the UBCSAND constitutive effective stress model [12]. Through a 1D column analysis, the UBCSAND model estimates the shear-induced deformation from SPT data and earthquake information. SPT data were obtained for five different borehole sites near the epicenter of the earthquake at Pohang. The summary statistics of the data set are presented in Table 1 and the details of the database are in Table A1.



The data set comprised 100 data points (20 data for each borehole) along with the corresponding settlement values. The locations of the boreholes considered in the study are shown in Figure 1.

**Figure 1.** Locations of boreholes in Pohang City considered in this study.

## *3.1. Data Division and Preprocessing*

The settlement prediction process comprises training and testing. Seventy percent of the entire data set was used for training, and the remaining 30% was used for testing. The data were preprocessed before training the algorithm, to ensure quick convergence and minimize the generalization error. This involved scaling the input variables to the range −1 to +1 by using Equation (1).

$$x_n = \left(\frac{b-a}{B-A} \times x_{\text{unscaled}}\right) + \left(a - A \times \frac{b-a}{B-A}\right) \tag{1}$$

where *A* and *B* are the minimum and maximum values of the unscaled data set, respectively, and *a* and *b* are the minimum and maximum values of the scaled data set, respectively.
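A minimal implementation of Equation (1) in Python might look as follows; the data range in the example is assumed for illustration, since the actual *A* and *B* values come from Table 1.

```python
# Minimal implementation of Equation (1): linear rescaling of a raw value
# from the data range [A, B] to the target range [a, b]; the paper uses
# (a, b) = (-1, 1).
def scale(x_unscaled, A, B, a=-1.0, b=1.0):
    """Map x from [A, B] onto [a, b]."""
    ratio = (b - a) / (B - A)
    return ratio * x_unscaled + (a - A * ratio)

# Example with an assumed data range: a unit weight of 18 kN/m3 within an
# assumed range [17, 22] maps to -0.6.
print(scale(18.0, A=17.0, B=22.0))   # -0.6
```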

## *3.2. Overview of the Artificial Neural Network Model*

## 3.2.1. Basic Concept of ANN

Artificial neural networks (ANNs) are complex mathematical models inspired by biological neurons, and they emulate biological neural networks. They are widely used for nonlinear system modeling and system identification [13]. A typical ANN consists of an input layer, one or more hidden layers, and an output layer. The numbers of layers and neurons in each layer depend on the complexity of the problem under consideration.

#### 3.2.2. Mathematical Representation of ANN Architecture

A neural network in its simplest form can be used to model the relationship between data points *x* and the corresponding real-valued targets *y*. Mathematically, if our inputs (*x*) comprise *n* features, we can choose weights (*w*) and bias (*b*) such that our prediction (*y*') is given by Equation (2).

$$y' = w_1 x_1 + \dots + w_n x_n + b \tag{2}$$

For easy computation, all the features can be collected into a vector **x** and all weights into a vector **w** to express our model compactly using the dot product notation—Equation (3).

$$y' = \mathbf{w}^{\top} \mathbf{x} + b \tag{3}$$

ANNs can learn by example (supervised learning). In an ANN, a set of input variables is multiplied by adjustable connection weights to produce the output. When input data are fed to an ANN, it adjusts through a feed-forward back-propagation technique to determine the rules governing the relationship between the variables concerned. Figure 2 shows a graphical depiction of a typical feedforward ANN architecture. A neural network is trained using error back-propagation.

**Figure 2.** Feedforward neural network architecture.

Two ANN models were considered in this study, and they are shown in Figure 3. Both models had three input variables. The input variables of model 1 were unit weight (γ), corrected SPT blow count (*N*1(60)), and cyclic stress ratio (CSR), while those of model 2 were depth of the soil layer (*d*), *N*1(60), and CSR.

**Figure 3.** Architecture of the artificial neural network (ANN); (**a**) model 1 and (**b**) model 2.

The choice of input parameters was based on domain knowledge. They were chosen by considering how the seismic and soil properties influence liquefaction-induced settlement. The soil properties considered were γ, *N*1(60), and *d*, while the CSR represented the seismic property. The CSR quantifies the demand imposed on the critical soil layer as a result of the seismic ground motion.

### **4. Results and Discussion**

Table 2 summarizes the performance statistics of the two ANN models used for settlement prediction. For the test data set, models 1 and 2 had R<sup>2</sup> (coefficient of determination) values of 0.8601 and 0.7352 and MAE (mean of absolute errors) values of 0.1941 and 0.3136, respectively.


**Table 2.** Performance statistics of the ANN models.
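For reference, the two reported metrics can be computed as follows; the example arrays are hypothetical, not the paper's test data.

```python
# Sketch of the two reported metrics: R^2 and MAE between actual and
# predicted settlements, assuming numpy arrays for the test set.
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mean_absolute_error(y_true, y_pred):
    """MAE: mean of the absolute prediction errors."""
    return np.mean(np.abs(y_true - y_pred))

# Hypothetical example values (not from the paper's data set):
y_true = np.array([1.0, 2.5, 0.8, 3.1])
y_pred = np.array([1.1, 2.3, 0.9, 3.4])
print(r2_score(y_true, y_pred), mean_absolute_error(y_true, y_pred))
```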

After the models were trained, the root mean square error (RMSE) and loss were plotted to check the models' performance for the training and test data sets, as shown in Figures 4 and 5. The *x*-axis represents the number of epochs (i.e., the number of times the model ran through the entire training/test data set and updated the weights).

**Figure 4.** Plot of the (**a**) root mean square error (RMSE) and (**b**) loss for ANN model 1.

**Figure 5.** Plot of the (**a**) RMSE and (**b**) loss for ANN model 2.

Figures 6 and 7 show the performance of the ANN models in terms of R<sup>2</sup> for the test data set.

**Figure 6.** Scatter plot showing the performance of ANN model 1 for the test data set.

**Figure 7.** Scatter plot showing the performance of ANN model 2 for the test data set.

A comparison of models 1 and 2 in terms of the prediction accuracy shows that the prediction accuracy of the former is higher. The difference in the *R*<sup>2</sup> correlation between the two models is about 0.12.

From the results shown in Figures 6 and 7, it can be concluded that there exists a strong correlation between the model predictions and the actual settlement in both cases considered.

In this study, ANN models composed of two or more hidden layers were also considered, and it was found that the difference in accuracy between such models and the model with a single hidden layer was not significant. Therefore, an ANN model with a single hidden layer was used.

## *4.1. ANN-Based Numerical Equation*

A simple equation was developed to predict the liquefaction-induced settlement. The optimal ANN model structure used for the purpose is shown in Figure 8, and its associated weights with biases are presented in Table 3.


**Figure 8.** Structure of the optimal ANN model.

**Table 3.** Weight matrix and bias vector for the ANN model.

Note: Matrices W1 (8 × 3), B1 (8 × 1), W2 (1 × 8), and B2 (1 × 1) were used in the matrix-form settlement calculation of Equations (4) and (5).

The optimal-ANN-model-based numerical equation for settlement prediction can be expressed as Equation (4).

$$\mathbf{S} = T_{12} = B_k + \sum_{j=4}^{11} \left\{ W_{kj} \times f_{\text{sig}} \left[ B_j + \sum_{i=1}^{3} \left( W_{ji} X_i \right) \right] \right\} \tag{4}$$

where *T*12 is the output variable, namely, the predicted settlement value (*S*), *B*k is the bias value at the output layer, *W*kj is the connection weight between the *j*th node in the hidden layer and the *k*th node in the output layer, *B*j is the bias value of the *j*th hidden node, *W*ji is the connection weight between the *i*th input node and the *j*th hidden node, *X*i is the *i*th input variable, and *f*sig is the sigmoid transfer function given by Equation (5).

$$f_{\text{sig}} = \sigma(z) = \frac{1}{1 + e^{-z}} \tag{5}$$

For the simplification of the calculation process, the weights and biases were arranged in a matrix form.

#### *4.2. Example of Settlement Calculation Using the ANN Model*

For γ = 18 kN/m<sup>3</sup>, *N*1(60) = 13, and CSR = 0.34, the input vector **X** is

$$\mathbf{X} = \begin{bmatrix} 18 \\ 13 \\ 0.34 \end{bmatrix}$$

The normalized input vector (Xn) is calculated from Equation (1) by using the A and B values in Table 1:

$$\mathbf{X}_n = \begin{bmatrix} -0.200 \\ 0.040 \\ 0.444 \end{bmatrix}$$

Note: (*a*, *b*) = (−1, 1)

The settlement (*S*) is calculated using the normalized input vector as follows:

$$\mathbf{W}_1 \times \mathbf{X}_n + \mathbf{B}_1 = \begin{bmatrix} -2.231 & 2.729 & -2.500 \\ -8.874 & -3.629 & -15.703 \\ -6.271 & -5.433 & -1.570 \\ -1.000 & 5.470 & -3.295 \\ 5.617 & 7.774 & 1.701 \\ -1.866 & -4.224 & -9.756 \\ -2.116 & 7.453 & -1.157 \\ 0.314 & -1.285 & -4.980 \end{bmatrix} \times \begin{bmatrix} -0.200 \\ 0.040 \\ 0.444 \end{bmatrix} + \mathbf{B}_1 = \begin{bmatrix} -10.187 \\ -11.529 \\ -4.995 \\ -9.640 \\ -7.374 \\ -5.058 \\ -7.282 \\ -9.049 \end{bmatrix}$$

where **B**1 is the bias vector from Table 3. Applying the sigmoid function elementwise gives

$$f_{\text{sig}}(\mathbf{W}_1 \times \mathbf{X}_n + \mathbf{B}_1) = \begin{bmatrix} 3.76 \times 10^{-5} \\ 9.84 \times 10^{-6} \\ 6.73 \times 10^{-3} \\ 6.51 \times 10^{-5} \\ 6.27 \times 10^{-4} \\ 6.32 \times 10^{-3} \\ 6.87 \times 10^{-4} \\ 1.17 \times 10^{-4} \end{bmatrix}$$

and the settlement follows as

$$\mathbf{S} = \mathbf{W}_2 \times f_{\text{sig}}(\mathbf{W}_1 \times \mathbf{X}_n + \mathbf{B}_1) + \mathbf{B}_2 = \begin{bmatrix} 0.579 & -1.853 & -1.058 & -1.591 & -0.423 & 1.964 & -0.852 & -0.320 \end{bmatrix} \times \begin{bmatrix} 3.76 \times 10^{-5} \\ 9.84 \times 10^{-6} \\ 6.73 \times 10^{-3} \\ 6.51 \times 10^{-5} \\ 6.27 \times 10^{-4} \\ 6.32 \times 10^{-3} \\ 6.87 \times 10^{-4} \\ 1.17 \times 10^{-4} \end{bmatrix} + \begin{bmatrix} 1.006 \end{bmatrix} = \begin{bmatrix} 1.010 \end{bmatrix}$$

The actual value of the settlement was 1 mm, and the value predicted using the ANN model was 1.010 mm.
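The worked example above can be reproduced with a few lines of matrix code. In the sketch below, **W**1, **W**2, and **B**2 are the values shown above, while **B**1 is a zero placeholder (the actual bias vector is in Table 3), so the printed value will only match 1.010 mm once the Table 3 biases are inserted.

```python
# Sketch of the matrix-form prediction of Equations (4) and (5):
# S = W2 . f_sig(W1 . Xn + B1) + B2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_settlement(x_n, W1, B1, W2, B2):
    """Settlement (mm) from a normalized 3x1 input vector (Equation (4))."""
    hidden = sigmoid(W1 @ x_n + B1)      # 8x1 hidden-layer activations
    return (W2 @ hidden + B2).item()     # scalar output S

W1 = np.array([[-2.231,  2.729,  -2.500],
               [-8.874, -3.629, -15.703],
               [-6.271, -5.433,  -1.570],
               [-1.000,  5.470,  -3.295],
               [ 5.617,  7.774,   1.701],
               [-1.866, -4.224,  -9.756],
               [-2.116,  7.453,  -1.157],
               [ 0.314, -1.285,  -4.980]])
B1 = np.zeros((8, 1))                    # placeholder; actual values in Table 3
W2 = np.array([[0.579, -1.853, -1.058, -1.591, -0.423, 1.964, -0.852, -0.320]])
B2 = np.array([[1.006]])

x_n = np.array([[-0.200], [0.040], [0.444]])   # normalized (18, 13, 0.34)
print(predict_settlement(x_n, W1, B1, W2, B2))
```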

#### *4.3. Sensitivity Analysis*

Sensitivity analysis was performed to determine the effect of the input parameters on the settlement prediction. The measure of variable importance was obtained using the permutation importance approach for random forests, described by Breiman [14]. This approach involves measuring the drop in the ANN model performance when a feature is unavailable.
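A minimal sketch of this permutation scheme is given below; `model` stands for any fitted estimator with a `predict` method, and scoring by *R*<sup>2</sup> is an assumption on our part rather than a detail stated in the paper.

```python
# Sketch of permutation importance: shuffle one feature column at a time and
# record the mean drop in R^2 relative to the unshuffled baseline.
import numpy as np

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in R^2 when each feature column of X is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = r2(y, model.predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target link
            drops.append(baseline - r2(y, model.predict(X_perm)))
        importances.append(float(np.mean(drops)))
    return importances                   # one score drop per input feature
```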

As shown in Figures 9 and 10, the unit weight had the strongest influence on the settlement prediction in the case of ANN model 1, while the depth of the soil layer had the strongest influence on the predicted settlement in the case of model 2. In both cases, *N*1(60) had a stronger influence than the CSR.

**Figure 9.** Relative importance of the input parameters of ANN model 1.

**Figure 10.** Relative importance of the input parameters of ANN model 2.

#### *4.4. Parametric Study and Extrapolation beyond the Training Data*

A parametric study was conducted to verify the validity and robustness of the optimal ANN model; it involved generating a synthetic data set within the range of the training data set to test the model. For a given unit weight of soil, the settlement was determined based on the unit thickness of each layer. As shown in Figure 11a, the predicted settlement generally increased with increasing CSR and decreased with increasing *N*1(60). However, it was necessary to expand the range of *N*1(60) and CSR covered by the parametric study, as some field data lay beyond this range. Therefore, this study proposed a simple settlement chart based on the parametric study, as shown in Figure 11b.

**Figure 11.** Variation of settlement with (N1)60 and CSR for γ = 18 kN/m3. (**a**) Settlement relationship between (N1)60 and CSR; (**b**) settlement chart based on the ANN method.

## *4.5. Application of Settlement Chart Based on the ANN Method*

The proposed settlement chart from the optimal ANN model was assessed using the SPT data obtained from three additional boreholes at the Pohang site. The locations of the boreholes and the measured settlement obtained from interferometric synthetic aperture radar (InSAR) imaging are shown in Figure 12.

**Figure 12.** A settlement map from interferometric synthetic aperture radar (InSAR) and a location of extra boreholes (BHs).

The InSAR procedure was recommended by the Remote Sensing Lab at Kangwon National University, Korea [15]. Following this procedure, the settlement was analyzed from satellite images of Pohang acquired between 4 and 16 November 2017, from Google Earth. These images were used to generate the settlement map in Figure 12 with the freely distributed Sentinel Application Platform (SNAP) program of the European Space Agency [16]. With an average unit weight of 18 kN/m<sup>3</sup>, *N*1(60) values were converted from the SPT blow counts (NSPT) of the boreholes [17]. The CSR can be calculated from Equation (6) [18].

$$\text{CSR} = \tau_{\text{av}} / \sigma'_{\text{vo}} = 0.65\,(a_{\max}/g)\,(\sigma_{\text{vo}}/\sigma'_{\text{vo}})\,\gamma_d \tag{6}$$

where *a*max is the peak acceleration at the ground surface from the earthquake (this study used the Pohang earthquake value of 0.2712 g); *g* is the acceleration of gravity; σvo and σ'vo are the total and effective vertical overburden stresses, respectively; and γd is the stress reduction coefficient.
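Equation (6) translates directly into a short function. In the sketch below, the overburden stresses are assumed example values, and γd is passed in directly rather than computed from any particular depth formula:

```python
# Minimal sketch of Equation (6). Stresses must be in consistent units
# (e.g., kPa); the stress reduction coefficient gamma_d is supplied by the
# caller, so any published depth-dependent expression can be substituted.
def cyclic_stress_ratio(a_max_over_g, sigma_vo, sigma_vo_eff, gamma_d):
    """CSR = 0.65 * (a_max/g) * (sigma_vo / sigma_vo') * gamma_d."""
    return 0.65 * a_max_over_g * (sigma_vo / sigma_vo_eff) * gamma_d

# Assumed overburden stresses at some depth; a_max/g = 0.2712 for Pohang.
print(cyclic_stress_ratio(0.2712, sigma_vo=90.0, sigma_vo_eff=55.0,
                          gamma_d=0.95))
```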

The total settlements calculated for additional boreholes 1, 2, and 3 using the optimal ANN model were 17.14, 19.77, and 13.88 mm, respectively, as shown in Table 4. These values are close to those measured by InSAR imaging. Unlike the numerical analysis approach, the proposed (N1)60-CSR-settlement chart from the optimal ANN model was shown to estimate settlement values with minimal input parameters. For an earthquake of similar impact and magnitude, this simple ANN model can be deployed as a handy tool for obtaining liquefaction-induced settlement in the field.


**Table 4.** Predicted settlement due to Pohang earthquake using the proposed settlement chart.

#### **5. Conclusions**

In this study, the potential of an ANN to predict the liquefaction-induced settlement at Pohang was examined. Two ANN models were trained using a back-propagation algorithm. Both models had three input variables. The input variables of model 1 were unit weight, corrected SPT blow count (*N*1(60)), and CSR, while those of model 2 were depth of the soil layer, *N*1(60), and CSR. The output of the models was the settlement (*S*). After the training and testing of the models, it was evident that model 1 had higher prediction accuracy, and the difference in the *R*<sup>2</sup> correlation between the two models was about 0.12. Subsequently, the weights and biases of an optimal ANN model were used to develop a simple predictive model equation, which was implemented using a matrix formulation.

Sensitivity analysis performed using the permutation importance algorithm indicated that the corrected SPT blow count had a stronger influence than the CSR on the predicted settlement. Furthermore, a parametric study showed that for a given unit weight of soil, the settlement decreased with an increase in *N*1(60).

Finally, the simplified relationship between (N1)60-CSR-Settlement was proposed using the optimal ANN model, and the cumulative settlement was predicted by applying the proposed relationship to additional boreholes and compared with the InSAR results. The cumulative settlement had a similar range as the InSAR displacement map. Thus, the simplified relationship of this study can be deployed as a handy tool to obtain liquefaction-induced settlement in the field.

**Author Contributions:** Conceptualization, S.-S.P. and P.D.O.; methodology, S.-S.P. and P.D.O.; software, P.D.O. and S.-W.W.; validation, S.-S.P., P.D.O., and S.-W.W.; formal analysis, P.D.O., S.-S.P., and D.-E.L.; investigation, P.D.O. and S.-S.P.; resources, S.-S.P. and P.D.O.; data curation, S.-S.P. and P.D.O.; writing—original draft preparation, S.-S.P. and P.D.O.; writing—review and editing, S.-S.P. and P.D.O.; visualization, P.D.O. and S.-W.W.; supervision, S.-S.P.; project administration, S.-S.P. and D.-E.L.; funding acquisition, S.-S.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2018R1A5A1025137).

**Conflicts of Interest:** The authors declare no conflicts of interest.

## **Appendix A**




**Table A1.** Details of the database used in this study.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Analysis of Deformation Characteristics of Foundation-Pit Excavation and Circular Wall**

## **Xuhe Gao 1,\*, Wei-ping Tian <sup>1</sup> and Zhipei Zhang <sup>2</sup>**


Received: 7 March 2020; Accepted: 10 April 2020; Published: 14 April 2020

**Abstract:** The surrounding ground settlement and displacement control of an underground diaphragm wall during the excavation of a foundation pit are the main challenges for engineering safety. These factors are also an obstacle to the controllable and sustainable development of foundation-pit projects. In this study, monitoring data were analyzed to identify the deformation law and other characteristics of the support structure. A three-dimensional numerical simulation of the foundation-pit excavation process was performed in Midas/GTS NX. To overcome the theoretical shortcomings of parameter selection for finite-element simulation, a key data self-verification method was used. Results showed that the settlement of the surface surrounding the circular underground continuous wall was mainly affected by the depth of the foundation-pit excavation. In addition, wall deformation for each working condition showed linearity with clear staged characteristics. In particular, the deformation curve had obvious inflection points, most of which were located deeper than 2/3 of the overall excavation depth. The characteristics of the cantilever pile were not obvious in Working Conditions 3–9, but the distribution of the wall body offset in a D-shaped curve was evident. The deviation between the monitored and simulated values of the maximal wall offset was only 4.31%. The appropriate physical and mechanical parameters for key data self-verification were proposed. The concept of the circular-wall offset inflection point was proposed to determine the distribution of inflection-point positions and offset curves. The method provides new opportunities for the safety control and sustainable research of foundation-pit excavations.

**Keywords:** circular foundation pit; construction monitoring; numerical simulation; underground continuous wall

## **1. Introduction**

Underground continuous walls have been widely applied as foundation-pit supports due to their high stability, rigidity, and impermeability, in addition to their predictable deformation characteristics. However, for circular anchor foundation pits with underground continuous walls as the predominant retaining structure, monitoring and predicting wall displacement and surface settlement around the foundation pit remain challenges. As such, these factors need to promptly and consistently be monitored, and monitoring data should be accordingly analyzed. Challenges have also inspired scholars to explore new research methods, promoting the application of computer technology in the construction of foundation pits.

Studies on this topic have been conducted. Bolton and Powrie [1] carried out various laboratory tests to study the deformation characteristics of an underground continuous wall under different soil conditions and foundation-pit parameters. They also calculated the deformation and failure conditions of the foundation pit. However, they did not discuss the validity of the parameters used in the calculation. Poh et al. [2] collated the monitoring data of two foundation pits, and used real-world data to calculate the bending moment generated by the underground continuous wall. Results showed that the bending moment of the underground continuous wall was largely generated due to the cracking of the wall, and that the lateral displacement of the wall was not affected by this factor. However, the study did not provide any description of the monitoring-data collection, nor did it further demonstrate the factors and characteristics of the lateral displacement of the wall. Bose and Som [3] created a more accurate finite-element-analysis program based on the Cambridge model to address deficiencies of the existing model, which is mainly used for calculations and analysis of the internal supports of a foundation pit. However, that study also lacked a demonstration of the validity of the model parameters. Whittle et al. [4] innovatively integrated two-dimensional seepage into the deep-foundation-pit calculation model, and examined soil stress in the deep-foundation-pit engineering of a postal building in Boston on the basis of the finite-element method. However, the study did not discuss the displacement and surrounding settlement of the support structure during excavation of the foundation pit. To better analyze foundation-pit support systems, Kishnani and Borja [5] conducted detailed analysis of the soil structure and seepage into the foundation pit, and analyzed the impact of these two factors on the support system. They determined that the seepage affected earth pressure behind the wall and caused the surrounding ground to settle. The effect of seepage on wall displacement, however, was not discussed. After summarizing multiple theories and practical experiences, Alejano et al. [6] conducted a related investigation on the factors affecting the displacement of typical structural types (filling and excavation). Soil traits were regarded as ideally elastoplastic, and it was noted that the displacement of the soil was not the only factor; the physical properties of the soil and the wall, as well as the location of the erected supports, also contributed. That study did not involve ground settlement around the foundation pit, and the quantitative analysis of wall displacement was insufficient. Faheem et al. [7] focused on the poor stability of foundation pits in areas with soft soil from a two- and a three-dimensional perspective. Their study particularly focused on the stability of the bottom of the pit, and presented a detailed simulation using the finite-element method. However, there was no analysis of ground settlement around the foundation pit and the deformation of the supporting structure, and the validity of the parameters in the simulation process was not verified.

Liu and Ding [8] used the finite-element method to study the stiffness coefficient of the Goodman unit, which was determined to affect surface settlement outside the foundation pit and the displacement of the underground continuous wall. That study also failed to verify the validity of the finite-element calculation parameters. Chen et al. [9] investigated the deep foundation pit of a steel plant in Shanghai on the basis of collected monitoring data during foundation-pit construction. They analyzed the deformation and internal structural forces of the circular underground continuous wall supporting the foundation pit that was subjected to the pressure of confined groundwater. The study focused on analysis of existing monitoring data, and did not use finite-element analysis to further demonstrate the deformation characteristics of the supporting structure. Xu et al. [10] collected monitoring data from foundation pits in Shanghai that used underground continuous wall supports, and calculated the deformation law of the underground continuous wall to study the influence of various factors on these laws. Their study focused on regional data collected by statistical analysis, and had limited applicability to early warnings on surrounding surface settlement and wall displacement in special geological environments. Wang and Hu [11] studied the double-layer elliptical supporting structure in the foundation pit of the China Petroleum Building, and aimed to reduce the number of layers supported by the internal structure during the excavation of the foundation pit. The structure was analyzed by force-deformation calculations. It was concluded that a T- or I-shaped underground continuous wall could be used instead of the elliptical wall shape, which could reduce the number of required internal supports. That study lacked monitoring data or finite-element simulation to validate the results, and there was no analysis of surface settlement and wall displacement around the foundation pit. Hu et al. [12] used the foundation pit of a subway station as a research subject, and monitored the variation of the horizontal displacement of the underground continuous wall at the excavation depth during the construction of the foundation pit. A three-dimensional finite-element model was established to simulate the foundation-pit excavation of the subway station, and the calculated deformation characteristics were compared with the monitoring results. Results showed that the difference between the simulated maximal horizontal displacement of the underground continuous wall and the measured value was small, and that the trend in displacement was comparable. However, the study also failed to verify the validity of the finite-element calculation parameters, and did not analyze ground settlement around the foundation pit. Zheng et al. [13] used the finite-difference software FLAC3D to numerically simulate the horizontal deformation and surface settlement of a foundation-pit-excavation support structure and compared these with the measured values. Results showed that the maximal horizontal displacement of the underground continuous wall appeared at the top of the wall, and the horizontal displacement curve exhibited a "half-cup" composite shape with multiple inflection points. The settlement curve of the ground surface beyond the wall was an asymmetrical groove-type curve. Similarly, that study also failed to verify the validity of the finite-element calculation parameters.

In summary, the existing literature has conducted a large number of theoretical calculations and finite-element analysis of underground continuous walls (including self-programming and commercial software). However, there is almost no argument concerning the method of obtaining parameters. This shows that the method of parameter selection needs further study. If only research results are pursued, and access to key parameters is ignored, such research is questioned by other disciplines, and the sustainability of that work is also threatened. In this study, the anchored circular underground continuous wall of the Humen Second Bridge West foundation-pit project was monitored and simulated. Monitoring data were analyzed to identify the deformation law and other characteristics of the support structure. Three-dimensional numerical simulation of the foundation-pit excavation was conducted in Midas/GTS NX. To overcome the theoretical shortcomings of parameter selection for finite-element simulation, the key data self-verification method was used, and a layer-by-layer algorithm was employed to determine more accurate simulation parameters. The deviation rate was used to quantify the difference between simulated results and measured values. The appropriate physical and mechanical parameters for key data self-verification were proposed and utilized to compensate for the shortcomings of the on-site monitoring data. The concept of the "circular-wall offset inflection point" was proposed to determine the distribution of inflection-point positions and offset curves. The method provides new opportunities for the safety control and sustainable research of foundation-pit excavations.

## **2. Materials and Methods**

## *2.1. Project Overview*

The rock and soil layers in the foundation pit were silt, muddy soil, fine sand, medium sand, coarse sand, strongly weathered mudstone, moderately weathered mudstone, and slightly weathered mudstone (Figures 1 and 2). According to these geological conditions and the design requirements of the anchor body, the underground continuous wall adopted a circular structure with an outer diameter of 82.0 m and a wall thickness of 1.5 m. The elevation of the top surface of the pit was 1.00 m, and the elevation of the bottom of the pit was −35.00 to −43.00 m. The bottom of the pit was embedded in mud, siltstone, and moderately weathered mudstone strata. The underground continuous wall was divided into two sections (Sections 1 and 2). Section 1 was three-milled, with a side groove length of 2.8 m, a middle slot length of 1.47 m, and a slot length of 7.07 m; Section 2 had a slot length of 2.8 m. The length of the Sections 1 and 2 groove sections was 0.25 m on the axis of the ground wall, and Sections 1 and 2 each had 27 slots. Thus, the trough section was divided into 54 sections (Figure 2). The designed maximal trough depth was 46.0 m. On both sides of the underground continuous wall, 50-cm-diameter cement powder-spray piles were used to reinforce the silt soil, with a spacing of 40 cm and a reinforcement depth of 15.0 m. After construction of the underground continuous wall was completed, the bottom of the wall was grouted.

After construction of the underground continuous wall had been completed, the soil was excavated by the reverse method, and the lining of the pit was layered and constructed. The construction period of each layer was controlled by the excavation of the soil. The excavation depth of the soil was 27 m, and the lining and soil-stratification height were controlled within 3 m. The lining of the pit was constructed from top to bottom. The top and bottom plates were 6 m thick with a concrete-filled core in the middle.

**Figure 1.** Cross-sectional view of geological section along the bridge.

**Figure 2.** Expanded view of slots.

## *2.2. Surface-Deformation Monitoring around Underground Continuous Wall*

Because of the need for surface-settlement monitoring during construction, groups of sensors were arranged to the east, south, west, north, southeast, northeast, southwest, and northwest of the foundation pit. Typical settlement monitoring started from the outside of the foundation pit, with 10 monitoring points arranged at equal intervals of 5 m and numbered D1-i to D8-i (with i = 1–10). Owing to the monitoring points actually available on site, valid data were obtained only from the first five points. A total of eight settlement-monitoring sections and 80 surface-settlement monitoring points were set. If the points encountered obstacles, they could be moved in parallel, as shown in Figure 3.

**Figure 3.** Layout of surface-settlement monitoring sites.

## *2.3. Deep-Lateral-Deformation Monitoring of Underground Continuous Wall*

Deep-lateral-deformation monitoring of the underground continuous wall is a key component of monitoring and measuring the deformation of the foundation-pit support, as it can directly reflect the safety and stability of the foundation pit and its supporting structures (Figure 4). To ensure that the inclinometer fittings functioned effectively under the pressure of the concrete, backup inclinometer holes (P1', P3', P5', and P7') were arranged at the spare hole positions in the groove sections where the four inclinometer pipes P1, P3, P5, and P7 were located. There were a total of 12 inclinometer tubes.

**Figure 4.** Deep-deformation monitoring site layout for underground continuous wall.

#### *2.4. Monitoring-Data Analysis*

The underground continuous wall was divided into 54 slot segments for analysis, as shown in Figure 2. To facilitate the statistical data processing, surrounding-settlement and wall-offset data corresponding to slot segments 2, 15, 28, and 42 were selected. In working-condition simulations, these four slot segments were defined to correspond to the calculation results of the four diagonal directions of the model.

## *2.5. Mohr–Coulomb Strength Criterion*

The Mohr–Coulomb strength criterion states that shear failure is the most fundamental cause of soil failure. The shear strength at any point in the soil is related only to the normal stress σ*n* on the plane, such that

$$\tau_f = f(\sigma_n). \tag{1}$$

This function describes a curve in τ*f*-σ coordinates, known as the Mohr strength line. The Mohr envelope can be approximated as a linear relationship, known as the Coulomb equation, as follows:

$$\tau_f = c + \sigma_n \tan \phi, \tag{2}$$

where τ*<sup>f</sup>* is the shear strength at any point in the soil, and σ*<sup>n</sup>* is the normal stress on the calculated plane.

Equation (3) is the stress condition at any point in the soil under the limit equilibrium state (stress is positive with compression). This equation is known as the Mohr–Coulomb strength criterion. The radius of the stress Mohr circle is

$$R = \left(\frac{c}{\tan\phi} + \frac{\sigma\_{11} + \sigma\_{22}}{2}\right) \sin\phi = c\cos\phi + \frac{\sigma\_{11} + \sigma\_{22}}{2}\sin\phi,\tag{3}$$

where σ<sup>11</sup> and σ<sup>22</sup> are the maximal and minimal principal stresses, respectively, when the plane-soil mass undergoes shear failure.

When the Mohr envelope is tangential to the most stressed Mohr circle in the material, the soil undergoes shear failure. In other words, the magnitude of σ<sup>22</sup> has no effect on shear strength. The Mohr–Coulomb strength criterion is an irregular hexagonal cone in the principal stress space. The projection of the hexagonal cone on the π plane is an irregular hexagon.
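As an illustration (not part of the original study), Equations (2) and (3) can be turned into a short failure check: the soil fails when the radius of the actual stress Mohr circle reaches the envelope value given by Equation (3). The sketch below assumes compression-positive principal stresses in consistent units.

```python
# Sketch of the Mohr-Coulomb check of Equations (2) and (3): failure occurs
# when the Mohr circle of (sigma_11, sigma_22) touches the strength envelope.
import math

def mohr_coulomb_fails(sigma_11, sigma_22, c, phi_deg):
    """True if the stress state reaches the Mohr-Coulomb envelope.

    sigma_11, sigma_22: major/minor principal stresses (compression positive)
    c: cohesion; phi_deg: internal friction angle in degrees
    """
    phi = math.radians(phi_deg)
    radius = (sigma_11 - sigma_22) / 2.0                       # actual circle radius
    limit = c * math.cos(phi) + (sigma_11 + sigma_22) / 2.0 * math.sin(phi)
    return radius >= limit                                     # Equation (3)

# Example: c = 20 kPa, phi = 25 deg, sigma_11 = 200 kPa, sigma_22 = 60 kPa.
# Radius 70 kPa < envelope value ~73 kPa, so no shear failure is indicated.
print(mohr_coulomb_fails(200.0, 60.0, c=20.0, phi_deg=25.0))
```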

The Mohr–Coulomb criterion is widely used, as the constitutive model can accurately reflect the unequal tensile and compressive characteristics of geotechnical materials. However, numerical calculations for this model are prone to nonconvergence due to the discontinuous corners of the hexagonal cone.

## *2.6. Establishing Model of Foundation-Pit Excavation*

Since the classical yield criterion ignores the frictional component of soil shear strength, such criteria can be used for the undrained analysis of saturated soils, such that ϕ = 0. The Mohr–Coulomb criterion surpasses classical criteria and considers the frictional component of the soil, which is more suitable for most scientific research and engineering practice. It is also more widely used in numerical simulation. Finite-element software Midas/GTS NX was used for numerical simulation analysis on the basis of the Mohr–Coulomb constitutive model.

The excavation project described in this study included a two-part supporting structure consisting of the underground continuous wall and the lining. The lower end of the underground continuous wall was embedded in the middle weathered-rock layer, with an embedded depth range of 10–20 m. In numerical simulation, it is necessary to simplify the foundation-pit excavation support model and the construction steps to ensure computational capacity and accuracy. The underground continuous wall retaining structure was constructed before the foundation pit was excavated. The excavation proceeded as a single flat layer at a time, with a layer of lining added after the excavation of each layer was completed. This process continued until all construction steps were performed.

The soil layers are described in Section 2.1; each layer was distinguished by a natural planar interface. According to the construction conditions and the topography of the project, the top surface of the calculation model was taken as the ground and defined as a free surface. The four lateral sides of the model were constrained against horizontal displacement, and the bottom plane of the model was constrained against vertical displacement. The initial self-weight stress field was the main load condition of the model, and the calculation model used the Mohr–Coulomb elastoplastic strength criterion. In addition, the river levee was approximately 50 m away from the foundation pit; in the numerical calculations, this levee was considered according to the most unfavorable situation for the excavation project.

The size of the design-calculation model was selected as 300 m long, 300 m wide, and 100 m deep. Deviations in the slot-segment joints were caused by errors on the construction site, and neighboring slot sections were connected by overlaps; the thickness of the simplified underground continuous wall was therefore taken as 1.3 m. The model was divided into various sections (Figures 5 and 6).

**Figure 5.** Pit-model grid diagram.

**Figure 6.** Support-structure grid diagram.

The model had a total of 15,840 elements and 17,680 nodes. The first layer in the model was a silt layer with a thickness of 2 m; the second layer was a silty-clay layer with a thickness of 5 m; the third, fourth, and fifth layers (silt, medium sand, and coarse sand) each had a thickness of 6 m; and the sixth, seventh, and eighth layers were strongly weathered mudstone, moderately weathered rock, and slightly weathered rock, with thicknesses of 15, 30, and 30 m, respectively.

The thickness of the underground continuous wall was taken as 1.3 m. The thickness of the inner lining was 1.5 m in the depth range of 0–6 m and 2 m below 6 m depth.

## 2.6.1. Selection of Physical and Mechanical Parameters

In the finite-element model, the parameters of the concrete material were assigned according to the defined specifications. The mechanical parameters of the rock layer were determined by geotechnical testing and the key data self-validation method. The required physical and mechanical parameters to calculate the constitutive equations in the model are shown in Tables 1 and 2.





Note: Elastic modulus: ratio of stress to the corresponding strain of an ideal material under small deformation. Poisson ratio: ratio of the absolute value of transverse normal strain to axial normal strain under uniaxial tension or compression. Angle of internal friction: friction characteristic caused by the interlocking and mutual movement of soil particles. Cohesive force: mutual attraction between adjacent parts of the same substance. Unit weight: weight per unit volume of the material in its natural state.

## 2.6.2. Calculation Process for Excavation-Pit Model

According to the support and excavation process for the circular-underground-continuous-wall foundation pit, pit simulations were calculated and analyzed for nine working conditions. Specifically, the steps shown in Figure 7 were performed.

**Figure 7.** Modeling and calculation workflow.

## 2.6.3. Key Data Self-Validation and Divisional-Condition Calculations

Stability analysis and the quantitative calculation of the supporting structures of existing foundation-pit engineering are mainly controlled by several key geotechnical parameters, and the determination of these parameters has always been a matter of debate in this field. Current practice obtains parameters (1) from geotechnical tests, (2) from statistical data for a large number of similar strata, or (3) from empirical data. Because parameters obtained from geotechnical tests differ from those of the actual project, they need to be corrected. The statistical-data method is applicable only to ordinary strata and requires a large accumulation of construction records. Empirical data are easy to use but clearly less scientific. In addition, these three parameter-acquisition methods share a fatal disadvantage for special geological environments: the parameter-selection method is not universal, and it is less sustainable.

Therefore, for both traditional theoretical calculations and finite-element analysis, a method that can self-verify key data on the basis of project-site monitoring data is critical to the sustainable development of foundation-pit and geotechnical engineering.

This paper proposes a key data self-validation theory. More specifically, the physical and mechanical parameters selected for numerical simulation should be as reasonable as possible. However, these values carry many potential sources of uncertainty, including the theoretical simplification of soil and rock into ideal homogeneous materials, acquisition processing, and data conversion. When the basis for parameter selection was not sufficiently convincing, the key data obtained by monitoring were used to verify the simulation results. When the deviation rate of the simulated data was within a reasonable error range, the physical and mechanical parameters selected for the calculation model were deemed reasonable, and large-scale data calculations were then performed.

This method requires trial calculation. During the research, parameters obtained from the literature and background data were used for trial calculations, and the deviation rate was calculated multiple times. Finally, the maximal simulated offsets of the wall under the second and third working conditions were 1.31 and 2.25 mm, respectively, while the maximal monitored offsets for the same conditions were 1.58 and 2.57 mm, respectively. The difference between the simulated and monitored values was calculated, and this difference divided by the monitored value was used to quantify the credibility of the simulated value, further verifying that the parameters used in the simulation were feasible. The deviation rate was calculated as follows:

$$\text{Deviation rate} = \frac{\text{simulated value} - \text{monitored value}}{\text{monitored value}}.\tag{4}$$

On the basis of this equation, the deviation rate of the wall was −10.39% for the second working condition and −14.22% for the third working condition. Thus, the data obtained from the simulation showed limited deviation, and the preliminary parameter verification was valid.
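As a minimal sketch, Equation (4) can be evaluated as below; the input values are illustrative, not the project's monitoring data:

```python
def deviation_rate(simulated, monitored):
    """Deviation rate from Equation (4): (simulated - monitored) / monitored."""
    return (simulated - monitored) / monitored

# Illustrative wall-offset values in millimetres (not the project data).
print(f"{deviation_rate(simulated=1.40, monitored=1.58):+.2%}")
```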

After the appropriate parameters had been determined, they were used to calculate the force-deformation characteristics of the other working conditions, and the calculation results were checked against the monitoring results. Another benefit of this method is that it can expand the scope of the simulation calculations to compensate for the lack of on-site monitoring data.

## **3. Results**

## *3.1. Surface-Settlement-Monitoring Analysis*

The maximal settlement value of the monitoring points was 9.9 mm. For excavation Working Conditions 1–3, surface settlement at each monitoring point increased linearly. For Working Conditions 4–6, the monitoring points generated relatively stable settlement. For Working Conditions 7–9, the settlement at each monitoring point increased linearly. The growth rate in Working Conditions 7–9 was greater than in Working Conditions 1–4 (Figure 8).

**Figure 8.** Settlement at outer edge of Slot Sections (**a**) 2, (**b**) 15, (**c**) 28, and (**d**) 42.

## *3.2. Wall-Body-Migration Analysis*

Analysis of the data presented in Figure 9 yielded the following results. First, the wall deformation under each working condition was linear down to an excavation depth of 27 m, and the deformation curve had segmental characteristics. The displacement of the wall body had an inflection point at a certain depth; that is, there was a peak in the wall-displacement curve. This point gradually moved deeper with increasing excavation depth and was generally located near the maximal excavation depth. This differed from the deformation characteristics of a cantilever pile (where the lower part of the pile is fixed and the upper part is subject to lateral thrust), because the performance of the circular underground continuous wall arose from its own annular restraining force. We termed this point, for each working condition, the "round-underground-continuous-wall deformation inflection point". Additionally, for Working Conditions 2 and 3, at some stage of excavation, the bottom of the wall body deviated away from the direction of the foundation pit, which is similar to the deformation characteristics of a cantilever pile. Working Conditions 4–9 did not exhibit a reverse offset at the bottom of the wall, and the final forward offset gradually increased with excavation depth. Finally, the wall-offset curve exhibited a D-type distribution (Figure 9), and the maximal offset appeared at approximately 2/3 of the excavation depth. The largest inflection-point offset, 6.1 mm, occurred under Working Condition 9.

**Figure 9.** Offset around wall of Slots (**a**) 2, (**b**) 15, (**c**) 28, and (**d**) 42.

## *3.3. Settlement Analysis around Foundation Pit*

There were only a few buildings and communities around the foundation pit. Thus, the construction machinery and the soil load near the foundation pit were the main factors for settlement. Settlement around the foundation pit is shown in Figure 10 for excavation Working Conditions 2–9.

**Figure 10.** Settlement cloud around foundation pit for Cases (**a**) 2, (**b**) 3, (**c**) 4, (**d**) 5, (**e**) 6, (**f**) 7, (**g**) 8, and (**h**) 9.

Surface settlement at the outer edges of Slots 2, 15, 28, and 42 was also investigated. Due to limitations of the model grid and calculation, settlement was analyzed at distances of 4, 8, 12, 17, 22, 27, 37, 47, 57, and 70 m from the pit (Figure 11).

**Figure 11.** Settlement of outer edge of Slot Sections (**a**) 2, (**b**) 15, (**c**) 28, and (**d**) 42.

Figure 11 shows that surface settlement increased approximately linearly with the excavation depth of the foundation pit. Surface settlement within a range of about 27 m outside the foundation pit increased rapidly with excavation depth, whereas ground settlement beyond about 50 m from the foundation pit was only slightly affected by the excavation depth. Maximal surface settlement was located near the edge of the foundation pit, with a maximal value of 2.715 mm.

## *3.4. Displacement of Underground Continuous Wall*

During the excavation of the foundation pit, the underground continuous wall was affected by soil stress and became offset. The wall deviation of the foundation pit for each working condition of the excavation is shown in Figure 12.


**Figure 12.** Underground diaphragm wall deviation for Cases (**a**) 2, (**b**) 3, (**c**) 4, (**d**) 5, (**e**) 6, (**f**) 7, (**g**) 8, and (**h**) 9.

The displacement of the underground continuous wall at Slots 2, 15, 28, and 42 was selected for data processing. Due to limitations of the model grid and calculation, the displacement was analyzed at depths of 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, and 40 m (Figure 13).


**Figure 13.** Wall offset of Slots (**a**) 2, (**b**) 15, (**c**) 28, and (**d**) 42.

These results revealed the following. First, the wall-body offset increased linearly with the depth of the excavation; the wall offset under each working condition showed a peak, after which the offset began to decrease, and as excavation depth increased, the maximal offset of the wall shifted deeper. Second, the calculations showed no reverse offset; the maximal offset of the wall was concentrated at a depth of approximately 2/3 of the total excavation depth. Third, as the excavation depth increased, the wall-offset curve showed a D-shaped distribution. The simulated maximal offset was 5.837 mm.

Existing analyses of the deformation of the supporting wall of underground-continuous-wall foundation pits and the surrounding surface settlement mostly rely on simple theoretical calculations [1,2,9–11] or finite-element analyses [3,4,7,8,12,13] that lack validation of the parameters used. In this study, monitoring data were analyzed to identify the deformation law and other characteristics of the support structure, and a three-dimensional numerical simulation of the foundation-pit excavation was conducted in Midas/GTS NX. In analyzing ground settlement and wall offset around the circular underground continuous wall during construction, this paper demonstrated a key data self-verification method based on monitoring data, addressing the difficulty of selecting parameters for construction-safety calculations and finite-element analyses in foundation-pit engineering. It provides a new way of selecting parameters for sustainability studies of foundation-pit and geotechnical engineering. In addition, we obtained the characteristics of surface settlement and wall offset around the circular underground continuous wall and proposed the inflection point of the displacement of the circular underground continuous wall. These results are of great significance for guiding the construction of special-shaped underground continuous walls and provide an important reference for the continued promotion of circular underground continuous walls.

## **4. Discussion**

The monitoring and simulation results and their analysis were as follows. The settlement of the surface surrounding the circular underground continuous wall was mainly affected by the depth of the foundation-pit excavation; as excavation progressed, both monitoring and simulation data showed good linearity. Comparison with the monitored maximal settlement showed that the simulated value was conservative.

In addition, the deformation of the wall under each working condition showed linearity with clear staged characteristics. In particular, the deformation curve had obvious inflection points, most of which were located deeper than 2/3 of the overall excavation depth. The characteristics of the cantilever pile were not obvious in Working Conditions 3–9, but the D-shaped distribution of the wall-body offset was evident. The deviation between the monitored and simulated values of the maximal wall offset was only 4.31%; thus, the monitoring and simulation data were in good agreement. Furthermore, the force-deformation characteristics differed from those of a cantilever pile. The monitored values showed more convergence at the bottom of the wall, while the simulated values did not. Preliminary analysis suggests that this was because the monitoring data reflected a greater rock-embedded mass at the bottom of the wall than the simulated data did.

## **5. Conclusions**

This study drew three main conclusions. First, the surface settlement of a circular underground continuous wall is mainly controlled by the depth of the foundation-pit excavation; both monitoring and simulation data showed good linearity as excavation progressed. Appropriate physical and mechanical parameters were proposed through key data self-verification and used to compensate for the shortcomings of the on-site monitoring data, and the extent of surface settlement caused by construction excavation was determined. Second, the analysis, monitoring, and simulation results showed that the deformation of the circular underground continuous wall had unique constraint characteristics: the wall offset under each working condition showed a peak, after which the wall-body offset began to decrease. On this basis, the concept of a round-underground-continuous-wall deformation inflection point was proposed. Finally, we determined that the deformation pattern of the circular underground continuous wall showed distinct linearity, that the deformation curve had an inflection point, and that most inflection points were located below 2/3 of the excavation depth. In addition, the wall-offset distribution showed an evident D-shaped curve.

The key data self-verification method proposed in this paper can be used to check the validity of simulation parameters, and subsequent research can extend it to other computing systems. This method is expected to build a bridge between monitoring data and simulation results. For the concept of a round-underground-continuous-wall deformation inflection point proposed in this paper, further work is needed to quantify the relationship between the displacement of the inflection point and the excavation depth.

**Author Contributions:** X.G.; conceptualization, methodology, software, validation, data analysis, investigation, resources, data curation, writing—original-draft preparation, review and editing, data visualization, and project supervision. W.-p.T.; conceptualization, validation, and funding acquisition. Z.Z.; validation and project administration. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Western Transportation Construction Science and Technology Project (2006-318-000-07), the China Communications Construction Co., Ltd (CCCC) Technology Research and Development Project (2011-ZJKJ-01), the National Natural Science Foundation of China (51708043), and the Fundamental Research Funds for the Central Universities, CHD (300102219106).

**Acknowledgments:** We would like to thank Editage (www.editage.com) for the English language editing.

**Conflicts of Interest:** The authors declare no conflicts of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **Cutting Waste Minimization of Rebar for Sustainable Structural Work: A Systematic Literature Review**

**Keehoon Kwon, Doyeong Kim and Sunkuk Kim \***

Department of Architectural Engineering, Kyung Hee University, Yongin-si 17104, Korea; charade0820@naver.com (K.K.); dream1968@khu.ac.kr (D.K.)

**\*** Correspondence: kimskuk@khu.ac.kr; Tel.: +82-31-201-2922

**Citation:** Kwon, K.; Kim, D.; Kim, S. Cutting Waste Minimization of Rebar for Sustainable Structural Work: A Systematic Literature Review. *Sustainability* **2021**, *13*, 5929. https://doi.org/10.3390/su13115929

Academic Editor: Nicholas Chileshe

Received: 14 April 2021; Accepted: 21 May 2021; Published: 24 May 2021

**Abstract:** Rebar, the core resource of reinforced concrete structures, generates more carbon dioxide per unit weight than any other construction resource. Therefore, reducing rebar cutting waste contributes greatly to the reduction of greenhouse gas (GHG) emissions. Over the past decades, many studies have been conducted to minimize cutting waste, and various optimization algorithms have been proposed. However, the reality is that about 3 to 5% of cutting waste is still generated. In this paper, trends in research on the cutting waste minimization (CWM) of rebar for sustainable work are reviewed systematically with meta-analysis. The literature related to the cutting waste minimization or optimization of rebar published so far was identified, screened, and selected for eligibility following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, and the final 52 records were included in the quantitative and qualitative syntheses. A meta-analysis review was conducted on the selected literature, and the results were discussed. The findings identified after reviewing the literature are: (1) many studies have performed optimization for the market length, making it difficult to realize near-zero cutting waste; (2) to achieve near-zero cutting waste, rebars must be matched to a specific length by partially adjusting the lap splice position (LSP); (3) CWM is not a one-dimensional problem but an n-dimensional cutting stock problem when several rebar combination conditions are considered; and (4) CWM should be dealt with from the perspective of sustainable value-chain management with regard to GHG contributions.

**Keywords:** rebar cutting waste; minimization; optimization; structural work; systematic literature review

## **1. Introduction**

Reinforced concrete (RC) structures, such as buildings and infrastructure, use enormous amounts of concrete and rebar during the construction phase. In 2012, global consumption of concrete and concrete constituents reached about 10 billion m<sup>3</sup> [1], and the amount is rapidly increasing every year due to the increased demand for RC structures along with global economic development. Rebar, the core resource of RC structures, generates more CO2 per unit weight than any other construction resource. For example, C25/30 concrete generates embodied CO2 (ECO2) of 95 kg-ECO2/t, but reinforcement bar (rebar) generates ECO2 of 872 kg-ECO2/t, which is about 9.2 times that of the concrete [2]. Therefore, reducing the cutting waste of rebars contributes greatly to the reduction of GHG [3]. Over the past few decades, numerous studies have been conducted on minimizing cutting waste, and various optimization algorithms have been proposed. In reality, however, cutting waste of at least 3% to 5% [3–7], and as much as 5% [4,6–11] to 8% [12], of the volume shown in the structural drawings is still generated in the process of cutting and bending rebars.

Estimating how much rebar cutting wastes contribute to global GHG is a very difficult task, but to confirm the need for sustainable structural work, the authors follow a three-step estimation process after surveying literature and actual data: (1) analyzing the concrete and rebar ratio after surveying actual project data for concrete and rebar in Korea; (2) estimating the global annual use of concrete and rebar, and the CO2 emissions by rebar, using global concrete consumption in 2012, the world GDP growth rate [13], and the analyzed concrete and rebar ratio; and (3) estimating the global annual rebar cutting wastes and the resulting CO2 emissions, applying the relatively conservative waste rates of 3 to 5% identified in the literature mentioned above.

Although the construction environment varies from country to country, in the case of high-rise residential buildings in Korea, the analysis of 30 projects, as shown in Table 1, found a rebar quantity of about 0.070 ton/m<sup>3</sup> of concrete. Commercial buildings have long-span, heavily loaded attributes compared to residential buildings; the analysis of 12 projects gave a result of about 0.119 ton/m<sup>3</sup>. The average of these amounts is calculated at about 0.077 ton/m<sup>3</sup>. If this average value is applied to the 10.058 billion m<sup>3</sup> [1] consumed as of 2012, as shown in Table 2, a rebar usage of about 778.9 million tons is calculated. Applying the world GDP growth rate shown in Table 2, rebar usage increases every year and is estimated at about 947 million tons in 2019.

**Table 1.** Estimation of rebar quantity compared to concrete in reinforced concrete structures.


Source: authors' research results.

**Table 2.** Estimated global annual use of concrete and rebar, and CO2 emissions of rebar.


Source: authors' research results.

For reference, it is impossible to investigate all RC structures around the world to estimate global rebar usage by year. Therefore, despite some error, it is meaningful to have applied data from high-rise residential and commercial buildings in Korea; as data from investigations of other RC structures are added, the range of error will gradually decrease. The world GDP growth rate of 2012 was applied in the same context, as shown in Table 2, because data on global concrete and concrete-constituent consumption by year could not be obtained.

Applying the Korean unit value of about 0.3416 ton·CO2/ton of rebar [14] to the estimated global annual use of rebar gives about 266.1 million ton·CO2 in 2012, rising to an estimated 323.5 million tons of CO2 in 2019. For reference, the unit value of CO2 differs according to the industrial structure of each country, so a unified value cannot be obtained; therefore, in this study, the calculation was performed based on data analyzed in Korea.

If a rebar cutting-waste rate of about 3 to 5% is applied to these values, about 23.368 to 38.947 million tons of waste were generated as of 2012, as shown in Table 3, and the amount keeps increasing every year, reaching about 28.411 to 47.352 million tons in 2019. The corresponding CO2 emissions increase annually from about 7.982–13.304 million ton·CO2 in 2012 to about 9.705–16.176 million ton·CO2 in 2019, as shown in Table 3. If the near-zero cutting waste of rebars is realized, a CO2-emission reduction of up to 16.176 million tons can be achieved, and the corresponding GHG is reduced. For reference, since the rebars placed in structures vary in length, diameter, and number, it is impossible to combine them without cutting waste (called zero cutting waste) using only the lengths of rebars supplied by the steel mill. However, by combining rebars with special lengths supplied by the steel mill, cutting waste can be reduced to close to zero, which is called near-zero cutting waste.


**Table 3.** Estimation of global annual rebar cutting wastes and CO2 emissions.

Source: authors' research results.
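The estimation chain above (concrete volume × rebar ratio × waste rate × CO2 unit value) can be reproduced in a few lines. A minimal Python sketch using the figures reported in the text; small differences from the tabulated values come from rounding of the average ratio:

```python
# Figures reported above: global concrete consumption in 2012 [1], the average
# rebar ratio, and the Korean CO2 unit value for rebar [14].
CONCRETE_2012_M3 = 10.058e9     # m3 of concrete consumed in 2012
REBAR_RATIO = 0.077             # ton of rebar per m3 of concrete (average)
CO2_PER_TON_REBAR = 0.3416      # ton CO2 per ton of rebar

rebar_2012 = CONCRETE_2012_M3 * REBAR_RATIO       # roughly 774-779 million tons
for waste_rate in (0.03, 0.05):                   # conservative 3-5% range
    waste = rebar_2012 * waste_rate               # cutting waste (tons)
    co2 = waste * CO2_PER_TON_REBAR               # resulting CO2 (tons)
    print(f"waste {waste_rate:.0%}: {waste/1e6:.2f} Mt rebar, {co2/1e6:.2f} Mt CO2")
```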

As shown in Table 2, demand for buildings and infrastructure increases in line with global economic growth, and the corresponding demand for RC structures increases every year. The increase in RC structures leads to demand chains that increase the demand for concrete and rebars, as shown in Table 2, resulting in annual increases in rebar cutting waste and CO2 emissions, as shown in Table 3. In particular, this growth is expected to be concentrated in developing countries, where population is concentrated [15,16]. The increase in the global cutting waste of rebars not only causes unnecessary cost losses but also generates large amounts of CO2 in the production, transportation, and processing phases. Therefore, research to realize near-zero cutting waste is critical for implementing sustainable rebar work.

So far, many studies have been conducted to optimize the use of rebars or to reduce cutting waste; however, near-zero cutting waste has not yet been realized. The study of the cutting stock problem (CSP), which is considered the beginning of cutting waste minimization (CWM), was first mentioned by Kantorovich in 1939 and first published in Management Science in 1960 [17]. Therefore, CSP-related literature from 1960 onward was searched in this study. The literature on the optimization of rebar cutting waste was targeted from 1990 to 2020, because CSP-related research in rebar work started in earnest in 1991. In this paper, we searched and reviewed studies related to the optimization or cutting-waste minimization of rebars conducted so far and identified the status and problems of existing studies. We then proposed the direction of future research to implement near-zero cutting waste and identified its potential.

## **2. Data Sources and Methodology**

#### *2.1. Data Sources*

There are literature databases for various fields around the world, but for the search of articles related to the minimal cutting waste of rebars, Scopus, ScienceDirect, Web of Science (WoS), Taylor and Francis Online, Springer Link, the American Society of Civil Engineers (ASCE) Library, and the Wiley Online Library were used. Some dissertations and literature published in internationally uncertified journals were searched for using the Google or Google Scholar databases.

#### *2.2. Systematic Literature Review*

SLR is an exact and reproducible method for the identification, evaluation, and interpretation of predefined fields of study [18]. Kitchenham and Charters defined "a systematic literature review is a means of identifying, evaluating, and interpreting all available research relevant to a particular research question, topic area, or phenomenon of interest. Individual studies contributing to a systematic review are called primary studies; a systematic review is a form of secondary study" [19]. Since similar SLR methodologies have been proposed by several scholars [20,21], the MDPI publisher, based in Basel, Switzerland, recommends that authors follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [22] checklist and flow diagram for reporting systematic reviews and meta-analyses.

In the construction field, numerous literature-review articles published before 2010 were not based on SLR [23–28], because awareness of SLR in the construction field was not high. Since 2010, with the exception of some articles [29–34], most review articles have been written based on SLR [18,35–49]. After 2018, many review articles have been written according to PRISMA [18,46–49], and this study also follows the PRISMA statement for systematic reviews and meta-analysis.

#### *2.3. Methodology*

The previously mentioned literature databases were searched sequentially using the Boolean operator "AND" with keywords such as rebar, rebar work, rebar optimization, and rebar cutting waste. As a result, Google Scholar returned about 79,100 results for literature related to rebar work, about 16,000 for rebar optimization, about 14,000 for rebar cutting waste, and about 4410 for rebar cutting waste optimization, as shown in Table 4. Google Scholar was confirmed to include various records, such as books, book contents, and dissertations, along with the academic papers of most databases, as shown in Table 4. In addition to construction, literature from almost all fields, such as medicine and chemistry, is retrieved by these keywords, and records are retrieved even when "rebar" or "work" appears only in a name. Therefore, searching and reviewing all relevant literature in Google Scholar is an inefficient approach. Since the minimum cutting waste of rebars dealt with in this review article is a very specific topic, most of the literature was searched in databases such as ScienceDirect, WoS, and the ASCE Library. However, Google Scholar was used to search for books, magazines, and documents such as dissertations, which are not well indexed in databases such as WoS and ASCE, and when the original text could be downloaded from these databases.


**Table 4.** Search by keyword in literature database (as of 1 December 2020).

As shown in Table 4, a search for literature was performed on Google Scholar, ScienceDirect, Springer Link, and the ASCE Library. The literature-search process confirmed that the number of records retrieved varies with the characteristics of each database. For example, topics such as rebar cutting waste correspond to construction engineering; therefore, literature is frequently found in databases of engineering fields. In particular, the ASCE Library is a database dedicated to the construction field; hence, much literature related to this review was found there. When searching with the keyword rebar work, which covers all rebar-related work, many records are retrieved, as shown in Table 4. However, many of these records concern the corrosion of rebar, rebar tying tools, rebar cutting and bending machines, and rebar work schedules, and are not related to CWM. When the search range is narrowed to rebar optimization and rebar cutting waste, the number of records is reduced dramatically. For reference, rebar optimization is the general term covering rebar minimization, and rebar cutting waste literally means the waste remaining after cutting rebar. Finally, in most databases, searching with rebar cutting waste optimization, which has the same definition as CWM, returns fewer records. In the cases of Scopus and WoS, the counts are reduced to 9 and 6 records, respectively, but all records are valid. In other databases, many records are identified as RC design optimization.

Based on the literature searched on 1 December 2020, 1811 records were identified, as shown in Figure 1, excluding duplicated literature and literature not related to the subject of this study. Among them, 384 duplicate records were removed during screening, and 638 and 386 records not relevant to rebar cutting work and rebar cutting optimization, respectively, were excluded, leaving 403 full-text articles. Then, 351 records related to the design optimization of RC components or frames were excluded, because the design optimization of RC corresponds to pre-processing research of rebar optimization, while the CWM of rebars covered in this study corresponds to post-processing research. A review of some of the literature [50–81] related to the design optimization of rebars confirmed that it concerns the design optimization of RC components such as slabs [50–57], beams [58–61], columns [62–65], foundations [64], and walls [66–68], and the design optimization of RC frames such as bridges [69–71] and buildings [72]. In addition, many studies related to design optimization have been well organized in an existing review article [37].

**Figure 1.** Flow diagram of the literature review and the analysis process. Source: authors' research results.

#### *2.4. Descriptive Analysis*

Studies in the field of construction project management vary widely, covering time, cost, quality, and safety. The Project Management Body of Knowledge defines 13 knowledge areas [82], and there is countless management research connected with engineering technology; post-processing research such as the minimization of rebar cutting waste is a very narrow and special topic. Therefore, it is confirmed that there are not many articles directly dealing with this topic. As shown in Table 5, 37 articles were published in journals [3,4,6,8–12,63,83–110] and 11 articles were published in peer-reviewed conference publications [7,55,62,71,111–117]. The rest comprise three dissertations [5,118,119] and one book chapter [120].


**Table 5.** Number of literatures by publication type.

Source: authors' research results.

When examining papers published in internationally certified SCI or SCIE journals, as shown in Table 6, the largest number of papers, seven, was published in the *Journal of Construction Engineering and Management* (*JCEM*) [6,84,87,95,97,101,102]. As of 2019, the *Journal Impact Factor* (*JIF*) of *JCEM* is a modest 2.347, but it is one of the most popular ASCE journals. In addition, papers were published in *Automation in Construction* [63,94,105] and the *Journal of Computing in Civil Engineering* [86,103], and one paper was published in each of the remaining journals. It is notable that papers were also published in high-*JIF* journals, such as the *International Journal of Engineering Science* [85], *Computer-Aided Civil and Infrastructure Engineering* [110], and the *Journal of Advanced Research* [100], presumably because the problem of minimizing rebar cutting waste is both important and difficult. *Construction Management and Economics* is not an SCI journal classified by the JCR but was included in the list because it is internationally popular [99].


**Table 6.** List of the most popular journals.

Source: authors' research results.

Table 7 classifies the 37 articles by country on the basis of their lead authors. According to Table 7, Korea has the largest number of publications with 16 articles, followed by Canada with 5, Israel with 4, and Turkey with 3; 5 countries, including Bangladesh, published 2 papers each, and 8 countries, including Albania, published 1 paper each. In Korea, the number of rebars per unit area of RC structures has more than doubled due to the strengthening of seismic design standards in 1988 [121], the strengthening of inter-floor noise standards in 2000 [122], and the rapid increase in the number of high-rise buildings over 20 stories. Therefore, studies on rebar design optimization and the CWM of RC structures have been conducted continuously since 1999, and a reduction in the amount of rebar used has been confirmed.


**Table 7.** Number of literatures by country.

Source: authors' research results.

Figure 2 shows the number of articles published by year. One or two articles were published every year until 2004, but after 2012, the number of articles increased until 2016 with the development of various techniques, including computer-aided design (CAD) and building information modeling (BIM). The number of articles dropped sharply to one in 2017 and then increased again. In the past, cutting waste was approached from an economic perspective; recently, however, research has been conducted from a sustainable-construction perspective.

## **3. Review Results and Discussion**

#### *3.1. Selection of the Papers*

As shown in Figure 1, a total of 403 records corresponding to rebar cutting optimization passed the identification, screening, and eligibility process, and 52 records were finally selected after excluding the 351 records that fall under the category of pre-processing research on RC design optimization. The selected records address the problems of rebar cutting waste, corresponding to post-processing research on rebar optimization after RC design. This literature was reviewed by factors such as the application of optimization techniques and graphic solutions for CWM, the range of the rebar work process covered, consideration of the lap splice position, reflection of special-order lengths, consideration of the bending margin, and consideration of the schedule. The review of the selected literature not only measures the characteristics and trends of research on the CWM of rebars but also presents a direction for future research. In addition to the selected literature, there are BIM solutions for rebar modeling used for optimized rebar work [123] and software that creates rebar details using the information generated after structural design [124]. However, these were excluded from this paper, as they were written to promote commercial software and do not focus on CWM.

#### *3.2. Identification of Cutting Waste Minimization-Related Factors*

One-dimensional CSP has been studied not only for rebars but in all areas where linear stock material, such as pipe and timber, is cut. Since the publication of Kantorovich's article [17], many articles related to CSP have been published in various fields [8,125–148]. In the case of rebar, research has been conducted vigorously since 1991 with the development of computer science, after rebar was first introduced as an example of CSP by Kantorovich [3]. This is mainly because the need for CSP in the construction field, which builds a single building on site, was not highlighted much, unlike general manufacturing, which mass-produces large quantities of the same or similar products in factories. Moreover, it was not easy to develop algorithms to deal with varying variables such as the length, diameter, required number, and point of use of rebars, which are the subject of CSP, and algorithms to consider variables such as the bending margin, various market lengths, and special lengths.
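For illustration, the one-dimensional CSP can be sketched with a simple first-fit-decreasing heuristic. This is a generic example, not the algorithm of any reviewed study; the cut list and stock length are hypothetical:

```python
def first_fit_decreasing(demands, stock_length):
    """Assign required cut lengths (mm) to stock bars; return the cutting plan."""
    bars = []  # each entry: list of cuts taken from one stock bar
    for length in sorted(demands, reverse=True):
        for bar in bars:
            if sum(bar) + length <= stock_length:
                bar.append(length)  # fits into an already-opened bar
                break
        else:
            bars.append([length])   # open a new stock bar
    return bars

demands = [4200, 3600, 3600, 2700, 2100, 1800, 1500]  # hypothetical cut list
plan = first_fit_decreasing(demands, stock_length=8000)
waste = sum(8000 - sum(bar) for bar in plan)
print(plan, f"waste rate = {waste / (8000 * len(plan)):.1%}")
```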

In this study, the factors that influence the analysis of the attributes of the literature had to be identified for the quantitative and qualitative analysis of the finally selected literature. The following is a summary of the variables identified during the screening and eligibility assessment of the full-text articles related to rebar optimization, along with the authors' research experience.


In addition, literature can be reviewed by lap splice position (LSP), use of special length (SpL) or stock length (StL) rebars, and schedule.

#### *3.3. Results of Quantitative and Qualitative Review*

#### 3.3.1. Description by Optimization Techniques

Table 8 summarizes the optimization techniques adopted in the literature selected for rebar CWM. Afzal et al. [37] introduce the definitions, advantages, disadvantages, and application cases of various algorithms in the study of RC structural design optimization. However, the problem of rebar cutting waste has a relatively limited scope of study compared to RC design, so the literature is summarized by seven optimization techniques, as shown in Table 8. The advantages and disadvantages of each technique for rebar CWM are described, and the classification of the literature that adopted these techniques is presented in Table 8.


#### **Table 8.** Summary of the adopted optimization techniques.

Source: authors' research results.

Linear programming (LP) has an advantage in terms of the flexibility to be paired with other approximations to improve convergence, but it is slower in finding special-length-priority or waste-rate-priority solutions under multiple search conditions. In studies of rebar CWM, LP was adopted as the optimization technique in 12 articles [3–5,7,9,12,63,86,93,96,109,114], as it has been adopted most often in modeling CSP or CWM problems and has become the most common choice.

Integer programming (IP) has the advantage of quickly generating solutions under limited search conditions, whereas many search conditions, or problems requiring floating-point solutions, are challenging. In rebar CWM studies, IP is the second most adopted technique, appearing in eight research articles [7,12,63,100,101,104–106], despite the fact that it takes considerable time to formulate the problem [101]. Like LP, IP is one of the long-standing optimization techniques used for one-dimensional cutting stock problems and is among the most common in modeling CWM problems.
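A minimal sketch of a pattern-based IP formulation of the one-dimensional CSP is shown below, using the open-source PuLP library. The patterns, lengths, and demands are hypothetical; real CWM models enumerate patterns from the project cut list:

```python
import pulp

stock = 8000                               # stock-bar length (mm)
lengths = [4200, 3600, 2100]               # required cut lengths (mm)
demand = [2, 3, 4]                         # required number of each length
# Each pattern lists how many pieces of each length one stock bar yields.
patterns = [[1, 1, 0], [0, 2, 0], [0, 1, 2], [0, 0, 3], [1, 0, 1]]

prob = pulp.LpProblem("cutting_stock", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{j}", lowBound=0, cat="Integer")
     for j in range(len(patterns))]
prob += pulp.lpSum(x)                      # minimize the number of stock bars
for i in range(len(lengths)):              # meet the demand for every length
    prob += pulp.lpSum(patterns[j][i] * x[j] for j in range(len(patterns))) >= demand[i]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], "bars used:", int(pulp.value(prob.objective)))
```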

Genetic algorithms (GA) have advantages such as simplicity in programming, strength in finding the global optimum, applicability to diverse problem domains, computational performance, and diversity of solutions, but they also have disadvantages, such as the time consumed in formulating a CSP problem under complex combination conditions. A review of the literature shows that seven articles have adopted GA since 2004 [8,11,71,83,101,113,118], most of which had adopted LP or IP before. Salem, Shahin, and Khalifa [101] compared CWM using GA and IP models and verified through a case study that GA further reduces rebar cutting waste. The computational time of GA models is practical for everyday use and, in some cases, the GA model was able to lump the waste into bigger lengths, thus achieving more savings.

The binary search algorithm (BSA) has the advantage of providing a quick iterated local search for rebars of a specific length to be used in combination, but has the disadvantage of long CPU run-times for global search as the rebar combination conditions increase. Because BSA quickly performs iterated local search, the CWM problem should be divided according to the rebar supply schedule; in this case, however, the CWM effect is smaller than that of a global search. BSA has been adopted by two articles, as shown in Table 8 [10,116].

Simulated annealing (SA) has advantages such as applicability to combinatorial optimization problems in a discrete search space and simplicity of implementation, while it has the disadvantages of large computing time and cost if boundary conditions are not provided. Porwal and Hewage asked which conditions are most suitable when LP, IP, GA, BSA, a sequential heuristic procedure, and SA are applied to a combination of rebar cutting patterns [6], and they proposed integration with BIM and the combination of special-ordered lengths, available market lengths, and SpL of rebars by SA. In addition, their case study suggested that SA models succeed in complex combinatorial optimization problems through controlled randomization.

Heuristic algorithms solve problems more efficiently than conventional methods by trading optimality, completeness, and accuracy for speed. They are frequently used when accurate calculation is too expensive and approximate solutions are sufficient. Bekdas and Nigdeli [62] optimized RC columns using a metaheuristic algorithm called the bat algorithm.

Harmony search (HS) is a metaheuristic search algorithm that tries to mimic the improvisation process of musicians in finding a pleasing harmony [149,150]. Although the HS algorithm has better global optimization capability, its disadvantages are randomness and instability; the search direction of the algorithm is uncertain [150], and HS requires a higher number of iterations [37]. HS was applied to optimize the cost of materials, including concrete and rebars, by implementing design variables such as the width and height of an RC column, the diameters and number of reinforcements, and loading-condition variables as harmony vectors [117].

Heuristic algorithms (HA) are divided into local-search-based metaheuristics, such as SA, and global-search-based metaheuristics, such as GA and HS. Therefore, HA or SA is more efficient if the target rebar CWM is a local-search problem, and GA or HS is more effective if the target is a global-search problem.

Reference numbers that appear multiple times in Table 8 indicate that the corresponding optimization techniques were used in combination; for example, references [7,12,63] used a combination of IP and LP, and reference [101] performed rebar CWM using IP and GA.

#### 3.3.2. Description by Rebar Work Process

The rebar work process consists of structural design, drawing work, quantity take-off (QTO), rebar production, and rebar placement. The literature related to the structural design optimization of RC components or frames has been sufficiently reviewed in other articles, so this paper reviews the literature that performed post-processing CWM from drawing creation to rebar placement. Table 9 shows the literature review by rebar work process. Strategies to reduce rebar cutting waste are effective only when implemented from the drawing-work stage. Accordingly, the 20 studies referenced at the top of Table 9 suggested reducing cutting waste in conjunction with drawings, and some of them included a mathematical algorithm that automatically produces rebar drawings using a CWM algorithm [6,84,91,105,108,111].


**Table 9.** Summary by rebar work process.

Unlike ordinary materials, many variables must be considered for the exact QTO of rebars. It is a complex task that must reflect the size of the stock material, the concrete cover, and the lap splice length, as well as variables not shown in the drawings, such as the bending margin. Thus, algorithms to automate QTO have been developed in several studies [3,6,90,91,107,108,115], in which variables such as the lengths and numbers of the rebars applied could be directly utilized in a rebar CWM algorithm. Accordingly, the second largest number of articles, 16, as shown in Table 9, address reducing rebar cutting waste at the QTO stage.

CWM algorithms have been widely applied to the rebar production stages, including cutting and bending [6,8,87,88,91,92,94–96,99,107,111,112,116]. This is because, before rebars are supplied to the site, the bar-bending schedule is prepared first, the cutting list is then prepared, and the bar combination is performed by cutting patterns using that list. Several studies have indicated that rebar cutting waste starts from the purchase-order stage [6,90,91,104], because ordering by market length without analyzing optimal cutting patterns is a major factor in increasing cutting waste.

As for studies on reducing cutting waste at the rebar placement stage, 12 articles were published, as referenced in Table 9. These studies are divided into two groups: one aims to prevent the loss or waste caused by mismatches in field installation after the cutting, bending, and fabrication of rebars [84,89,97,105,112,118], and the other performs detailing design and installation planning as an optimization method considering the productivity of rebar placement [2,10].

Reference numbers written multiple times in Table 9 indicate that the corresponding article covered several stages of rebar work. For example, references [98] and [103] covered the drawing work, QTO, and rebar placement stages, and references [84,89,97,105,110,120] covered the drawing work and rebar placement stages.

#### 3.3.3. Description by Other Factors

Because the location, size, and structural performance of RC components such as columns, beams, foundations, slabs, and walls differ, the lengths of the rebars generated after structural design are very diverse. Since rebar is installed repeatedly, if rebars are given a constant length after structural design, the cutting patterns become simple and the effect of CWM is significant. The LSP should be partially adjusted, while satisfying the structural design codes, to keep the rebar lengths in RC components constant. Several studies, as shown in Table 10, have revealed that the effect of CWM is significant when the LSP is adjusted [6,7,12,86,105,108,112,119].


**Table 10.** Summary by other factors.

As shown in Table 10, 10 rebar CWM-related articles dealt with rebars of stock length (StL), or market length. This is because CSP research on the optimization of one-dimensional stock-length material stored in factories was the beginning of rebar CWM. In factory production in particular, materials sold at fixed lengths in the market are stored, and cutting-pattern optimization is performed for mass production. In a construction project, however, rebars of various lengths and numbers must be combined, so it is difficult to reduce cutting waste using stock lengths alone; rebar combination by SpL is therefore needed [3,4,6,10,90].

The use of SpL can further reduce cutting waste compared to the use of StL [3,6,10]. However, the minimum quantity and pre-order lead time must be satisfied to order rebars of SpL, and the lengths of rebars must be adjusted so that they combine into an SpL; eventually, additional algorithms should be developed to adjust LSPs easily and quickly. The references in Table 10 appear several times because the corresponding articles considered multiple factors for CWM; for example, Porwal and Hewage [6] incorporated StL and SpL as well as LSP into their study of CWM.

#### *3.4. Discussion*

If near-zero cutting waste is realized for rebar, one of the construction materials that generates the most ECO2, environmentally friendly sustainable construction is implemented and the waste of a high-cost resource is prevented. The SLR analysis of studies that attempted to reduce rebar cutting waste showed that there was relatively much RC design-optimization literature corresponding to pre-processing research, while the rebar CWM-related literature corresponding to post-processing research amounted to 52 articles. The results of the systematic critical review of the CWM of rebar are summarized as follows:


Although there have been many CWM studies on StL so far, it has been confirmed that research focusing on SpL will expand in the future.

In general, rebars are sold as linear rods in the market. Therefore, most studies on minimum cutting waste have focused on algorithms to solve one-dimensional cutting stock problems, because contractors purchase, cut, and bend such rods. With the development of software and hardware solutions for engineering programs, techniques such as GA, BIM, AR, VR, and integrated project delivery have been added. Nevertheless, there appears to be a fundamental problem: despite those studies, the cutting-waste rate has not yet been reduced to near-zero.

In this study, we confirmed that cutting waste can be significantly reduced in the following two cases. First, the use of coiled rebars can reduce cutting waste to near-zero. Coiled rebar, in use since the 1990s, is cut and bent automatically by machine. Initially, coiled rebars with diameters of 10 mm to 16 mm were processed, but recently, coiled rebars with a diameter of 50 mm are automatically cut and bent by machine [151]. Global near-zero cutting waste could be achieved if machines that automatically perform straightening, cutting, and bending after coiled rebars are produced were supplied in all countries. However, not many countries have an industrial structure that satisfies such a supply chain; except for some countries in Europe and North America, most countries do not yet supply coiled rebars. Therefore, CWM research should be continued until such a rebar supply chain is globally established.

Second, if mechanical rebar couplers are used, steel quantities are reduced compared to lap splices, and ECO2 is reduced proportionally. This study summarizes the comparison between lap splices and mechanical couplers, as shown in Table 11. Lapping length and weight differ across high-tensile deformed bars from 10 mm (D10) to 32 mm (D32) in diameter. The weight of the coupler for each diameter also differs, but the overall ECO2-reduction effect of using couplers is significant. In the small case of D10, a lap weight of 0.166 kg/EA and an ECO2 difference of 0.145 kg-ECO2/EA are generated; in the large case of D32, a lap weight of 14.237 kg/EA and an ECO2 difference of 12.415 kg-ECO2/EA are generated. In particular, with mechanical rebar couplers, an ECO2-reduction effect of at least 84.7% and up to 96.4% can be achieved compared to lap splices.


**Table 11.** ECO2 comparison between splice lap and coupler.

Source: authors' research results.
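The per-splice ECO2 figures quoted above follow directly from the 872 kg-ECO2/t unit value [2]. A minimal sketch reproducing them; the two lap weights are the D10 and D32 values quoted in the text, and the rest of Table 11 is not reproduced here:

```python
ECO2_PER_KG_REBAR = 0.872  # kg-ECO2 per kg of rebar (= 872 kg-ECO2/t [2])

def lap_splice_eco2(lap_weight_kg):
    """Embodied CO2 of the extra steel consumed by one lap splice."""
    return lap_weight_kg * ECO2_PER_KG_REBAR

# D10 and D32 lap weights as quoted in the text above.
for dia, lap_kg in {"D10": 0.166, "D32": 14.237}.items():
    print(f"{dia}: {lap_splice_eco2(lap_kg):.3f} kg-ECO2 per splice")
```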

However, when comparing the costs of lap splices and mechanical couplers, couplers are more expensive up to D25, as shown in Table 12, but cheaper for rebars above D29. For reference, splice cost is calculated by multiplying the rebar cost per ton by the splice weight, and the rebar cost includes material, shop-drawing work, cutting and bending, and installation costs. RC structures use rebars of various diameters. As shown in Table 11, mechanical couplers are more advantageous for ECO2 reduction than lap splices for all rebar sizes; however, as shown in Table 12, couplers for D10 to D25 are more expensive than lap splices. The results in Tables 11 and 12 may differ by country, because mechanical couplers vary in type and shape and rebar work costs differ. In Korea, the use of a mechanical coupler for rebars of D25 or larger in diameter has little cost-reduction effect, but the ECO2-reduction effect has been confirmed to be remarkable. In the U.S., despite the fact that mechanical butt splices provide a variety of benefits, their cost remains higher than that of lap splices [152]. If a carbon tax is applied, a cost benefit is generated in proportion to the ECO2 reduction shown in Table 11, and related research should be added.


**Table 12.** Cost comparison between splice lap and coupler.

Exchange rate: 1120 Won/USD as of 25 February 2021, Bank of Korea. (Source: authors' research results).

## **4. Conclusions**

During this review research, several facts were identified in addition to the findings of the meta-analysis described in Section 3. Various CWM algorithms have been developed and have contributed to reducing cutting waste. However, it was confirmed that many algorithms have two principal problems when applied in the field. First, although some algorithms can theoretically implement CWM, they are difficult to apply in practice when various site conditions are reflected. Second, some algorithms can reduce the cutting waste of some major RC components, such as columns, beams, and slabs, but cannot reduce the cutting waste of the entire structure of a project to near-zero. The findings identified after the literature review are described as follows:


**Author Contributions:** Conceptualization, K.K. and S.K.; methodology, S.K.; validation, K.K., D.K., and S.K.; formal analysis, K.K. and S.K.; investigation, K.K. and D.K.; resources, K.K. and D.K.; data curation, K.K. and D.K.; writing—original draft preparation, K.K. and S.K.; writing—review and editing, K.K. and S.K.; supervision, S.K.; project administration, S.K.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MOE) (No. 2017R1D1A1B04033761).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Data sharing is not applicable to this article.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **Abbreviations**


## **References**


*Technical Note*

**Inter-Floor Noise Monitoring System for Multi-Dwelling Houses Using Smartphones**

**Suhyun Kang 1, Seungho Kim 2, Dongeun Lee 3 and Sangyong Kim 1,\***


Received: 12 May 2020; Accepted: 19 June 2020; Published: 22 June 2020

**Abstract:** Noise between floors in apartment buildings is becoming a social problem, and the number of disputes related to it is increasing every year. However, laypersons find it difficult to use sound level meters because they are expensive, delicate, and bulky. Therefore, this study proposes a system for monitoring inter-floor noise that measures the sound and estimates the location of the noise source using the sensors and applications of smartphones. To evaluate how effectively this system can be used within an apartment building, a case study was performed to verify its validity. The results show that the mean absolute error (MAE) between the actual noise generation position and the estimated noise source location was 2.8 m, with a minimum error of 1.2 m and a maximum error of 4.3 m. This means that smartphones can in the future be used as low-cost monitoring and evaluation devices to measure inter-floor noise in apartment buildings.

**Keywords:** inter-floor noise; multi-dwelling houses; smartphone application; real-time monitoring system

## **1. Introduction**

## *1.1. Background*

Population concentration due to urbanization has led to housing shortages, and many cities have opted for the construction of multi-dwelling houses, which can be supplied in large quantities at a relatively low cost, as a solution [1]. In multi-dwelling houses, however, residents are easily exposed to the noise of neighbors, as walls and slabs are shared with other households. Continuous exposure of the residents of multi-dwelling houses to external noise may cause physical and mental health problems, such as high blood pressure, annoyance, and sleep disorders [2–4]. Inter-floor noise has also caused discord amongst neighbors, including an elevated number of disputes, assaults, and even arson [5–7].

To address disputes related to inter-floor noise, it is essential to secure objective noise data. Sound level meters are generally used to obtain such data. It is difficult, however, for non-experts to use sound level meters, because they are expensive, delicate, and bulky [8]. The recent technical development of smartphones has opened up the possibility of using them as substitutes for sound level meters [9–11].

Smartphones are powerful mini-computers with various sensors (e.g., microphones, accelerometers, gyroscopes, and GPS) and are owned by the majority of the population. They can be used as low-cost noise monitoring tools with available broadband internet access [12].

A number of studies have been conducted lately to examine the accuracy of smartphone noise measurement applications (apps). Murphy and King [11] tested the accuracy of several noise measurement apps on two platforms (Android and iOS) using 100 smartphones. The test results showed that one of the apps was very accurate, measuring noise levels with errors of less than ±1 dB from the actual sound levels in the reference value range. Their study indicated that noise measurement apps have the potential to be used as sound level meters in the future. Zamora et al. [13] proposed environmental noise-sensing units using smartphones. According to their experimental results, if the smartphone application is well tuned, it is possible to measure noise levels with an accuracy comparable to professional devices over the entire dynamic range typically supported by microphones embedded in smartphones. Garg et al. [8] proposed an averaging method for accurately calibrating the noise acquired through a smartphone microphone; this method achieves an accuracy of 0.7 dB.

Smartphones also provide an inexpensive and flexible infrastructure for the measurement of overall environmental noise (e.g., noise and air pollution) in cities. Various related studies have shown that smartphone apps are useful for environmental monitoring evaluation [14–17]. Although the aforementioned studies verified the accuracy of smartphone noise measurement apps and their potential as environmental monitoring tools, studies on the possibility of using smartphones to address the inter-floor noise problem are not sufficient.

The problem to be solved in relation to inter-floor noise is to identify the types and locations of noise sources [18]. This is important, since some disputes have resulted from listeners misunderstanding the noise sources [18]. Most studies on inter-floor noise, however, focus on noise measurement [3,19], noise reduction measures [20,21], and annoyance measurement [22,23]. If smartphones can objectively and reliably identify noise source locations and noise types in real time, they can contribute to dispute mediation.

## *1.2. Motivation and Objective*

Inter-floor noise is transmitted to neighboring households in multi-dwelling houses, and unpleasant sounds disturb other residents. In South Korea, where most people live in multi-dwelling houses, 88% of apartment residents are under stress due to inter-floor noise [24]. Most Korean apartments have been constructed in the wall column structure style since the 1980s, for reasons of constructability, economic efficiency, and a reduced construction period. In apartments with the wall column structure, all four sides of each unit are made of concrete, which has a large vibration transfer coefficient. Thus, the airborne sound generated on the upper floor and the vibration generated at the bottom of the upper floor are easily transferred to the lower floor [25].

In particular, wall column structure apartments built before 2005 in Korea generally used a concrete slab thickness of 135 mm to 150 mm, but in recent years, with the emergence of frequent inter-floor noise problems, a new regulation was established requiring a slab thickness of at least 210 mm [3]. Despite the legal regulations on slab thickness, the number of complaints related to inter-floor noise increased from 8795 in 2012 to 28,231 in 2018 (Figure 1).

This phenomenon appears to have occurred because there was no noise mitigation solution for the existing apartments built before 2005, when the regulations on slab thickness were enacted. The regulations can be applied only to newly built apartments, because improved construction methods, such as reinforced wall and floor slab thicknesses and the application of floating floors, are not available for existing apartments. However, the number of complaints related to inter-floor noise has also increased in new apartments built under the new regulations. The study by Park, Lee and Lee [3] verified that slab thickness did not have any effect in lowering the indoor noise level.

**Figure 1.** Trend of inter-floor noise complaints (Korea Environment Corporation).

The increase in inter-floor noise complaints has led to conflicts and disputes among neighbors [26]. Emotional reactions to noise problems have even led to a number of retaliatory crimes between neighbors, such as arson and murder [27]. As the conflicts caused by inter-floor noise expanded into a social problem, the South Korean government established a 'center for inter-floor noise mitigation between neighbors' in 2012 to oversee disputes related to inter-floor noise. The center, however, has no legal authority to conduct on-site investigations for objective noise measurement and shows limitations in solving the inter-floor noise problem due to a lack of manpower. The inter-floor noise problem thus remains unsolved, and more effective measures are required to resolve the occurring disputes.

As noise is judged from a subjective perspective due to its environmental nature, conflicts due to differences in opinion cannot be avoided. To resolve such conflicts, it is necessary to prove that a noise level exceeding the inter-floor noise criterion occurred, and to establish its duration and the degree of damage caused. Therefore, this study proposes an inter-floor noise monitoring system that measures inter-floor noise and estimates the time and location of the noise by utilizing the sensors and mobile applications of widely available smartphones. The proposed system records various data related to inter-floor noise and is expected to serve as an important tool for resolving related disputes in the future.

## **2. Research Method**

In this study, a system to monitor inter-floor noise using smartphones is proposed. To verify the validity of the system, apartment B, completed in 1996 and located in Gyeongsan City, Gyeongsangbuk-do, South Korea, was selected as a case study site. For inter-floor noise monitoring, an inter-floor noise monitoring application was developed using sensors built into smartphones. To this end, the functions of such sensors were identified and used to achieve the target functions for the inter-floor noise monitoring system.

Table 1 shows the smartphone sensors and their functions that were used in this study to implement the developed application. The microphone was used to obtain the sound pressure level (SPL). The accelerometer and gyroscope were used to measure the vibration acceleration level (VAL) created by a heavy impact on part of a building. Moreover, GPS was used to locate the smartphone and to record the timing of the occurring noise. Wi-Fi was used to transfer the obtained inter-floor noise information to a server.



The developed inter-floor noise monitoring application requires a certain sound level as a baseline for determining inter-floor noise. In this study, the legal criteria existing for the case study site (i.e., South Korea) were applied. Inter-floor noise is largely divided into floor impact noise (e.g., running and walking sounds), which is generated when energy is applied directly to the floor, and airborne sound (e.g., conversation and musical instrument sounds). Therefore, when a floor impact occurs, inter-floor noise must be determined by measuring the SPL on the lower floor and the vibration acceleration level generated by construction components (e.g., ceilings, walls, and windows). Table 2 shows the criteria for each type of inter-floor noise, as specified by the Ministry of Environment and the Ministry of Land, Infrastructure and Transport of South Korea.


**Table 2.** Criteria of noises between floors (Korea Ministry of Government Legislation).

In the case of floor impact noises, inter-floor noise is determined when 'LAeq 1 min' exceeds 43 dB in the daytime and 38 dB at night, or when 'LAmax' exceeds 57 dB in the daytime and 52 dB at night. LAeq 1 min corresponds to the average value of noise measured for one minute, using a sound level meter. LAmax denotes noise with the highest dB value among the noises generated during the measurement period. In the case of airborne sounds, inter-floor noise is determined when 'LAeq 5 min' exceeds 45 dB in the daytime and 40 dB at night. The length of airborne noise detection was extended to five minutes, to reflect the long-lasting characteristics of television noise or musical instrument sounds. Therefore, in this study, inter-floor noise was determined by applying the above-mentioned criteria to the smartphone application.
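To make the decision rule concrete, the following Python sketch encodes the Table 2 criteria exactly as quoted above. It is a minimal illustration, not the authors' code: the day/night boundary hours are not specified in the text, and all names are our own.

```python
# A minimal sketch (not the authors' code) of the Table 2 criteria as quoted
# above. The day/night boundary hours are not specified in the text, and all
# names are illustrative.
DAY, NIGHT = "day", "night"

# Floor impact noise: (LAeq 1 min limit, LAmax limit) per period
FLOOR_IMPACT_LIMITS = {DAY: (43.0, 57.0), NIGHT: (38.0, 52.0)}
# Airborne sound: LAeq 5 min limit per period
AIRBORNE_LIMITS = {DAY: 45.0, NIGHT: 40.0}

def is_floor_impact_noise(laeq_1min: float, lamax: float, period: str) -> bool:
    """Inter-floor noise if LAeq(1 min) or LAmax exceeds its limit."""
    laeq_limit, lamax_limit = FLOOR_IMPACT_LIMITS[period]
    return laeq_1min > laeq_limit or lamax > lamax_limit

def is_airborne_noise(laeq_5min: float, period: str) -> bool:
    """Inter-floor noise if LAeq(5 min) exceeds its limit."""
    return laeq_5min > AIRBORNE_LIMITS[period]

# A 44 dB one-minute average in the daytime qualifies as floor impact noise;
# a 39 dB five-minute average at night stays below the 40 dB airborne limit.
assert is_floor_impact_noise(44.0, 50.0, DAY)
assert not is_airborne_noise(39.0, NIGHT)
```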

## **3. Construction of the Monitoring System for Measurement of Inter-Floor Noise and Estimation of Noise Source Location**

## *3.1. System Design*

Figure 2 shows the configuration of the proposed monitoring system for the measurement of inter-floor noise levels and the estimation of noise source locations. In general, the system comprises four steps. In the first step (the inter-floor noise sensing step), noise and vibration data are obtained at the place where data acquisition is required. Data are collected using the microphone, gyroscope, and accelerometer embedded in a smartphone. The decibel value and vibration velocity (i.e., the noise data) are acquired every second, and the surrounding noise is recorded every minute. The acquired noise and vibration data are then transferred to a web server through Wi-Fi wireless communication in the second step (the inter-floor noise data transfer step). In this case, the transferred data consist of the ID and location of the measuring device, the noise acquisition time, the decibel level (dB), and the vibration velocity (m/s²). The web server stores the transferred data in a database in real time.
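A hedged sketch of the transfer step is shown below: each reading is packaged and sent to the web server over Wi-Fi. The endpoint URL, the JSON field names, and the use of HTTP are assumptions for illustration; the paper specifies only the transferred items (device ID and location, acquisition time, decibel value, and vibration velocity).

```python
# Hedged sketch of step 2 (data transfer). The endpoint URL and JSON field
# names are assumptions; the paper only lists the transferred items.
import json
import time
import urllib.request

SERVER_URL = "http://192.168.0.10:8080/noise/upload"  # hypothetical endpoint

def send_reading(device_id: str, location: str, db_value: float,
                 vibration_velocity: float) -> None:
    payload = {
        "deviceId": device_id,            # ID of the measuring device
        "location": location,             # installation position of the device
        "acquiredAt": time.strftime("%Y-%m-%d %H:%M:%S"),  # acquisition time
        "decibel": db_value,              # sound pressure level (dB)
        "vibration": vibration_velocity,  # vibration velocity (m/s^2)
    }
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the server stores the record in the database

# send_reading("T1", "Room 1 ceiling", 62.3, 0.004)  # requires a running server
```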

**Figure 2.** System architecture.

Figure 3 shows the schema of the tables stored in the database. The database consists of a number of tables: NoiseHistory, DeviceList, and RecordList, which contain the noise data, information on the noise measuring devices, and the recorded files, respectively. In the NoiseHistory table, the ID of the device that transferred the data, the acquisition time, the decibel values, and the vibration velocity are stored. When the decibel value is higher than the threshold, "1" is recorded in the noise field. In this instance, noise is determined using the criteria displayed in Table 2. Information on the ID and location of each device is stored in the DeviceList table, and information on the files recorded by each device is stored in the RecordList table. In the third step, the developed application estimates the location of the noise source based on the records stored in the database. The application stores the noise data values in real time, converts them into decibel values, and determines the noise location using the estimation algorithm. In the final step, the acquired inter-floor noise information is visualized on the user's smartphone screen.
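As a sketch of how the Figure 3 schema could be declared, the following Python/SQLite snippet creates the three tables; column names and types beyond the items described in the text are assumptions (the paper uses MySQL, but the table structure carries over).

```python
# A minimal sketch of the Figure 3 schema using SQLite. Column names beyond
# the items described in the text (device ID, location, acquisition time,
# decibel, vibration, noise flag, recorded files) are assumptions.
import sqlite3

conn = sqlite3.connect("inter_floor_noise.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS DeviceList (
    device_id TEXT PRIMARY KEY,
    location  TEXT NOT NULL              -- installation position of the device
);
CREATE TABLE IF NOT EXISTS NoiseHistory (
    device_id   TEXT REFERENCES DeviceList(device_id),
    acquired_at TEXT NOT NULL,           -- acquisition time
    decibel     REAL NOT NULL,           -- measured sound pressure level (dB)
    vibration   REAL,                    -- vibration velocity
    noise       INTEGER DEFAULT 0        -- "1" when a Table 2 threshold is exceeded
);
CREATE TABLE IF NOT EXISTS RecordList (
    device_id   TEXT REFERENCES DeviceList(device_id),
    recorded_at TEXT NOT NULL,
    file_path   TEXT NOT NULL            -- one-minute recording of surrounding noise
);
""")
conn.commit()
```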


**Figure 3.** Database schema.

Figure 4 shows the application execution screen. The information shown in the application includes the timing of the occurring noise, the noise measurements at that time, the estimated noise location, and the noise type. The location at which the noise occurred is displayed on the floor plan of the measurement site, at the bottom of the application screen. The noise type (i.e., floor impact or airborne noise) is determined using the recorded vibration values: the noise is classified as floor impact noise if vibration information accompanies it, and as airborne noise if no vibration information is available.

## *3.2. Noise Source Location Estimation Method Used in This Study*

Previous studies on sound source location estimation have been conducted using specialized equipment, such as microphone arrays, and were limited to controlled experimental environments [28,29]. The proposed system, however, uses only smartphones, thereby providing a method by which many people can easily estimate noise source locations. In this study, an attempt was made to estimate noise source locations using differences in sound intensity. With this method, the hardware configuration and operation are very simple, even though calculating the exact distance to the sound source is difficult. The purpose of this study is not to find the exact location of the noise, but rather to estimate the approximate area in which the noise source occurred.

Due to the nature of sound, a lower decibel value is measured as the distance from the source increases. Based on this phenomenon, a method of estimating noise sources using the proportions of the decibel values measured by four smartphones is described. As shown in Figure 5a, it is assumed that the noise measurement devices (*T* = {*T*1, *T*2, *T*3, *T*4}) are placed in the form of a grid in two-dimensional coordinates. Each device has a decibel value (dB) and coordinate information (*x*, *y*). In this study, among the noise measuring devices (*T*), three devices (*S*1, *S*2, *S*3) are selected according to their decibel levels to locate the noise source. As shown by Equation (1), the device with the largest decibel value (dB) among the devices (*T*) is designated as *S*1.

$$S_1 = \mathrm{Max.db}(T) \tag{1}$$

For example, when a noise or vibration takes place and the highest decibel value is observed at *T*1 among the devices (*T*), the device *T*1 is set as *S*1. Subsequently, as shown by Equation (2), the device in *T* located on the same horizontal line as *S*1 is selected as *S*2.

$$S_2 = \begin{cases} T_2 & \text{if } S_1{\cdot}y = T_2{\cdot}y \\ T_3 & \text{otherwise} \end{cases} \tag{2}$$

Here, *S*2 is a device that has the same *y*-coordinate value as, but a different *x*-coordinate value to, *S*1. Lastly, as expressed by Equation (3), the device having the largest decibel value among the devices other than those designated as *S*1 and *S*2 is selected as *S*3.

$$S_3 = \mathrm{Max.db}(\{\, t \in T \mid t \neq S_1 \wedge t \neq S_2 \,\}) \tag{3}$$

When it is assumed that *T*1·db = 80, *T*2·db = 40, and *T*3·db = 60, the placement of *S*1, *S*2, and *S*3 can be expressed as shown in Figure 5b. In this case, the approximate values of *X* and *Y* that serve as the estimated location coordinates of the noise source are obtained using Equations (4) and (5).

$$X = \begin{cases} \dfrac{S_1{\cdot}db}{S_1{\cdot}db + S_2{\cdot}db} \cdot \mathit{width} & \text{if } S_1{\cdot}x > S_2{\cdot}x \\[6pt] \dfrac{S_2{\cdot}db}{S_1{\cdot}db + S_2{\cdot}db} \cdot \mathit{width} & \text{otherwise} \end{cases} \tag{4}$$

$$Y = \begin{cases} \dfrac{S_1{\cdot}db}{S_1{\cdot}db + S_3{\cdot}db} \cdot \mathit{height} & \text{if } S_1{\cdot}y > S_3{\cdot}y \\[6pt] \dfrac{S_3{\cdot}db}{S_1{\cdot}db + S_3{\cdot}db} \cdot \mathit{height} & \text{otherwise} \end{cases} \tag{5}$$

*Width* denotes the distance between *S*1 and *S*2, and *height* is calculated as the distance between *S*1 and *S*3. Figure 5 shows the noise source locations estimated using Equations (4) and (5).
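To make the selection and interpolation steps concrete, the following Python sketch implements Equations (1)–(5) for a 2×2 grid of devices. It is a minimal illustration, not the authors' implementation: the coordinate origin, the fourth device's decibel value, and the use of coordinate offsets for *width* and *height* are assumptions.

```python
# A sketch of Equations (1)-(5) for a 2x2 smartphone grid. Device records and
# tie-breaking details are assumptions where the paper leaves them open
# (e.g., height is taken as the vertical offset between S1 and S3).
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    x: float   # grid coordinate (m)
    y: float   # grid coordinate (m)
    db: float  # measured sound pressure level (dB)

def estimate_source(devices: list[Device]) -> tuple[float, float]:
    # Eq. (1): S1 is the device with the largest decibel value.
    s1 = max(devices, key=lambda d: d.db)
    # Eq. (2): S2 lies on the same horizontal line as S1 (same y, different x).
    s2 = next(d for d in devices if d.y == s1.y and d.x != s1.x)
    # Eq. (3): S3 is the loudest of the remaining devices.
    s3 = max((d for d in devices if d not in (s1, s2)), key=lambda d: d.db)

    width = abs(s1.x - s2.x)
    height = abs(s1.y - s3.y)

    # Eq. (4): interpolate X toward the louder of S1/S2.
    if s1.x > s2.x:
        x = s1.db / (s1.db + s2.db) * width
    else:
        x = s2.db / (s1.db + s2.db) * width
    # Eq. (5): interpolate Y toward the louder of S1/S3.
    if s1.y > s3.y:
        y = s1.db / (s1.db + s3.db) * height
    else:
        y = s3.db / (s1.db + s3.db) * height
    return x, y

# Example values from the text (T1.db = 80, T2.db = 40, T3.db = 60);
# T4's decibel value is an assumption for the sake of a complete grid.
grid = [Device("T1", 0, 3, 80), Device("T2", 4, 3, 40),
        Device("T3", 0, 0, 60), Device("T4", 4, 0, 30)]
print(estimate_source(grid))  # estimate falls in the quadrant nearest T1
```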

**Figure 4.** Application execution screen.

**Figure 5.** Methods for estimating the noise source location.

## **4. Experiment and Performance Evaluation**

## *4.1. Experiment Overview*

In this study, inter-floor noise data were acquired using four smartphones to estimate the noise source locations, and one smartphone was used to display the inter-floor noise data in real time for the user. Thus, a total of five smartphones were used in the experiment.

### 4.1.1. Software/Hardware Configuration

Table 3 shows the software components used in the experiment. In this study, the JSP programming language was used with Apache Tomcat (a web application server, WAS) on the Windows 10 Pro operating system for system development. The database was managed by linking Apache Tomcat with MySQL. Android 5.0 was used as the operating system of the smartphones.


**Table 3.** Software component.

Table 4 shows the hardware components used in the experiment. As the noise source locations were estimated using differences in the sound intensity acquired from four measuring devices, a single smartphone model was used to keep the measurement conditions identical. The hardware was easy to obtain, and devices with the sensors required for system implementation were selected.

**Table 4.** Hardware component.


### 4.1.2. Experimental Environment and Method

To evaluate the performance of the proposed system and its applicability to measuring inter-floor noise and tracking noise source locations, the experiment was performed in an apartment, the representative residential type among multi-dwelling houses. Table 5 shows an overview of the experiment site.


**Table 5.** Profile of the experiment place.

The floor of the experiment site consisted of a reinforced concrete slab (180 mm), insulating materials (20 mm), lightweight concrete (40 mm), cement mortar (40 mm), and floor finishing materials (Figure 6). To collect noise and vibration data, smartphones were installed on the ceiling of each room (Figure 7). The exact installation locations can be found on the floor plan (Figure 4). The smartphone located at the bottom left corner was then designated as the origin, and the scales were marked at 24.2 cm intervals in the horizontal direction and at 23 cm intervals in the vertical direction.

As for the noise generation type, real impact sources (e.g., human footsteps and dropped objects) were used rather than standard impact sources (i.e., impact balls), to create an environment similar to real inter-floor noise. At certain points above the ceiling, random noises (e.g., impacts of >70 dB, human voices, and musical instrument sounds) were generated for over 20 s at a time.

The experiment was repeated 100 times, whilst the noise occurrence locations were randomly changed, and the actual noise occurrence locations were then compared to the estimated locations displayed in the application.

## *4.2. Experimental Evaluation Method and Results*

To evaluate the performance of the system, the errors between the actual noise occurrence locations and the estimated noise source locations were obtained using the mean absolute error (*MAE*). *MAE* was calculated using Equation (6).

$$\mathit{MAE} = \frac{1}{n} \sum_{i=1}^{n} \mathit{distance}(rPoint_i,\ ePoint_i) \tag{6}$$

where *rPointi* is the epicenter of the *i*-th actual noise and *ePointi* is the estimated location of the *i*-th noise. Figure 8 shows the *distance* function used to obtain the absolute error between the actual noise epicenter and the estimated location.
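A minimal sketch of Equation (6) follows; the *distance* function of Figure 8 is assumed here to be the Euclidean distance on the floor plan, and the sample coordinates are hypothetical.

```python
# A small sketch of Equation (6): MAE over the distances between actual noise
# epicenters (rPoint) and estimated locations (ePoint). Euclidean distance is
# an assumption; Figure 8 defines the paper's distance function.
import math

def distance(p: tuple[float, float], q: tuple[float, float]) -> float:
    return math.dist(p, q)  # straight-line distance on the floor plan

def mean_absolute_error(r_points, e_points) -> float:
    return sum(distance(r, e) for r, e in zip(r_points, e_points)) / len(r_points)

# Hypothetical three trials (coordinates in metres)
actual = [(1.0, 2.0), (3.5, 0.5), (2.0, 4.0)]
estimated = [(2.2, 3.0), (4.0, 2.5), (1.0, 1.5)]
print(f"MAE = {mean_absolute_error(actual, estimated):.2f} m")
```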

Table 6 shows the experiment results. The calculated mean absolute error (MAE) was 2.8 m, while the minimum and maximum errors were 1.2 and 4.3 m, respectively.

**Figure 6.** Cross section of the slab.

**Figure 7.** A smartphone installed on the ceiling of the multi-dwelling house.

**Figure 8.** Distance function.

Exact noise source locations could not be identified from the calculated values, but they were sufficient to distinguish among the noise occurrence areas (Room 1, Room 2, Room 3, or Room 4) of the study site. Therefore, the proposed system performed the following four target functions using the smartphone sensors and the developed application: (1) it displayed the degree of inter-floor noise (dB) and recorded its values in the application using the smartphone microphones; (2) it detected vibration using the accelerometers and gyroscopes and classified the type of inter-floor noise (e.g., floor impact noise or airborne noise); (3) it estimated the noise source locations using differences in the sound intensity and visualized the locations on the apartment floor plan; and (4) it provided reports of inter-floor noise on an hourly, daily, and monthly basis. Such reports are generated from the information stored in the database, so that the recorded data can be accessed if a dispute occurs.


**Table 6.** Measured differences (m) between the actual and estimated noise sources.

## **5. Conclusions**

This study proposed a system capable of monitoring inter-floor noise in real time using smartphone sensors and a developed application. The designed noise monitoring system makes it possible to record the timing of the noise and its type (i.e., floor impact noise or airborne noise), acquire the exact noise values (e.g., LAeq 1 min, LAmax, and LAeq 5 min), estimate the location where the noise took place, and keep records of noise files using the smartphone application.

An experiment was performed to evaluate the performance of the system and its applicability to multi-dwelling houses. The experiment results showed that the mean absolute error (MAE) was 2.8 m, and the minimum and maximum errors were 1.2 and 4.3 m, respectively. Although the exact locations of the noise sources could not be identified with these values, it was possible to establish the noise occurrence area by room on the apartment floor plan. Therefore, it is concluded that the tested system can easily acquire objective noise data without the help of agencies specializing in inter-floor noise measurement. It is also expected that this system can reduce unnecessary misunderstandings among neighboring residents by estimating the types and locations of inter-floor noise. Accordingly, in the case of an inter-floor noise dispute, the inter-floor noise data stored in the database can be accessed through the application.

While the recent increase in the number of discarded smartphones has caused problems such as the waste of resources and the pollution of soil by heavy metals, recycling discarded smartphones using the results of this study is expected to contribute to solving such social problems. However, given that the proposed system does not have any noise-data filtering feature, there is a possibility of violating the privacy of others. Therefore, a number of criteria are yet to be met for future application usage: (1) a method of estimating exact noise locations using smartphone sensors has to be developed, (2) a calibration method to ensure the accuracy of sound measurement should be applied, and (3) the privacy of neighbors and personal data collection should be sufficiently protected.

**Author Contributions:** Conceptualization, S.K. (Suhyun Kang), S.K. (Sangyong Kim), S.K. (Seungho Kim), and D.L.; data curation, D.L.; formal analysis and investigation, S.K. (Suhyun Kang), S.K. (Sangyong Kim); methodology, S.K. (Suhyun Kang), S.K. (Sangyong Kim), S.K. (Seungho Kim); resources, D.L.; software, S.K. (Suhyun Kang) and S.K. (Seungho Kim); supervision, S.K. (Sangyong Kim) and D.L.; validation, S.K. (Suhyun Kang), S.K. (Sangyong Kim), S.K. (Seungho Kim), and D.L.; visualization, S.K. (Suhyun Kang); writing—original draft, S.K. (Suhyun Kang), S.K. (Seungho Kim); writing—review and editing, S.K. (Sangyong Kim) and D.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2018R1A5A1025137).

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
