Article

A Stacking Heterogeneous Ensemble Learning Method for the Prediction of Building Construction Project Costs

1 Department of Architectural Engineering, Andong National University, Andong 36729, Korea
2 Department of Architectural Engineering, GyeongSang National University, Jinju 52828, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9729; https://doi.org/10.3390/app12199729
Submission received: 30 August 2022 / Revised: 20 September 2022 / Accepted: 21 September 2022 / Published: 27 September 2022
(This article belongs to the Special Issue Advances in BIM-Based Architectural Design and System)

Abstract

The accurate cost estimation of a construction project in its early stage plays a very important role in completing the project successfully. In the initial stage of construction, when the information needed to predict the construction cost is insufficient, a machine learning (ML) model trained on past data can be an alternative. We propose a two-level stacking heterogeneous ensemble algorithm that combines random forest (RF), support vector machine (SVM) and CatBoost. In the step of training the base learners, their optimal hyperparameter values were determined using Bayesian optimization with cross-validation. Cost information data disclosed by the Public Procurement Service in South Korea are used to evaluate the ML algorithms and the proposed stacking-based ensemble model. According to the analysis results, the two-level stacking ensemble model showed better performance than the individual ensemble models.

1. Introduction

The accurate cost estimation of a construction project plays a very important role in successfully completing the project. Any construction cost estimation is developed based on specific parameters related to construction cost. However, in the early stage, it is not easy to accurately estimate the construction cost because the information necessary to predict the construction cost has not yet been determined. Therefore, research on the prediction of construction cost in the early stage has attracted the attention of many researchers [1,2].
Elfaki et al. [1] studied 92 papers published from 1985 to 2020 that deal with construction cost estimation. Most of the proposed estimation techniques aim to provide a construction cost prediction model that can be used in the pre-bidding stage and are intended to support managers' decision making. These models use information from similar past projects to estimate construction costs. The most popular machine learning (ML) techniques used in the reviewed papers are artificial neural networks (ANN) and regression analysis (RA). In construction management, ANN and the support vector machine (SVM) are the most common ML techniques [1].
These ML techniques, such as ANN and SVM, use a historical database to predict the construction cost. Most of these papers used only a single model or a hybrid model that improves a single model. Each model has its own advantages and disadvantages. For example, the multi-layer perceptron (MLP) handles noisy data well but is prone to overfitting and to becoming trapped in local minima, whereas SVM selects the best hyperplane and is less prone to overfitting [3]. Therefore, it may be possible to produce better results by combining several models rather than using one model. Ensemble learning, which typically generates several models that are combined to make a prediction, has recently attracted more attention in the field of ML. Ensemble methods generally increase accuracy and robustness compared to a single model [4,5,6].
The conventional ensemble methods include bagging-, boosting- and stacking-based methods [5]. Recently, the ensemble technique has been applied to various construction-related fields, including real estate appraisal [7], housing price prediction [8], energy consumption forecasting [9,10,11], energy performance prediction [12], the prediction of high-performance concrete compressive strength [13], etc.
Stacking [14], also called meta ensembling or stacked generalization, is another well-known method used to increase diversity and generalization. Stacking combines individual models as base learners and trains a meta learner on the outputs of the base learners. Although mostly applied to classification problems, results comparing the performance of many ML models [15] show that different stacking approaches give relatively better performance. In the context of ensemble learning for regression, it has been proved that, if the stacking model is properly combined, the squared error of the prediction can be smaller than the average squared error of the base learners [5]. In terms of performance on network intrusion detection systems, stacking is the only method that was able to reduce the false positive rate by a relatively large amount [16].
Although ML ensemble algorithms are being actively introduced in many fields, there is a lack of applications of ML ensemble algorithms in the construction estimation domain. Chakraborty et al. [17] suggested a hybrid boosting-based model—natural gradient boosting and light gradient boosting—to improve the accuracy and reliability of construction cost estimates during the value engineering (VE) phase. Meharie et al. [18] applied a stacking ensemble ML algorithm to predict the cost of highway construction projects. However, an estimation model that determines the future cost of early-stage building construction projects using stacking-based ensemble machine learning has not yet been suggested.
Recently, with the development of information technology, a lot of information on the cost of public building construction has been accumulated. The Public Procurement Service in South Korea publicly discloses this information. This information provides a good opportunity to create and test the ML algorithm.
The objective of this study is to present an ML algorithm that uses this open information to estimate the cost of a building construction project at an early stage of the project. We suggest a two-level ML method, a stacking heterogeneous ensemble algorithm that combines random forest (RF), SVM and CatBoost.
This paper is structured as follows. In Section 2, we describe the related work of ensemble machine learning and hyperparameter tuning with Bayesian optimization. In Section 3, we suggest the details of the proposed stacking ensemble method. In Section 4, the experimental results used to assess the performance of the proposed method are presented, and the results are discussed. Section 5 draws the conclusion of this paper.

2. Literature Review

2.1. Ensemble Learning

Ensemble learning is defined as a technique of combining several weak models instead of using a single powerful model to help make accurate predictions. These weak models (ensemble) are integrated in some way to obtain the final prediction [4]. The outlines of the algorithms used in this study are briefly described as follows.

2.1.1. Bagging

Bagging, also known as bootstrap aggregating [19], is a method of randomly creating samples of data out of a single dataset with replacement. The bagging method provides a different random subset of the training data to each learner, securing diversity in terms of data. Bagging is suitable for models with high model complexity, such as decision tree (DT) algorithms. A DT predicts by iteratively dividing the space of input variables into non-overlapping multi-dimensional rectangles, so the result resembles the shape of a tree. A DT is simple and easy to apply but may have a lower predictive performance than more complex models. This problem can be improved with a bagging decision tree (BDT) model.
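As a minimal sketch of a bagged decision tree regressor, assuming a scikit-learn workflow like the one used later in this paper; the synthetic dataset and hyperparameter values are illustrative placeholders, not the settings used in this study.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the project dataset (6 features -> cost).
X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Bagging: each tree is fitted on a bootstrap sample drawn with replacement,
# and the trees' predictions are averaged.
bdt = BaggingRegressor(
    DecisionTreeRegressor(max_depth=6),  # individual trees are kept simple
    n_estimators=100,                    # number of bootstrap samples / trees
    bootstrap=True,
    random_state=0,
)
bdt.fit(X_train, y_train)
print("test R^2:", bdt.score(X_test, y_test))
```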
AdaBoost [20], an abbreviation of Adaptive Boosting, is a classification-oriented machine learning model. After setting the initial model as a weak learner, AdaBoost sequentially compensates for the weaknesses of the previous model by giving larger weights to the data that the previous model did not predict well. Large errors made by earlier models can be compensated for by subsequent models. Finally, a strong learner is generated by linearly combining these weak learners.
Random forest (RF) [21] is a specialized form of bagging for decision tree algorithms that uses two methods to increase the diversity of the ensemble: bagging and randomly selected predictor variables. In terms of the dataset, sampling with replacement is introduced, and in terms of variables, a random subset of the p predictor variables is considered at each split. In other words, RF introduces diversity not only in terms of data but also in terms of variables.
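These two sources of diversity map directly onto scikit-learn's interface: bootstrap controls sampling with replacement and max_features controls how many predictors are considered at each split. The sketch below is illustrative only; the values are not those of Table 2.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)

# Diversity in data: bootstrap sampling.
# Diversity in variables: a random subset of predictors per split.
rf = RandomForestRegressor(
    n_estimators=500,
    max_features=2,   # randomly chosen predictors per split (out of p = 6)
    bootstrap=True,   # sampling with replacement
    random_state=0,
)
rf.fit(X, y)
```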
In general, the prediction error of a model can be considered to include bias and variance. Creating a model that accurately predicts the construction cost is ultimately a matter of finding a model that minimizes both. Bias indicates how well the model captures the underlying relationship between features and target outputs, whereas variance indicates how much the predictions fluctuate across different training sets [22]. Bagging tends to reduce variance more than bias and does not work well with a relatively simple model.

2.1.2. Boosting

Boosting is a method of refitting to the part of the target that the previous model did not fit; in the case of AdaBoost, this is done by adjusting the weights used for data selection. The boosting method uses some measure to ensure that each new learner is substantially different from the other members. A typical gradient boosting machine (GBM) [23] fits an additive model (ensemble) in a forward stage-wise manner. In each stage, a weak learner is introduced to compensate for the shortcomings of the existing weak learners; in GBM, these "shortcomings" are identified by gradients. Both high-weight data points and gradients tell us how to improve the model, and gradient boosting iteratively builds a sequence of approximations. Boosting aims to increase the accuracy of the model by sequentially combining weak learners, but it is sensitive to noisy data and outliers and is susceptible to overfitting because it focuses on data that previous models did not predict well. Common approaches to reducing variance are cross-validation and bagging (bootstrap aggregated ensembles), whereas reducing bias is commonly carried out with boosting [22].
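The stage-wise behaviour can be illustrated with scikit-learn's GradientBoostingRegressor, whose staged_predict method exposes the ensemble after each boosting stage; the data and hyperparameters below are illustrative assumptions, not those used in this study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)

# Each stage adds a small tree fitted to the gradient of the loss,
# so the training error typically decreases as stages accumulate.
gbm = GradientBoostingRegressor(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0
)
gbm.fit(X, y)

for i, y_stage in enumerate(gbm.staged_predict(X), start=1):
    if i % 50 == 0:
        rmse = np.sqrt(mean_squared_error(y, y_stage))
        print(f"stage {i}: train RMSE = {rmse:.2f}")
```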
In relation to boosting machine learning, improved algorithms have been presented, in the order of XGBoost [24], LightGBM [25] and CatBoost [26]. XGBoost is a technique to quickly process large amounts of data based on GBM. XGBoost introduces methods such as parallelized tree building, tree pruning using a depth-first approach, cache awareness and out-of-core computing, regularization to avoid overfitting, built-in cross-validation capability, etc.
LightGBM uses techniques such as gradient-based one-side sampling and exclusive feature bundling. Conventional GBM needs to scan all the data instances for every feature to estimate the information gain of all the possible split points, which takes a lot of time. LightGBM introduces gradient-based one-side sampling, which keeps objects with large gradients and randomly samples objects with small gradients. It also uses exclusive feature bundling, which combines mutually exclusive features into one variable to reduce the number of features.
CatBoost introduces techniques such as ordered target statistics and ordered boosting to improve the conventional boosting method. The ordered target statistics technique addresses the target information leakage problem that arises when a categorical feature is replaced with target statistics computed using the target value of the same instance. The ordered boosting technique avoids the prediction shift problem, which requires that the dataset used for training in each step of boosting be independent.
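A minimal CatBoost sketch follows. The hyperparameter values mirror those listed in Table 2, while the synthetic data and the explicit boosting_type setting (which requests the ordered boosting scheme described above) are illustrative assumptions rather than the exact configuration used in this study.

```python
from catboost import CatBoostRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)

# Hyperparameter values follow Table 2; boosting_type="Ordered" requests
# the ordered boosting scheme described above.
cat = CatBoostRegressor(
    iterations=250,
    learning_rate=0.05,
    depth=2,
    l2_leaf_reg=0.2,
    boosting_type="Ordered",
    verbose=0,
)
cat.fit(X, y)
```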

2.1.3. Stacking

In contrast to other ensemble learning algorithms, stacking combines different learning algorithms on a single dataset. In the first step, a set of base-level learning models (regressors) is generated. In the second step, a meta-level model (regressor) is trained on the outputs of the base-level models. In a stacking ensemble, since the prediction results of the multiple first-level models are combined and used as the meta learner's input, prediction accuracy can be improved while reducing bias.
Designing a systematic method to combine base models is of great importance [22]. If the ensemble uses one induction algorithm, it is classified as homogeneous; otherwise, it is classified as heterogeneous. A heterogeneous ensemble of dissimilar algorithms can yield good prediction performance [18]. This heterogeneous approach has also been used in previous studies [18,27,28].
An important point about stacking is the number of base learners: more base learners do not always increase prediction accuracy. To increase the accuracy of a stacking ensemble, it is more important to apply different models with diverse learning strategies or parameters than to increase the number of base learners [29,30]. Therefore, the combination of base learners and meta learner needs to be determined through repeated experiments, and the optimal hyperparameter values of the base learners need to be properly determined.
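As a brief illustration of how base-learner outputs feed the meta learner, the sketch below builds the meta-learner's input from out-of-fold predictions; the chosen base learners and the synthetic data are placeholders, not the configuration used in this study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)
base_learners = [RandomForestRegressor(random_state=0), SVR()]

# Out-of-fold predictions of each base learner become the meta-learner's
# input features, so the meta learner never sees predictions made on data
# that the base learner was trained on.
meta_features = np.column_stack(
    [cross_val_predict(model, X, y, cv=5) for model in base_learners]
)
meta_learner = LinearRegression().fit(meta_features, y)
```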

2.2. Hyperparameter Tuning with Bayesian Optimization

Each ML algorithm has its own hyperparameters, and good performance can only be achieved when appropriate hyperparameters are set in the learning process. It is very tedious to adjust the parameters manually, and it is not easy to optimize them for the best performance. In general, when creating an ensemble model, search methods such as grid search, random search and Bayesian optimization are used to tune the hyperparameters of each base model.
Bayesian optimization approximates the unknown objective function with a surrogate model, such as a Gaussian process, together with an acquisition function [22]. Based on the observations made so far, Bayesian optimization predicts the performance over the search space with the surrogate model, finds the point where the acquisition function is maximized—the hyperparameter value expected to improve performance the most—and then evaluates the model at that value. As new observations are made, the surrogate model and acquisition function are updated. This process is repeated to find the optimal parameters.
Bayesian optimization tries to gather the most informative observations in each iteration by striking a balance between exploring uncertain hyperparameters and gathering observations from hyperparameters close to the optimum [31]. Therefore, if the hyperparameters of the individual models are optimized using Bayesian optimization in the stage of training the base learners of the stacking ensemble, the accuracy of the model can be increased.
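One possible implementation of this step is sketched below with the scikit-optimize library (BayesSearchCV). The paper does not state which Bayesian optimization implementation was used, so the library, the RF search space and the iteration budget here are assumptions for illustration.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from skopt import BayesSearchCV
from skopt.space import Integer

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)

# Surrogate-model-based search over the RF hyperparameter space,
# scored with 5-fold cross-validation.
search = BayesSearchCV(
    RandomForestRegressor(random_state=0),
    search_spaces={
        "n_estimators": Integer(100, 1000),
        "max_depth": Integer(2, 10),
        "min_samples_leaf": Integer(1, 10),
    },
    n_iter=30,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```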

3. Proposed Stacking Ensemble Model

3.1. Stacking-Based Ensemble Learning Modeling

As mentioned earlier, a systematic method to combine base models is very important in stacking ensemble learning, and it is necessary to design the structure of the model in a way that increases diversity. We intended to select three base learners that ensure diversity and performance to eventually create a well-performing ensemble model. The proposed stacking model structure is displayed in Figure 1.
We first individually evaluate the performance of ML models and then select a base learner to use for the first-level training of the stacking ensemble model. A five-fold cross-validation technique is employed to evaluate individual models. As a result of evaluation and many repeated experiments, RF, SVM and CatBoost were selected as base learners.
In the step of training the base learners, their hyperparameters need to be optimized. The optimal hyperparameter values of the base learners were determined using Bayesian optimization with cross-validation. Tuning the hyperparameters of the base learners increases the accuracy at the first level and, in turn, the accuracy of the stacking model that combines them. The prediction results of the tuned base models are used as the inputs of linear regression (LR)—the second-level prediction model used to find the optimal building construction cost.
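A compact sketch of this two-level structure, written with scikit-learn's StackingRegressor, is shown below. The untuned base-learner settings and the synthetic data are placeholders: in the actual experiments the base learners were first tuned with Bayesian optimization before being stacked.

```python
from catboost import CatBoostRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)

# First level: RF, SVM and CatBoost base learners.
# Second level: linear regression trained on their cross-validated predictions.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(random_state=0)),
        ("svr", SVR()),
        ("cat", CatBoostRegressor(verbose=0, random_state=0)),
    ],
    final_estimator=LinearRegression(),
    cv=5,
)
stack.fit(X, y)
```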

3.2. Model Evaluation

To evaluate the proposed method, its performance was compared with that of single ML models and ensemble learning models. The performances of the models were compared in terms of the coefficient of determination (R²) and the root mean square error (RMSE).
  • R² is generally used to measure the performance of regression-based machine learning models. R² is one minus the ratio of the residual sum of squares to the total sum of squares and indicates how well the model fits the observed data. The residual is the difference between the observed value and the value predicted by the model. R² is expressed as:
    R^2 = 1 - \frac{\sum_{i=1}^{n} (T_i - P_i)^2}{\sum_{i=1}^{n} (T_i - \mu)^2}    (1)
  • RMSE is the square root of the mean squared error, which is expressed as:
    \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (T_i - P_i)^2}{n}}    (2)
    where n is the number of samples evaluated, P_i is the model output for sample i (i = 1, 2, …, n), T_i is the target output and \mu is the mean of the targets in Equations (1) and (2). For R², larger values indicate better model performance, and the value cannot exceed 1. Conversely, lower values of RMSE are desirable. A short numerical sketch of both metrics is given below.
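The following sketch computes both metrics directly from Equations (1) and (2); the target and prediction values are arbitrary illustrative numbers, not results from this study.

```python
import numpy as np

def r2_score(T, P):
    """Coefficient of determination, Equation (1)."""
    T, P = np.asarray(T, dtype=float), np.asarray(P, dtype=float)
    ss_res = np.sum((T - P) ** 2)   # residual sum of squares
    ss_tot = np.sum((T - T.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(T, P):
    """Root mean square error, Equation (2)."""
    T, P = np.asarray(T, dtype=float), np.asarray(P, dtype=float)
    return np.sqrt(np.mean((T - P) ** 2))

# Illustrative targets vs. predictions (costs in KRW 1 million).
T = [10_200, 8_500, 15_300, 6_100]
P = [9_900, 9_000, 14_800, 6_500]
print("R^2 =", r2_score(T, P), " RMSE =", rmse(T, P))
```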

4. Experiments and Results

4.1. Data

4.1.1. Data Acquisition and Description

We have utilized a construction cost dataset that the Public Procurement Service has been providing on the open site (http://pcae.g2b.go.kr/pbs/psa/psa0000/index.do accessed on 15 November 2020) in South Korea. This site discloses the construction cost information of new construction or extension construction projects procured by the Public Procurement Service.
From this dataset, projects costing more than KRW 30 billion were excluded because, under Korea's public bidding system, the contract method for such projects is complex and the number of cases is small. As a result, construction cost data for 775 building projects from the period 2015–2021 were used. Figure 2 shows the distribution of the collected cost data. The number of construction cases decreases when the construction amount exceeds KRW 20 billion. Where data are scarce, as for projects of more than KRW 20 billion, prediction accuracy may be lower.

4.1.2. Variables Selection

Information on the factors that affect construction costs can be obtained from the site mentioned above. These factors include the candidate independent variables—gross floor area (m²), site area (m²), building area (m²), building height (m), typical floor height (m), the number of floors, the number of basement floors, the number of parking spaces, the use of the building, etc. Unlike the learning parameters, the number of independent variables directly affects the estimation model's accuracy and computational efficiency [2]. Therefore, after calculating the Pearson correlation coefficient, independent variables with low correlation coefficient values were excluded, as sketched below. Finally, six independent variables—gross floor area, building area, building height, the number of floors, the number of basement floors and the number of parking spaces—were used in the experiment. Construction cost is expressed in units of KRW 1 million because the numerical scale is large when expressed in won. The descriptive statistics of the independent and dependent variables are given in Table 1.
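A minimal sketch of this correlation-based screening is given below; the column names, values and the correlation threshold are hypothetical, since the paper does not state the cut-off value used.

```python
import pandas as pd

# Hypothetical toy data; the real dataset uses the variables listed above.
df = pd.DataFrame({
    "gross_floor_area": [5200, 8100, 12100, 3300],
    "building_area":    [1800, 2600, 4100, 1200],
    "num_floors":       [3, 4, 7, 2],
    "cost":             [6500, 10100, 16800, 4100],
})

# Keep only predictors whose absolute Pearson correlation with cost
# exceeds a chosen threshold (0.3 here, purely illustrative).
corr = df.corr(method="pearson")["cost"].drop("cost")
selected = corr[corr.abs() > 0.3].index.tolist()
print(corr)
print("selected variables:", selected)
```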
Scatter plots showing the relationship between each independent variable and the construction cost are given in Figure 3. The correlation between the gross floor area and the construction cost is the highest, followed by the building area.

4.1.3. Data Preprocessing

Differences in the measurement units of the features could affect the performance of some models. In particular, LR, SVMs and NNs are sensitive to feature scaling [27]. Therefore, the data are normalized as shown in the following equation.
x_i = \frac{X_i - \min(X_{tr})}{\max(X_{tr}) - \min(X_{tr})}    (3)
where x_i is the normalized value of X_i, X_{tr} is the original training set and max(·) and min(·) denote the maximum and minimum values of a given dataset.
There are several ways to deal with missing data in the collected dataset; we replaced missing values with the mean, a commonly used method.
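The sketch below combines both preprocessing steps: mean imputation followed by min–max scaling fitted on the training set only, as in Equation (3). The small arrays are placeholders for the real feature matrix.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

# Toy feature matrices; np.nan marks a missing entry.
X_train = np.array([[5200.0, 3.0], [8100.0, 4.0], [np.nan, 7.0], [3300.0, 2.0]])
X_test = np.array([[6100.0, 5.0]])

# Replace missing values with the column mean, then scale with the
# minimum and maximum taken from the training set only (Equation (3)).
imputer = SimpleImputer(strategy="mean").fit(X_train)
scaler = MinMaxScaler().fit(imputer.transform(X_train))

X_train_prep = scaler.transform(imputer.transform(X_train))
X_test_prep = scaler.transform(imputer.transform(X_test))
```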

4.2. Performance of Individual ML Models

After data preprocessing, the performance of the individual models was compared. To evaluate the performance of individual ML models, it is necessary to find appropriate hyperparameters for each model. First, the raw dataset was randomly partitioned into two subsets: a training set (80%) and a testing set (20%). A grid search provided by the scikit-learn package was then performed; in this study, the scikit-learn package and several other Python libraries were utilized. Table 2 shows the initial ML models and the hyperparameter values found in the hyperparameter space by the best cross-validation score. The hyperparameters of the DT model represent the values applied to the BDT model.
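A sketch of this step with scikit-learn's GridSearchCV is given below; the SVR parameter grid is illustrative and only loosely inspired by the values in Table 2, not the exact grid searched in the study.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=775, n_features=6, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Exhaustive search over a small SVR grid, scored by cross-validation
# on the 80% training split.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={
        "C": [100, 400, 800],
        "epsilon": [0.001, 0.005, 0.01],
        "gamma": [0.01, 0.1],
    },
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```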
A five-fold cross-validation technique, a well-known cross-validation method, was employed to compare the performance of the individual models. The hyperparameters of the individual models were set to the values found by the grid search in the previous step. Table 3 shows the five-fold mean values of RMSE and R² calculated for the training dataset and the testing dataset for each individual model. As can be seen in Table 3, the individual models showed relatively similar performances, with an R² value of 0.89 for the testing data; the exception is the BDT model. For most ML models, there is not much difference between the R² value for the training dataset and that for the testing dataset, indicating a relatively even generalization ability.

4.3. Results and Discussion

Based on the evaluation results of the individual models in the previous step, the combination of base learners and meta learner was determined through repeated experiments. First of all, since most ML models show little difference between their training and testing R² values, these models can be selected as base learners. Given that the heterogeneous integration of different types of base learners can enhance the generalization ability of the model, we selected RF, SVR and CatBoost as the base learners. AdaBoost, XGBoost, LightGBM and CatBoost all belong to the boosting ensemble family, but CatBoost was selected as a base learner because it is the most recently released improvement on the common boosting algorithm.
In the first-level prediction model, RF, SVM and CatBoost were generated individually using the scikit-learn package in Python. At the second level, an LR algorithm was selected to combine the three base learning algorithms and generate the final prediction results. As seen earlier, since there is no significant difference in the performance of the individual models, we did not assign weights to the outputs of the base learners.
Table 4 shows the five-fold cross-validation results (RMSE and R²) calculated for the training dataset and the testing dataset by the proposed model. The results show that the values of RMSE and R² are almost constant across the cross-validation folds. The average R² over the five folds was 0.91, an improvement of 0.02 over the R² value of 0.89 obtained for the individual models. After optimizing the hyperparameters with Bayesian optimization in the process of training the base learners, the R² values of RF, SVM and CatBoost were 0.900, 0.897 and 0.906, respectively. Since the meta learner achieved 0.91, the stacking model improves on the R² value of each base learner.

5. Conclusions

In the early stages of construction, it is difficult to predict the construction cost because little information has been confirmed. Accurate construction cost forecasting in the early stages of a construction project is essential to its success. Since most clients start a project with a limited budget, the size of the building, such as the number of floors or the gross floor area, must be determined within that budget. Therefore, a model that predicts the construction cost in the early stage can provide objective information when construction project participants make decisions. Many studies have been conducted to predict construction costs using past data. In this study, a two-level stacking heterogeneous ensemble algorithm that combines RF, SVM and CatBoost was suggested as a decision-making tool to estimate the cost of a building construction project at an early stage. This stacking approach is currently gaining acceptance in the machine learning field.
After the literature review, we evaluated the performance of individual ML models and then selected the base learners to use for the first-level training of the stacking ensemble model using the collected data. The optimal hyperparameter values of the base learners—RF, SVM and CatBoost—were determined using Bayesian optimization with cross-validation. According to the analysis results, the two-level stacking ensemble model showed better performance than the individual ensemble models. We compared the performance of various models that can be used to predict the construction cost at an early stage of construction and presented the stacking-based ensemble model that showed the best performance. Since this model predicts the construction cost using relatively few parameters, it can help participants make objective decisions in the early stages of a building construction project.
There have been few studies that apply stacking ensemble learning to predict construction costs at the early stages of a building construction project. Recently, various machine learning techniques have been developed, and there are ML algorithms that we have not considered, so experiments on a wider range of ML algorithms are needed. In addition, the selection of independent variables is also important for increasing the accuracy of an ML algorithm; to further improve construction cost prediction, the selection of independent variables that improve model performance needs to be studied in more depth.

Author Contributions

U.P. conceived the experiments, analyzed the data and wrote the paper; Y.K. and H.L. prepared the data for the analysis; S.Y. supervised the research. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 22AATD-C163269-02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this study.

References

  1. Elfaki, A.O.; Alatawi, S.; Abushandi, E. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey. Adv. Civ. Eng. 2014, 2014, 107926. [Google Scholar] [CrossRef]
  2. Hashemi, T.S.; Ebadati, O.M.; Kaur, H. Cost estimation and prediction in construction projects: A systematic review on machine learning techniques. SN Appl. Sci. 2020, 2, 1703. [Google Scholar] [CrossRef]
  3. Kalagotla, S.K.; Gangashetty, S.V.; Giridhar, K. A novel stacking technique for prediction of diabetes. Comput. Biol. Med. 2021, 135, 104554. [Google Scholar] [CrossRef]
  4. Mendes-Moreira, J.; Soares, C.; Jorge, A.M.; Sousa, J.F.D. Ensemble approaches for regression: A survey. ACM Comput. Surv. 2012, 45, 1–40. [Google Scholar] [CrossRef]
  5. Ren, Y.; Zhang, L.; Suganthan, P.N. Ensemble Classification and Regression-Recent Developments, Applications and Future Directions [Review Article]. IEEE Comput. Intell. Mag. 2016, 11, 41–53. [Google Scholar] [CrossRef]
  6. Wu, H.; Levinson, D. The ensemble approach to forecasting: A review and synthesis. Transp. Res. Part C Emerg. Technol. 2021, 132, 103357. [Google Scholar] [CrossRef]
  7. Wang, S.; Zhu, J.; Yin, Y.; Wang, D.; Cheng, T.C.E.; Wang, Y. Interpretable Multi-modal Stacking-based Ensemble Learning Method for Real Estate Appraisal. IEEE Trans. Multimed. 2021, 1. [Google Scholar] [CrossRef]
  8. Srirutchataboon, G.; Prasertthum, S.; Chuangsuwanich, E.; Pratanwanich, P.N.; Ratanamahatana, C. Stacking Ensemble Learning for Housing Price Prediction: A Case Study in Thailand. In Proceedings of the 2021 13th International Conference on Knowledge and Smart Technology (KST), Bangsaen, Chonburi, Thailand, 21–24 January 2021; pp. 73–77. [Google Scholar]
  9. Gao, W.; Huang, X.; Lin, M.; Jia, J.; Tian, Z. Short-term cooling load prediction for office buildings based on feature selection scheme and stacking ensemble model. Eng. Comput. 2022, 39, 2003–2029. [Google Scholar] [CrossRef]
  10. Pinto, T.; Praça, I.; Vale, Z.; Silva, J. Ensemble learning for electricity consumption forecasting in office buildings. Neurocomputing 2021, 423, 747–755. [Google Scholar] [CrossRef]
  11. Reddy, A.S.; Akashdeep, S.; Harshvardhan, R.; Sowmya, S.K. Stacking Deep learning and Machine learning models for short-term energy consumption forecasting. Adv. Eng. Inform. 2022, 52, 101542. [Google Scholar]
  12. Mohammed, A.S.; Asteris, P.G.; Koopialipoor, M.; Alexakis, D.E.; Lemonis, M.E.; Armaghani, D.J. Stacking Ensemble Tree Models to Predict Energy Performance in Residential Buildings. Sustainability 2021, 13, 8298. [Google Scholar] [CrossRef]
  13. Chou, J.; Pham, A. Enhanced artificial intelligence for ensemble approach to predicting high performance concrete compressive strength. Constr. Build. Mater. 2013, 49, 554–563. [Google Scholar] [CrossRef]
  14. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
  15. Džeroski, S.; Ženko, B. Is Combining Classifiers with Stacking Better than Selecting the Best One? Mach. Learn. 2004, 54, 255–273. [Google Scholar] [CrossRef]
  16. Syarif, I.; Zaluska, E.; Prugel-Bennett, A.; Wills, G. Application of bagging, boosting and stacking to intrusion detection. In Proceedings of the International Workshop on Machine Learning and Data Mining in Pattern Recognition, Berlin, Germany, 13–20 July 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 593–602. [Google Scholar]
  17. Chakraborty, D.; Elhegazy, H.; Elzarka, H.; Gutierrez, L. A novel construction cost prediction model using hybrid natural and light gradient boosting. Adv. Eng. Inform. 2020, 46, 101201. [Google Scholar] [CrossRef]
  18. Meharie, M.; Mengesha, W.; Gariy, Z.; Mutuku, R. Application of stacking ensemble machine learning algorithm in predicting the cost of highway construction projects. Eng. Constr. Archit. Manag. 2021, 29, 2836–2853. [Google Scholar] [CrossRef]
  19. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  20. Freund, Y.; Schapire, R.E. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning (ICML), 1996; pp. 148–156. [Google Scholar]
  21. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  22. Shahhosseini, M.; Hu, G.; Pham, H. Optimizing ensemble weights and hyperparameters of machine learning models for regression problems. Mach. Learn. Appl. 2022, 7, 100251. [Google Scholar] [CrossRef]
  23. Bartlett, P.; Freund, Y.; Lee, W.S.; Schapire, R.E. Boosting the margin: A new explanation for the effectiveness of voting methods. Ann. Stat. 1998, 26, 1651–1686. [Google Scholar] [CrossRef]
  24. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM Sigkdd International conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  25. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T. Lightgbm: A highly efficient gradient boosting decision tree. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  26. Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
  27. Zhou, C.; Zhou, L.; Liu, F.; Chen, W.; Wang, Q.; Liang, K.; Guo, W.; Zhou, L. A Novel Stacking Heterogeneous Ensemble Model with Hybrid Wrapper-Based Feature Selection for Reservoir Productivity Predictions. Complexity 2021, 2021, 6675638. [Google Scholar] [CrossRef]
  28. Cui, S.; Yin, Y.; Wang, D.; Li, Z.; Wang, Y. A stacking-based ensemble learning method for earthquake casualty prediction. Appl. Soft Comput. 2021, 101, 107038. [Google Scholar] [CrossRef]
  29. Ribeiro, M.H.D.M.; dos Santos Coelho, L. Ensemble approach based on bagging, boosting and stacking for short-term prediction in agribusiness time series. Appl. Soft Comput. 2020, 86, 105837. [Google Scholar] [CrossRef]
  30. Ribeiro, M.H.D.M.; da Silva, R.G.; Moreno, S.R.; Mariani, V.C.; dos Santos Coelho, L. Efficient bootstrap stacking ensemble learning model applied to wind power generation forecasting. Int. J. Electr. Power Energy Syst. 2022, 136, 107712. [Google Scholar] [CrossRef]
  31. Snoek, J.; Larochelle, H.; Adams, R.P. Practical bayesian optimization of machine learning algorithms. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25. [Google Scholar]
Figure 1. Proposed stacking model structure.
Figure 2. Distribution of collected construction cost data.
Figure 3. Scatter plot showing the relationship between each independent variable and the construction cost.
Table 1. Parameters and attributes for the input and target variables.
Parameter Type | Parameter | Average | Standard Deviation | Minimum | Maximum
Input variables | Gross floor area (m²) | 8135.8 | 5342.9 | 325 | 36,699.0
Input variables | Building area (m²) | 2628.3 | 1791.7 | 122.8 | 11,392.40
Input variables | Building height (m) | 20.5 | 7.9 | 5.2 | 70
Input variables | Number of floors | 4.1 | 1.9 | 1 | 20
Input variables | Number of basement floors | 0.9 | 0.6 | 0 | 4
Input variables | Number of parking spaces | 94.8 | 147.2 | 3 | 1830
Target variable | Construction cost (million ₩) | 10,260.3 | 6113.2 | 251.0 | 28,622.0
Table 2. Initial ML models and their hyperparameter settings.
ML Models | Hyperparameters | Values
ANN | activation | relu
ANN | batch_size | 16
ANN | epochs | 100
ANN | initializer | normal
ANN | optimizer | adam
SVM | Regularization parameter (C) | 400
SVM | epsilon | 0.005
SVM | Kernel type | RBF
SVM | RBF gamma | 0.010
RF | n_estimators | 715
RF | max_depth | 5
RF | min_samples_leaf | 4
DT | min_samples_split | 25
DT | max_depth | 6
DT | min_samples_leaf | 10
DT | max_leaf_nodes | 30
AdaBoost | n_estimators | 114
AdaBoost | learning_rate | 0.089
XGBoost | max_depth | 200
XGBoost | max_depth | 3
XGBoost | learning_rate | 0.023
LightGBM | n_estimators | 100
LightGBM | max_depth | 4
LightGBM | learning_rate | 0.056
CatBoost | iterations | 250
CatBoost | learning_rate | 0.050
CatBoost | depth | 2
CatBoost | l2_leaf_reg | 0.2
Table 3. Performance comparison of the individual ML models.
Models | Training RMSE | Training R² | Testing RMSE | Testing R²
ANN | 1973.05 | 0.90 | 2012.30 | 0.89
LR | 1991.61 | 0.89 | 2026.00 | 0.89
SVR | 2006.20 | 0.89 | 2041.48 | 0.89
BDT | 937.08 | 0.98 | 2115.27 | 0.88
RF | 1632.91 | 0.93 | 2018.72 | 0.89
AdaBoost | 1700.85 | 0.92 | 2086.06 | 0.88
XGBoost | 1557.83 | 0.93 | 1974.73 | 0.89
LightGBM | 1509.55 | 0.94 | 1977.16 | 0.89
CatBoost | 1712.98 | 0.92 | 2003.19 | 0.89
Table 4. Five-fold cross-validation accuracy results of the proposed model.
Cross-Validation | Training RMSE | Training R² | Testing RMSE | Testing R²
1 | 1775.01 | 0.92 | 1644.83 | 0.91
2 | 1706.94 | 0.92 | 1933.24 | 0.90
3 | 1776.34 | 0.92 | 1649.95 | 0.92
4 | 1729.90 | 0.92 | 1821.91 | 0.91
5 | 1695.12 | 0.92 | 1951.54 | 0.91
Average | 1736.66 | 0.92 | 1800.29 | 0.91
Stdv ¹ | 37.75 | 0.002 | 148.15 | 0.007
¹ Standard deviation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

