
Learned Prediction of Compressive Strength of GGBFS Concrete Using Hybrid Artificial Neural Network Models

In-Ji Han, Tian-Feng Yuan, Jin-Young Lee, Young-Soo Yoon and Joong-Hoon Kim

1 School of Civil, Environmental and Architectural Engineering, Korea University, Seoul 02841, Korea
2 School of Agricultural Civil & Bio-Industrial Engineering, Kyungpook National University, Daegu 41566, Korea
* Authors to whom correspondence should be addressed.
Materials 2019, 12(22), 3708; https://doi.org/10.3390/ma12223708
Submission received: 28 September 2019 / Revised: 31 October 2019 / Accepted: 6 November 2019 / Published: 10 November 2019
(This article belongs to the Section Materials Simulation and Design)

Abstract

A new hybrid intelligent model was developed for estimating the compressive strength (CS) of ground granulated blast furnace slag (GGBFS) concrete, and the synergistic benefits of the hybrid algorithm over a single algorithm were verified. Using 269 data records collected from previous experimental studies, artificial neural network (ANN) models with three different learning algorithms, namely back-propagation (BP), particle swarm optimization (PSO), and a new hybrid PSO-BP algorithm, were constructed, and the performance of the models was evaluated with regard to prediction accuracy, efficiency, and stability through a threefold procedure. The PSO-BP neural network model was found to be superior to the simple ANNs trained by a single algorithm and is suitable for predicting the CS of GGBFS concrete.

1. Introduction

Numerous researchers have attempted to enhance the sustainability of concrete not only by reducing the amount of carbon dioxide (CO2) generated from the production of Portland cement, but also by increasing the durability of concrete, which benefits the environment through the conservation of resources and the reduction of waste [1]. One commonly used strategy is to utilize recycled aggregates and mineral admixtures, such as fly ash, ground granulated blast furnace slag (GGBFS), and silica fume, as a partial replacement for cement or aggregate in concrete [1,2,3]. The use of such industrial by-products has been found to improve the mechanical properties and durability of concrete while reducing CO2 emissions, conserving energy, and mitigating the adverse environmental effects of concrete [4].
Blast furnace slag is a by-product of iron production in a blast furnace. When molten blast furnace slag is quenched with water and finely ground to a cement particle size, it is transformed into GGBFS. GGBFS, as a latent hydraulic material, reacts with calcium hydroxide (Ca(OH)2) in the presence of water, forming calcium silicate hydrate (C-S-H), which is primarily responsible for the strength of cement-based materials [5,6]. Through this pozzolanic reaction, the use of GGBFS as a supplementary cementitious material may reduce the early strength, but it increases the ultimate strength and significantly improves the microstructure and durability of hardened concrete [7,8,9].
Several empirical equations and mathematical models have been developed for estimating the compressive strength (CS) and other properties to minimize the experimental work required for concrete mix design [10,11,12]. These equations are generally regression forms based on the results of a series of experiments. However, selecting a suitable regression equation (linear, nonlinear, exponential, etc.) for each analysis requires considerable experience and multiple techniques, and the accuracy of the analysis decreases as the number of explanatory variables increases [13,14,15]. In recent years, numerical modeling of such relationships has been accomplished by constructing an artificial neural network (ANN) model, which is capable of learning and generalizing from examples through a trial-and-error process without any presumptions [13,16]. ANNs can not only produce correct or nearly correct solutions to incomplete tasks, but also generate evidential results, even when the data are poor or insufficient [17,18]. Owing to these advantages, numerous researchers have applied ANNs for predicting the CS and other properties of concrete [4,19,20,21]. Bilim et al. (2009) [21] used ANN models trained by several different back-propagation (BP) algorithms to predict the CS of GGBFS concrete based on the concrete ingredients and age. Boukhatem et al. (2011) [22] used ANNs to investigate the efficiency factor of GGBFS in relation to concrete strength.
In most studies employing ANN models for the estimation of concrete properties, a BP algorithm was used to train the network [19,20,21]. Nevertheless, the BP algorithm has some disadvantages: it can easily become trapped in local minima, depending on the selection of the initial parameters, and its prediction accuracy can depend strongly on the training data [23,24]. Combinations of BP and several metaheuristic algorithms have been proposed as alternatives to overcome these drawbacks. Among the metaheuristic algorithms, particle swarm optimization (PSO) has often been integrated with the BP algorithm to improve the performance of predictive models, owing to its simplicity and wide applicability. The hybrid PSO-BP algorithm combines the global search ability of the PSO algorithm with the fast convergence of the BP algorithm, so that ANN models trained with it can approach the true global optimum more accurately and rapidly than models trained with a single algorithm. The effectiveness and superiority of this hybrid algorithm have been demonstrated in various fields [25,26,27,28]. Bo et al. (2017) [27] proposed a hybrid PSO-BP neural network for wind power forecasting and compared its performance to that of a network trained by the conventional BP algorithm; their results showed that the prediction performance of the hybrid algorithm is superior to that of the basic BP algorithm. Wang et al. (2015) [28] used a PSO-BP neural network to enhance the performance of an integrated navigation system and showed that neural networks with the hybrid PSO-BP algorithm can compensate for and estimate the navigation error more effectively than conventional neural networks. However, few studies have used hybrid algorithms to develop ANN models for predicting concrete properties.
In this study, three different ANN models using BP, PSO, and hybrid PSO-BP algorithms were developed for predicting the CS of GGBFS concrete based on the concrete mix ingredients and curing temperature. The prediction results of these models were compared to investigate the beneficial effects of combining the BP and PSO algorithms and select the best intelligent system for the estimation of GGBFS concrete strength.

2. Database

To develop ANN-based models for predicting the CS of GGBFS-incorporated concrete, it is necessary to prepare data and construct a database for training and testing the prediction models. The 269 experimental data records used in this study were collected from several reports [11,29,30,31,32,33,34,35,36,37]. All of the data contained complete information regarding the mix design proportions, curing conditions, and experimentally measured CS of GGBFS concrete. The variables were selected according to the available data samples. The input parameters included the curing temperature (T), water to binder ratio (w/b), GGBFS to total binder ratio (GGBFS/B), water (W), fine aggregate (FA), coarse aggregate (CA), and superplasticizer (SP). The output variable was the CS at 28 days, which ranged from 17 to 80 MPa. Details regarding the chemical and mechanical properties of the concrete components are presented in [11,29,30,31,32,33,34,35,36,37]. Table 1 presents the minimum and maximum values of each parameter, and Appendix A presents the complete database.

3. Methodology

3.1. Artificial Neural Network

ANNs are massively parallel systems composed of simple, highly interconnected processing units, i.e., artificial neurons, which process information. ANNs are effective for engineering applications and have been widely used to solve diverse problems owing to their ability to learn from examples [16,38]. ANNs can be classified into different types depending on their architecture and information flow [15]. Among them, the multilayer feedforward network, consisting of an input layer, one or more hidden layers, and an output layer, is the most commonly used; all of the neurons in each layer have connections only to the neurons of successive layers, not to neurons in the same layer [15,17]. Every node in a layer is connected to the nodes in the adjacent layers with different weights. The typical elements of a neuron are shown in Figure 1: inputs, a summation function, an activation function, a bias, and an output. In every neuron except the input neurons, the signals from the previous layer ($x_j$) are multiplied by the associated adaptive weights ($w_{ij}$), each of which indicates the connection strength of the neuron with a particular input, and the summation function is then applied to the weighted signals [39]. Finally, the bias of the neuron ($b_i$) is added to the aggregated signals, which forms the net input of the neuron ($n_i$). This process can be mathematically expressed as:
$n_i = \sum_j w_{ij} x_j + b_i$. (1)
The output ($y_i$) of the neuron is then obtained by applying an activation function ($f$) to the net input ($n_i$):
$y_i = f(n_i)$. (2)
The activation function limits the amplitude of the output of a neuron within a manageable range and introduces nonlinear properties to the neuron. In general, the hyperbolic tangent function is a commonly used activation function in multilayer models [15].
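To make Equations (1) and (2) concrete, the following minimal NumPy sketch (illustrative only; the models in this study were built in MATLAB, and the function and variable names here are ours) evaluates a layer of tanh neurons followed by a linear output neuron, matching the 7-15-1 architecture adopted later in Section 4:

```python
import numpy as np

def layer_forward(x, W, b, activation=np.tanh):
    """Forward pass of one fully connected layer.

    x : (n_inputs,) signals from the previous layer (x_j)
    W : (n_neurons, n_inputs) adaptive weights (w_ij)
    b : (n_neurons,) biases (b_i)
    Returns y_i = f(n_i), with n_i = sum_j w_ij * x_j + b_i.
    """
    n = W @ x + b          # net input of each neuron, Equation (1)
    return activation(n)   # activation function, Equation (2)

# Example: a 7-15-1 network with tanh hidden units and a linear output
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=7)               # 7 normalized inputs
W1, b1 = rng.normal(size=(15, 7)), np.zeros(15)
W2, b2 = rng.normal(size=(1, 15)), np.zeros(1)
hidden = layer_forward(x, W1, b1)                                # tanh hidden layer
output = layer_forward(hidden, W2, b2, activation=lambda n: n)   # linear output
```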
Training an ANN is the process of updating the connection weights and biases so that the network exhibits the desired behavior. In the course of training, the network architecture and parameters are adjusted through iterative simulation with the given training examples to minimize the error function, which is often expressed as the root mean squared error (RMSE), and to produce outputs that are equal or close to the targets [39,40]. Instead of following a set of rules specified by experts, ANNs automatically learn the underlying rules from the given examples [41]. The procedure used for training the network is called the learning algorithm.

3.2. Back-Propagation

The BP algorithm is the most widely used algorithm for training ANNs [42]. It is a gradient-based procedure to minimize the error between the network outputs and the desired outputs, adjusting the weights and biases by a small amount at a time [15,17]. It comprises two procedures: a forward stage and a backward stage. In the forward procedure, the input signals move forward through the network and the error is calculated in the output layer. Subsequently, the error is propagated backward from the output layer to the input layer, updating parameters for the direction in which the performance function most rapidly decreases [40,42]. The change of the weights during each iteration is calculated, as follows:
$\Delta w_k = \alpha \Delta w_{k-1} - \eta \frac{\partial E}{\partial w}$, (3)
where $w$ is the weight, $\Delta w_k$ and $\Delta w_{k-1}$ are the changes in the weight at iterations k and k−1, respectively, $\alpha$ is the momentum factor, $\eta$ is the learning rate, and $E$ is the error function. The entire procedure is repeated until the performance of the network reaches an acceptable level.
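As a hedged illustration of the update rule in Equation (3) (not the Levenberg–Marquardt variant actually adopted in Section 4.1), the sketch below applies the momentum form of gradient descent to a toy one-dimensional error function; the alpha and eta values are arbitrary examples, not the paper's settings:

```python
def bp_update(w, dw_prev, grad_E, alpha=0.9, eta=0.01):
    """One weight change per Equation (3):
    dw_k = alpha * dw_{k-1} - eta * dE/dw,
    where alpha is the momentum factor and eta the learning rate."""
    dw = alpha * dw_prev - eta * grad_E
    return w + dw, dw

# Toy usage: minimize E(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w, dw = 0.0, 0.0
for _ in range(200):
    w, dw = bp_update(w, dw, grad_E=2.0 * (w - 3.0))
print(round(w, 3))  # approaches 3.0, the minimum of E
```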

3.3. Particle Swarm Optimization

PSO is a stochastic optimization technique for finding the best solution, which is inspired by the social behavior of biological organisms to locate desirable positions in a given area through cooperation and competition [43,44,45,46]. In PSO, some entities, called particles, are scattered in the search space, and the position of each particle represents a possible solution to the optimization problem in the n-dimensional search space [46,47]. Each particle moves iteratively through the problem space to find the optimal locations, while remembering the best position it has ever visited and communicating with other particles.
The position and velocity of the particles are randomly initialized at the beginning of the process and, during every iteration, each particle accelerates toward its own personal best solution discovered so far, as well as the global best position found thus far across the whole population [48]. The velocity and position of each particle are updated via the following equations at every step t [49]:
$v_{t+1} = w \times v_t + c_1 \times r_1 \times (p_{best} - x_t) + c_2 \times r_2 \times (g_{best} - x_t)$, (4)
$x_{t+1} = x_t + v_{t+1}$, (5)
where $v_{t+1}$, $v_t$, $x_{t+1}$, and $x_t$ represent the new velocity, current velocity, new position, and current position of a particle, respectively. $r_1$ and $r_2$ are random numbers uniformly distributed in the range (0, 1) [50], giving the particles good state-space exploration ability. $c_1$ and $c_2$ are the acceleration coefficients, which represent the strength of attraction toward the personal best position ($p_{best}$) and the global best position ($g_{best}$), respectively [50,51]. The velocity is updated based on its current value multiplied by the inertia weight ($w$) and the distances from the particle's current position to its personal best and the global best. The particle position ($x_t$) is then adjusted according to the newly computed velocity ($v_{t+1}$). Subsequently, the fitness of each updated position is evaluated, and the personal best and global best are updated during each iteration. This process is repeated until the expected position is obtained or the termination criteria are satisfied.
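The following NumPy sketch implements one iteration of Equations (4) and (5). It is an illustration under our own naming; it uses a random inertia weight and the c1 = 1.5, c2 = 2.5 values that Section 4.2 later selects:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=1.5, c2=2.5, rng=np.random.default_rng()):
    """One PSO iteration per Equations (4) and (5).

    x, v  : (n_particles, n_dims) positions and velocities
    pbest : (n_particles, n_dims) best position found by each particle
    gbest : (n_dims,) best position found by the whole swarm
    The inertia weight w is drawn uniformly from (0, 1), as in Section 4.2.
    """
    w = rng.uniform(0.0, 1.0)
    r1 = rng.uniform(size=x.shape)
    r2 = rng.uniform(size=x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```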

3.4. Hybrid PSO-BP Algorithm

The hybrid PSO-BP algorithm proposed herein is an optimization method that combines PSO with BP. Although the BP algorithm is the most widely used training algorithm for ANNs, it can easily fall into a local optimum, and its performance depends on the initial weights of the ANN [23,27]. If the initial weights and biases are far from the values that lead to the global optimum, the ANN may become stuck at a local minimum [23]. Many researchers have combined the BP algorithm with metaheuristic optimization algorithms, such as PSO, the genetic algorithm, and the harmony search algorithm, to overcome these shortcomings and enhance model accuracy [23,45,52]. Among them, PSO has often been used to improve the performance of BP training in ANNs, owing to its simplicity and wide applicability [27,52].
The hybrid PSO-BP algorithm employs the global search ability of the PSO algorithm to obtain initial weights and biases that can lead the network to converge to the global minimum of the error function, and it then uses the fast convergence of the BP algorithm. The near-optimal initial weights and biases obtained by the PSO algorithm are used as the starting point of BP training to approach the true global optimum and improve the performance of the ANN. Figure 2 describes the overall calculation process of the PSO-BP algorithm used in this study. Section 4 provides details regarding the determination of the parameters and the modeling of the PSO-BP ANN for predicting the CS of concrete.
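A compact sketch of this flow is given below (an illustration, not the authors' MATLAB implementation): PSO searches the flattened weight/bias vector of the network, and its global best then seeds BP training. The `fitness` and `bp_train` interfaces are hypothetical placeholders for a network-error evaluator and a BP trainer:

```python
import numpy as np

def hybrid_pso_bp(fitness, bp_train, n_particles=30, n_dims=136,
                  max_iter=2000, rng=np.random.default_rng(0)):
    """Sketch of the hybrid flow in Figure 2 (assumed interfaces).

    fitness(position) -> scalar error (e.g., RMSE) of a network whose
        weights/biases are the flattened vector `position`;
    bp_train(initial_weights) -> weights refined by BP from that start.
    n_dims = 136 matches a 7-15-1 network: 7*15 + 15 + 15*1 + 1.
    """
    x = rng.uniform(-1, 1, (n_particles, n_dims))
    v = np.zeros_like(x)
    pbest, pbest_fit = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(max_iter):  # the study also stopped early once the cost
        w = rng.uniform()      # improvement stayed below 1e-8; omitted here
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 2.5 * r2 * (gbest - x)
        x = x + v
        fit = np.array([fitness(p) for p in x])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return bp_train(gbest)     # BP refines the PSO result
```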

4. Development of CS Prediction Models

This section presents the procedures for developing the ANN models using the BP, PSO, and PSO-BP algorithms for predicting the CS of GGBFS concrete. As previously mentioned, the curing temperature (T), water to binder ratio (w/b), GGBFS to total binder ratio (GGBFS/B), water (W), fine aggregate (FA), coarse aggregate (CA), and superplasticizer (SP) were used as the input parameters of the CS prediction models. To construct and evaluate the network models, the dataset was divided into training and testing sets; 80% of the data were used for training, and the remaining 20% were employed for testing. For the BP algorithm, 10% of the training dataset was used for validation. The test set was not used in training; it served only to evaluate the generalization performance of the developed networks. All of the models presented in this study were developed using MATLAB R2018a.
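For illustration, a data split along these lines could look as follows (names are ours, and the exact random scheme of the study is an assumption):

```python
import numpy as np

def split_data(X, y, test_frac=0.2, val_frac=0.1, rng=np.random.default_rng(0)):
    """Random 80/20 train/test split; for the BP algorithm, a further
    10% of the training set is held out for validation (Section 4).
    X and y are NumPy arrays with one row/entry per record."""
    idx = rng.permutation(len(X))
    n_test = int(round(test_frac * len(X)))
    test, train = idx[:n_test], idx[n_test:]
    n_val = int(round(val_frac * len(train)))
    val, train = train[:n_val], train[n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

# With the 269 collected records: 54 test, 22 validation, 193 training
```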

4.1. BP ANN

There are several BP algorithms that can be applied to ANNs, such as the Powell–Beale conjugate gradient, BFGS quasi-Newton, and Bayesian regularization algorithms. Among these, the Levenberg–Marquardt algorithm, which is the most commonly used for training networks owing to its high speed and robustness, was adopted in this study [53,54]. It has been utilized for developing predictive models of concrete properties, and its effectiveness compared with other BP algorithms has been demonstrated [14,15,21].
Before training the network, all of the input and target values were normalized to the range [−1, 1] using the following equation:
$V_{norm} = 2 \left( \frac{V - V_{min}}{V_{max} - V_{min}} \right) - 1$, (6)
where $V$ and $V_{norm}$ represent the raw and normalized values, respectively, and $V_{max}$ and $V_{min}$ are the largest and smallest values of $V$. Normalization of the data can improve the efficiency of learning and simplify the design procedure [39].
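A direct transcription of Equation (6), together with its inverse for mapping network outputs back to physical units, might look as follows (names are ours):

```python
import numpy as np

def normalize(V, Vmin, Vmax):
    """Min-max normalization to [-1, 1] per Equation (6)."""
    return 2.0 * (V - Vmin) / (Vmax - Vmin) - 1.0

def denormalize(Vnorm, Vmin, Vmax):
    """Inverse mapping, e.g., to recover strengths in MPa from model output."""
    return (Vnorm + 1.0) / 2.0 * (Vmax - Vmin) + Vmin

# Example with the CS range from Table 1 (17.2-77 MPa)
cs = np.array([17.2, 40.0, 77.0])
print(normalize(cs, 17.2, 77.0))   # [-1.      -0.23746  1.     ]
```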
The performance of ANNs depends strongly on the network architecture and parameters, including the number of hidden layers, number of neurons in each hidden layer, and activation functions. According to various researchers, ANNs with only one hidden layer can solve almost all engineering problems [55,56,57] and generally produce excellent results [53]. Therefore, all of the ANN-based predictive models that were constructed in this study had a single hidden layer. Figure 3 shows the architecture of the CS prediction ANN models. The hyperbolic tangent function and linear function were used as the activation functions of the hidden and output neurons, respectively.
As highlighted by several researchers, determining the number of neurons in the hidden layer (Nh) is a critical task, because this number significantly affects the performance of ANNs. However, there is no theoretical rule for selecting the proper value of Nh; therefore, in this study, it was determined through trial and error. Several ANNs were constructed with various values of Nh within a reasonable range based on previously proposed empirical equations [55,58,59,60,61,62], and their performance was evaluated using the coefficient of determination (R2) to obtain the optimal value. Table 2 presents the equations used to set the Nh range for the CS model; based on these, the range 2–21 was selected for the CS prediction BP ANN. The models with different Nh values were each run 10 times, and the average R2 values for both the training and testing sets were computed to determine the optimal number of hidden neurons. The BP model with 15 hidden neurons exhibited the best performance; thus, a 7-15-1 architecture was applied to the BP ANN models in this study. A sketch of this selection procedure is given below; additional details regarding the specifications of the best BP ANN model for predicting the CS are presented later.
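The trial-and-error selection just described can be summarized by the following sketch, in which `train_and_score` is a hypothetical helper that trains one randomly initialized network with a given Nh and returns its training and testing R2:

```python
import numpy as np

def select_hidden_neurons(train_and_score, nh_range=range(2, 22), n_runs=10):
    """Trial-and-error selection of Nh as described in Section 4.1.

    train_and_score(nh) -> (r2_train, r2_test) for one randomly
    initialized network with nh hidden neurons (hypothetical helper).
    Each candidate is run n_runs times and ranked by the average R^2
    over both the training and testing sets.
    """
    best_nh, best_score = None, -np.inf
    for nh in nh_range:
        scores = [train_and_score(nh) for _ in range(n_runs)]
        avg = float(np.mean(scores))   # mean of train and test R^2 over runs
        if avg > best_score:
            best_nh, best_score = nh, avg
    return best_nh  # 15 for the CS model in this study
```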

4.2. PSO ANN

The PSO ANN represents the ANN model that was trained by the PSO algorithm, in which the positions of the particles indicate the weights and biases of the ANN. The parameters that are associated with PSO and the ANN should be selected properly to achieve the best performance of the PSO ANN. However, the parameters that lead to the minimum of the cost function are not the same in all cases, and there is no theoretical approach for identifying the optimal values. In this study, to construct a robust and accurate predictive model, the ANN parameter, i.e., the network architecture, and the PSO parameters, including the number of particles in the swarm (Nop) and the acceleration coefficients (c1, c2), were determined through parametric analyses. The inertia weight (w)—one of the PSO parameters—was taken as a random number within the range of (0, 1) [25,63]. Various values that were suggested in the previous studies were considered to find the optimal parameters, as shown in Table 3.
Each time a network was trained, the training was stopped when the termination criteria were satisfied, i.e., when the iteration number reached the limit of 2000 or the improvement in the cost function was less than 10^-8 for 100 successive iterations [25]. The models with different parameters were each trained five times, and R2, as a performance measure, was calculated for the training and testing data in every run. The best model was selected according to the average R2 values, using the same method described in the previous section. The best result was obtained when the number of hidden neurons was 15 (as in the case of the BP ANN), the swarm size was 30, and c1 and c2 were 1.5 and 2.5, respectively.

4.3. PSO-BP ANN

Hybrid algorithms combining PSO and BP have been used in ANNs to solve several engineering problems, owing to their fast convergence and global optimization capability. In the PSO-BP network model, the PSO algorithm attempts to find near-optimal initial points, instead of random initial weights, for the BP training of the ANN. The parameters associated with both algorithms were set to the values determined in Section 4.1 and Section 4.2.

5. Evaluation of CS Prediction Models

The CS prediction ANN models trained by the BP, PSO, and PSO-BP algorithms were evaluated and compared. Each model was run 15 times with different training and testing data, and the results were evaluated with regard to the prediction accuracy, efficiency, and stability through a threefold procedure.
Four statistical indices were employed to evaluate the performance capacity and prediction accuracy of each CS prediction model: the mean absolute error (MAE), RMSE, mean absolute percentage error (MAPE), and coefficient of determination (R2) were the main criteria used for performance measurement. These indices are defined as follows:
$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |t_i - o_i|$, (7)
$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (t_i - o_i)^2}$, (8)
$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{t_i - o_i}{t_i} \right|$, (9)
$R^2 = \left[ \frac{\sum_{i=1}^{n} (o_i - \bar{o})(t_i - \bar{t})}{\sqrt{\sum_{i=1}^{n} (o_i - \bar{o})^2} \sqrt{\sum_{i=1}^{n} (t_i - \bar{t})^2}} \right]^2$, (10)
where $o_i$ is the predicted value of the compressive strength, $t_i$ is the experimental value, $n$ is the total number of data, $\bar{o}$ is the mean predicted strength, and $\bar{t}$ is the mean experimental strength. Lower values of the MAE, RMSE, and MAPE and higher values of R2 indicate better predictability of the models.
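Equations (7)–(10) translate directly into the following NumPy sketch (names are ours):

```python
import numpy as np

def performance_indices(t, o):
    """MAE, RMSE, MAPE, and R^2 per Equations (7)-(10);
    t = experimental CS values, o = predicted CS values."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    mae = np.mean(np.abs(t - o))
    rmse = np.sqrt(np.mean((t - o) ** 2))
    mape = np.mean(np.abs((t - o) / t))
    r = np.sum((o - o.mean()) * (t - t.mean())) / (
        np.sqrt(np.sum((o - o.mean()) ** 2)) *
        np.sqrt(np.sum((t - t.mean()) ** 2)))
    return mae, rmse, mape, r ** 2
```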
Table 4 presents the performance indices of the best BP ANN, PSO ANN, and PSO-BP ANN models. Among the developed models, the model trained by the hybrid algorithm had the lowest MAE, RMSE, and MAPE, as well as the highest R2, for both the training and testing datasets, indicating that this model predicts the CS with the highest accuracy. Furthermore, the difference between the statistical performance results for the training and testing data was the smallest for the hybrid model. This result reveals that the PSO-BP network model has better generalization performance than the other models.
Figure 4, Figure 5 and Figure 6 present the relationships between the experimental CS and the values predicted by the BP, PSO, and PSO-BP networks, respectively. The BP and PSO-BP models exhibited R2 values above 0.9 for both the training and testing datasets, indicating that these models provide reliable outputs with a high degree of fit to the actual values; thus, they are suitable for predicting the CS of GGBFS concrete based on the mixture constituents and curing temperature. The relatively high R2 values of the proposed PSO-BP model suggest that it can estimate the strength more accurately than the other models.
To perform a more detailed assessment, the computational efficiency of each model was evaluated using the success ratio (SR) [64], which is given by the following equations:
$e_{p,i} = \left| \frac{t_i - o_i}{t_i} \right| \times 100\%, \quad SR = \frac{N_{B_{ep}}}{N} \times 100\%$, (11)
where $e_{p,i}$ is the relative error, and $t_i$ and $o_i$ are the measured and predicted values, respectively, of the i-th data entry in the dataset. $N_{B_{ep}}$ is the number of data entries whose relative error is smaller than the restrained error bound $B_{ep}$ (i.e., the number of entries with $e_{p,i} < B_{ep}$), and $N$ is the total number of data in the considered set. The SR is the percentage of data whose relative error is equal to or smaller than the specified error criterion, and it has been used to estimate the numerical efficiency and validity of developed models in several studies [64]. The SR of each model was computed with the restrained error $B_{ep}$ varying from 0 to 100%; Figure 7 and Table 5 show the results. When $B_{ep}$ was 5%, the SR of the PSO-BP ANN model was 64.1%, whereas those of the conventional BP ANN and the PSO ANN were 49.2% and 30.2%, respectively; that is, 64.1% of the data were predicted by the hybrid model with a relative error below 5%. As shown in Figure 7, for all values of $B_{ep}$, including 5%, the SR of the PSO-BP network model was greater than those of the other models. Additionally, for the PSO-BP ANN, the relative error of the entire dataset did not exceed 22%; that is, when the restrained error $B_{ep}$ was 22%, the SR was 100%. In comparison, the prediction errors of all the data for the ANNs trained by the BP algorithm alone and the PSO algorithm alone were equal to or smaller than 43% and 35%, respectively. These results indicate that the hybrid prediction model has better validity and efficiency than the other models for predicting the CS of GGBFS concrete.
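For reference, the SR computation of Equation (11) can be sketched as follows (illustrative, with our own names; the default bounds match Table 5):

```python
import numpy as np

def success_ratio(t, o, bounds=(5, 10, 20, 30, 40)):
    """SR per Equation (11): the percentage of data whose relative
    error e_p is no greater than each restrained error bound B_ep (%)."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    ep = np.abs((t - o) / t) * 100.0          # relative error in percent
    return {b: float(np.mean(ep <= b) * 100.0) for b in bounds}
```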
Finally, to evaluate the stability of the developed models, the standard deviations of the RMSE for the models trained with 15 randomly selected training samples were calculated and compared. An ANN-based predictive model can give different outputs and exhibit different performance for the same inputs, depending on the initial weight and bias values or the data-splitting method [24]. This property can cause significant problems in practical applications [53,65]; therefore, the stability of an ANN model must be validated prior to use [65,66]. In this study, as mentioned previously, each model was trained 15 times using different combinations of training and testing sets, and the standard deviation ($S$) of the RMSE was then computed using Equation (12) to evaluate the stability of the developed models. The standard deviation indicates the sensitivity of the prediction performance of a model to the data used to train and develop it; a model with a higher standard deviation depends more strongly on the training observations.
$\bar{X} = \frac{1}{N} \sum_{k=1}^{N} X_k, \quad S = \sqrt{\frac{1}{N} \sum_{k=1}^{N} (X_k - \bar{X})^2}$. (12)
Here, $N$ is the number of training runs (15 in this study), $X_k$ is the RMSE for the k-th training set, and $\bar{X}$ denotes the mean RMSE of the models trained by a specific algorithm over the 15 randomly selected training samples. Figure 8 and Table 6 show the standard deviations and mean values for the BP, PSO, and PSO-BP ANN models based on the training and testing datasets. The BP ANN model had lower means and higher standard deviations of the RMSE than the PSO ANN model; that is, the ANN models trained by the BP algorithm have better prediction accuracy but lower stability than the PSO ANN models. The standard deviations and means of the PSO-BP ANN model for both the training and testing data were smaller than those of the other models, indicating that the model based on the hybrid algorithm was least influenced by the data splitting. Moreover, the difference between the standard deviations for the two datasets was the smallest for the PSO-BP model. It can therefore be concluded that the PSO-BP neural network model is the most stable and accurate of the three models for estimating the CS of GGBFS concrete.
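Equation (12) corresponds to the following small sketch (population standard deviation, matching the 1/N form of the equation; names are ours):

```python
import numpy as np

def rmse_stability(rmse_runs):
    """Mean and standard deviation of the RMSE over repeated runs,
    per Equation (12); rmse_runs holds X_k for the 15 training sets."""
    x = np.asarray(rmse_runs, float)
    return float(x.mean()), float(x.std())   # np.std defaults to the 1/N form
```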

6. Conclusions

ANN models were constructed to predict the CS of GGBFS concrete based on the concrete mix proportions and curing temperature using three different learning algorithms: BP, PSO, and PSO-BP. The parameters associated with each algorithm and neural network were determined via a trial-and-error method, and the proposed models were trained using 269 data records divided into training and testing sets. The developed PSO-BP neural network model was compared with ANN models trained by either BP or PSO to verify its accuracy, efficiency, and stability in prediction and to demonstrate the synergistic benefits of using the hybrid algorithm.
The PSO-BP neural network model had the lowest values of the RMSE, MAE, and MAPE, as well as the highest values of R2 for both the training and testing data, and the deviation between the results that were obtained from the training and testing data was the smallest for the PSO-BP network. These results indicate that the proposed hybrid model has the best fit for not only training data, but also unseen data.
As shown in Table 5 and Figure 7, the hybrid model also had the highest SR for the specified error limit; i.e., its maximum relative error was smaller than those of the other two models. Additionally, when the models were trained with 15 randomly selected training samples, the PSO-BP network model exhibited the lowest standard deviation and mean values of the RMSE, which demonstrates that its prediction performance was the least affected by data division.
Several performance analyses indicated that the PSO-BP ANN model offers more accurate, reliable, and stable predictions of the CS of GGBFS concrete than the other models; that is, it has the best predictability and generalization performance among the models developed in this study. These results show that the hybrid algorithm has synergistic benefits for the performance of ANN models and that the proposed hybrid PSO-BP ANN model is reliable for estimating the CS of GGBFS concrete.

Author Contributions

Conceptualization, I.-J.H., Y.-S.Y. and J.-H.K.; investigation, I.-J.H., T.-F.Y. and J.-Y.L.; writing—original draft preparation, I.-J.H.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A2C2087646).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The dataset used in this research (269 mixtures). Columns: No; T (°C); w/b (%); GGBFS/B (%); W (kg/m3); FA (kg/m3); CA (kg/m3); SP (%); CS (MPa).

References

  1. Monteiro, P.J.; Miller, S.A.; Horvath, A. Towards sustainable concrete. Nat. Mater. 2017, 16, 698–699. [Google Scholar] [CrossRef] [PubMed]
  2. Tang, Y.; Li, L.; Feng, W.; Liu, F.; Zhu, M. Study of seismic behavior of recycled aggregate concrete-filled steel tubular columns. J. Constr. Steel Res. 2018, 148, 1–15. [Google Scholar] [CrossRef]
  3. Tang, Y.; Li, L.; Wang, C.; Chen, M.; Feng, W.; Zou, X.; Huang, K. Real-time detection of surface deformation and strain in recycled aggregate concrete-filled steel tubular columns via four-ocular vision. Robot. Comput. Integr. Manuf. 2019, 59, 36–46. [Google Scholar] [CrossRef]
  4. Behnood, A.; Golafshani, E.M. Predicting the compressive strength of silica fume concrete using hybrid artificial neural network with multi-objective grey wolves. J. Clean. Prod. 2018, 202, 54–64. [Google Scholar] [CrossRef]
  5. Bentz, D.P. Influence of water-to-cement ratio on hydration kinetics: Simple models based on spatial considerations. Cement Concrete Res. 2006, 36, 238–244. [Google Scholar] [CrossRef]
  6. Chidiac, S.; Panesar, D. Evolution of mechanical properties of concrete containing ground granulated blast furnace slag and effects on the scaling resistance test at 28 days. Cement Concrete Comp. 2008, 30, 63–71. [Google Scholar] [CrossRef]
  7. Cheng, A.; Huang, R.; Wu, J.-K.; Chen, C.-H. Influence of GGBS on durability and corrosion behavior of reinforced concrete. Mater. Chem. Phys. 2005, 93, 404–411. [Google Scholar] [CrossRef]
  8. Özbay, E.; Erdemir, M.; Durmuş, H.İ. Utilization and efficiency of ground granulated blast furnace slag on concrete properties—A review. Constr. Build. Mater. 2016, 10, 423–434. [Google Scholar] [CrossRef]
  9. Song, H.-W.; Saraswathy, V. Studies on the corrosion resistance of reinforced steel in concrete with ground granulated blast-furnace slag—An overview. J. Hazard. Mater. 2006, 138, 226–233. [Google Scholar] [CrossRef]
  10. Neville, A.M. Properties of Concrete; Pearson Education: Bengaluru, India, 1963. [Google Scholar]
  11. Oner, A.; Akyuz, S. An experimental study on optimum usage of GGBS for the compressive strength of concrete. Cement Concrete Comp. 2007, 29, 505–514. [Google Scholar] [CrossRef]
  12. Papadakis, V.; Tsimas, S. Supplementary cementing materials in concrete: Part I: Efficiency and design. Cement Concrete Res. 2002, 32, 1525–1532. [Google Scholar] [CrossRef]
  13. Alshihri, M.M.; Azmy, A.M.; El-Bisy, M.S. Neural networks for predicting compressive strength of structural light weight concrete. Constr. Build. Mater. 2009, 23, 2214–2219. [Google Scholar] [CrossRef]
  14. Chithra, S.; Kumar, S.R.R.S.; Chinnaraju, K.; Alfin Ashmita, F. A comparative study on the compressive strength prediction models for High Performance Concrete containing nano silica and copper slag using regression analysis and Artificial Neural Networks. Constr. Build. Mater. 2016, 114, 528–535. [Google Scholar] [CrossRef]
  15. Dantas, A.T.A.; Leite, M.B.; de Jesus Nagahama, K. Prediction of compressive strength of concrete containing construction and demolition waste using artificial neural networks. Constr. Build. Mater. 2013, 38, 717–722. [Google Scholar] [CrossRef]
  16. Beşikçi, E.B.; Arslan, O.; Turan, O.; Ölçer, A.I. An artificial neural network based decision support system for energy efficient ship operations. Comput. Oper. Res. 2016, 66, 393–401. [Google Scholar] [CrossRef] [Green Version]
  17. Sarıdemir, M. Prediction of compressive strength of concretes containing metakaolin and silica fume by artificial neural networks. Adv. Eng. Softw. 2009, 40, 350–355. [Google Scholar] [CrossRef]
  18. Ince, R. Prediction of fracture parameters of concrete by artificial neural networks. Eng. Fract. Mech. 2004, 71, 2143–2159. [Google Scholar] [CrossRef]
  19. Naderpour, H.; Rafiean, A.H.; Fakharian, P. Compressive strength prediction of environmentally friendly concrete using artificial neural networks. J. Build. Eng. 2018, 16, 213–219. [Google Scholar] [CrossRef]
  20. Uddin, M.T.; Mahmood, A.H.; Kamal, M.R.I.; Yashin, S.M.; Zihan, Z.U.A. Effects of maximum size of brick aggregate on properties of concrete. Constr. Build. Mater. 2017, 134, 713–726. [Google Scholar] [CrossRef]
  21. Bilim, C.; Atiş, C.D.; Tanyildizi, H.; Karahan, O. Predicting the compressive strength of ground granulated blast furnace slag concrete using artificial neural network. Adv. Eng. Softw. 2009, 40, 334–340. [Google Scholar] [CrossRef]
  22. Boukhatem, B.; Ghrici, M.; Kenai, S.; Tagnit-Hamou, A. Prediction of Efficiency Factor of Ground-Granulated Blast-Furnace Slag of Concrete Using Artificial Neural Network. ACI Mater. J. 2011, 108, 56–63. [Google Scholar]
  23. Lee, A.; Geem, Z.; Suh, K.-D. Determination of optimal initial weights of an artificial neural network by using the harmony search algorithm: Application to breakwater armor stones. Appl. Sci. 2016, 6, 164. [Google Scholar] [CrossRef]
  24. Liou, S.-W.; Wang, C.-M.; Huang, Y.-F. Integrative discovery of multifaceted sequence patterns by frame-relayed search and hybrid PSO-ANN. J. UCS 2009, 15, 742–764. [Google Scholar]
  25. Rukhaiyar, S.; Alam, M.; Samadhiya, N. A PSO-ANN hybrid model for predicting factor of safety of slope. Int. J. Geotech. Eng. 2018, 12, 556–566. [Google Scholar] [CrossRef]
  26. Gordan, B.; Armaghani, D.J.; Hajihassani, M.; Monjezi, M. Prediction of seismic slope stability through combination of particle swarm optimization and neural network. Eng. Comput. 2016, 32, 85–97. [Google Scholar] [CrossRef]
  27. Bo, G.; Hejuan, H.; Hui, H.; Yan, R. Hybrid PSO-BP neural network approach for wind power forecasting. Int. Energy J. 2017, 17, 211–222. [Google Scholar]
  28. Wang, Q.; Li, Y.; Diao, M.; Gao, W.; Qi, Z. Performance enhancement of INS/CNS integration navigation system based on particle swarm optimization back propagation neural network. Ocean Eng. 2015, 108, 33–45. [Google Scholar] [CrossRef]
  29. Park, I. Study on the Properties of Concrete with the Ratio of Ground Granulated Blast-Furnace Slag Replacement. Master’s Thesis, Kyungpook National University, Daegu, Korea, 2005. [Google Scholar]
  30. Jin, J. Fresh and Hardened Properties of Concrete According to Slag Replacement Ratio. Master’s Thesis, Chonnam National University, Gwangju, Korea, 2015. [Google Scholar]
  31. Choi, W. Study on the Mix Design Method of Concrete Using Finely Blast-Furnace Slag. Master’s Thesis, Hanyang University, Seoul, Korea, 1999. [Google Scholar]
  32. Moon, H.; Shin, H. Utilization of ready mixed concrete sludge for improving the strength of concrete with GGBF slag. J. Korean Soc. Civ. Eng. 2002, 22, 315–326. [Google Scholar]
  33. Yoon, J.; Lee, M. A study on the compressive strength development of concrete using ground granulated blast-furnace slag of different fineness. J. Archit. Inst. Korea Struct. Constr. 2000, 16, 63–70. [Google Scholar]
  34. Lee, K.; Kwon, K.; Lee, H.; Lee, S.; Kim, G. Characteristics of autogenous shrinkage for concrete containing blast-furnace slag. J. Korea Concr. Inst. 2004, 16, 621–626. [Google Scholar] [CrossRef]
  35. Lee, K.M.; Lee, H.K.; Lee, S.H.; Kim, G.Y. Autogenous shrinkage of concrete containing granulated blast-furnace slag. Cement Concrete Res. 2006, 36, 1279–1285. [Google Scholar] [CrossRef]
  36. Wainwright, P.; Rey, N. The influence of ground granulated blast furnace slag (GGBS) additions and time delay on the bleeding of concrete. Cement Concrete Comp. 2000, 22, 253–257. [Google Scholar] [CrossRef]
  37. Li, Q.; Li, Z.; Yuan, G. Effects of elevated temperatures on properties of concrete containing ground granulated blast furnace slag as cementitious material. Constr. Build. Mater. 2012, 35, 687–692. [Google Scholar] [CrossRef]
  38. Rafiq, M.; Bugmann, G.; Easterbrook, D. Neural network design for engineering applications. Comput. Struct. 2001, 79, 1541–1552. [Google Scholar] [CrossRef]
  39. Armaghani, D.J.; Mohamad, E.T.; Narayanasamy, M.S.; Narita, N.; Yagiz, S. Development of hybrid intelligent models for predicting TBM penetration rate in hard rock condition. Tunn. Undergr. Sp. Tech. 2017, 63, 29–43. [Google Scholar] [CrossRef]
  40. Šipoš, T.K.; Miličević, I.; Siddique, R. Model for mix design of brick aggregate concrete based on neural network modelling. Constr. Build. Mater. 2017, 148, 757–769. [Google Scholar] [CrossRef]
  41. Ballal, T.M.A. The Use of Artificial Neural Networks for Modelling Buildability in Preliminary Structural Design. Ph.D. Thesis, Loughborough University, Loughborough, UK, 1999. [Google Scholar]
  42. Bingöl, A.F.; Tortum, A.; Gül, R. Neural networks analysis of compressive strength of lightweight concrete after high temperatures. Mater. Des. 2013, 52, 258–264. [Google Scholar] [CrossRef]
  43. Shi, Y.; Eberhart, R.C. Parameter selection in particle swarm optimization. In Proceedings of the 7th International Conference on Evolutionary Programming, San Diego, CA, USA, 25–27 March 1998; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  44. Bratton, D.; Kennedy, J. Defining a standard for particle swarm optimization. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007. [Google Scholar]
  45. Zhang, J.-R.; Zhang, J.; Lok, T.-M.; Lyu, M.R. A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training. Appl. Math. Comput. 2007, 185, 1026–1037. [Google Scholar] [CrossRef]
  46. Kaur, A.; Kaur, M. A review of parameters for improving the performance of particle swarm optimization. Int. J. Hybrid Inf. Technol. 2015, 8, 7–14. [Google Scholar] [CrossRef]
  47. Van den Bergh, F.; Engelbrecht, A.P. A study of particle swarm optimization particle trajectories. Inf. Sci. 2006, 176, 937–971. [Google Scholar] [CrossRef]
  48. Parsopoulos, K.E.; Vrahatis, M.N. Recent approaches to global optimization problems through particle swarm optimization. Nat. Comput. 2002, 1, 235–306. [Google Scholar] [CrossRef]
  49. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  50. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  51. Shi, Y.; Eberhart, R.C. Particle swarm optimization: Developments, applications and resources. In Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001. [Google Scholar]
  52. Chang, Y.-T.; Lin, J.; Shieh, J.-S.; Abbod, M.F. Optimization the initial weights of artificial neural networks via genetic algorithm applied to hip bone fracture prediction. Adv. Fuzzy Syst. 2012, 6. [Google Scholar] [CrossRef]
  53. Beale, M.H.; Hagan, M.T.; Demuth, H.B. Neural Network Toolbox™ User’s Guide; The MathWorks Inc.: Natick, MA, USA, 2017. [Google Scholar]
  54. Pala, M.; Özbay, E.; Öztaş, A.; Yuce, M. Appraisal of long-term effects of fly ash and silica fume on compressive strength of concrete by neural networks. Constr. Build. Mater. 2007, 21, 384–394. [Google Scholar] [CrossRef]
  55. Hecht-Nielsen, R. Kolmogorov’s mapping neural network existence theorem. In Proceedings of the IEEE International Conference on Neural Networks III, San Diego, CA, USA, 21–24 June 1987; IEEE Press: Piscataway, NJ, USA, 1987. [Google Scholar]
  56. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signal. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  57. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  58. Ozturan, M.; Kutlu, B.; Ozturan, T. Comparison of concrete strength prediction techniques with artificial neural network approach. Build. Res. J. 2008, 56, 23–36. [Google Scholar]
  59. Hush, D.R. Classification with neural networks: A performance analysis. In Proceedings of the IEEE International Conference on Systems Engineering, Fairborn, OH, USA, 24–26 August 1989. [Google Scholar]
  60. Gallant, S.I. Neural Network Learning and Expert Systems; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
  61. Tamura, S.I.; Tateishi, M. Capabilities of a four-layered feedforward neural network: Four layers versus three. IEEE Trans. Neural Netw. 1997, 8, 251–255. [Google Scholar] [CrossRef]
  62. Sheela, K.G.; Deepa, S.N. Review on methods to fix number of hidden neurons in neural networks. Math. Probl. Eng. 2013, 11, 425740. [Google Scholar] [CrossRef]
  63. Alam, M.N.; Das, B.; Pant, V. A comparative study of metaheuristic optimization approaches for directional overcurrent relays coordination. Electr. Power Syst. Res. 2015, 128, 39–52. [Google Scholar] [CrossRef]
  64. Waszczyszyn, Z.; Ziemiański, L. Neural networks in the identification analysis of structural mechanics problems. In Parameter Identification of Materials and Structures; Springer: Berlin/Heidelberg, Germany, 2005; pp. 265–340. [Google Scholar]
  65. Sayyad, H.; Manshad, A.K.; Rostami, H. Application of hybrid neural particle swarm optimization algorithm for prediction of MMP. Fuel 2014, 116, 625–633. [Google Scholar] [CrossRef]
  66. Twomey, J.M.; Smith, A.E. Bias and variance of validation methods for function approximation neural networks under conditions of sparse data. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 1998, 28, 417–430. [Google Scholar] [CrossRef]
Figure 1. Artificial neuron model.
Figure 2. Flowchart for the hybrid particle swarm optimization-back-propagation (PSO-BP) algorithm.
Figure 3. Architecture of the compressive strength (CS) prediction neural network model.
Figure 4. Comparison between the experimental CS and that predicted by the BP neural network.
Figure 5. Comparison between the experimental CS and that predicted by the PSO neural network.
Figure 6. Comparison between the experimental CS and that predicted by the PSO-BP neural network.
Figure 7. Percentage of data that have equal or smaller relative error than the specified error criterion (SR) for the developed models.
Figure 8. (a) Mean and (b) standard deviation of the root mean squared error (RMSE) for the developed models.
Table 1. Ranges of the input and output parameters in the database.

Parameter                   | Symbol  | Unit  | Category | Min  | Max
Curing temperature          | T       | °C    | Input    | 5    | 75
Water to binder ratio       | w/b     | %     | Input    | 25   | 88.9
Water                       | W       | kg/m3 | Input    | 128  | 295
GGBFS to total binder ratio | GGBFS/B | %     | Input    | 0    | 85
Fine aggregate              | FA      | kg/m3 | Input    | 395  | 947
Coarse aggregate            | CA      | kg/m3 | Input    | 723  | 1135
Superplasticizer            | SP      | %     | Input    | 0    | 2.9
Compressive strength        | CS      | MPa   | Output   | 17.2 | 77
Table 2. Empirical equations for the number of hidden neurons (Nh).

Empirical Equation       | Reference
0.75Ni                   | Neville (1986) [58]
2Ni + 1                  | Hecht-Nielsen (1987) [55]
3Ni                      | Hush (1989) [59]
2Ni                      | Gallant (1993) [60]
Ni + 1                   | Tamura (1997) [61]
(4Ni^2 + 3)/(Ni^2 − 8)   | Sheela (2013) [62]

Ni is the number of input neurons.
Table 3. Values of the PSO parameters considered.

Acceleration Coefficients (c1, c2)         | Swarm Size (Nop) | Number of Hidden Neurons (Nh)
c1 = 0.8, c2 = 3.2; c1 = 3.2, c2 = 0.8     | 10               | 2–21
c1 = 1.333, c2 = 2.667; c1 = 2, c2 = 1.5   | 20               |
c1 = 1.714, c2 = 2.286; c1 = 2, c2 = 1     | 30               |
c1 = 2, c2 = 2; c1 = 1, c2 = 2             | 40               |
c1 = 2.286, c2 = 1.714; c1 = 1.5, c2 = 2   | 50               |
c1 = 1.333, c2 = 2.667; c1 = 1.5, c2 = 1.5 | 100              |
Table 4. Obtained statistical performance values for the developed models.

Statistical Indices | BP (TR) | BP (TS) | PSO (TR) | PSO (TS) | PSO-BP (TR) | PSO-BP (TS)
MAE                 | 2.446   | 3.325   | 3.196    | 4.663    | 1.581       | 2.689
RMSE                | 3.123   | 5.045   | 4.400    | 5.822    | 2.253       | 3.332
MAPE                | 0.0595  | 0.0778  | 0.0821   | 0.114    | 0.0392      | 0.0644
R2                  | 0.943   | 0.906   | 0.884    | 0.861    | 0.971       | 0.961

TR and TS represent the training and testing datasets, respectively.
Table 5. Values of the SR for the developed models.

Learning Algorithm | SR (%), Bep = 5% | Bep = 10% | Bep = 20% | Bep = 30% | Bep = 40% | Bep (%) at SR = 100%
BP                 | 49.2             | 81.4      | 94.0      | 98.8      | 99.6      | 44
PSO                | 30.2             | 61.7      | 85.5      | 95.1      | 100       | 36
PSO-BP             | 64.9             | 89.5      | 99.2      | 100       | 100       | 22

Bep represents the restrained error.
Table 6. Mean and standard deviation of the RMSE for the developed models.

Learning Algorithm | Mean (TR) | Standard Deviation (TR) | Mean (TS) | Standard Deviation (TS)
BP                 | 3.185     | 0.828                   | 3.959     | 1.989
PSO                | 5.079     | 0.557                   | 6.767     | 1.170
PSO-BP             | 2.630     | 0.271                   | 2.905     | 0.319

TR and TS represent the training and testing datasets, respectively.
