**3. Methodology**

The methodology section provides brief details of the approaches used to determine the CS of concrete mathematically, as shown in Figure 2. First, the AI processes used in this research are explained. The results obtained from the AI data-processing techniques are then assessed for validity using different statistical parameters.

*Crystals* **2021**, *11*, x FOR PEER REVIEW


**Figure 2.** Adopted methodology for study.

### *3.1. Modeling Techniques*

Machine learning-based modeling has been used in the past to predict different mechanical properties of materials [40–42]. These modeling techniques can be utilized to develop models for predicting a property of a material, and they do not need any knowledge of the rudimentary experimental processes. This section of the paper provides a brief introduction to the predictive models used in this study. These models are as follows:

### 3.1.1. ANN


ANN is an artificial data-analyzing technique inspired by the learning capability of the human brain. The most widely used type of ANN is feedforward back propagation (FFBP). As evident from Figure 3, an FFBP network consists of at least three layers, namely, the input layer, the hidden layer, and the output layer. The nodes of these layers are connected in a proper sequence along with some weights. The input layer nodes do not perform any function on the input data; their function is just to receive the data from outside. It is in the hidden layer that the data are biased, weighted, and summed up. These processed data are then sent out to the output layers [43,44].

There are basically two types of FFBP, namely, the single layer perceptron (SLP) and the multiple layer perceptron (MLP). Both types of FFBP have their own advantages and disadvantages. Although the SLP is simple and easy to use, it cannot handle nonlinear relations. On the other hand, MLPs are complex, but they can be applied to nonlinear relations.

**Figure 3.** Illustration of ANN.

Mathematically, an MLP operates in the following way:

Step 1: The inputs are summed and weighted as

$$s\_j = \sum\_{i=1}^{n} \omega\_{ij} I\_i + b\_j, \quad j = 1, 2, 3, \ldots, h \tag{1}$$

where *n* = total number of inputs, *I<sub>i</sub>* = current input, *ω<sub>ij</sub>* = weight between the *i*th node of the previous layer and the *j*th neuron, and *b<sub>j</sub>* = bias of the *j*th neuron.

Step 2: This step includes an activation function. There are various types of activation functions, such as sigmoid, ramp, and Gaussian functions. However, this research utilizes the sigmoid function, which is defined as

$$s\_j = \frac{1}{1 + e^{-s\_j}}, \quad j = 1, 2, 3, \ldots, h \tag{2}$$

Step 3: This represents the final outputs. The final outputs depend on the outputs calculated by the hidden nodes. The final outcome can be expressed as

$$O\_k = \sum\_{j=1}^{h} \left(\omega\_{jk} \cdot s\_j\right) + b'\_k, \quad k = 1, 2, \ldots, m \tag{3}$$

$$O\_k = \text{sigmoid}\left(O\_k\right) = \frac{1}{\left(1 + e^{-O\_k}\right)}, k = 1, 2, \dots, m \tag{4}$$

In the above equations, *ω<sub>jk</sub>* = weighted connection between the *k*th output node and the *j*th hidden node. Similarly, *b*′*<sub>k</sub>* = bias of the *k*th output node.

In this research, 70 percent of the data points are selected randomly for training, and 30 percent for validation.
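Steps 1–3 can be sketched as a single forward pass. The snippet below is a minimal NumPy illustration of Equations (1)–(4); the layer sizes and random weights are assumptions for demonstration, not the trained parameters of the model developed in this study.

```python
import numpy as np

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a single-hidden-layer MLP, Equations (1)-(4).

    inputs:   (n,)   input vector I_i
    w_hidden: (n, h) weights omega_ij between inputs and hidden nodes
    b_hidden: (h,)   hidden-node biases b_j
    w_out:    (h, m) weights omega_jk between hidden and output nodes
    b_out:    (m,)   output-node biases b'_k
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    s = sigmoid(inputs @ w_hidden + b_hidden)  # Eqs. (1)-(2): weighted sum, then sigmoid
    return sigmoid(s @ w_out + b_out)          # Eqs. (3)-(4): output layer

# Five inputs (e.g. age, water, RHA, SP, aggregates), 8 hidden nodes, 1 output
rng = np.random.default_rng(0)
y = mlp_forward(rng.random(5), rng.standard_normal((5, 8)),
                np.zeros(8), rng.standard_normal((8, 1)), np.zeros(1))
print(y.shape)  # (1,)
```

Because every node output passes through the sigmoid, the raw prediction lies in (0, 1); in practice the target CS values would be scaled to this range before training.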

### 3.1.2. ANFIS

ANFIS is a technique that utilizes the combined effect of artificial neural networks and fuzzy logic [45]. Figure 4 shows a brief illustration of the ANFIS technique. An artificial neural network is used to minimize the chances of error in the output data, while the fuzzy logic is applied to represent the expert knowledge [42]. Fuzzy logic rules are applied as if–then statements when mathematically programming for the desired input and output datasets. An ANFIS model normally consists of five layers: (1) fuzzification, (2) set of rules, (3) normalization, (4) defuzzification, and (5) aggregation.

**Figure 4.** Illustration of adaptive neuro-fuzzy inference system (ANFIS).

Layer 1 is the fuzzification layer. It contains all the membership functions of the input variables. A Gaussian function is used in this layer to predict the outcome. Mathematically, it can be expressed as

$$
\mu\_{ui}\left(x\right) = \exp\left[-\frac{\left(x - a\_i\right)^2}{2\varepsilon\_i^2}\right] \tag{5}
$$

where *a<sub>i</sub>* and *ε<sub>i</sub>* are parameters of the membership function.

Layer 2 contains nodes which send the output by multiplying the input by certain weightages. This layer utilizes the fuzzy AND logic by using the equation listed below:

$$w\_i = \mu\_{A\_i}(x) \times \mu\_{B\_i}(y) \tag{6}$$

Layer 3 aims to normalize the data. It normalizes the membership functions, calculating the ratio between the different firing strengths using the following expression:

$$
\overline{w}\_i = \frac{w\_i}{\sum\_i w\_i} \tag{7}
$$

Layer 4 is known as the defuzzification layer. It contains nodes that conclude the fuzzy logic rules. This layer contains square nodes, which can be expressed by the following function:

$$
\overline{w}\_i f\_i = \overline{w}\_i \times (m\_i x + n\_i y + r\_i) \tag{8}
$$

where *m<sub>i</sub>*, *n<sub>i</sub>*, and *r<sub>i</sub>* are linear parameters.

Layer 5 has the function of aggregation. It sums up the outputs of the previous layer and presents the final output mathematically as follows:

$$\sum\_i \overline{w}\_i f\_i = \frac{\sum\_i w\_i f\_i}{\sum\_i w\_i} \tag{9}$$

All the data points are used for the training of data.

Off-the-shelf functionality of MATLAB is used for the ANN and ANFIS techniques in this research.
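The five layers above can be chained in a few lines. The following is a minimal NumPy sketch of Equations (5)–(9) for two inputs and a handful of rules; all parameter values and shapes are illustrative assumptions, not the MATLAB ANFIS implementation used in this study.

```python
import numpy as np

def anfis_forward(x, y, a, eps, lin):
    """Sketch of the five ANFIS layers, Equations (5)-(9), for two inputs x, y.

    a, eps: (rules, 2) Gaussian membership parameters a_i, eps_i per input
    lin:    (rules, 3) linear consequent parameters m_i, n_i, r_i
    """
    # Layer 1: Gaussian fuzzification, Eq. (5)
    mu = np.exp(-((np.array([x, y]) - a) ** 2) / (2.0 * eps ** 2))
    # Layer 2: firing strengths via fuzzy AND (product), Eq. (6)
    w = mu[:, 0] * mu[:, 1]
    # Layer 3: normalization, Eq. (7)
    w_bar = w / w.sum()
    # Layer 4: defuzzification with linear consequents, Eq. (8)
    f = lin[:, 0] * x + lin[:, 1] * y + lin[:, 2]
    # Layer 5: aggregation, Eq. (9)
    return np.sum(w_bar * f)

rng = np.random.default_rng(1)
out = anfis_forward(0.3, 0.7, rng.random((4, 2)), 0.5 + rng.random((4, 2)),
                    rng.standard_normal((4, 3)))
print(float(out))
```

Because the normalized firing strengths sum to one, the output is a weighted blend of the rule consequents, which is what makes the linear parameters trainable by least squares in the standard hybrid ANFIS learning rule.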

### 3.1.3. MNLR

MNLR is a technique which is used to model a random nonlinear relationship between the dependent and independent variables. The following equation represents the MNLR [41]:

$$Y = a + \beta\_1 X\_i + \beta\_2 X\_j + \beta\_3 X\_i^2 + \beta\_4 X\_j^2 + \ldots + \beta\_k X\_i X\_j \tag{10}$$

where *a* = intercept, *β* = slope or coefficient, and *k* = number of observations. The above equation can make an estimate of the value of *Y* for each value of *X*.

### 3.1.4. LR

LR is a technique in which there is a linear relationship between the dependent and independent variables. It can be represented mathematically as follows:

$$Y = a + \beta\_1 X\_1 + \beta\_2 X\_2 + \beta\_3 X\_3 + \ldots + \beta\_i X\_i \tag{11}$$

The above equation can be utilized to find values of *Y* for each input value *X*.

In the above equations of MNLR and LR, *Y* stands for the compressive strength of RHA concrete. Similarly, the values of *X* represent the inputs, which are age, water content, RHA content, SP content, and the percentage of aggregates.

The models for the MNLR and LR techniques are developed by the authors in Microsoft Excel using the above equations.
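Both Equations (10) and (11) reduce to ordinary least squares on a design matrix, differing only in which columns are supplied: linear terms for LR, plus squares and a cross term for MNLR. The sketch below uses synthetic data, not the study's dataset, and NumPy in place of the spreadsheet fit.

```python
import numpy as np

def fit_least_squares(X, y):
    """Fit coefficients [a, beta_1, ..., beta_k] by ordinary least squares."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column for 'a'
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Illustrative data: two predictors (e.g. water content and RHA content)
rng = np.random.default_rng(2)
X = rng.random((50, 2))
y_true = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1]

# LR, Eq. (11): linear terms only
lin_coef = fit_least_squares(X, y_true)

# MNLR, Eq. (10): augment with squared terms and the cross term before fitting
X_nl = np.column_stack([X, X ** 2, X[:, 0] * X[:, 1]])
nl_coef = fit_least_squares(X_nl, y_true)

print(np.round(lin_coef, 3))  # ≈ [3.0, 1.5, -2.0] on this noise-free data
```

Note that MNLR is still linear in its coefficients, which is why the same solver handles both models; only the feature columns change.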

### **4. Results**

A total of 192 data points are used for all the models and techniques: 134 data points for training, and 58 data points for validation. The results of the machine learning techniques and regression models are given in Appendix A.
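The random 70/30 split described above can be outlined as a permutation of the 192 indices; the seed below is an arbitrary assumption, not the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
idx = rng.permutation(192)                    # shuffle all 192 data points
train_idx, valid_idx = idx[:134], idx[134:]   # 134 for training, 58 for validation
print(len(train_idx), len(valid_idx))         # 134 58
```

Fixing the seed makes the split reproducible, which matters when comparing statistical parameters across the four techniques on identical validation points.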

### *4.1. ANN*

Parametric adjustments are made before running the proposed ANN model. These parameters include the number of hidden layers, the total number of neurons per hidden layer, the training function for the neural networks, the epochs, and the maximum number of iterations. These parameters are determined through the hit-and-trial method in this research. The details of the parametric adjustments are given in Table 4.

**Table 4.** Parametric adjustment of the developed model.



MATLAB was used to predict the compressive strength of RBC through ANN. ANN gave the results that were closest to the experimental results; the same can be verified through the statistical parameters of ANN.


It is noteworthy that the correlation factor for the ANN-predicted CS (R<sup>2</sup> = 0.98) is quite high. The prediction results for ANN are plotted in Figure 5.
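Assuming the correlation factor quoted here denotes the standard coefficient of determination, a minimal check of the formula, using made-up numbers rather than the study's data, is:

```python
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination R^2 between experimental and predicted CS."""
    ss_res = np.sum((actual - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

a = np.array([20.0, 25.0, 30.0, 35.0])  # hypothetical experimental CS (MPa)
p = np.array([21.0, 24.0, 31.0, 34.0])  # hypothetical predicted CS (MPa)
print(round(r_squared(a, p), 3))  # 0.968
```

A value near 1 means the residuals are small relative to the spread of the experimental data, which is the sense in which R<sup>2</sup> = 0.98 indicates a close fit.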


**Figure 5.** ANN (**a**) training, (**b**) validation, (**c**) testing.

### *4.2. ANFIS*

Similarly, before training the data on ANFIS, parametric adjustments were made. These included the total number of epochs and the function used for the activation of ANFIS. The parametric adjustments for ANFIS are given in Table 5.


**Table 5.** Parametric adjustments for ANFIS.


MATLAB is used for ANFIS. The correlation factor for the ANFIS-predicted CS (R<sup>2</sup> = 0.89) is also quite high. Figure 6 shows that the predicted results are quite close to the experimental values. Figure 7 shows the rules assigned to ANFIS for the optimum outcome.


**Figure 6.** ANFIS (**a**) training, (**b**) validation, (**c**) testing.




**Figure 7.** ANFIS modeling rules. (**a**) 1–29, (**b**) 30–60, (**c**) 61–91, (**d**) 92–122, (**e**) 123–153, (**f**) 154–184, (**g**) 185–192.

### *4.3. MNLR*
