#### *4.2.5. Velocity Limits (fv)*

Each particle's new velocity is determined as follows:

$$v_{i,j}(t+1) = w\, v_{i,j}(t) + c_1 r_1 \left(p_{i,j} - x_{i,j}(t)\right) + c_2 r_2 \left(p_{g,j} - x_{i,j}(t)\right), \quad j = 1, 2, \dots, d \tag{1}$$

where *c*<sub>1</sub> and *c*<sub>2</sub> are constants referred to as acceleration coefficients, *w* is referred to as the inertia factor, and *r*<sub>1</sub> and *r*<sub>2</sub> are two independent random numbers uniformly distributed within [0, 1]. The position of each particle is thus updated in each generation according to the following equation:

$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1), \quad j = 1, 2, \dots, d \tag{2}$$

In the standard PSO, Equation (1) is used to calculate the new velocity from the particle's previous velocity and from the distances of its current position to both its own best historical position and its neighbors' best position. Each component of the velocity *v*<sub>i</sub> can be clamped within the range [−*v*<sub>max</sub>, *v*<sub>max</sub>] to prevent particles from roaming excessively outside the search area; the particle then flies toward its new position.
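Equations (1) and (2) with velocity clamping can be sketched as below. This is a minimal illustration, not the authors' implementation: the search bounds, the coefficient values, and the rule *v*<sub>max</sub> = *fv* × (search-range width) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5,
             fv=0.1, lo=-1.0, hi=1.0):
    """One PSO update: Equation (1) for velocity, Equation (2) for position.

    x, v    : (n_particles, d) current positions and velocities
    p_best  : (n_particles, d) each particle's best historical position
    g_best  : (d,) best position found by the neighborhood/swarm
    fv      : velocity-limit fraction (assumed rule: v_max = fv * (hi - lo))
    """
    r1 = rng.random(x.shape)          # r1, r2 ~ U[0, 1], drawn per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v_max = fv * (hi - lo)
    v_new = np.clip(v_new, -v_max, v_max)  # clamp each velocity component
    x_new = np.clip(x + v_new, lo, hi)     # keep particles inside the search area
    return x_new, v_new

# toy usage: 4 particles in 2 dimensions
x = rng.uniform(-1, 1, (4, 2))
v = np.zeros((4, 2))
x, v = pso_step(x, v, p_best=x.copy(), g_best=np.zeros(2))
```

In a full run, `p_best` and `g_best` would be refreshed after each step from the fitness of the new positions.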

### *4.3. Monte Carlo Simulation*

The Monte Carlo technique has been commonly used as a variability generator in the training phase of algorithms, taking into account the randomness of the input space [69–72]. Hun et al. [73] studied the problem of crack propagation in heterogeneous media within a probabilistic context using Monte Carlo simulations. Additionally, Capillon et al. [74] investigated an uncertainty problem in structural dynamics for composite structures using Monte Carlo simulations. Overall, the Monte Carlo method has been successfully applied to account for randomness in the field of mechanics [75–80]. The key point of the Monte Carlo method is to repeat the simulations many times, randomly choosing values of the input variables in the corresponding space and calculating the output responses [81,82]. In this manner, all information about the fluctuations in the input space is transferred to the output response. In this work, a massively parallel numerical scheme was implemented to conduct the randomness propagation process. The statistical convergence of the Monte Carlo method reflects whether the number of simulations is sufficient and can be defined as follows [83–85]:

$$f_{\rm conv} = \frac{100}{m\,\overline{S}} \sum_{j=1}^{m} S_{j} \tag{3}$$

where *m* is the number of Monte Carlo iterations, *S*<sub>j</sub> is the value of the random variable *S* at iteration *j*, and $\overline{S}$ is the average value of *S*.
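Read this way, Equation (3) tracks the running mean of *S* normalized by its overall average, approaching 100% once the number of simulations is sufficient. A minimal sketch, in which the log-normal input variable and sample size are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_history(samples):
    """Running value of Equation (3): 100 * (partial mean of S) / (overall mean of S).

    The curve approaches 100 as the number of Monte Carlo iterations m grows,
    indicating that enough simulations have been run.
    """
    samples = np.asarray(samples, dtype=float)
    s_bar = samples.mean()                 # overall average of S
    m = np.arange(1, len(samples) + 1)
    running_mean = np.cumsum(samples) / m  # partial mean after m iterations
    return 100.0 * running_mean / s_bar

# toy input variable: 2000 draws of a log-normal "material property"
s = rng.lognormal(mean=0.0, sigma=0.25, size=2000)
f = conv_history(s)
print(round(f[-1], 6))  # 100.0 at the final iteration by construction
```

Plotting `f` against the iteration count gives the usual Monte Carlo convergence curve; in practice one stops once the curve stays within a tolerance band around 100%.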

#### *4.4. Quality Assessment Criteria*

In the present work, three quality assessment criteria—the correlation coefficient (R), root mean squared error (RMSE), and mean absolute error (MAE)—have been used to validate and test the developed AI models. R quantifies the statistical correlation between the measured and predicted data sets and can be calculated using the following equation [86–92]:

$$R = \frac{\sum_{j=1}^{N} \left(y_{0,j} - \overline{y_0}\right)\left(y_{p,j} - \overline{y_p}\right)}{\sqrt{\sum_{j=1}^{N} \left(y_{0,j} - \overline{y_0}\right)^2 \sum_{j=1}^{N} \left(y_{p,j} - \overline{y_p}\right)^2}} \tag{4}$$

where *N* is the number of observations; *y*<sub>p,j</sub> and $\overline{y_p}$ are the predicted and mean predicted values, while *y*<sub>0,j</sub> and $\overline{y_0}$ are the measured and mean measured values of the Young's modulus of the nanocomposite, with *j* = 1, ..., *N*. RMSE and MAE have the same units as the values being estimated; low values of RMSE and MAE indicate good accuracy of the models' predictions [93,94]. In an ideal prediction, RMSE and MAE are zero. RMSE and MAE are given by the following formulae [95–99]:

$$\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(y_{0,i} - y_{p,i}\right)^2} \tag{5}$$

$$\text{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left|y_{0,i} - y_{p,i}\right| \tag{6}$$

In addition, Willmott's index of agreement (IA) has also been employed in this study. The formulation of IA is given by [100,101]:

$$\text{IA} = 1 - \frac{\sum_{i=1}^{N} \left(y_{0,i} - y_{p,i}\right)^2}{\sum_{i=1}^{N} \left(\left|y_{p,i} - \overline{y_0}\right| + \left|y_{0,i} - \overline{y_0}\right|\right)^2} \tag{7}$$
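The four criteria of Equations (4)–(7) can be computed directly. A short sketch, assuming the deviations in the IA denominator are taken about the measured mean (the standard Willmott form) and using illustrative data:

```python
import numpy as np

def metrics(y0, yp):
    """Quality criteria of Equations (4)-(7) for measured y0 and predicted yp."""
    y0, yp = np.asarray(y0, float), np.asarray(yp, float)
    # Equation (4): correlation coefficient R
    r = np.sum((y0 - y0.mean()) * (yp - yp.mean())) / np.sqrt(
        np.sum((y0 - y0.mean()) ** 2) * np.sum((yp - yp.mean()) ** 2))
    rmse = np.sqrt(np.mean((y0 - yp) ** 2))  # Equation (5)
    mae = np.mean(np.abs(y0 - yp))           # Equation (6)
    # Equation (7): Willmott's index of agreement (deviations about the measured mean)
    ia = 1 - np.sum((y0 - yp) ** 2) / np.sum(
        (np.abs(yp - y0.mean()) + np.abs(y0 - y0.mean())) ** 2)
    return r, rmse, mae, ia

y0 = np.array([1.0, 2.0, 3.0, 4.0])
r, rmse, mae, ia = metrics(y0, y0)  # perfect prediction
print(r, rmse, mae, ia)             # 1.0 0.0 0.0 1.0
```

For a perfect prediction the criteria reach their ideal values (R = 1, RMSE = MAE = 0, IA = 1), which is a convenient sanity check before applying them to model outputs.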

#### **5. Results and Discussion**

#### *5.1. Description of Parametric Studies*

In order to investigate the influence of PSO parameters on the performance of ANFIS, parametric studies were carried out by varying nrule, npop, wini, c<sub>1</sub>, c<sub>2</sub>, and f<sub>v</sub>, as indicated in Table 2. It is noteworthy that the proposed ranges were selected by considering both problem dimensionality (i.e., complexity) and computation time. As recommended by He et al. [102] and Chen et al. [48], the PSO inertia weight should be carefully investigated; therefore, a broad range of wini was proposed, from 0.1 to 1.2. The swarm size npop varied from 20 to 300 with a nonconstant step, whereas the coefficients c<sub>1</sub> and c<sub>2</sub> ranged from 0.2 to 2 with a step of 0.2. The number of fuzzy rules varied from 5 to 40. Finally, f<sub>v</sub> ranged from 0.05 to 0.2.
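The ranges above can be enumerated as a study grid. In the sketch below, only the endpoints, step sizes, and level counts come from the text; the intermediate npop values are omitted (the paper uses a nonconstant step), and c2 is assumed to share c1's levels:

```python
from itertools import product

# Levels for the parametric study; nrule levels and the wini, c1, fv
# endpoints/steps come from the text, everything else is illustrative.
grid = {
    "nrule": [5, 10, 15, 20, 30, 40],
    "wini": [round(0.1 + 0.1 * k, 1) for k in range(12)],  # 0.1 ... 1.2
    "c1": [round(0.2 * k, 1) for k in range(1, 11)],       # 0.2 ... 2.0 (c2 analogous)
    "fv": [0.05, 0.10, 0.15, 0.20],
}

# full factorial enumeration; a one-at-a-time sweep is an equally valid design
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 6 * 12 * 10 * 4 = 2880
```

Each `configs` entry maps parameter names to one setting, which can then be passed to the ANFIS-PSO training routine and timed or scored independently.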

The relationship between the number of fuzzy rules and the number of total ANFIS weight parameters is depicted in Figure 2. As can be seen, the relationship is linear: as the number of fuzzy rules increases from 5 to 40, the number of ANFIS weight parameters increases from 50 to 370. Additionally, the characteristics of the ANFIS structure are described in Table 3, showing that the Gaussian membership function was used to generate fuzzy rules.


**Table 2.** Values used for parameters in parametric studies.

| Parameters | Values Used |
| --- | --- |
| nrule | 5, 10, 15, 20, 30, 40 |
| npop | 20–300 (nonconstant step) |
| wini | 0.1–1.2 |
| c1, c2 | 0.2–2 (step of 0.2) |
| fv | 0.05–0.2 |
| Number of total parameters | 25 × nrule |

Materials 2020, 13, x FOR PEER REVIEW

**Figure 2.** Influence of the number of fuzzy rules on the number of total ANFIS weight parameters to be optimized by PSO.

#### *5.2. Preliminary Analyses*

#### 5.2.1. Computation Time

Figure 3 presents the influence of nrule and the swarm parameters on the computation time. It is worth noting that the running time was scaled with respect to the minimum value of the corresponding parameter. For instance, the computation time using nrule = 10 is two times larger than that using nrule = 5. Additionally, Figure 3 shows that nrule and npop exhibited the highest slope (about 0.75), confirming that these two parameters required considerable computation time. For all other parameters, the computation time remained constant when the value of the parameter was increased.


**Figure 3.** Influence of variable increment ratio on running time, noting that both nrule and npop exhibit a slope coefficient of 0.75.

#### 5.2.2. PSO Stopping Criterion

In this study, 1000 iterations were applied as the stopping criterion in the optimization of the ANFIS weight parameters. Figure 4 shows the convergence of the statistical criteria as a function of nrule, whereas Figure 5 presents the convergence of these criteria with respect to npop. The evolution of RMSE, MAE, and R over 1000 iterations for the 6 cases of nrule is given in Figure 4a–c for the training part and in Figure 4d–f for the testing part. It was observed that at least 800 iterations were required to obtain converged values of RMSE, MAE, and R in all cases. However, no specific trend could be deduced for selecting the best nrule. Finally, it is worth noting that for all values of nrule, the RMSE, MAE, and R values for the testing part were very close: RMSE ranged from 0.038 to 0.043, MAE varied from 0.015 to 0.022, and R ranged from 0.95 to 0.97.

The evolution of RMSE, MAE, and R over 1000 iterations for the 9 cases of npop is shown in Figure 5. Similar results were obtained as for nrule: at least 800 iterations were needed to reach convergence.


**Figure 4.** Convergence of several statistical criteria over 1000 iterations in terms of nrule for the training part: (**a**) RMSE, (**b**) MAE, (**c**) R. Convergence of several statistical criteria over 1000 iterations in terms of nrule for the testing part: (**d**) RMSE, (**e**) MAE, (**f**) R.


**Figure 5.** Convergence of several statistical criteria over 1000 iterations in terms of npop for the training part: (**a**) RMSE, (**b**) MAE, (**c**) R. Convergence of several statistical criteria over 1000 iterations in terms of npop for the testing part: (**d**) RMSE, (**e**) MAE, (**f**) R.

#### 5.2.3. Statistical Convergence

In order to take into account the variability in the input space, 200 random realizations were performed for each configuration. These realizations captured the influence of the probability density functions of the inputs on the optimization results. In terms of nrule, Figure 6a–c indicate the statistical convergence of RMSE, MAE, and R for the training part, whereas Figure 6d–f present the statistical convergence of the same criteria for the testing part. It can be seen that statistical convergence was reached after about 100 random realizations, which held for all the tested cases. Similarly, Figure 7 shows the statistical convergence in terms of npop for both the training and testing parts; 200 random realizations were observed to be sufficient to achieve reliable results.


**Figure 6.** Statistical convergence over 200 random realizations in terms of nrule for the training part: (**a**) RMSE, (**b**) MAE, (**c**) R. Statistical convergence over 200 random realizations in terms of nrule for the testing part: (**d**) RMSE, (**e**) MAE, (**f**) R.

**Figure 7.** Statistical convergence over 200 random realizations in terms of npop for the training part: (**a**) RMSE, (**b**) MAE, (**c**) R. Statistical convergence over 200 random realizations in terms of npop for the testing part: (**d**) RMSE, (**e**) MAE, (**f**) R.

#### *5.3. Parametric Performance*
