## *5.1. Gray Wolf Optimization*

In the previous section, the performance of the developed equation was evaluated. In this section, optimization techniques are introduced to maximize Equation (4) and thereby improve the UC. GWO was implemented to find the maximum UC using the open-source *Python* library *Mealpy* [50]. The main GWO parameters were kept at the Mealpy defaults; reference [50] provides further details on implementing GWO in *Python*.

Before building the optimization model, the search range of each parameter must be determined. As shown in Table 1, the input ranges of the five parameters were adopted as the optimization ranges, and the maximum UC was sought within them. The swarm size was set to 50, 100, 150, and 250 [22]. Figure 9 shows the fitness variation during GWO. The maximum UC was found with a swarm size of 50: when LOP is 38.59, PD is 0.247, IC is 2273, Su is 157.46, and T is 153.18, the maximum UC is 6098.488.
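The search procedure above can be illustrated with a minimal, library-free sketch of the grey-wolf update rule (the study itself used Mealpy's default GWO [50]; the function `gwo_maximize` and the toy objective in the usage note are hypothetical stand-ins for Equation (4)):

```python
import random

def gwo_maximize(obj, lb, ub, pop_size=50, epochs=200, seed=42):
    """Minimal Grey Wolf Optimizer for maximization within box bounds."""
    rng = random.Random(seed)
    dim = len(lb)
    wolves = [[rng.uniform(lb[d], ub[d]) for d in range(dim)]
              for _ in range(pop_size)]
    best_pos, best_fit = None, float("-inf")
    for t in range(epochs):
        # The three best wolves (alpha, beta, delta) lead the pack.
        ranked = sorted(wolves, key=obj, reverse=True)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        fit = obj(alpha)
        if fit > best_fit:
            best_pos, best_fit = list(alpha), fit
        a = 2 - 2 * t / epochs  # control parameter decreases linearly from 2 to 0
        new_wolves = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a   # exploration/exploitation coefficient
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - w[d])  # distance to the leader
                    x += leader[d] - A * D
                # Average of the three leader-guided moves, clamped to bounds.
                pos.append(min(max(x / 3.0, lb[d]), ub[d]))
            new_wolves.append(pos)
        wolves = new_wolves
    return best_pos, best_fit
```

For example, maximizing the toy objective f(x) = −((x₀ − 0.3)² + (x₁ − 0.7)²) over [0, 1]² drives the pack toward (0.3, 0.7).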


**Table 1.** The input parameters range.

**Figure 9.** The fitness variation in GWO.

## *5.2. Artificial Bee Colony Algorithm*

ABC was also implemented, as a comparison technique, to find the maximum UC in Equation (4). The default parameters suggested in Mealpy [50] were again used to construct the ABC optimization model, and the optimization ranges followed Table 1. The swarm size was set to 50, 100, 150, and 250. Figure 10 shows the fitness variation during the ABC process. The maximum UC was found with swarm sizes of 100, 150, and 250: when LOP is 38.47, PD is 0.240, IC is 2276, Su is 157.46, and T is 170.16, the maximum UC is 6043.64. It is apparent that GWO performs better than ABC in finding the maximum UC.
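As with GWO, the ABC mechanics can be sketched without the library (the study used Mealpy's defaults [50]; the fitness shifting used for the onlooker probabilities and the `limit` value below are illustrative choices, not Mealpy's internals):

```python
import random

def abc_maximize(obj, lb, ub, pop_size=30, epochs=200, limit=20, seed=42):
    """Minimal Artificial Bee Colony optimizer for maximization within box bounds."""
    rng = random.Random(seed)
    dim = len(lb)
    foods = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(pop_size)]
    fits = [obj(f) for f in foods]
    trials = [0] * pop_size
    best_i = max(range(pop_size), key=lambda i: fits[i])
    best_pos, best_fit = list(foods[best_i]), fits[best_i]

    def try_improve(i):
        # Perturb one dimension of source i toward/away from a random partner,
        # then keep the candidate only if it is better (greedy selection).
        k = rng.choice([j for j in range(pop_size) if j != i])
        d = rng.randrange(dim)
        cand = list(foods[i])
        phi = rng.uniform(-1.0, 1.0)
        cand[d] = min(max(foods[i][d] + phi * (foods[i][d] - foods[k][d]), lb[d]), ub[d])
        f = obj(cand)
        if f > fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(epochs):
        for i in range(pop_size):       # employed-bee phase
            try_improve(i)
        # Onlooker phase: pick sources with probability proportional to shifted fitness.
        shift = min(fits)
        weights = [f - shift + 1e-12 for f in fits]
        for _ in range(pop_size):
            try_improve(rng.choices(range(pop_size), weights=weights)[0])
        # Remember the best source before any abandonment.
        i = max(range(pop_size), key=lambda i: fits[i])
        if fits[i] > best_fit:
            best_pos, best_fit = list(foods[i]), fits[i]
        for i in range(pop_size):       # scout phase: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lb[d], ub[d]) for d in range(dim)]
                fits[i], trials[i] = obj(foods[i]), 0
    return best_pos, best_fit
```

On the same toy quadratic used for the GWO sketch, ABC converges to the same optimum, which mirrors the near-identical maxima the two algorithms report for Equation (4).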

**Figure 10.** The fitness variation in ABC.

## **6. Discussion**

GP was adopted to develop the equation for predicting the UC; the developed equation achieved an R<sup>2</sup> of 0.897 on the training set and an R<sup>2</sup> of 0.844 on the testing set. Comparison with five regression-based methods revealed that the developed equation forecasts the UC better than previous methods. To further assess the strength of GP for predicting UC, several other widely used machine learning models were also built as intelligent UC predictors: random forest (RF), gradient boosting machine (GBM), adaptive boosting machine (AdaBoost), ANN, support vector machine (SVM), k-nearest neighbors (KNN), and decision tree (DT). These models were developed with the default parameters in Scikit-learn [51].
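The baseline comparison can be reproduced in outline as follows. This is a hedged sketch: the synthetic data stand in for the UC database (which is not reproduced here), and only the Scikit-learn default constructors for the models named above are used (`max_iter` is raised for the ANN solely so the small example converges):

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor)
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the five-input UC database (LOP, PD, IC, Su, T, scaled).
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))
y = X @ np.array([1.0, 2.0, -1.0, 0.5, 1.5]) + 0.05 * rng.normal(size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestRegressor(random_state=0),
    "GBM": GradientBoostingRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "ANN": MLPRegressor(random_state=0, max_iter=2000),
    "SVM": SVR(),
    "KNN": KNeighborsRegressor(),
    "DT": DecisionTreeRegressor(random_state=0),
}
# Train on the training split and score R^2 on the held-out testing split.
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```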

These models were trained on the training set and evaluated on the testing set. A convenient way to compare several modeling approaches is the Taylor diagram, which combines the correlation coefficient, the centered RMSE, and the standard deviation into a single polar plot. These three statistics are linked by the cosine relation [52] shown in Equation (8).

$$E'^2 = \sigma_p^2 + \sigma_a^2 - 2\sigma_p \sigma_a R \tag{8}$$

In Equation (8), *E*′ is the centered RMSE between the measured and predicted values, *σ<sub>p</sub>*<sup>2</sup> and *σ<sub>a</sub>*<sup>2</sup> are the variances of the predicted and measured values, respectively, and *R* is the correlation coefficient between them.
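Equation (8) can be verified numerically from paired measured/predicted values. A small sketch (the function name `taylor_stats` is illustrative) computes the three statistics with population formulas, for which the identity holds exactly:

```python
import math

def taylor_stats(pred, meas):
    """Return the Taylor-diagram statistics of Equation (8):
    standard deviations, correlation R, and centered RMSE E'."""
    n = len(pred)
    mp = sum(pred) / n
    ma = sum(meas) / n
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred) / n)   # sigma_p
    sa = math.sqrt(sum((a - ma) ** 2 for a in meas) / n)   # sigma_a
    cov = sum((p - mp) * (a - ma) for p, a in zip(pred, meas)) / n
    r = cov / (sp * sa)                                    # correlation coefficient
    # Centered RMSE: RMS difference after removing the two means.
    e_centered = math.sqrt(sum(((p - mp) - (a - ma)) ** 2
                               for p, a in zip(pred, meas)) / n)
    return sp, sa, r, e_centered
```

For any data, `e_centered**2` equals `sp**2 + sa**2 - 2*sp*sa*r` up to floating-point error, which is why the three quantities can share one polar diagram.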

The Taylor diagrams for the training and testing sets are shown in Figures 11 and 12, respectively. The closer a model lies to its "Reference" point, the smaller its centered RMSE, the higher the correlation between its predictions and the measured values, and the better its performance. It can be seen that GP performs outstandingly in both the training and testing stages.

**Figure 11.** Taylor diagram of training set (RF: random forest, ANN: artificial neural network, DT: decision tree, SVM: support vector machine, KNN: k-nearest neighbors, GBM: gradient boosting machine, AdaBoost: adaptive boosting machine).

**Figure 12.** Taylor diagram of the test set (RF: random forest, ANN: artificial neural network, DT: decision tree, SVM: support vector machine, KNN: k-nearest neighbors, GBM: gradient boosting machine, AdaBoost: adaptive boosting machine).

Additionally, GWO and ABC were implemented to find the maximum UC; Table 2 lists the optimized parameters. GWO performs better than ABC: the maximum UC found by GWO is 6089.488, an increase of 54% over the maximum UC in the database, while the maximum UC found by ABC is 6043.64, an increase of 52.6%. It is apparent that using the optimized parameters can improve the UC.


**Table 2.** Optimized values of input parameters for the maximum UC.
