*3.1. Sensitivity Analysis*

The *SADE*-based approach starts with a randomly selected population drawn from the obtained dataset. Candidate solutions are then evaluated by an objective function over successive trial-and-error iterations, in which different sets of the aforementioned ANN parameters are tested until the termination criterion of the minimum possible error is reached. Thereafter, the ANN is trained with the optimized settings obtained by *SADE*.
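The loop described above can be sketched as a minimal self-adaptive differential evolution (DE/rand/1/bin) search. This is a simplified illustration, not the authors' implementation: the objective function here is a toy stand-in for the real ANN training/validation error, and the parameter bounds, population size, and self-adaptation rates are all assumptions.

```python
import random

# Hypothetical objective: stands in for the validation error (e.g., MAPE) of an
# ANN trained with the candidate hyperparameters. The toy minimum is at
# 13 neurons and a learning rate of 0.01.
def objective(params):
    neurons, lr = params
    return (neurons - 13) ** 2 + (lr - 0.01) ** 2 * 1e4

def sade(objective, bounds, pop_size=20, generations=100, seed=0):
    """Minimal self-adaptive DE: F and CR are carried per individual and
    occasionally regenerated, which is the 'self-adaptive' element."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    F = [rng.uniform(0.4, 0.9) for _ in range(pop_size)]   # mutation factors
    CR = [rng.uniform(0.1, 0.9) for _ in range(pop_size)]  # crossover rates
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # self-adaptation: occasionally resample the control parameters
            if rng.random() < 0.1:
                F[i] = rng.uniform(0.4, 0.9)
            if rng.random() < 0.1:
                CR[i] = rng.uniform(0.1, 0.9)
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR[i] or j == jrand:
                    v = pop[a][j] + F[i] * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            f = objective(trial)
            if f <= fit[i]:  # greedy selection: keep the better of parent/trial
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

best_params, best_err = sade(objective, bounds=[(1, 30), (0.001, 0.1)])
```

In a real run, evaluating `objective` would mean training the ANN with the candidate hyperparameters and returning its validation error, so each generation is far more expensive than this toy suggests.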

In this study, 20 independent runs were performed to develop the best ANN model in terms of the highest *R* and lowest MAPE between the predicted and actual values. Figure 5 shows the sensitivity of the ANN to a varying number of neurons in the hidden layer. The four performance indicators considered in this case are *R* and MAPE for both the training and testing datasets. The figure demonstrates that 13 neurons yield the best fit, as indicated by the highest *R* values of 0.96 and 0.95 for training and testing, respectively, together with the lowest MAPE values of 2.4% and 2.1% for training and testing, respectively. The figure also shows that although different ANN configurations achieve nearly identical fitness on the training dataset, there is significant variance on the testing dataset. This confirms the importance of validating the ANN on an unseen testing dataset when comparing different ANN models.
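The two performance indicators used throughout this comparison, the Pearson correlation coefficient *R* and MAPE, can be computed directly from the predicted and actual values. A minimal sketch (function names are illustrative, not from the original):

```python
import math

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient R between actual and predicted values."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    st = math.sqrt(sum((a - mt) ** 2 for a in y_true))
    sp = math.sqrt(sum((b - mp) ** 2 for b in y_pred))
    return cov / (st * sp)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes no zero actuals)."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y_true, y_pred)) / len(y_true)
```

Computing both metrics separately on the training and testing splits, as done for Figure 5, yields the four indicators per candidate configuration.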

**Figure 5.** Sensitivity analysis of the ANN performance for varying number of neurons.

Another governing parameter is the learning algorithm. The Bayesian regularization backpropagation training algorithm (trainbr) showed the best performance as a training function for the developed ANN. Out of 20 independent runs, trainbr was superior to the other training functions in 60% of the runs, as shown in Figure 6. This outperformance stems from the fact that, unlike other training functions, trainbr minimizes a combination of squared errors and squared weights and then identifies the best trade-off between the two, producing an ANN that generalizes better (i.e., resists overfitting) than those trained with other algorithms. This is also demonstrated by the high testing *R* achieved by trainbr, as shown in Figure 7.
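The combined objective that trainbr minimizes can be sketched as a weighted sum of the sum of squared errors and the sum of squared weights. The sketch below only illustrates the form of that cost; the regularization coefficients shown are placeholders, whereas trainbr itself adapts them automatically within the Bayesian framework.

```python
def regularized_cost(errors, weights, alpha=0.01, beta=1.0):
    """Bayesian-regularization-style objective:
    beta * (sum of squared errors) + alpha * (sum of squared weights).
    The weight penalty discourages large weights and hence overfitting.
    alpha and beta here are fixed placeholders; trainbr tunes them adaptively."""
    sse = sum(e * e for e in errors)   # data-fit term
    ssw = sum(w * w for w in weights)  # weight-decay (regularization) term
    return beta * sse + alpha * ssw
```

Because the weight term penalizes complexity, two networks with equal training error are ranked by the size of their weights, which is why the resulting model tends to generalize better to the testing dataset.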

**Figure 6.** Success rate of each of the training functions out of the 20 independent runs.

**Figure 7.** Comparison analysis between the tested training functions.
