#### *3.3. Comparison with Previous Models*

To benchmark the new MSaDE-ANN technique for predicting the RHPs, the AV was predicted using MSaDE-ANN, and the obtained results were compared with the correlations of Pitt [41] and Almahdawi et al. [42]. The actual values of AV were calculated from PV and YP using Equation (15).

$$\text{AV} = \frac{2\text{PV} + \text{YP}}{2} \tag{15}$$
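As a quick sanity check, Equation (15) can be implemented directly. This is a minimal sketch; the function name is illustrative, and the usual field-unit conventions (PV in cP, YP in lb/100 ft²) are assumed:

```python
def apparent_viscosity(pv, yp):
    """Apparent viscosity per Equation (15): AV = (2*PV + YP) / 2.

    pv: plastic viscosity (PV); yp: yield point (YP).
    """
    return (2.0 * pv + yp) / 2.0

# e.g. PV = 20 and YP = 10 give AV = 25.0
```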

Figure 6 shows the high accuracy of the developed ANN-AV model for the training dataset using the MSaDE technique. The correlation coefficient (R) was 0.97 and the AAPE was 5.1% when plotting the predicted against the actual AV values (570 data points). Similar results were obtained for the testing dataset, where the R was 0.97 and the AAPE was 5.3%, as can be seen in Figure 6.

**Figure 6.** Prediction of apparent viscosity using the MSaDE-ANN technique.

For further validation of the developed ANN-AV model, a new dataset was used (150 data points). Figure 6 shows that the R was 0.96 and the AAPE was 5.8% between the calculated and actual values of AV.

Pitt [41] illustrated that the AV can be calculated as a function of MD and FT using Equation (16), while Almahdawi et al. [42] stated that AV can be determined using Equation (17).

$$\text{AV} = \text{MD} \ast (\text{FT} - 25) \tag{16}$$

$$\text{AV} = \text{MD} \ast (\text{FT} - 28) \tag{17}$$
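For comparison, the two literature correlations can be sketched as follows. The function names are illustrative, and the unit conventions of the original papers (MD as mud density, FT as Marsh funnel time in seconds) are assumed:

```python
def av_pitt(md, ft):
    """Pitt correlation, Equation (16): AV = MD * (FT - 25)."""
    return md * (ft - 25.0)

def av_almahdawi(md, ft):
    """Almahdawi et al. correlation, Equation (17): AV = MD * (FT - 28)."""
    return md * (ft - 28.0)
```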

Applying Equations (16) and (17) to the available datasets (900 data points) showed that the ANN-AV model outperformed these models. Figure 7 shows that the coefficient of determination (R²) when plotting the calculated against the actual values of AV was 0.94 when the ANN-AV equation was used, 0.65 when Pitt's equation was used, and 0.64 when Almahdawi's equation was used. Figure 8 shows that the ANN-AV equation yielded the lowest AAPE (5.26%) compared with the Pitt [41] equation (AAPE of 31.47%) and the Almahdawi et al. [42] equation (AAPE of 24.81%).
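The two performance indicators used throughout this comparison, the correlation coefficient (R) and the AAPE, can be computed as in the following sketch (a plain-Python implementation; the function names are illustrative):

```python
def aape(actual, predicted):
    """Average absolute percentage error, in percent."""
    errors = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(errors) / len(errors)

def pearson_r(x, y):
    """Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Squaring the Pearson R of the calculated-versus-actual cross-plot gives the coefficient of determination (R²) reported in Figure 7.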

**Figure 7.** Coefficient of determination for the apparent viscosity using different techniques.

**Figure 8.** Average absolute percentage error for calculating the apparent viscosity using different techniques.

#### *3.4. Sensitivity Analysis*

As mentioned earlier, the methodology of this study involved 20 independent optimization runs. Figure 9 shows that 18 of the 20 optimization runs identified trainbr as the training algorithm achieving the best fit; the remaining two runs were split equally between trainlm and trainbfg. This demonstrates the consistency of this training function, which delivered the best-fit performance in 90% of the runs compared to the other training algorithms. Figure 10 shows the best results achieved by each of the three training algorithms. The figure demonstrates that trainbr achieved the highest R and the lowest AAPE. The outperformance of trainbr can be attributed to its backpropagation scheme, which minimizes a weighted combination of the squared errors and the network weights to find the combination that produces an ANN able to generalize well, preventing overfitting by discouraging large weight values.
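The objective minimized by Bayesian regularization (the scheme behind trainbr) can be written as follows. This is the standard formulation; the symbols $\alpha$, $\beta$, $E_D$, $E_W$, $t_i$, $a_i$, and $w_j$ are introduced here for illustration and are not defined in the source:

$$F = \beta E_D + \alpha E_W = \beta \sum_{i} (t_i - a_i)^2 + \alpha \sum_{j} w_j^2$$

where $E_D$ is the sum of squared errors between the targets $t_i$ and the network outputs $a_i$, $E_W$ is the sum of squared weights, and the regularization parameters $\alpha$ and $\beta$ are adapted during training. Penalizing large weights keeps the network response smooth, which is what allows the trained ANN to generalize well.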

**Figure 10.** Performance comparison of the training algorithms.

Figure 11 shows the sensitivity of the ANN performance to the number of neurons. The performance indicators were the correlation coefficient (R) and the AAPE for the training and testing datasets. The figure demonstrates that, in general, increasing the number of neurons enhanced the ANN performance, increasing the testing R until reaching an optimum of 30 neurons, after which the performance dropped. The reason for this behavior was overfitting: increasing the number of neurons to a very large value made the ANN perform well on the training set but poorly on the testing set. This is evident in the figure, which shows that when the number of neurons exceeded 30, the R-value for the training set generally kept increasing while the R-value for the testing set generally decreased. Therefore, the optimum number of neurons in this case was determined to be 30. Figure 12 shows the topology of the ANN with the optimized number of neurons.
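The neuron-count selection described above amounts to picking the hidden-layer size that maximizes the testing R. A minimal sketch follows; the (R_train, R_test) values below are hypothetical placeholders, not the study's measurements:

```python
# Hypothetical (R_train, R_test) pairs recorded for each candidate neuron count.
results = {
    10: (0.95, 0.93),
    20: (0.96, 0.95),
    30: (0.97, 0.96),  # testing R peaks here
    40: (0.98, 0.94),  # training R keeps rising while testing R drops: overfitting
}

# Select the neuron count with the best testing R, i.e. the best generalization.
best_n = max(results, key=lambda n: results[n][1])
```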

**Figure 11.** Sensitivity analysis of the model performance to the number of neurons.

**Figure 12.** The topology of the artificial neural network.

### **4. Conclusions**

The modified self-adaptive differential evolution technique was implemented to optimize the different variables of an ANN to determine the RHPs of NaCl-WBDIF using actual field measurements (900 data points) of MD, FT, and SP. Based on the obtained results, the following conclusions can be drawn:


It is recommended to develop an automated Marsh funnel system that measures the mud density, Marsh funnel time, and solid percent using automated methods; the resulting measurements can then be used as inputs for the developed ANN models to estimate the rheological properties every 5–10 min. This will enable the driller to understand the changes in the drilling fluid properties as well as the changes in the rig hydraulics, and will make decisions regarding the required actions, based on the given information, much faster.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors wish to acknowledge King Fahd University of Petroleum and Minerals (KFUPM) for providing the various facilities utilized in carrying out this research. Many thanks are due to the anonymous referees for their detailed and helpful comments.

**Conflicts of Interest:** The author declares no conflict of interest.
