2.5.3. Building and Implementing an ANN to Predict *PRstatic* Values

In this study, a new ANN model is developed and then optimized using *SADE* to obtain the best predictions with the highest possible accuracy. Using such a hybrid system increases the performance of the developed network, as indicated in the reviewed studies. The developed approach with the learning algorithm is implemented in MATLAB. First, the data are used to train the network, and then the results are tested. The evaluation of the network performance in this study depends on the degree of agreement between the actual and the predicted results in terms of three main factors:

- the correlation coefficient (*R*);
- the mean absolute percentage error (MAPE);
- the coefficient of determination (*R*<sup>2</sup>).

The formulas used for *R*, MAPE, and *R*<sup>2</sup> are listed in Appendix A. MAPE and *R*<sup>2</sup> are considered the most commonly used measures for evaluating prediction accuracy. More details on MAPE and *R*<sup>2</sup> can be found in [60].
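The three metrics above can be sketched as follows. This is a minimal illustration using their standard definitions (the exact formulas used in the study are those listed in Appendix A); the function names are ours, not the paper's.

```python
import numpy as np

def correlation_coefficient(actual, predicted):
    """Pearson correlation coefficient R between actual and predicted values."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.corrcoef(actual, predicted)[0, 1]

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def r_squared(actual, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives *R* = 1, MAPE = 0, and *R*<sup>2</sup> = 1; larger MAPE and smaller *R*<sup>2</sup> indicate poorer agreement.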

The dataset, including input and output parameters, is used to train the network. The network parameters are then selected randomly and refined toward an optimum choice through iterations. Once the learning algorithm has converged, the determined weights and biases are used to estimate the results using a feedforward network (FFN) structure with backpropagation learning. This structure contains three layers (input, hidden, and output). The number of neurons in the hidden layer is usually estimated using a trial-and-error technique based on the nature of the problem. The input data are processed through the neurons with their assigned weights and biases; then, the selected transfer function between the input and hidden layers is applied to obtain the response of the hidden layer. Thereafter, the transfer function between the hidden and output layers is applied to obtain the desired output. The network performance is tested based on the accuracy of the results. Afterwards, the error is estimated and propagated back (using backpropagation learning) to the earlier layers, and *SADE* is applied to optimize the network parameters based on the obtained results so as to produce more precise predictions. A simplified flowchart of the developed hybrid approach is shown in Figure 3.

**Figure 3.** Flowchart describing the workflow of the hybrid approach ANN-*SADE*. ANN: artificial neural network; *SADE*: self-adaptive differential evolution.
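The forward pass described above can be sketched as a single hidden layer with a tansig (hyperbolic tangent) transfer function and a linear output layer. The layer sizes, weights, and biases below are illustrative placeholders, not the trained values from the study.

```python
import numpy as np

def tansig(x):
    """Hyperbolic tangent sigmoid transfer function (equivalent to MATLAB's tansig)."""
    return np.tanh(x)

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Propagate one input vector through the input -> hidden -> output layers."""
    hidden = tansig(w_hidden @ x + b_hidden)  # hidden-layer response
    return w_out @ hidden + b_out             # linear output (the PRstatic estimate)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                    # 3 inputs: RHOB, dt_comp, dt_shear
w_h, b_h = rng.normal(size=(5, 3)), rng.normal(size=5)    # 5 hidden neurons (illustrative)
w_o, b_o = rng.normal(size=(1, 5)), rng.normal(size=1)    # single-neuron output layer
y = forward(x, w_h, b_h, w_o, b_o)                        # one predicted value
```

During training, the backpropagated error drives updates to `w_h`, `b_h`, `w_o`, and `b_o`, while *SADE* searches over the network's structural parameters.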

The dataset used for building the ANN model was obtained from drilled wells in the Middle East region. This set comprises 692 data points representing core data and their corresponding wireline log measurements, and it was used to build the ANN model to estimate the static Poisson's ratio for sandstone formations. In this study, the ANN model was trained using log data, namely *RHOB*, Δ*tcomp*, and Δ*tshear*, as input parameters to predict *PRstatic*. The collected data were randomly divided using MATLAB into two partitions: 631 data points, representing about 90% of the selected dataset, were used for training the proposed model, and the remaining 10% (61 data points) was used for testing it. The dataset was divided such that the testing data points lie within the range of the training data, as shown in Figure 4. The input parameters should be fed to the ANN model in the following order: *RHOB*, Δ*tcomp*, and then Δ*tshear*. The ANN parameters varied in the optimization process are the number of neurons in the hidden layers, the learning rate, the training algorithm, the number of hidden layers, and the transfer functions. Several options for these parameters are used to optimize the model. The optimization process involves tracking the error in the predicted results during the training and testing processes through runs of the model for different scenarios. For each scenario, a different combination of these varying parameters is selected and used to train the network. The developed ANN model was optimized using the *SADE* algorithm, described earlier, to identify the choices of these parameters that yield the most accurate results. The ANN parameters yielding the lowest possible error in the predicted results are then selected as the optimized values. The tested options of the ANN parameters are listed in Table 2.

**Figure 4.** Testing data ranges with respect to the training data used for developing the ANN model.
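The random split with the range constraint of Figure 4 can be sketched as follows: draw a random partition and retry until every testing point falls inside the per-feature min-max range of the training inputs. The synthetic data, function name, and retry strategy are our illustrative assumptions, not the paper's MATLAB procedure.

```python
import numpy as np

def split_within_range(X, test_fraction=0.10, seed=0, max_tries=100):
    """Randomly split row indices into train/test sets, retrying until all
    testing inputs lie within the per-feature range of the training inputs."""
    rng = np.random.default_rng(seed)
    n = len(X)
    n_test = int(round(n * test_fraction))
    for _ in range(max_tries):
        idx = rng.permutation(n)
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        lo = X[train_idx].min(axis=0)
        hi = X[train_idx].max(axis=0)
        if np.all((X[test_idx] >= lo) & (X[test_idx] <= hi)):
            return train_idx, test_idx
    raise RuntimeError("no split found with all test points inside the training range")

# Placeholder data with the paper's dimensions: 692 rows, 3 inputs
# (RHOB, dt_comp, dt_shear); the values themselves are synthetic.
X = np.random.default_rng(1).normal(size=(692, 3))
train_idx, test_idx = split_within_range(X)
```

Note that the study's actual partition was 631 training and 61 testing points; the fraction here is a round placeholder.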

**Table 2.** Summary of the tested options of different ANN parameters during the optimization process. trainbr: Bayesian regularization backpropagation training algorithm; elliotsig: Elliot symmetric sigmoid; tansig: hyperbolic tangent sigmoid transfer function; tribas: triangular basis transfer function; pure-linear: linear transfer function; trainlm: Levenberg–Marquardt backpropagation; trainscg: scaled conjugate gradient backpropagation; trainbfg: BFGS quasi-Newton.
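The search space of Table 2 can be encoded as option lists over which the optimizer selects combinations. The sketch below uses simple random sampling with a placeholder scoring function purely to illustrate the encoding and the keep-the-lowest-error selection rule; the study itself performs this search with *SADE*, and the numeric option values shown are assumptions.

```python
import random

# Categorical options follow Table 2; the numeric lists are illustrative guesses.
SEARCH_SPACE = {
    "training_algorithm": ["trainbr", "trainlm", "trainscg", "trainbfg"],
    "transfer_function":  ["elliotsig", "tansig", "tribas", "purelin"],
    "hidden_layers":      [1, 2, 3],
    "neurons_per_layer":  [5, 10, 15, 20],
    "learning_rate":      [0.01, 0.1, 0.5],
}

def toy_error(candidate):
    """Placeholder for training the ANN with this candidate and returning
    its test-set MAPE; deterministic per candidate for reproducibility."""
    random.seed(str(sorted(candidate.items())))
    return random.uniform(0.0, 10.0)

def pick_best(space, n_samples=50, seed=0):
    """Sample candidate combinations and keep the one with the lowest error."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        cand = {k: rng.choice(v) for k, v in space.items()}
        err = toy_error(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

best, best_err = pick_best(SEARCH_SPACE)
```

In the hybrid approach, *SADE* replaces the random sampling above with population-based mutation and crossover whose control parameters adapt during the search.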


#### **3. Results and Discussion**
