Article

Shear Sonic Prediction Based on DELM Optimized by Improved Sparrow Search Algorithm

Lei Qiao, Zhining Jia, You Cui, Kun Xiao and Haonan Su

1 Hebei Instrument & Meter Engineering Technology Research Center, Hebei Petroleum University of Technology, Chengde 067000, China
2 State Key Laboratory of Nuclear Resources and Environment, East China University of Technology, Nanchang 330013, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8260; https://doi.org/10.3390/app12168260
Submission received: 8 July 2022 / Revised: 13 August 2022 / Accepted: 15 August 2022 / Published: 18 August 2022

Abstract

In the geophysical exploration field, the sonic log (DT) and shear sonic log (DTS) are frequently used as quick and affordable tools for reservoir evaluation. Due to high acquisition costs, the DTS is only available in a few wells within an oil/gas field, and numerous attempts have been made to establish a precise relationship between the DTS and other petrophysical data. In this study, a method based on the deep extreme learning machine optimized by the improved sparrow search algorithm (ISSA-DELM) is proposed to improve the accuracy and stability of DTS prediction. Firstly, the deep extreme learning machine (DELM) model is constructed by combining the extreme learning machine and the autoencoder algorithm. Secondly, to address the defects of the sparrow search algorithm (SSA), an improved sparrow search algorithm (ISSA) with a firefly search disturbance is proposed by merging the iterative strategy of the firefly algorithm, and it is applied to optimize the initial input weights of the DELM. Finally, the ISSA-DELM is applied to DTS prediction in a block of the Ordos Basin in China. The quantitative results show that the RMSE, MAE, and R-square of the ISSA-DELM model are 6.1255, 4.1369, and 0.9916, respectively. Its comprehensive performance is better than the ELM, the DELM, and the DELM optimized by other algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO), and the SSA. The proposed approach therefore provides an effective method for estimating missing DTS.

1. Introduction

Logging is a physical measurement method applied to describe and analyze underground conditions, and it is of great significance to oil/gas exploration and development. Geologists and engineers can build accurate geological models based on logging data [1,2]. For economic reasons, the DTS is often missing from the logging suite in actual production. However, the DTS provides essential information for rock physics analyses, pre-stack reservoir prediction, and reservoir AVO characteristic studies, so technicians are required to predict the missing DTS precisely. DTS prediction methods mainly fall into three categories: empirical formulas, rock physics methods, and multivariate fitting. Empirical formulas include the sonic-density relations of water-saturated rocks and the empirical P-S wave relation; however, due to the strongly nonlinear relation between the reservoir parameters and the DTS, empirical formulas can produce large errors. Rock physics methods mainly include effective medium and contact theory models; however, establishing a rock physics model demands a high technical level from researchers, and it is difficult to ensure both efficiency and accuracy. The multivariate fitting method predicts the DTS by establishing a linear relationship between the target curve and other logging curves, but it suffers from the limited expressive power of a linear equation [3].
In geophysical exploration, applying observation data (such as seismic, logging, and magnetic data) to reservoir prediction and hydrocarbon identification is a process of establishing complex nonlinear relationships [3]. Machine learning approaches have obvious advantages in solving these kinds of nonlinear problems. The ELM is a single-hidden-layer feedforward neural network characterized by a simple structure and fast learning speed [4]. Borrowing the idea of deep learning, the DELM extends the ELM to a multi-layer structure, which can mine the essential characteristics of data more effectively and has a higher feature learning ability [5]. However, the initial input weights of the DELM are stochastic, and the training accuracy and time are easily affected by this randomness [6]. Therefore, a variety of intelligent optimization algorithms can be used to optimize the DELM [7,8,9,10,11,12,13,14].
It has been demonstrated that the SSA provides outstanding search performance [15]. However, because population diversity decreases in the middle and later iterations, the SSA occasionally suffers from drawbacks such as slow convergence, poor solution accuracy, and a tendency to fall into local optima [16]. To address these defects, the ISSA with a firefly search disturbance [17,18,19] is introduced to improve the optimization ability of the SSA. The ISSA-DELM model is then applied to logging data from a block of the Ordos Basin in China. The results show that the ISSA-DELM has better prediction performance and estimates the DTS more precisely than the ordinary models (ELM and DELM) and the hybrid models (GA-DELM, PSO-DELM, and SSA-DELM), providing a promising method for DTS estimation. The remainder of this paper is organized as follows. Section 2 presents the principle and modeling of the ISSA-DELM. Section 3 presents the flow of DTS prediction based on the ISSA-DELM. Section 4 presents the practical application and result analysis using logging data from the Ordos Basin in China. Finally, the paper is concluded in Section 5.

2. Principle and Modeling

2.1. Principle of the DELM

The ELM is a feedforward neural network with a single hidden layer. Unlike feedback neural networks, which must adjust the weights between layers through the backpropagation algorithm, the ELM selects its weights randomly, which reduces learning time and structural complexity and improves the overall training speed [5]. However, because of its single-hidden-layer structure, the ELM struggles to capture effective features from large, high-dimensional samples.
To solve the drawbacks of the ELM, the DELM (also known as the multi-layer extreme learning machine) is proposed. The weights of DELM can be initialized by the extreme learning machine autoencoder (ELM-AE). We can see the structure of ELM-AE in Figure 1. The autoencoder learns the features of samples in an unsupervised way. It maps the inputs to the feature vector of the hidden layer through the encoder and then reconstructs the original inputs from the feature vector by the decoder [6].
Structurally, the DELM is equivalent to multiple connected ELMs. Compared with the ELM, the DELM can capture sample characteristics more comprehensively and handles high-dimensional inputs more accurately. The DELM carries out unsupervised training layer by layer through the ELM-AE and finally connects to a regression layer for supervised training; the parameters of the whole system do not need to be adjusted simultaneously [6]. The structure of the DELM network is shown in Figure 2.
Compared to the ELM, the DELM is more accurate and generalizes better. However, the input-layer weights are random orthogonal matrices generated during the ELM-AE pre-training phase. Moreover, during ELM-AE pre-training only the output-layer weights are adjusted, by the least squares approach, while the input-layer weights are left unchanged. As a result, the random input weights of each ELM-AE affect the final performance of the DELM, and these parameters should be optimized to improve its predictive performance.
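To make the layer-wise scheme concrete, the following is a minimal NumPy sketch of ELM-AE pre-training stacked into a DELM regressor, using the 200/100/50 layer sizes adopted later in Section 3. The sigmoid activation, the regularization constant C, and the orthogonalization details are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_ae_layer(X, n_hidden, C=1e3, rng=None):
    """One ELM-AE layer: random orthogonal input weights, then a regularized
    least-squares solve for output weights that reconstruct X. The transposed
    output weights become the deterministic encoding weights of the stacked
    DELM layer."""
    rng = rng if rng is not None else np.random.default_rng()
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))
    # Orthogonalize along the shorter dimension so this works for both
    # expanding and compressing layers.
    if n_hidden <= n_features:
        W, _ = np.linalg.qr(W)
    else:
        Q, _ = np.linalg.qr(W.T)
        W = Q.T
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)                            # random hidden mapping
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return beta.T                                     # (n_features, n_hidden)

def delm_fit(X, y, layer_sizes=(200, 100, 50), C=1e3, rng=None):
    """Stack ELM-AE layers (unsupervised), then solve the regression output
    layer (supervised) in one least-squares step."""
    H, encoders = X, []
    for n_hidden in layer_sizes:
        W_enc = elm_ae_layer(H, n_hidden, C, rng)
        H = sigmoid(H @ W_enc)                        # propagate through layer
        encoders.append(W_enc)
    out = np.linalg.lstsq(H, y, rcond=None)[0]
    return encoders, out

def delm_predict(X, encoders, out):
    H = X
    for W_enc in encoders:
        H = sigmoid(H @ W_enc)
    return H @ out
```

The key point is that all trained weights come from closed-form least-squares solves; only the random matrices drawn inside elm_ae_layer remain unoptimized, and those are exactly what the ISSA targets below.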

2.2. Principle of SSA

The SSA is based on the foraging and anti-predation behavior of sparrows [15]. The foraging behavior corresponds to discoverers and followers. In each iteration, a few sparrows with better positions are selected as discoverers, which search globally for food and provide foraging areas and directions for all followers. The remaining sparrows are followers, which follow the discoverers to compete for food. The anti-predation behavior corresponds to a reconnaissance and early warning mechanism: some sparrows conduct reconnaissance and give early warnings, and if danger is found, they give up the food and fly to a new position.
First, the sparrow population (𝑛 sparrows) is expressed as:
$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \tag{1}$$
where n is the total number of sparrows in the population, and d is the dimension that needs to be optimized.
The fitness of all sparrows in the population can be expressed as:
$$F_X = \begin{bmatrix} f([x_{1,1} \; x_{1,2} \; \cdots \; x_{1,d}]) \\ f([x_{2,1} \; x_{2,2} \; \cdots \; x_{2,d}]) \\ \vdots \\ f([x_{n,1} \; x_{n,2} \; \cdots \; x_{n,d}]) \end{bmatrix} \tag{2}$$

where each row of $F_X$ is the fitness value of the corresponding sparrow. The sparrow with the best fitness value is the first to obtain food and, as the discoverer, leads the entire population toward the food source. The discoverer's position is updated as follows:
$$x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{\max}}\right), & R_2 < ST \\ x_{i,j}^{t} + Q \cdot L, & R_2 \ge ST \end{cases} \tag{3}$$

where $t$ is the current iteration number; $x_{i,j}^{t+1}$ denotes the $j$th dimensional position of the $i$th individual at iteration $t+1$; $iter_{\max}$ is the maximum number of iterations; $\alpha$ is a uniform random number in $(0, 1]$; $Q$ is a random value drawn from the standard normal distribution; $L$ is a $1 \times d$ matrix with every element equal to 1; $ST$ is the alert threshold, in the range $[0.5, 1]$; and $R_2$ is the warning value, in the range $[0, 1]$. When $R_2 \ge ST$, a sparrow has detected a predator, the alarm is triggered, and all sparrows leave the warning area. When $R_2 < ST$, no predators are nearby and the discoverers can continue to search widely.
The follower’s position is updated as follows:
$$x_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{x_{worst}^{t} - x_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\ x_{p}^{t+1} + \left|x_{i,j}^{t} - x_{p}^{t+1}\right| \cdot A^{+} \cdot L, & i \le n/2 \end{cases} \tag{4}$$

where $x_{worst}^{t}$ is the worst position in the current population; $x_{p}^{t+1}$ is the best position currently occupied by the discoverers; $A^{+} = A^{T}(AA^{T})^{-1}$, where $A$ is a $1 \times d$ matrix whose elements are randomly assigned 1 or −1; and $n$ is the size of the sparrow population.
When the population is foraging, some sparrows keep watch. When a natural predator approaches, both the discoverers and the followers give up the food and fly to another position. A proportion SD (generally 10% to 20%) of the sparrows is randomly selected from each generation to give an early warning, and their positions are updated as follows:

$$x_{i,j}^{t+1} = \begin{cases} x_{best}^{t} + \beta \cdot \left|x_{i,j}^{t} - x_{best}^{t}\right|, & f_i > f_g \\ x_{i,j}^{t} + K \cdot \left(\dfrac{\left|x_{i,j}^{t} - x_{worst}^{t}\right|}{(f_i - f_w) + \varepsilon}\right), & f_i = f_g \end{cases} \tag{5}$$

where $x_{best}^{t}$ is the current global optimum position; $\beta$ is a random value drawn from the normal distribution; $K$ is a uniform random value in $[-1, 1]$; $f_i$, $f_g$, and $f_w$ denote the fitness value of the current sparrow and the global best and worst fitness values of the current population, respectively; and $\varepsilon$ is a small constant that avoids division by zero. $f_i > f_g$ indicates that the sparrow is at the edge of the population and easily attacked by a predator; $f_i = f_g$ indicates that a sparrow in the center of the population has realized the threat and needs to move closer to the others.
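For concreteness, the three update rules can be sketched as follows. This is a minimal NumPy implementation for minimization; the discoverer fraction pd, scout fraction sd, and alert threshold ST are illustrative values, and fitness values are reused between phases rather than re-evaluated, which a full implementation would refine.

```python
import numpy as np

def ssa_step(X, fitness, iter_max, pd=0.2, sd=0.1, ST=0.8, rng=None):
    """One SSA iteration implementing Equations (3)-(5) for minimization.
    X is the (n, d) population; pd, sd, and ST are illustrative defaults."""
    rng = rng if rng is not None else np.random.default_rng()
    n, d = X.shape
    order = np.argsort(fitness)                 # best (lowest) fitness first
    X, f = X[order].copy(), fitness[order]
    n_disc = max(1, int(pd * n))

    # Discoverers, Equation (3).
    R2 = rng.random()                           # warning value
    for i in range(n_disc):
        if R2 < ST:                             # no predator: search widely
            alpha = rng.random() + 1e-12        # uniform in (0, 1]
            X[i] *= np.exp(-(i + 1) / (alpha * iter_max))
        else:                                   # alarm: Gaussian step, Q * L
            X[i] += rng.standard_normal() * np.ones(d)

    # Followers, Equation (4).
    x_best, x_worst = X[0].copy(), X[-1].copy()
    for i in range(n_disc, n):
        if i + 1 > n / 2:                       # hungry follower flies off
            X[i] = rng.standard_normal() * np.exp((x_worst - X[i]) / (i + 1) ** 2)
        else:                                   # follow the best discoverer
            A = rng.choice([-1.0, 1.0], size=d)
            A_plus = A / d                      # A^T (A A^T)^(-1), since A A^T = d
            X[i] = x_best + (np.abs(X[i] - x_best) @ A_plus) * np.ones(d)

    # Danger-aware sparrows, Equation (5).
    n_scout = max(1, int(sd * n))
    for i in rng.choice(n, size=n_scout, replace=False):
        if f[i] > f[0]:                         # edge of group: move to best
            X[i] = X[0] + rng.standard_normal() * np.abs(X[i] - X[0])
        else:                                   # at the optimum: step away
            K = rng.uniform(-1.0, 1.0)
            X[i] += K * np.abs(X[i] - X[-1]) / ((f[i] - f[-1]) + 1e-50)
    return X
```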

2.3. Improvement of the SSA

2.3.1. Firefly Disturbance Strategy

The firefly algorithm implements position optimization by simulating the luminous behavior of fireflies [17]. In the SSA, the discoverers tend to jump directly to the vicinity of the current extreme value in the later stages, which results in an inadequate search range, a tendency to become trapped in local optima, and reduced search accuracy.
To remedy these shortcomings, the iterative strategy of the firefly algorithm is introduced after the sparrow search, applying a firefly disturbance to the algorithm. Exploiting the firefly's luminous attraction, the search can find individuals with better positions in the neighborhood structure, which enhances the diversity of solutions, expands the search area, increases the global exploration ability, and helps the population escape from local optima. Meanwhile, the sparrow positions are further updated, making the overall population search more thorough and increasing the convergence accuracy. The mathematical modeling involved in the firefly algorithm is as follows.
The relative fluorescence brightness $I$ of a firefly is defined as Equation (6):

$$I = I_0 \times \exp(-\gamma r_{ij}) \tag{6}$$

where $I_0$ is the maximum fluorescence brightness, i.e., the brightness at zero distance; an individual with a better fitness value has a higher $I_0$. $\gamma$ is the light intensity absorption coefficient, which reflects the attenuation of fluorescence with increasing distance and absorption by the propagation medium. $r_{ij}$ is the spatial distance between fireflies $i$ and $j$.
The firefly's attractiveness $\beta$ is defined as Equation (7):

$$\beta = \beta_0 \times \exp(-\gamma r_{ij}) \tag{7}$$

where $\beta_0$ is the maximum attractiveness, i.e., the attractiveness at the light source.
The firefly disturbance position update formula is as follows:

$$x_i = x_i + \beta \times (x_j - x_i) + \alpha \cdot \left[\mathrm{rand}() - \frac{1}{2}\right] \tag{8}$$

where $x_i$ and $x_j$ are the spatial positions of sparrows $i$ and $j$, respectively; $\alpha$ is the step size control parameter; and $\mathrm{rand}() \in [0, 1]$ is a random factor drawn from a uniform distribution.
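A minimal sketch of this disturbance step, assuming the reconstructed Equations (6)-(8) and illustrative values for $\beta_0$, $\gamma$, and $\alpha$:

```python
import numpy as np

def firefly_disturbance(X, fitness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Firefly-style perturbation after the sparrow updates: each individual
    moves toward every brighter (better-fitness) one and receives a small
    random step. beta0, gamma, and alpha are assumed control parameters,
    not values from the paper."""
    rng = rng if rng is not None else np.random.default_rng()
    n, d = X.shape
    X = X.copy()
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:          # j is brighter (minimization)
                r_ij = np.linalg.norm(X[i] - X[j])
                beta = beta0 * np.exp(-gamma * r_ij)          # Equation (7)
                X[i] = X[i] + beta * (X[j] - X[i]) \
                       + alpha * (rng.random(d) - 0.5)        # Equation (8)
    return X
```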

2.3.2. Flow of ISSA

The ISSA flow is as follows (a minimal code sketch follows the list):
(1) Initialize the population parameters, such as the population size, the maximum number of iterations, the proportion of discoverers, and the proportion of sparrows that recognize the threat.
(2) Calculate the fitness value of each individual in the current sparrow population and sort them to find the current best and worst values.
(3) Select sparrows with better fitness values as discoverers in proportion, and update the location of discoverers according to Formula (3).
(4) The rest of the population is defined as followers, and the location of followers is updated according to Formula (4).
(5) Randomly select a proportion of the population as sparrows aware of the danger, update their locations according to Formula (5), calculate the new fitness values, and update the global optimum if a better value is found.
(6) Take the fitness value of each firefly as its maximum fluorescence brightness $I_0$, and calculate the fluorescence brightness $I$ and attractiveness $\beta$ according to Formulas (6) and (7), respectively, to determine the search direction of the population.
(7) The firefly disturbance Formula (8) is used to update the location of the population, and the firefly in the optimal location is randomly disturbed.
(8) Calculate the fitness value and retain the optimal individual location, and check whether the stop conditions are met. If so, the algorithm ends and outputs the optimal results; otherwise, turn to step (2).
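Tying steps (1)-(8) together, a schematic ISSA loop could look like the following; it reuses the ssa_step and firefly_disturbance sketches above and is a composition for illustration, not the authors' exact implementation.

```python
import numpy as np

def issa_minimize(objective, lower, upper, n=30, iter_max=100, rng=None):
    """Schematic ISSA loop: SSA step, then firefly disturbance, keeping the
    best solution seen. lower/upper are per-dimension bound arrays."""
    rng = rng if rng is not None else np.random.default_rng()
    X = rng.uniform(lower, upper, size=(n, len(lower)))   # step (1)
    best_x, best_f = None, np.inf
    for t in range(iter_max):
        f = np.array([objective(x) for x in X])           # step (2)
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()
        X = ssa_step(X, f, iter_max, rng=rng)             # steps (3)-(5)
        f = np.array([objective(x) for x in X])
        X = firefly_disturbance(X, f, rng=rng)            # steps (6)-(7)
        X = np.clip(X, lower, upper)
    return best_x, best_f                                 # step (8)
```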

3. Flow of DTS Prediction Based on the ISSA-DELM

In this study, the ISSA optimizes the input weights of the DELM to compensate for the influence of random input weights on the network, improving stability and accuracy. A DTS prediction model based on the ISSA-DELM is constructed; the model contains three hidden layers with 200, 100, and 50 nodes, respectively. Figure 3 displays the structure and diagram of the prediction model.
The specific flow is as follows:
(1) Eliminate and normalize the outliers from the logging data and divide them into training sets and test sets.
(2) Initialize ISSA parameters.
(3) Determine the network structure of DELM and set relevant parameters.
(4) Set the objective function of the DELM training as the fitness value of the ISSA. The root mean square error (RMSE) is applied as the objective function and defined as Equation (9) (see the sketch after this list):

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(y_k - \hat{y}_k\right)^2} \tag{9}$$

where $y_k$ and $\hat{y}_k$ are the measured and predicted values, respectively, and $N$ is the number of samples.
(5) Update the fitness values according to the search direction and step size, and check whether the current weights are optimal; if not, continue iterating.
(6) Send the optimum weight values into DELM as the weight values of the input layer.
(7) Send the test data into the ISSA-DELM model for prediction and verify the validity of the model.
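As an illustration of step (4), the sketch below builds the ISSA fitness function by decoding a candidate vector into the DELM's first-layer input weights, reusing the sigmoid and elm_ae_layer helpers from the Section 2 sketch. Optimizing only the first layer's weights and the particular flattening scheme are assumptions made to keep the example small.

```python
import numpy as np

def make_fitness(X_tr, y_tr, layer_sizes=(200, 100, 50), C=1e3):
    """Fitness for the ISSA: decode a candidate vector into the DELM's
    first-layer input weights, train the deeper layers layer-wise, and
    return the training RMSE of Equation (9) (the paper builds the
    objective on the Well M training RMSE)."""
    n_in, n_h1 = X_tr.shape[1], layer_sizes[0]

    def fitness(w_flat):
        W1 = w_flat.reshape(n_in, n_h1)           # candidate input weights
        H = sigmoid(X_tr @ W1)                    # first hidden layer
        for n_hidden in layer_sizes[1:]:          # remaining ELM-AE layers
            H = sigmoid(H @ elm_ae_layer(H, n_hidden, C))
        out = np.linalg.lstsq(H, y_tr, rcond=None)[0]
        return np.sqrt(np.mean((y_tr - H @ out) ** 2))    # RMSE, Eq. (9)

    return fitness

# Usage sketch with hypothetical Well M arrays (D = n_inputs * 200 weights):
# fit = make_fitness(X_wellM, y_wellM)
# w_best, rmse = issa_minimize(fit, np.full(D, -1.0), np.full(D, 1.0))
```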

4. Practical Application and Result Analysis

4.1. Data Preparation

Two exploration wells drilled in a block of the Ordos Basin in China are investigated in this study. The Ordos Basin is a stable depression tectonic basin, with the metamorphic crystalline rock series of the Archean and Middle Proterozoic as its rigid sedimentary basement, overlain by platform-type sedimentary caps of the Middle and Upper Proterozoic, Early Paleozoic, Late Paleozoic, and Mesozoic. The Lower Paleozoic is mainly composed of marine carbonate deposits. The Upper Paleozoic records a transition from coastal marsh and bay-lagoon facies to fluvial and lacustrine facies and is mainly composed of terrigenous clastic deposits, with carbonate interbeds developed only in its early stage. The Mesozoic is a single fluvial and lacustrine terrigenous detrital deposit, characterized in particular by the great thickness and wide distribution of lacustrine deltas in the Triassic Yanchang Formation, one of the most important hydrocarbon enrichment horizons in the Ordos Basin.
To keep the wells' information private, we call the two selected wells Well M and Well N, respectively. The intervals of interest in Well M and Well N are both located in the main oil-bearing formation (the Upper Triassic Yanchang) in the studied block. The log graphs of Well M and Well N are shown in Figure 4 and Figure 5, respectively. The logging sequence includes: depth, caliper (CAL), sonic log (DT), neutron porosity (NPHI), density log (RHOB), gamma ray (GR), shear sonic log (DTS), and true electrical resistivity (RT).
Table 1 displays the statistical characteristics of Well M and Well N, which contain 1600 and 960 data points, respectively.

4.2. Feature Selection

The decision tree-based distributed gradient boosting framework LightGBM is used to assess the importance of the features. Compared with extreme gradient boosting (XGBoost), LightGBM uses a gradient-based one-side sampling algorithm to preprocess sparse data, which speeds up model training without compromising accuracy [20]. Seven parameters, namely DEPTH, CAL, RHOB, NPHI, GR, DT, and RT, are input into the LightGBM model to obtain the influence of each parameter on the DTS and the order of feature importance. The quantitative scores are shown in Table 2.
To select the important sample features, the features are added to the ordinary model (DELM) one by one in descending order of importance. The change in model error (RMSE) obtained from this comparison test is shown in Figure 6. When the model involves five inputs, the declining trend of the RMSE exhibits a clear corner. As a result, a model with five inputs is used to estimate the "missing" DTS, which reduces computational complexity and time. The five inputs used in this study are the DT, RHOB, CAL, NPHI, and RT logs.
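A minimal sketch of this ranking step, assuming the logging curves are held in a hypothetical pandas DataFrame named logs and using illustrative LightGBM settings:

```python
import lightgbm as lgb
import pandas as pd

# `logs` is a hypothetical DataFrame holding the well curves, named as in
# the paper; n_estimators and random_state are illustrative defaults.
features = ["DEPTH", "CAL", "RHOB", "NPHI", "GR", "DT", "RT"]
model = lgb.LGBMRegressor(n_estimators=200, random_state=0)
model.fit(logs[features], logs["DTS"])

importance = pd.Series(model.feature_importances_, index=features)
print(importance.sort_values(ascending=False))    # ranking as in Table 2
```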

4.3. Optimization and Comparison of Algorithm Parameters

In this paper, it is assumed that Well M has a complete DTS, and the DTS of Well M is used as the training set to establish the ELM, DELM, PSO-DELM, GA-DELM, SSA-DELM, and ISSA-DELM models; the "missing" DTS of Well N is then predicted. The objective function is established based on the RMSE of the training sample (Well M). The optimization algorithms (GA, PSO, SSA, and ISSA) are used to optimize the input weights of the DELM. The optimization iteration curves of the algorithms are shown in Figure 7.
As illustrated in Figure 7, the GA is the most likely to settle into a local optimum during the optimization process, and PSO is unable to obtain higher accuracy because it handles discrete optimization problems poorly. The ISSA increases the population's diversity and widens the search window by adding logistic chaos mapping to the population's initialization, and the firefly disturbance approach updates the best sparrow's location in time to prevent the SSA from becoming trapped in a local optimum. The ISSA shows the fastest convergence and the lowest error compared with the other optimization algorithms. As a result, optimizing the input weights of the DELM via the ISSA is more favorable.

4.4. Comparative Analysis of Model Prediction Effect

The ISSA-DELM model and the other models, namely the ELM, DELM, PSO-DELM, GA-DELM, and SSA-DELM, are all used to predict the "missing" DTS of Well N. The quantitative results of nested cross-validation are shown in Table 3. It can be observed from Table 3 that the correlation coefficient (R-square) of the DELM is better than that of the ELM, indicating that the deep structure improves the prediction accuracy. The R-square of the DELM optimized by the intelligent optimization algorithms (GA, PSO, SSA, and ISSA) is higher than that of the unoptimized DELM. At the same time, the mean absolute error (MAE) and root mean square error (RMSE) of the ISSA-DELM predictions are the lowest while its R-square is the highest, indicating that, compared with the other optimization algorithms, the ISSA obtains more suitable parameters.
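For reference, the three scores reported in Table 3 can be computed with a small NumPy/scikit-learn helper:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def report_scores(y_true, y_pred):
    """RMSE, MAE, and R-square as reported in Table 3."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return rmse, mean_absolute_error(y_true, y_pred), r2_score(y_true, y_pred)
```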
Figure 8 compares the outputs of the ISSA-DELM and the other models for DTS estimation in Well N against the measured DTS values. Where the log curve changes smoothly, the output of the ISSA-DELM model differs little from that of the other models. Where local mutations occur in the logging curve (green areas), the results predicted by the ISSA-DELM are closer to the measured data than those of the other models, because the DELM and the ISSA together improve the prediction precision and enhance the feature extraction ability. This analysis shows that the ISSA-DELM model is more suitable for DTS prediction than ordinary machine learning models.
The cross-plots of original DTS vs. predicted DTS using ISSA-DELM, SSA-DELM, PSO-DELM, GA-DELM, DELM, and ELM in Well N are shown in Figure 9. It can be observed that the predicted results of ISSA-DELM are closer to the original DTS than other models, indicating that the ISSA-DELM is better than other models in fitting multiple local details of DTS.
In conclusion, the ISSA-DELM predicts both the overall trend and the local mutations of the DTS well, fully demonstrating its advantages in predicting missing DTS.

5. Conclusions

In this study, a missing DTS estimation method based on the ISSA-DELM is proposed and applied to conventional petrophysical data from a block of the Ordos Basin in China. The ELM, DELM, GA-DELM, PSO-DELM, and SSA-DELM algorithms are used to assess the precision and generalizability of the proposed model. Applying the LightGBM algorithm in conjunction with the ordinary model (DELM) demonstrates that modeling errors decrease as the number of inputs increases, but the error declines only slightly beyond five input variables. As a result, the optimal modeling inputs are the DT, RHOB, CAL, NPHI, and RT variables. In comparison with the other models examined in this research, the ISSA-DELM model has the highest accuracy (R-square: 0.9916) and the lowest error (RMSE: 6.1255 and MAE: 4.1369). Additionally, when the reservoir DTS encounters local mutations, the ISSA-DELM model performs better at recreating local DTS characteristics, because it uses the ISSA to optimize the initial weights, which extracts the spatial features of the log curve more efficiently and enhances the model's predictive capacity. It can therefore be confidently stated that the ISSA-DELM model is more suitable for missing DTS prediction than the other models introduced.

Author Contributions

Conceptualization, L.Q. and Z.J.; methodology, L.Q.; software, L.Q.; validation, L.Q., Z.J. and Y.C.; formal analysis, H.S.; investigation, K.X.; resources, K.X.; data curation, K.X.; writing—original draft preparation, L.Q.; writing—review and editing, L.Q.; visualization, H.S.; supervision, Z.J.; project administration, Z.J.; funding acquisition, K.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China (2016YFC0600201), the Academic and Technical Leader Training Program of Jiangxi Province (20204BCJ23027) and the Joint Innovation Fund of State Key Laboratory of Nuclear Resources and Environment (2022NRE-LH-18).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the confidentiality requirements of the data provider.

Acknowledgments

China petrochemical corporation (Sinopec) is gratefully acknowledged for providing useful data and valuable support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shahbazi, A.; Monfared, M.S.; Thiruchelvam, V.; Fei, T.K.; Babasafari, A.A. Integration of knowledge-based seismic inversion and sedimentological investigations for heterogeneous reservoir. J. Asian Earth Sci. 2020, 202, 104541. [Google Scholar] [CrossRef]
  2. Soleimani, M.; Jodeiri Shokri, B. 3D static reservoir modeling by geostatistical techniques used for reservoir characterization and data integration. Environ. Earth Sci. 2015, 74, 1403–1414. [Google Scholar] [CrossRef]
  3. Sun, Y.Z.; Huang, J. Application of multi-task deep learning in reservoir shear wave prediction. Prog. Geophys. 2021, 36, 799–809. (In Chinese) [Google Scholar]
  4. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feedforward neural networks. IEEE Int. Jt. Conf. Neural Netw. 2004, 2, 985–990. [Google Scholar]
  5. Tang, J.; Deng, C.; Huang, G.B. Extreme Learning Machine for Multilayer Perceptron. IEEE Trans. Neural Netw. Learn. Syst. 2017, 27, 809–821. [Google Scholar] [CrossRef] [PubMed]
  6. Deng, C.W.; Huang, G.B.; Xu, J.; Tang, J. Extreme learning machines: New trends and applications. Sci. China Inf. Sci. 2015, 58, 5–20. [Google Scholar] [CrossRef]
  7. Abdullah, J.M.; Rashid, T.A. Fitness Dependent Optimizer: Inspired by the Bee Swarming Reproductive Process. IEEE Access 2019, 7, 43473–43486. [Google Scholar] [CrossRef]
  8. Cheng, R.; Jin, Y. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Trans. Cybern. 2015, 45, 191–204. [Google Scholar] [CrossRef] [PubMed]
  9. Zhao, W.; Wang, L.; Zhang, Z. Supply-Demand-Based Optimization: A Novel Economics-Inspired Algorithm for Global Optimization. IEEE Access 2019, 7, 73182–73206. [Google Scholar] [CrossRef]
  10. Shabani, A.; Asgarian, B.; Salido, M.A.; Gharebaghi, S.A. Search and Rescue optimization algorithm: A new optimization method for solving constrained engineering optimization problems. Expert Syst. Appl. 2020, 161, 113698. [Google Scholar] [CrossRef]
  11. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804. [Google Scholar] [CrossRef]
  12. Wang, D.; Wang, P.; Shi, J. A fast and efficient conformal regressor with regularized extreme learning machine. Neurocomputing 2018, 304, 1–11. [Google Scholar] [CrossRef]
  13. Silva, B.; Inaba, F.K.; Salles, E.; Ciarelli, P.M. Fast Deep Stacked Networks based on Extreme Learning Machine applied to regression problems. Neural Netw. 2020, 131, 14–28. [Google Scholar] [CrossRef] [PubMed]
  14. Mariani, V.C.; Och, S.H.; Coelho, L.; Domingues, E. Pressure prediction of a spark ignition single cylinder engine using optimized extreme learning machine models. Appl. Energy 2019, 249, 204–221. [Google Scholar] [CrossRef]
  15. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  16. Li, Y.L.; Wang, S.Q.; Chen, Q.R.; Wang, X. Comparative study of several new swarm intelligence optimization algorithms. Comput. Eng. Appl. 2020, 56, 1–12. [Google Scholar]
  17. Yang, X.S. Firefly algorithm. In Nature-Inspired Meta-Heuristic Algorithms; Luniver Press: Beckington, UK, 2008; pp. 79–90. [Google Scholar]
  18. Liu, R.; Mo, Y.B. An Improved Sparrow Search Algorithm. Comput. Technol. Dev. 2022, 32, 21–26. [Google Scholar]
  19. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Mixed variable structural optimization using Firefly Algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  20. Wang, X.; Zhang, G.; Lou, S.; Liang, S.; Sheng, X. Two-round feature selection combining with LightGBM classifier for disturbance event recognition in phase-sensitive OTDR system. Infrared Phys. Technol. 2022, 123, 104191. [Google Scholar] [CrossRef]
Figure 1. Extreme learning machine autoencoder (ELM-AE) network structure diagram.
Figure 2. Deep extreme learning machine (DELM) network structure diagram.
Figure 3. The structure and diagram of the prediction model based on the ISSA-DELM.
Figure 4. Well logging graph of Well M.
Figure 5. Well logging graph of Well N.
Figure 6. RMSE variation curve for different numbers of features.
Figure 7. Error reduction rate in various iterations of the optimization algorithms (ISSA, SSA, PSO, and GA) for training DELM using Well M.
Figure 8. Visual results of the DTS prediction for Well N.
Figure 9. Comparison between the measured and predicted DTS during the testing phase in Well N for the proposed models: (a) ELM, (b) DELM, (c) GA-DELM, (d) PSO-DELM, (e) SSA-DELM, (f) ISSA-DELM.
Table 1. Statistical characteristics of Well M and Well N.
| Well Name | Index | Depth (m) | CAL (cm) | DT (us/m) | GR (API) | NPHI (v/v) | RHOB (g/cm3) | RT (ohm·m) | DTS (us/m) |
|-----------|-------|-----------|----------|-----------|----------|------------|--------------|------------|------------|
| Well M | Min | 1550.00 | 21.36 | 206.57 | 10.37 | 0.06 | 1.36 | 11.35 | 358.01 |
| | 25% | 1599.94 | 22.11 | 232.09 | 48.47 | 0.12 | 2.27 | 21.44 | 390.55 |
| | 50% | 1649.94 | 22.40 | 238.59 | 80.91 | 0.15 | 2.36 | 30.78 | 401.26 |
| | Mean | 1649.93 | 23.17 | 251.91 | 81.72 | 0.21 | 2.37 | 31.52 | 420.57 |
| | 75% | 1699.94 | 22.87 | 254.12 | 113.83 | 0.22 | 2.55 | 36.97 | 422.96 |
| | Max | 1749.87 | 39.43 | 402.10 | 157.15 | 0.92 | 2.67 | 152.98 | 642.07 |
| Well N | Min | 1880.00 | 21.94 | 187.38 | 14.97 | 0.03 | 1.21 | 6.41 | 341.75 |
| | 25% | 1910.00 | 23.12 | 231.28 | 53.01 | 0.17 | 2.24 | 16.72 | 392.04 |
| | 50% | 1940.00 | 24.54 | 237.79 | 86.12 | 0.20 | 2.31 | 27.24 | 399.45 |
| | Mean | 1940.00 | 25.04 | 250.96 | 93.54 | 0.27 | 2.29 | 30.43 | 416.96 |
| | 75% | 1970.03 | 26.22 | 260.21 | 132.75 | 0.34 | 2.47 | 43.08 | 425.27 |
| | Max | 2000.00 | 41.88 | 407.97 | 212.32 | 0.92 | 2.70 | 70.84 | 651.57 |
Table 2. Importance scores of the parameters based on the LightGBM model.
| Rank | Feature | Score |
|------|---------|-------|
| 1 | DT | 1078 |
| 2 | RHOB | 728 |
| 3 | CAL | 491 |
| 4 | NPHI | 392 |
| 5 | RT | 309 |
| 6 | GR | 291 |
| 7 | DEPTH | 290 |
Table 3. Comparisons between the ELM, DELM, PSO-DELM, GA-DELM, SSA-DELM, and ISSA-DELM models for DTS estimation in Well N.
| Model | RMSE | MAE | R-Square |
|-------|------|-----|----------|
| ELM | 7.3150 | 5.8558 | 0.9582 |
| DELM | 6.6490 | 4.3293 | 0.9626 |
| GA-DELM | 6.5488 | 4.1938 | 0.9698 |
| PSO-DELM | 6.5266 | 4.2075 | 0.9806 |
| SSA-DELM | 6.2341 | 4.1615 | 0.9860 |
| ISSA-DELM | 6.1255 | 4.1369 | 0.9916 |
