Article

Siamese-Derived Attention Dense Network for Seismic Impedance Inversion

Jiang Wu
School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
Mathematics 2024, 12(18), 2824; https://doi.org/10.3390/math12182824
Submission received: 8 August 2024 / Revised: 5 September 2024 / Accepted: 10 September 2024 / Published: 12 September 2024
(This article belongs to the Special Issue Application of Neural Networks and Deep Learning)

Abstract

Seismic impedance inversion is essential for providing high-resolution stratigraphic analysis. Therefore, improving the accuracy while ensuring the efficiency of the inversion model is crucial for practical implementation. Recently, deep learning-based approaches have proven superior in capturing complex relationships between different data domains. In this paper, a Siamese-derived attention-dense network (SADN) is proposed, which incorporates both prediction and Siamese modules. In the prediction module, DenseNet serves as the backbone, and a channel attention mechanism is integrated into DenseNet to improve the weight of factors highly correlated with seismic impedance inversion. A bottleneck structure is employed in DenseNet to reduce computational costs. In the Siamese module, a weight-shared DenseNet is employed to compute the distribution similarity between the predicted impedance and the actual impedance, effectively regularizing the distribution similarity between the inverted seismic impedance and the recorded ground truth. The qualitative and quantitative results demonstrate the advantage of the SADN over commonly used traditional networks for seismic impedance inversion.

1. Introduction

Seismic impedance inversion is widely used in industry to estimate the lithology and pore-fluid characteristics of subsurface rock layers from seismic data for both quantitative and qualitative reservoir prediction [1]. Due to natural and human factors, such as the sampling environment, geological conditions, and sampling costs, the acquisition of pre-stack data by geophones is restricted, resulting in incomplete seismic data. The seismic inversion problem also faces several challenges. Since seismic data have a limited frequency bandwidth while impedance has a much wider bandwidth, especially in the low- and high-frequency ranges, seismic inversion is a highly ill-posed problem [2]. Furthermore, the actual underground stratum structure is complex, making it difficult to describe with theoretical models. The most restrictive problem is that actual formation structure data are mainly obtained through well logging, which is extremely expensive; this leads to a shortage of labeled data [3].
Seismic impedance inversion methods can be categorized into four main types: wave equation-based inversion [4], prediction-error filter-based inversion [5,6], sparse mathematical transform-based inversion [7,8], and rank reduction-based inversion [9,10]. Despite significant progress in improving the accuracy and generality of these inversion methods, several challenges remain. First, traditional inversion methods are largely model- or theory-driven and thus require pre-assumptions. Second, solving large systems of equations is time-inefficient, hindering the application of these methods to large-scale seismic data reconstruction. Third, many parameters must be set empirically, which makes it difficult to tune them for different seismic datasets in various research areas [11].
With the advancement of big data, hardware, and key neural network technologies, deep learning methods have developed rapidly [12,13,14]. In recent decades, deep learning methods have garnered much attention for addressing seismic-related problems, including impedance inversion [3,15], seismic fault interpretation [16,17], seismic facies analysis [18,19], and lithology identification [20,21]. Adler et al. [22] comprehensively reviewed deep learning solutions to seismic inverse problems, such as velocity, impedance, and reflectivity model building and seismic data bandwidth extension. Because they require no pre-assumptions and offer highly efficient inference, deep learning methods have become primary solutions. Several convolutional neural network (CNN)-based models have demonstrated their effectiveness, including the encoder-decoder, U-Net [23], the residual network (ResNet) [12], the recurrent neural network (RNN) [24], and the generative adversarial network (GAN) [25]. Zhang et al. [26] reviewed and compared experiments on the same dataset conducted with a conventional CNN [3], a multi-scale CNN [27], U-Net [23], and a GAN [25].
Das et al. [28] were the first to utilize one-dimensional (1D) CNNs to learn the mapping of seismic data to wave impedance, achieving better results than traditional models. Alfarraj and AlRegib [29] fully utilized the memory capability of RNNs to solve the 1D wave impedance inversion problem. To mitigate the influence of limited paired data, Zhang et al. [30] augmented the training data using impedance transition boundary conditions. Li et al. [31] employed GANs to synthesize new well-logging data by learning the consistent spatial distribution. Wang et al. [32] and Cai et al. [33] also interpreted seismic impedance inversion as a style transfer problem by computing the mapping function between seismic and impedance data using Cycle-GAN. To leverage the frequency information in seismic data, the multi-level wavelet CNN uses discrete wavelet transforms to replace pooling operations and inverse wavelet transforms to replace up-sampling operations in traditional CNN-based models [34]. This approach avoids feature loss caused by down-sampling and retains multi-scale time and frequency features. Inspired by this, Liu et al. [11] proposed a wavelet-based residual deep learning network using U-Net as the backbone to reconstruct seismic data; the model can handle irregularly missing seismic data and consecutively missing seismic data with large gaps. Considering that soft attention weights can be learned via forward propagation and gradient-based backward feedback through a neural network, Wu et al. [35] designed a residual attention network with residual modules and two types of attention mechanisms, channel attention and feature-map attention, across four network branches. Wang et al. [2] trained a network using simulated paired seismic and impedance data and employed a domain adaptation layer to reduce the difference between the features of real and synthetic seismic data; the trained model was then fine-tuned using well-logging data.
However, the well-log and seismic data pairs serving as labels are usually limited and insufficient for a supervised machine-learning algorithm. Some researchers [2,35] utilized simulated data to pre-train the network and then used transfer learning to adapt the network to real field data. However, the extreme complexity of real seismic impedance data cannot be accurately modeled from simulated data. It therefore remains challenging for a network to yield stable and accurate impedance estimations for three-dimensional (3D) seismic surveys with a huge number of seismic traces and limited wells.
Inspired by the aforementioned methods, a SADN is designed using DenseNet and Siamese modules. Meanwhile, a channel attention mechanism is integrated into DenseNet to improve the weight of highly correlated factors to seismic impedance inversion. In the DenseNet module, the bottleneck technique is utilized to reduce the amount of calculation.
In the Siamese module, the weight-shared DenseNet is used to compute the distribution similarity between the predicted impedance and the real impedance. Section 2 introduces the structure of the SADN, including the backbone of the prediction network, the attention block, the revised Siamese network, and the loss function. Section 3 details the designed datasets, network training procedures, comparative experiments, evaluation metrics, and comparison results. Finally, the discussion and conclusions are presented.

2. Methodology

2.1. Theory

In seismic inversion, seismic data, X, are typically provided, usually as 3D data acquired through artificial earthquakes. Acoustic impedance, Y, is considered the inversion target, matching the size of X. The aim of this study is to predict Y from seismic data (X) by learning a highly nonlinear relationship between them. In practice, only a small number of Y values can be obtained from well logging, which is typically very expensive and challenging to acquire. Owing to their superior capability for modeling complex nonlinear relationships, CNNs are a natural choice for learning the nonlinear mapping between X and Y. However, since the well-logging data are single-trace and the labeled dataset is very limited, it is difficult to train a robust and precise network that can adaptively perform seismic inversion. Nevertheless, the known well-logging data are highly similar to the unknown well-logging data from the same area. For real data, it is also difficult to reconstruct two-dimensional (2D) data across different traces.
Considering this, the main intention of this work is to build a 1D network that can accurately predict acoustic impedance in a large unknown area with a limited number of acoustic impedance samples. To achieve this purpose, our model contains two sub-networks: the prediction network and the Siamese network. The prediction network can predict the acoustic impedance from seismic data, while the Siamese network evaluates the similarity of the predicted and real acoustic impedance at a high feature level. In the network, multiple drilling recorded data are utilized as the multi-channel input feature matrix (Figure 1).

2.2. Network

2.2.1. Backbone of the Prediction Network

In recent decades, ResNet [12] and GoogLeNet [36] have attracted significant attention for extracting accurate features from deeper networks. However, with an increase in the network depth, the problem of gradient vanishing arises. Additionally, deeper networks require a large amount of high-quality data. Therefore, to reduce parameters and promote feature reuse from previous blocks, DenseNet-121 is utilized as the backbone for predicting the acoustic impedance, Y, from seismic data, X, in this study.
In DenseNet-121, four DenseNet blocks, transition layers, and a classifier layer are included. The details of the 1D DenseNet-121 are illustrated in Figure 2. In this network, the input seismic data, X, are fed into a 1D convolution layer with a kernel size of 7 and a stride of 2 and then passed through BatchNorm1D and a LeakyReLU activation function. After a MaxPooling layer, four DenseBlocks with different numbers of layers and three transition layers are connected in sequence. Each transition layer comprises BatchNorm, LeakyReLU, convolution (kernel size of 1), and average pooling layers; it connects two adjacent DenseBlocks and reduces the size of the feature maps. Here, the compression rate of the transition layer is 0.5.
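For illustration, the following is a minimal PyTorch sketch of the 1D stem and one transition layer as described above. The initial channel count of 64, the padding values, and the name `Transition1D` are illustrative assumptions, not the exact configuration of the SADN.

```python
import torch
import torch.nn as nn

class Transition1D(nn.Module):
    """BatchNorm -> LeakyReLU -> 1x1 Conv -> AvgPool, halving channels (compression 0.5)."""
    def __init__(self, in_channels: int, compression: float = 0.5):
        super().__init__()
        out_channels = int(in_channels * compression)
        self.block = nn.Sequential(
            nn.BatchNorm1d(in_channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.AvgPool1d(kernel_size=2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Stem as in Figure 2: Conv1d(k=7, s=2) -> BatchNorm1d -> LeakyReLU -> MaxPool.
stem = nn.Sequential(
    nn.Conv1d(11, 64, kernel_size=7, stride=2, padding=3, bias=False),  # 11 input feature channels
    nn.BatchNorm1d(64),
    nn.LeakyReLU(inplace=True),
    nn.MaxPool1d(kernel_size=3, stride=2, padding=1),
)

x = torch.randn(8, 11, 400)   # a batch of 11-channel trace segments of length 400
print(stem(x).shape)          # -> torch.Size([8, 64, 100])
```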
In each DenseBlock, the sizes of the feature maps are the same, and feature maps from previous layers are concatenated along the channel dimension. A schematic diagram of a DenseBlock is shown in Figure 3. In the dense connectivity of a DenseBlock, the $l$-th layer receives the feature maps of all the preceding layers $x_0, \ldots, x_{l-1}$, which can be expressed as follows:

$$x_l = H_l([x_0, \ldots, x_{l-1}]),$$

where $[x_0, \ldots, x_{l-1}]$ denotes the concatenation of the feature maps produced in the preceding layers $0, \ldots, l-1$ of a DenseBlock. In each convolution layer of a DenseBlock, the growth rate $g$ is a hyper-parameter and determines the final channel size of the feature map. Supposing the input channel size of the DenseBlock is $c_0$, the channel size of the $l$-th layer in this DenseBlock is $c_0 + g \times (l-1)$. To avoid excessively large feature maps and improve computational efficiency, $H_l$ is defined as a composite function of six consecutive operations: batch normalization, LeakyReLU, Conv1D (kernel size = 1), batch normalization, LeakyReLU, and Conv1D (kernel size = 3).
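A sketch of the bottleneck composite function $H_l$ and the dense concatenation follows. The $4g$ bottleneck width is the common DenseNet choice and is assumed here, since the text does not state it.

```python
import torch
import torch.nn as nn

class DenseLayer1D(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        bottleneck = 4 * growth_rate  # assumed bottleneck width (standard DenseNet choice)
        # H_l: BN -> LeakyReLU -> Conv1x1 -> BN -> LeakyReLU -> Conv3
        self.h = nn.Sequential(
            nn.BatchNorm1d(in_channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv1d(in_channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm1d(bottleneck),
            nn.LeakyReLU(inplace=True),
            nn.Conv1d(bottleneck, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dense connectivity: append g new feature maps to the running concatenation.
        return torch.cat([x, self.h(x)], dim=1)

class DenseBlock1D(nn.Sequential):
    def __init__(self, num_layers: int, in_channels: int, growth_rate: int):
        layers = [DenseLayer1D(in_channels + i * growth_rate, growth_rate)
                  for i in range(num_layers)]
        super().__init__(*layers)

block = DenseBlock1D(num_layers=6, in_channels=64, growth_rate=32)
print(block(torch.randn(2, 64, 100)).shape)  # -> torch.Size([2, 256, 100]); 64 + 6*32 channels
```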

2.2.2. Attention Block

Considering that the channel features of the input data comprise drilling factors, a channel attention module (CAM) is essential to emphasize the core drilling factors and weaken redundant or irrelevant ones. In addition, since the features along the spatial dimension represent the acoustic impedance variation over time, all spatial features are equally important. Hence, the CAM is included only after each transition layer. The CAM is adopted from [37] to effectively fuse global and local features. The 1D CAM used in this study is shown in Figure 4.
For the CAM, the output of this module is a weight matrix. Suppose the output feature map from the preceding transition layer is $f_{i-1}$ and the channel attention matrix is $m_{i-1}$. The feature map fed into the current DenseBlock can be computed as follows:

$$f_i = (1 + m_{i-1}) \times f_{i-1},$$

where $i-1$ is the order number of the $(i-1)$-th DenseBlock.
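The re-weighting can be illustrated with a short sketch. The global-pooling-plus-MLP used here to produce $m_{i-1}$ is a squeeze-and-excitation-style assumption; the paper follows the CAM of [37], whose exact layout may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention1D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.LeakyReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (batch, channels, length); m: one attention weight per channel.
        m = self.mlp(f.mean(dim=-1)).unsqueeze(-1)  # global average pool over time
        return (1.0 + m) * f                        # f_i = (1 + m_{i-1}) * f_{i-1}

cam = ChannelAttention1D(channels=128)
print(cam(torch.randn(2, 128, 50)).shape)  # -> torch.Size([2, 128, 50])
```

The residual form $(1 + m)$ keeps the original features intact while letting informative channels be amplified, so the attention cannot zero out a channel entirely.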

2.2.3. Revised Siamese Network

To enable the network to adaptively predict acoustic impedance from unseen seismic data, the predicted acoustic impedance and real acoustic impedance are further fed into a Siamese network to learn their similarity. The Siamese network evaluates the distribution similarity between the two input feature maps.
In our Siamese network, DenseNet-121 is also used as the backbone. For the predicted and real acoustic impedances, the same network is utilized with shared weights. To further assess the similarity after DenseNet-121, a convolution layer with a kernel size of 1 and a sigmoid activation function is added. The similarity between the feature maps $d_{\mathrm{predicted}}$ and $d_{\mathrm{real}}$ computed by DenseNet-121, and between $s_{\mathrm{predicted}}$ and $s_{\mathrm{real}}$ computed by the whole Siamese network, indicates the distribution difference between the predicted and real acoustic impedances at a high feature level.
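The weight sharing amounts to reusing one encoder instance for both inputs, as the following sketch illustrates. The stand-in encoder and the name `SiameseHead` are hypothetical; DenseNet-121 takes the encoder's place in the actual model.

```python
import torch
import torch.nn as nn

class SiameseHead(nn.Module):
    def __init__(self, encoder: nn.Module, feat_channels: int):
        super().__init__()
        self.encoder = encoder                       # one instance -> shared weights
        self.score = nn.Sequential(                  # 1x1 conv + sigmoid similarity head
            nn.Conv1d(feat_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, y_pred: torch.Tensor, y_real: torch.Tensor):
        d_pred, d_real = self.encoder(y_pred), self.encoder(y_real)
        s_pred, s_real = self.score(d_pred), self.score(d_real)
        return d_pred, d_real, s_pred, s_real

encoder = nn.Conv1d(1, 16, kernel_size=3, padding=1)  # stand-in for DenseNet-121
siam = SiameseHead(encoder, feat_channels=16)
outs = siam(torch.randn(4, 1, 400), torch.randn(4, 1, 400))
print([o.shape for o in outs])
```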

2.2.4. Loss Function

The optimization of the entire proposed network involves two steps. First, when the loss function from the Siamese network is propagated backward, the parameters from the prediction network are kept constant. Then, when the loss function of the whole model is propagated backward, the parameters from the prediction network are updated.
In this study, two different loss functions are utilized. To supervise the prediction network, the mean squared error (MSE) is used to evaluate the gap between the ground truth $y$ and the predicted result $\hat{y}$ calculated from the seismic data $x$, which is defined as follows:

$$L_{\mathrm{MSE}} = \| y - \hat{y} \|_2^2 .$$
Besides the MSE, the similarity evaluated on the outputs of the Siamese network also serves as a loss function for supervising both the prediction and Siamese networks. The similarity function is defined as follows:

$$L_{\mathrm{Similarity}} = \| d_{\mathrm{predicted}} - d_{\mathrm{real}} \|_2^2 + \| s_{\mathrm{predicted}} - s_{\mathrm{real}} \|_2^2 + \| y - s_{\mathrm{predicted}} \|_2^2 + \| y - s_{\mathrm{real}} \|_2^2 ,$$

where $d_{\mathrm{predicted}}$, $d_{\mathrm{real}}$, $s_{\mathrm{predicted}}$, and $s_{\mathrm{real}}$ are the outputs from the Siamese network, as shown in Figure 5.
In the optimization of the Siamese network, only $L_{\mathrm{Similarity}}$ is utilized. In the optimization of the prediction network, the loss function is defined as follows:

$$L = L_{\mathrm{MSE}} + \lambda \times L_{\mathrm{Similarity}} ,$$

where $\lambda$ is the weight. In this paper, $\lambda = 0.1$.
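A compact sketch of the two-step optimization under these losses is given below, restricted to the feature- and score-level terms of the reconstructed $L_{\mathrm{Similarity}}$ (whose exact form is an assumption) and using tiny placeholder networks in place of the DenseNets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholders standing in for the prediction network and the Siamese DenseNet.
prediction_net = nn.Conv1d(11, 1, kernel_size=3, padding=1)
encoder = nn.Conv1d(1, 8, kernel_size=3, padding=1)
score = nn.Sequential(nn.Conv1d(8, 1, kernel_size=1), nn.Sigmoid())

def siamese(y_a, y_b):
    d_a, d_b = encoder(y_a), encoder(y_b)
    return d_a, d_b, score(d_a), score(d_b)

def similarity_loss(d_p, d_r, s_p, s_r):
    # Assumed reduction of L_Similarity to its feature- and score-level terms.
    return F.mse_loss(d_p, d_r) + F.mse_loss(s_p, s_r)

opt_pred = torch.optim.AdamW(prediction_net.parameters(), lr=1e-4)
opt_siam = torch.optim.AdamW(
    list(encoder.parameters()) + list(score.parameters()), lr=1e-4)

x, y = torch.randn(4, 11, 400), torch.randn(4, 1, 400)
lam = 0.1

# Step 1: update the Siamese network with the prediction network held fixed.
opt_siam.zero_grad()
similarity_loss(*siamese(prediction_net(x).detach(), y)).backward()
opt_siam.step()

# Step 2: update the prediction network with L = L_MSE + lambda * L_Similarity.
opt_pred.zero_grad()
y_hat = prediction_net(x)
loss = F.mse_loss(y_hat, y) + lam * similarity_loss(*siamese(y_hat, y))
loss.backward()
opt_pred.step()
```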

2.3. Evaluation Metrics

To quantitatively evaluate the proposed network and its comparative models, the mean absolute error (MAE) and the MSE are both utilized. The metrics are defined as follows:

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} | y_i - \hat{y}_i | ,$$

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} ( y_i - \hat{y}_i )^2 ,$$

where $\hat{y}_i$ and $y_i$ are the predicted and ground truth values of acoustic impedance, respectively, and $N$ is the number of samples. Since the data are min-max normalized, these metrics fall within [0, 1]. A lower MAE or MSE indicates more precise acoustic impedance inversion.
In addition, the Pearson correlation coefficient (PCC) [38] is also utilized to evaluate the correlation between the ground truth and predicted values of acoustic impedance from the same well. The PCC is computed using the following equation:

$$\mathrm{PCC}(m, \hat{m}) = \frac{\mathrm{cov}(C_m, C_{\hat{m}})}{\sigma_m \sigma_{\hat{m}}} ,$$

where $\mathrm{cov}(\cdot)$ is the covariance between the curves $C_m$ and $C_{\hat{m}}$, and $\sigma_m$ and $\sigma_{\hat{m}}$ are the standard deviations of $C_m$ and $C_{\hat{m}}$, respectively. If the predicted acoustic impedance is accurate, the correlation is close to 1.
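These metrics are straightforward to compute; the following is a minimal NumPy sketch (using `corrcoef` for the PCC) on dummy traces.

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean((y_true - y_pred) ** 2))

def pcc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Pearson correlation: cov(a, b) / (sigma_a * sigma_b).
    return float(np.corrcoef(y_true, y_pred)[0, 1])

rng = np.random.default_rng(0)
truth = rng.random(571)                         # a dummy trace of length 571
pred = truth + 0.05 * rng.standard_normal(571)  # a slightly noisy "prediction"
print(mae(truth, pred), mse(truth, pred), pcc(truth, pred))
```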

3. Experiments and Results

3.1. Datasets

The proposed method is applied to real seismic data. There are 2,761,680 traces, and each trace has 2200 time sampling points. There are seven wells in the corresponding region, with data lengths, $l$, of (607, 613, 588, 562, 597, 605, 571). The first six wells are selected as the labeled traces and the remaining one as the blind well for model development. Eleven different features are used as the input channel features of the network, including the velocity and the depth of the wells. For each well, the data are segmented into lengths of 400, with the starting point moving forward by one step after each segmentation. Hence, the size of each data group is 11 × 400, and $(l - 400 + 1)$ groups are generated. All the samples are normalized using the min-max normalization method; for each sample, normalization is performed using the minimum and maximum values of the corresponding well.
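A sketch of this preprocessing is given below, assuming per-feature min-max extrema within each well (the text does not specify whether extrema are taken per feature or over the whole well); feature values are dummies.

```python
import numpy as np

def prepare_well(well: np.ndarray, window: int = 400) -> np.ndarray:
    """well: (11, l) feature matrix for one well; returns (l - window + 1, 11, window)."""
    lo = well.min(axis=1, keepdims=True)        # per-feature extrema within the well (assumed)
    hi = well.max(axis=1, keepdims=True)
    well = (well - lo) / (hi - lo + 1e-12)      # min-max normalization per well
    n = well.shape[1] - window + 1              # (l - 400 + 1) groups
    return np.stack([well[:, i:i + window] for i in range(n)])

well = np.random.rand(11, 607)                  # e.g., well 1 with l = 607
print(prepare_well(well).shape)                 # -> (208, 11, 400)
```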

3.2. Network Training

Python and PyTorch (1.7) are used on a machine with an Intel Core i9-12900 CPU, 128 GB of RAM, and an Nvidia 3090 Ti GPU (32 GB of VRAM). Throughout the experiments, a fixed random seed is used to ensure reproducibility.
For both the prediction and Siamese networks, AdamW is used as the optimizer with a learning rate of 0.0001. Training is performed for 3800 epochs with a minibatch size of 128. Performance on the validation set is monitored, and the model with the best validation performance is selected.
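A sketch of this training configuration follows, with fixed seeds and checkpointing on the best validation loss; the placeholder model and the `evaluate` routine are hypothetical stand-ins for the SADN and its validation pass.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Fix all relevant random number generators for reproducibility.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

def evaluate(m: torch.nn.Module) -> float:
    # Hypothetical stand-in: validation MSE on one random batch of size 128.
    with torch.no_grad():
        x, y = torch.randn(128, 11, 400), torch.randn(128, 1, 400)
        return float(torch.nn.functional.mse_loss(m(x), y))

set_seed()
model = torch.nn.Conv1d(11, 1, kernel_size=3, padding=1)   # placeholder for the SADN
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4) # lr = 0.0001, as in the text

best_val = float("inf")
for epoch in range(3800):
    # ... train one epoch over minibatches of size 128 ...
    val_loss = evaluate(model)
    if val_loss < best_val:                                 # keep the best validation model
        best_val = val_loss
        torch.save(model.state_dict(), "best_sadn.pt")
```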

3.3. Comparative Experiment

In the experiments, the proposed method is evaluated from two aspects. On the one hand, inspired by the research in [26], which compared popular deep learning methods for seismic impedance inversion, DenseNet is compared with other mainstream networks: SimpleCNN, ResNet, and U-Net. For SimpleCNN, four 1D ResNet blocks are utilized, each containing three convolution operations. For ResNet, an autoencoder framework is used with channel numbers (10, 32, 64, 64, 31, 16, 1); each constant-channel stage includes four 1D ResNet blocks with three convolution operations. For U-Net, a nine-layer version with channel numbers (16, 32, 64, 128, 256, 128, 64, 32, 16) is used. Each layer consists of the combination Conv1D + DropOut + InstanceNorm + LeakyReLU + Conv1D + DropOut + InstanceNorm + LeakyReLU.
On the other hand, two ablations of the proposed model are compared: a variant without the CAM, and a variant in which DenseNet is replaced with ResNet-50.
For all the aforementioned comparative experiments, the same parameters as in the proposed network are utilized.

3.4. Results

Figure 6 shows the visual comparison results of the experiment on real seismic data. Figure 6a shows the real seismic data; Figure 6b-e show the impedance results predicted by SimpleCNN, ResNet, U-Net, and DenseNet, respectively. It is evident that SimpleCNN has limited generalization capability, leading to average performance on the test data. ResNet reduces the inversion error to some extent, yielding results that are closer to the actual seismic data than those of SimpleCNN. U-Net addresses the discrepancies between the source and target distributions, resulting in more reasonable outcomes. DenseNet further reduces the inversion error and provides more detail in the results than U-Net, thereby enhancing the resolution of the inversion results.
Figure 7 shows the visualized comparison between the ground truth and the predictions by SimpleCNN, ResNet, U-Net, and DenseNet for well 1. There are fewer sudden changes in well 1. The length of the well data is 607. From the figure, the predicted impedance by SimpleCNN has larger deviations at the beginning and end of the well, especially at the end. In the middle part, at depths of [280, 300], the deviation values increase. For ResNet, due to batch normalization, some prediction results change to zero. For the high-frequency parts, the predicted results have larger deviations from the ground truth. For U-Net, due to the connections between the encoder and decoder parts, the predicted results mostly align with the ground truth. However, at depths of [370, 380], the predicted values are closer to zero. For DenseNet, the results are much closer to the ground truth in both high- and low-frequency areas. In areas with gentle changes, the deviations between the predicted results and the ground truth are smaller.
Figure 8 shows the visualized comparison between the ground truth and the predictions by SimpleCNN, ResNet, U-Net and DenseNet for well 4. There are many fluctuations in well 4. The length of the well data is 562. From the figure, due to the frequent changes, the predicted impedance by SimpleCNN shows larger deviations along the entire well. In the last one-fifth portion, the predicted results are relatively stable but fail to capture the changes in real impedance. For ResNet, due to batch normalization, zero-values still exist. However, because of the ResNet structure, the predicted impedance values are more accurate than those by SimpleCNN. For U-Net, due to the connection between the encoder and decoder parts, the zero-values decrease significantly. But at depths around 380 and 480, the zero-values reappear. For DenseNet, the results are very close to the ground truth in both high- and low-frequency areas. However, at depths of [180, 210], the predicted values deviate significantly from the real impedance.
Figure 9 shows the visualized comparison between the ground truth and the seismic impedance predicted by SimpleCNN, ResNet, U-Net, and DenseNet for the well in the test set. This well also exhibits many fluctuations. The length of the well data is 571. Due to the frequent changes, the impedance predicted by SimpleCNN shows larger deviations along the entire well; in the middle part of the well, SimpleCNN cannot accurately predict the high-frequency values. For ResNet, due to batch normalization, zero-values also exist and appear even more frequently on the test set; the predicted impedance is high-frequency, while the real impedance is low-frequency. For U-Net, due to the connections between the encoder and decoder parts, the zero-values decrease significantly, although local minima frequently appear. For DenseNet, the results are very close to the ground truth in both high- and low-frequency areas; however, at depths of [500, 571], the predicted values deviate significantly from the real impedance. Although the overall accuracy is improved, the deviations between the real and predicted impedance are larger for this blind well, especially in its latter part.
To further quantitatively evaluate the results among the networks, the final training loss during the training stage is compared, as shown in Table 1. The MSE and MAE across the entire test dataset are also computed. As can be seen from the table, the final training losses of all the networks are of the same order of magnitude. DenseNet achieves the lowest MSE and MAE, with an MSE about 0.007 lower than that of ResNet and an MAE about 0.024 lower than that of U-Net. These results suggest that, owing to the re-use of features across different scales, DenseNet is better suited to impedance inversion. Because DenseNet's predicted impedance is closest to the real impedance, its PCC is also the highest.
Figure 10 shows the visualized comparison between the ground truth and DenseNet+Siam (ResNet), DenseNet+Siam (Dense), and DenseNet+CAM+Siam (Dense). The results are also from well 1. From the results, it is evident that the deviation between the ground truth and the predicted results is much smaller than in Figure 7. In the DenseNet+Siamese framework, when using ResNet as the Siamese model network, the predicted results have a large deviation in high- and low-frequency areas. Especially around local low-frequency regions, the predicted impedance exhibits severe up-and-down turbulence. When using DenseNet as the Siamese model network, the deviation between the ground truth and the predicted results decreases. After adding the CAM to the DenseNet+Siam (Dense) framework, the accuracy around local extrema improves. In all areas, the predicted results using DenseNet+CAM+Siam (Dense) are closest to the ground truth.
Figure 11 shows the visual comparison between the ground truth and the predictions from DenseNet+Siam (ResNet), DenseNet+Siam (Dense), and DenseNet+CAM+Siam (Dense). The results are from well 4, which exhibits many oscillations in impedance. From the figure, it is evident that the deviation between the ground truth and the predicted results is much smaller compared to the results shown in Figure 8. In the DenseNet+Siamese framework, when using ResNet as the Siamese network, the predicted results exhibit significant deviations, especially around extreme values. When using DenseNet as the Siamese network, the deviation between the ground truth and the predicted results decreases. However, the results are still somewhat lacking at local extreme locations. After incorporating the CAM into the DenseNet+Siam (Dense) framework, the accuracy around extreme positions improves. Overall, the predictions using DenseNet+CAM+Siam (Dense) are closest to the ground truth across all the regions.
Figure 12 shows the visual comparison between the ground truth and the predictions from DenseNet+Siam (ResNet), DenseNet+Siam (Dense), and DenseNet+CAM+Siam (Dense). The results are from the well in the test set. The accuracy of the predicted results is much improved compared to the results shown in Figure 9. Within the DenseNet+Siamese framework, using ResNet as the Siamese network shows a slight improvement in accuracy. However, substantial differences still exist at the last depths, and at a depth of about 300, local zero-values appear. When using DenseNet as the Siamese network, the deviation between the ground truth and the predicted results decreases. Nevertheless, the results are still somewhat poor in low-frequency regions. Adding the CAM to the DenseNet+Siam(Dense) framework improves the accuracy in low-frequency areas. Overall, the predicted results using DenseNet+CAM+Siam (Dense) are the closest to the ground truth across all the regions.
For the DenseNet+Siamese-based frameworks, the final training loss during the training stage is also compared, as shown in Table 2. The MSE and MAE across the entire test dataset are also computed. As can be seen from the table, the final training losses of the DenseNet+Siamese-based frameworks decrease by roughly an order of magnitude compared with Table 1. The proposed DenseNet+CAM+Siam (Dense) achieves the lowest MSE and MAE, with an MSE 0.002 lower than that of DenseNet+Siam (Dense) and about 0.008 lower than that of ResNet (Table 1), and an MAE 0.026 lower than that of U-Net (Table 1). The results indicate that by fully utilizing DenseNet, the Siamese framework, and the CAM, the proposed model greatly improves impedance prediction accuracy for blind wells. Because its predictions are closest to the real impedance, the proposed network also achieves the highest PCC.
Figure 13 shows the comparison between the proposed networks trained with and without velocity and depth. During the training stage, a qualitative comparison of wells 1 and 4 reveals almost no difference between the results. This indicates that velocity and depth are not essential for training the networks. However, in the test stage, although the results with and without these two parameters are still very close, the results obtained by using them are slightly closer to the real impedance.

4. Discussion and Conclusions

In this paper, a SADN with prediction and Siamese modules is proposed. In the prediction module, DenseNet is used as the backbone, and a channel attention mechanism is integrated to enhance the weight of highly correlated factors in seismic impedance inversion. To reduce the computational load, bottleneck layers are employed within the DenseNet. In the Siamese module, weight-shared DenseNet networks compute the distribution similarity between the predicted and real impedance. The Siamese framework effectively regularizes the distribution similarity between the inverted seismic impedance and the recorded ground truth. The proposed network is compared with traditional cascaded CNN, ResNet, U-Net, and DenseNet. Additionally, the performance of the proposed network is evaluated by replacing DenseNet with ResNet and by adding the CAM module. The proposed network successfully preserves local variations between high- and low-frequency areas and achieves results that are much closer to the ground truth.

Funding

This research received no funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The author declares no conflicts of interest. The author confirms that all the figures and tables in the text do not involve any copyright issues with other authors or publishers.

References

  1. Ray, A.K.; Chopra, S. Building more robust low-frequency models for seismic impedance inversion. First Break 2016, 34, 29–34. [Google Scholar]
  2. Wang, Q.; Wang, Y.; Ao, Y.; Lu, W. Seismic inversion based on 2D-CNNs and domain adaption. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5921512. [Google Scholar] [CrossRef]
  3. Chai, X.; Tang, G.; Wang, S.; Lin, K.; Peng, R. Deep learning for irregularly and regularly missing 3-D data reconstruction. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6244–6265. [Google Scholar] [CrossRef]
  4. Fomel, S. Seismic reflection data interpolation with differential offset and shot continuation. Geophysics 2003, 68, 733–744. [Google Scholar] [CrossRef]
  5. Naghizadeh, M.; Sacchi, M.D. f-x adaptive seismic-trace interpolation. Geophysics 2009, 74, V9–V16. [Google Scholar] [CrossRef]
  6. Fang, W.; Fu, L.; Liu, S.; Li, H. Dealiased seismic data interpolation using a deep-learning-based prediction-error filter. Geophysics 2021, 86, V317–V328. [Google Scholar] [CrossRef]
  7. Ma, J.; Plonka, G.; Chauris, H. A new sparse representation of seismic data using adaptive easy-path wavelet transform. IEEE Trans. Geosci. Remote Sens. 2010, 7, 540–544. [Google Scholar] [CrossRef]
  8. Naghizadeh, M.; Sacchi, M.D. Beyond alias hierarchical scale curvelet interpolation of regularly and irregularly sampled seismic data. Geophysics 2010, 75, WB189–WB202. [Google Scholar] [CrossRef]
  9. Ma, J. Three-dimensional irregular seismic data reconstruction via low-rank matrix completion. Geophysics 2013, 78, V181–V192. [Google Scholar] [CrossRef]
  10. Naghizadeh, M.; Sacchi, M. Multidimensional de-aliased Cadzow reconstruction of seismic records. Geophysics 2013, 78, A1–A5. [Google Scholar] [CrossRef]
  11. Liu, N.; Wu, L.; Wang, J.; Wu, H.; Gao, J.; Wang, D. Seismic data reconstruction via wavelet-based residual deep learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4508213. [Google Scholar] [CrossRef]
  12. Wang, B.; Zhang, N.; Lu, W.; Wang, J. Deep-learning-based seismic data interpolation: A preliminary result. Geophysics 2019, 84, V11–V20. [Google Scholar] [CrossRef]
  13. Wu, H.; Zhang, B.; Lin, T.; Cao, D.; Lou, Y. Semiautomated seismic horizon interpretation using the encoder-decoder convolutional neural network. Geophysics 2019, 84, B403–B417. [Google Scholar] [CrossRef]
  14. Qi, J.; Zhang, B.; Lyu, B.; Marfurt, K. Seismic attribute selection for machine-learning-based facies analysis. Geophysics 2020, 85, O17–O35. [Google Scholar] [CrossRef]
  15. Chen, H.; Gao, J.; Zhang, W.; Yang, P. Seismic acoustic impedance inversion via optimization-inspired semisupervised deep learning. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5906611. [Google Scholar] [CrossRef]
  16. Wu, X.; Shi, Y.; Fomel, S.; Liang, L.; Zhang, Q.; Yusifov, A.Z. FaultNet3D: Predicting fault probabilities, strikes, and dips with a single convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9138–9155. [Google Scholar] [CrossRef]
  17. Liu, N.; He, T.; Tian, Y.; Wu, B.; Gao, J.; Xu, Z. Common-azimuth seismic data fault analysis using residual UNet. Interpretation 2020, 8, SM25–SM37. [Google Scholar] [CrossRef]
  18. Zhang, Y.; Liu, Y.; Zhang, H.; Xue, H. Seismic facies analysis based on deep learning. IEEE Trans. Geosci. Remote Sens. 2019, 17, 1119–1123. [Google Scholar] [CrossRef]
  19. Li, F.; Zhou, H.; Wang, Z.; Wu, X. ADDCNN: An attention-based deep dilated convolutional neural network for seismic facies analysis with interpretable spatial–spectral maps. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1733–1744. [Google Scholar] [CrossRef]
  20. Lin, J.; Li, H.; Liu, N.; Gao, J.; Li, Z. Automatic lithology identification by applying LSTM to logging data: A case study in X tight rock reservoirs. IEEE Trans. Geosci. Remote Sens. 2020, 18, 1361–1365. [Google Scholar] [CrossRef]
  21. Liu, N.; Huang, T.; Gao, J.; Xu, Z.; Wang, D.; Li, F. Quantum-enhanced deep learning-based lithology interpretation from well logs. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4503213. [Google Scholar] [CrossRef]
  22. Adler, A.; Araya-Polo, M.; Poggio, T. Deep learning for seismic inverse problems: Toward the acceleration of geophysical analysis workflows. IEEE Signal Proc. Mag. 2021, 38, 89–119. [Google Scholar] [CrossRef]
  23. Fang, W.; Fu, L.; Zhang, M.; Li, Z. Seismic data interpolation based on U-net with texture loss. Geophysics 2021, 86, V41–V54. [Google Scholar] [CrossRef]
  24. Yoon, D.; Yeeh, Z.; Byun, J. Seismic data reconstruction using deep bidirectional long short-term memory with skip connections. IEEE Trans. Geosci. Remote Sens. 2020, 18, 1298–1302. [Google Scholar] [CrossRef]
  25. Wei, Q.; Li, X.; Song, M. Reconstruction of irregular missing seismic data using conditional generative adversarial networks. Geophysics 2021, 86, V471–V488. [Google Scholar] [CrossRef]
  26. Zhang, S.B.; Si, H.J.; Wu, X.M.; Yan, S.S. A comparison of deep learning methods for seismic impedance inversion. Petrol. Sci. 2022, 19, 1019–1030. [Google Scholar] [CrossRef]
  27. Qian, F.; Liu, Z.; Wang, Y.; Liao, S.; Pan, S.; Hu, G. DTAE: Deep tensor autoencoder for 3-D seismic data interpolation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4503213. [Google Scholar] [CrossRef]
  28. Das, V.; Pollack, A.; Wollner, U.; Mukerji, T. Convolutional neural network for seismic impedance inversion. Geophysics 2019, 84, R869–R880. [Google Scholar] [CrossRef]
  29. Alfarraj, M.; AlRegib, G. Semi-supervised learning for acoustic impedance inversion. In Proceedings of the 2019 SEG International Exposition and Annual Meeting, San Antonio, TX, USA, 15–20 September 2019. [Google Scholar]
  30. Zhang, R.; Sun, Q.; Zhang, X.; Cui, L.; Wu, Z.; Chen, K.; Wang, D.; Liu, Q.H. Imaging hydraulic fractures under energized steel casing by convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8831–8839. [Google Scholar] [CrossRef]
  31. Li, Q.; Luo, Y. Using GAN priors for ultrahigh resolution seismic inversion. In Proceedings of the 2019 SEG International Exposition and Annual Meeting, San Antonio, TX, USA, 15–20 September 2019. [Google Scholar]
  32. Wang, Y.; Ge, Q.; Lu, W.; Yan, X. Well-logging constrained seismic inversion based on closed-loop convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5564–5574. [Google Scholar] [CrossRef]
  33. Adler, J.; Lunz, S. Banach Wasserstein GAN. In Proceedings of the 2018 Advances in Neural Information Processing Systems (NeurIPS), Montréal, QC, Canada, 3–8 December 2018. [Google Scholar]
  34. Liu, P.; Zhang, H.; Zhang, K.; Lin, L.; Zuo, W. Multi-level wavelet-CNN for image restoration. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  35. Wu, B.; Xie, Q.; Wu, B. Seismic impedance inversion based on residual attention network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4511117. [Google Scholar] [CrossRef]
  36. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  37. Zhang, D.; Wang, H.; Weng, C.; Shi, X. Video Human Action Recognition with Channel Attention on ST-GCN. In Proceedings of the 2021 International Conference on Computer Information Science and Application Technology (CISAT), Lanzhou, China, 30 July–1 August 2021. [Google Scholar]
  38. Benesty, J.; Chen, J.; Huang, Y.; Cohen, I. Pearson correlation coefficient. In Noise Reduction in Speech Processing, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 2, pp. 1–4. [Google Scholar]
Figure 1. Flowchart of the proposed model.
Figure 2. Detailed structure of the 1D DenseNet-121 network.
Figure 3. Schematic diagram of the dense connectivity of a six-layer DenseBlock.
Figure 4. 1D channel attention module (CAM).
Figure 5. Workflow of the Siamese network.
Figure 6. Experiment on real seismic data inversion: (a) real seismic data; (b) inversion result of SimpleCNN; (c) inversion result of ResNet; (d) inversion result of U-Net; (e) inversion result of DenseNet.
Figure 7. Local visual comparison of the ground truth and the impedance predicted by various networks for well 1.
Figure 8. Local visual comparison of the ground truth and predicted impedance by various networks for well 4.
Figure 9. Local visual comparison of the ground truth and predicted seismic impedance from various networks for the well in the test set.
Figure 10. Local visual comparison of the ground truth and predicted impedance from Siamese-derived networks for well 1.
Figure 11. Local visual comparison of the ground truth and predicted impedance from Siamese-derived networks for well 4.
Figure 12. Local visual comparison of the ground truth and predicted seismic impedance from Siamese-derived networks for the well in the test set.
Figure 13. Evaluation of the networks with and without velocity from wells 1, 4, and the test set well.
Table 1. Quantitative results comparing DenseNet with other mainstream network frameworks.

| Networks  | Final Training Loss (×10⁻²) | MSE    | MAE    | PCC    |
|-----------|-----------------------------|--------|--------|--------|
| SimpleCNN | 0.45464                     | 0.0292 | 0.1273 | 0.6063 |
| ResNet    | 0.89146                     | 0.0267 | 0.1265 | 0.5387 |
| U-Net     | 0.15672                     | 0.0285 | 0.1263 | 0.5764 |
| DenseNet  | 0.43863                     | 0.0198 | 0.1027 | 0.6179 |
Table 2. Quantitative results comparing the proposed model with other combined networks.

| Networks                    | Final Training Loss (×10⁻²) | MSE    | MAE    | PCC    |
|-----------------------------|-----------------------------|--------|--------|--------|
| DenseNet+Siam (Res)         | 0.13761                     | 0.0199 | 0.1006 | 0.6657 |
| DenseNet+Siam (Dense)       | 0.05668                     | 0.0203 | 0.1023 | 0.6620 |
| DenseNet+CAM+Siam (Dense)   | 0.05168                     | 0.0188 | 0.1004 | 0.7464 |