Article

Iterative Separation of Blended Seismic Data in Shot Domain Using Deep Learning

College of Geo-Exploration Science and Technology, Jilin University, Changchun 130026, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(22), 4167; https://doi.org/10.3390/rs16224167
Submission received: 9 October 2024 / Revised: 6 November 2024 / Accepted: 7 November 2024 / Published: 8 November 2024

Abstract

Accurate deblending techniques are essential for the successful application of blended seismic acquisition. Deep-learning-based deblending methods typically begin by performing a pseudo-deblending operation on the blended data, followed by further processing in either the common-shot domain or a non-common-shot domain. In this study, we propose an iterative deblending framework based on deep learning that addresses the blended data directly in the shot domain, eliminating the need for pseudo-deblending and domain transformation. The framework is built around a network architecture, termed WNETR, which derives its name from its W-shaped structure combining U-Net and Transformer. During testing, the trained WNETR is incorporated into the iterative framework to extract useful signals iteratively. Tests on synthetic data validate the effectiveness of the proposed iterative deblending framework.

1. Introduction

Seismic exploration methods can be categorized into active and passive source techniques. Active source exploration relies on controlled, human-induced seismic signals, which offer higher spatial resolution and improved signal-to-noise ratio under controlled conditions. In contrast, passive source exploration leverages natural ambient noise, allowing for continuous data acquisition in environments where active sources may not be feasible, such as for deep fault monitoring or seismic hazard assessment [1,2,3,4]. Blended acquisition, as an active method, improves the efficiency and imaging quality of seismic data acquisition by firing multiple shots in rapid succession [5,6,7]. However, interference from blending noise introduced by other sources can significantly diminish these benefits [8,9]. A viable solution is to decompose the blended data into separate records [10,11,12]. Given that blending noise closely resembles the useful signal, directly separating blended seismic data remains a challenging task.
Deep-learning-based deblending techniques can be viewed as denoising methods, where neural networks are employed to suppress blending noise [13]. These approaches typically require converting pseudo-deblended data from the shot domain into another domain. In non-common-shot gathers, the useful signal appears continuous, while blending noise from other seismic sources manifests as random noise [14,15]. As a result, denoising methods based on supervised and self-supervised learning have been introduced to suppress blending noise [16,17,18,19]. However, these methods rely on a pseudo-deblending operation to account for the delay time of source excitation. When the delay time is unavailable, these deblending techniques fail. Wang et al. [20] obtained pseudo-deblended data from other sources via phase shift. However, this approach requires a small distance between adjacent sources, limiting the generalizability of the trained neural network to new datasets.
With the rapid development of deep learning, increasing feature extraction capabilities now make it feasible to perform deblending operations directly on blended data [21]. This allows for the extraction of each seismic source’s independent contribution, bypassing the risk of damaging useful signals during denoising. While convolutional neural networks (CNNs) have been widely used in deblending due to their effectiveness, they struggle to capture long-range dependencies, limiting their performance in complex datasets [22,23,24,25,26]. In contrast, the self-attention mechanism of the Transformer model excels at capturing global dependencies by directly considering interactions between any two points in the data [27,28,29,30,31,32]. This adaptability in assigning weights based on the distribution of blending noise enables a more precise separation of noise from useful signals, enhancing the overall deblending accuracy.
In this study, we design an iterative framework for deblending blended seismic data using deep learning. We propose a hybrid network (WNETR) that combines a CNN and Transformer. The trained WNETR is then embedded in an iterative framework. In each iteration, WNETR directly estimates the useful signal and the blending noise. The blending noise is used as an input to the next iteration to estimate useful signals from other seismic sources. Finally, we test the effectiveness of the proposed iterative framework in numerical experiments.

2. Materials and Methods

2.1. Overview of Blended Acquisition

Blended acquisition is economical and efficient compared to conventional acquisition, in which each source is triggered sequentially with a large time interval to avoid signal interference. The blended acquisition geometry is shown in Figure 1. The receivers $r_i,\ i = 1, 2, \dots, N$ record signals from all sources $s_j,\ j \in S$ almost simultaneously, and the blended data $D(t, r_i)$ can be expressed as:
$$D(t, r_i) = \sum_{j \in S} d(t + \tau_j, s_j, r_i),$$
where $d(t + \tau_j, s_j, r_i)$ denotes the raw seismic data of source $j$ and $\tau_j$ is the dithering time, i.e., the delay introduced by the blending process.
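The blending model above can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' implementation: the array shapes, the sample-domain dithering, and the function name are assumptions for this example.

```python
import numpy as np

def blend(records, taus, n_t_out):
    """Sum time-shifted single-source records into one blended gather.

    records : list of (n_t, n_rec) arrays, one raw record d per source s_j
    taus    : list of dithering delays tau_j, in samples
    n_t_out : length of the blended record in samples
    """
    n_rec = records[0].shape[1]
    D = np.zeros((n_t_out, n_rec))
    for d, tau in zip(records, taus):
        n_t = d.shape[0]
        D[tau:tau + n_t, :] += d  # place source j at its dithered firing time
    return D
```

Because the sum runs over all sources in $S$, every receiver trace contains the target source's signal overlaid with the contributions of the other, dither-shifted sources.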
In the blended data, the signal generated by the target source is regarded as the useful signal, whereas the signals from other sources are classified as blending noise. The coherence characteristics of the blending noise closely resemble those of the useful signal. However, after the pseudo-deblended data are converted from the shot domain to another domain, the blending noise is no longer coherent and behaves as random noise. This allows the target source data to be separated by suppressing the blending noise. Consequently, existing research on deep-learning-based deblending is predominantly built on neural networks performing denoising.

2.2. Deep Learning for Iterative Separation of Blended Data

We employ deep learning to separate the blended data to overcome the practical challenges of the blended acquisition technique. In the framework of supervised learning, we develop the WNETR to obtain the useful signal of each source from the blended seismic data. The input of the WNETR is the blended data and the three outputs are the useful signal to be separated, the blending noise, and the reconstructed blended data.

2.2.1. Dataset and Training Labels

In this work, we extract and refine six two-dimensional models from the complex Overthrust model to generate the training and test sets [33]. These velocity models are shown in Figure 2. Each model has a horizontal extent of 0.96 km and a vertical extent of 0.64 km, with a grid spacing of 10 m. The training set is generated from the first five models, and the last model is used to generate the test set. We use the structural similarity index (SSIM) to measure the similarity between two models [34]. SSIM equals 1 when two velocity models are identical and approaches 0 when they are completely different. The SSIM between the test velocity model and each training velocity model ranges from 0.2 to 0.5.
We use the finite difference method to solve the acoustic wave equation in the time domain to generate synthetic seismic records. The receivers and sources are located at each grid point on the surface. The source wavelet is a Ricker wavelet with a peak frequency of 30 Hz. The sampling rate is 1 ms, and 0.512 s of seismic data are recorded on each velocity model. To create the blended data, we assume a maximum of four sources contributing to each record and divide the sources into four distinct groups. In generating the blended records, we randomly select between two and four sources from these groups, ensuring that no more than one source is taken from each group. This approach, illustrated in Figure 3, ensures a controlled overlapping of signals, simulating realistic blending scenarios while maintaining some degree of separation among sources.
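The grouping rule described above (two to four sources per blended record, at most one from each of the four groups) can be sketched as follows; the function and variable names are assumptions for illustration, not the authors' code.

```python
import random

def pick_sources(groups, rng):
    """Pick 2-4 sources for one blended record, at most one per group.

    groups : list of lists, each holding the source indices of one group
    rng    : a random.Random instance for reproducible draws
    """
    k = rng.randint(2, 4)                       # blending degree of this record
    chosen = rng.sample(range(len(groups)), k)  # k distinct groups
    return [rng.choice(groups[g]) for g in chosen]
```

Sampling the groups first (without replacement) is what guarantees that no two selected sources come from the same group.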
Figure 4 illustrates the input blended data from four sources along with their associated labels. We designate the signal from the first source in the blended data as training label 1, as shown in Figure 4a; label 1 guides the model in learning how to extract useful signals from blended data during training. Besides the useful signal, the blended data also contain blending noise from the other seismic sources, which we define as training label 2; label 2 represents the interfering signals that the model must identify and remove. We define the original blended data as training label 3 to ensure the model comprehends the input data comprehensively. To achieve continuous and effective separation, we adopt an iterative process: in each iteration, we use label 2 from Figure 4c as the new training data and generate three corresponding labels, as depicted in Figure 5. This process is repeated iteratively, with the blending noise in label 2 increasingly approximating single-source data.
Figure 6 displays blended data containing two sources, where label 1 corresponds to the useful signal of one source and label 2 represents the useful signal of the other source. On each velocity model, we first generate 300 blended records for each blending degree (two, three, and four sources). Blended data with four sources can be successively separated into blended data with three and then two sources, and blended data with three sources can likewise be separated into blended data with two sources. We thus obtain a total of 9000 training data: 1500 four-source records, 3000 three-source records, and 4500 two-source records.

2.2.2. Network Architecture

We propose the WNETR to directly separate blended seismic data. WNETR is inspired by the UNETR architecture [35]. UNETR, which combines the advantages of Transformer and U-Net [36], has demonstrated significant success in medical image segmentation tasks. The architecture of WNETR is shown in Figure 7. It contains one encoder and two decoders. The encoder extracts features from the blended data using both Transformer and CNN techniques. The two decoders are responsible for predicting the useful signal and blending noise, respectively. Additionally, the sum of the two outputs is designed to be consistent with the original blended data.
The core of the Transformer lies in its ability to capture dependencies within input data directly through the self-attention mechanism, bypassing convolutional operations, which enables efficient data processing. Initially, the blended data are divided into non-overlapping patches, which are then projected into a K-dimensional embedding space. This sequence of embedding vectors, with added positional encoding, is fed into the Transformer. The Transformer encoder is composed of multiple identical modules, each incorporating two key components: the multi-head self-attention (MSA) mechanism and a multi-layer perceptron (MLP). MSA allows the model to simultaneously focus on different regions of the blended seismic data, capturing diverse contextual information through its multiple attention heads. The MLP enhances the model’s non-linear representation capability, allowing it to learn more complex function mappings. However, the Transformer alone may not fully capture all the essential features within the data. To address this, researchers often integrate CNNs with Transformers to extract complementary features and improve overall model performance [37,38,39,40]. CNNs excel at capturing local patterns, and their combination with Transformers makes the model more flexible and efficient for complex tasks. Thus, WNETR incorporates CNNs to extract additional features from the blended data, complementing the Transformer’s capabilities.
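The patch-embedding and self-attention steps described above can be sketched in numpy. This is a simplified, single-head illustration under stated assumptions: the patch size is arbitrary, the learned projection matrices are random stand-ins, and the paper's multi-head attention, K-dimensional embedding, and positional encoding are omitted for brevity.

```python
import numpy as np

def patchify(gather, p):
    """Split an (H, W) shot gather into non-overlapping p x p patches."""
    H, W = gather.shape
    patches = gather.reshape(H // p, p, W // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)            # (num_patches, p*p)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over the patch sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])      # every patch vs. every patch
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V
```

The `scores` matrix makes the global dependency explicit: each patch's output is a weighted mixture of every other patch, which is what lets the model weight blending noise differently from coherent signal regardless of their spatial separation.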
The decoder consists of multiple upsampling modules and convolutional layers arranged sequentially. These upsampling modules gradually increase the spatial resolution of the feature map to recover detailed information, with bilinear interpolation used for upsampling. Additionally, the decoder fuses the upsampled feature map with multi-resolution features generated by the encoder. Through skip connections, the decoder can access information from earlier stages of the encoder, which is critical for preserving essential details of the seismic waveforms. This approach allows the decoder to fully leverage multi-level information, enhancing the deblending performance. The decoder has three distinct output layers. The final layer of decoder 1 outputs the useful signal, while the final layer of decoder 2 outputs the blending noise. By summing these two outputs, the reconstructed raw data can be obtained. This strategy effectively applies a self-supervised learning method, enabling the encoder to learn essential features from the input data more efficiently by reconstructing the original data. In doing so, the model not only learns to separate the useful signal from the blending noise but also ensures the accurate preservation or recovery of the useful signal. This dual-objective learning process enhances the encoder’s feature extraction capability, ultimately improving the accuracy of useful signal separation.
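A single decoder step (bilinear upsampling followed by skip-connection fusion) can be sketched as below. The single-channel shapes and the stacking-based "concatenation" are simplifying assumptions for illustration; the actual decoders operate on multi-channel feature maps.

```python
import numpy as np

def upsample2x(x):
    """2x bilinear upsampling of a single-channel (H, W) feature map."""
    H, W = x.shape
    ri = np.linspace(0, H - 1, 2 * H)            # fractional row positions
    ci = np.linspace(0, W - 1, 2 * W)            # fractional column positions
    rows = np.stack([np.interp(ri, np.arange(H), x[:, j]) for j in range(W)],
                    axis=1)                      # interpolate along rows
    return np.stack([np.interp(ci, np.arange(W), rows[i])
                     for i in range(2 * H)], axis=0)  # then along columns

def decoder_step(feature, skip):
    """Upsample a decoder feature map and fuse it with the encoder skip."""
    up = upsample2x(feature)
    return np.stack([up, skip], axis=0)          # channel-wise concatenation
```

The skip tensor carries the encoder's earlier, higher-resolution features, which is what preserves fine waveform detail through the upsampling path.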

2.2.3. Loss Function

We train WNETR using a joint loss of SSIM and L1 [41,42]. The SSIM is defined as follows:
$$Loss_{SSIM} = 1 - \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},$$
where $x$ and $y$ denote the predicted data and the labels, respectively; $\mu_x$ and $\sigma_x$ denote the mean and standard deviation of $x$; $\mu_y$ and $\sigma_y$ denote the mean and standard deviation of $y$; $\sigma_{xy}$ denotes the covariance between $x$ and $y$; and $c_1$ and $c_2$ are very small constants.
The L1 is defined as follows:
$$Loss_{L1} = \frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\left| x_{ij} - y_{ij} \right|,$$
where $x_{ij}$ and $y_{ij}$ denote the pixel in the $i$th row and $j$th column of $x$ and $y$, respectively.
The loss for each task is expressed as follows:
$$L_{SSIM+L1} = Loss_{SSIM} + Loss_{L1}.$$
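The joint loss can be written compactly in numpy. This is a sketch under assumptions: SSIM is computed here from global (whole-image) statistics since the paper does not specify a window, and the constants $c_1$, $c_2$ are placeholder values.

```python
import numpy as np

def ssim_loss(x, y, c1=1e-4, c2=9e-4):
    """1 - SSIM, computed from global image statistics (window is assumed)."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()           # covariance between x and y
    ssim = ((2 * mx * my + c1) * (2 * sxy + c2)) / (
        (mx**2 + my**2 + c1) * (sx**2 + sy**2 + c2))
    return 1.0 - ssim

def l1_loss(x, y):
    """Mean absolute error over the n x m output."""
    return np.abs(x - y).mean()

def joint_loss(x, y):
    """Per-task loss: Loss_SSIM + Loss_L1."""
    return ssim_loss(x, y) + l1_loss(x, y)
```

The SSIM term rewards structural agreement of the waveforms while the L1 term penalizes amplitude errors pixel by pixel; the loss is zero exactly when prediction and label coincide.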
We use the Adam optimizer to update the network parameters [43]. The learning rate follows an exponential decay schedule, with an initial learning rate of $2 \times 10^{-4}$ and a decay factor of 0.994. The network is trained for a total of 600 epochs.

2.2.4. Iterative Framework for Deblending

The trained WNETR is embedded in an iterative framework for deblending, as shown in Figure 8. The number of iterations must be predetermined based on the degree of blending. In each iteration, the network generates three outputs: the first is the deblending result for the first source, the second is the blending noise, and the third is the reconstructed blended data, which aids in WNETR’s training. However, the iterative framework utilizes only the first two outputs. Rather than using the first output directly as the deblending result, we combine the first and second outputs to obtain the final deblending result:
$$d_i = \frac{1}{2}\left[ f_1^i(x) + \left( x - f_2^i(x) \right) \right],$$
where $d_i$ is the deblending result at iteration $i$, $x$ is the input data of the WNETR, and $f_1^i(x)$ and $f_2^i(x)$ are the first and second outputs of the WNETR, respectively.
The blending noise used as the input for the next iteration is given by:
$$n_i = \frac{1}{2}\left[ f_2^i(x) + \left( x - f_1^i(x) \right) \right].$$
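One pass of the iterative framework can be sketched as follows. The toy masks standing in for the trained WNETR's two outputs, and the function names, are assumptions for illustration only.

```python
import numpy as np

def deblend_iteratively(x, f1, f2, n_iter):
    """Peel one source off the blended data per iteration.

    f1, f2 : callables standing in for WNETR's useful-signal and
             blending-noise outputs at each iteration
    """
    results = []
    for _ in range(n_iter):
        s, n = f1(x), f2(x)
        d_i = 0.5 * (s + (x - n))    # averaged deblending result
        n_i = 0.5 * (n + (x - s))    # residual noise, input to next iteration
        results.append(d_i)
        x = n_i
    return results, x
```

When the two estimates are consistent (i.e., they sum to the input), the averaging leaves them unchanged; when they disagree, it splits the discrepancy evenly between signal and residual.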

3. Results

To evaluate the effectiveness of the proposed method, we use an independent test set for performance analysis. The test data follow the same blending principles as the training data, so the test and training samples share the same construction process and therefore some statistical similarity. Using test samples prepared in this way allows us to assess the generalizability of the model under comparable acquisition conditions.
The test data are fed into the iterative deblending framework. Figure 9 shows blended data with a mixing degree of 2, along with the predicted deblending results. Since the input data have a mixing degree of 2, the deblending operation is completed in a single iteration. Notably, the first output of WNETR is not taken directly as the final deblending result: subtracting the second output of WNETR from the input data also yields a deblending estimate, and we merge the two estimates to improve the precision of the deblending. This dual estimation captures subtle features of the signal from different angles, improving the overall deblending accuracy. Figure 10 shows the residuals between the final deblending results and the original unblended data. The relatively small residuals indicate that our method can effectively capture and reconstruct the complex structure of useful signals. Even for previously unseen blended data, our deblending method maintains a high level of accuracy.
The blended data with a mixing degree of 4 are shown in Figure 11. A higher degree of mixing implies more complex interactions between the seismic sources, increasing the difficulty of the deblending task. To address this, we employ an ordered deblending strategy: the deblending operation is performed sequentially on the blended data in a left-to-right direction, with the order determined by the training labels. In the iterative process, WNETR first processes the leftmost useful signals and then separates the signals from the different seismic sources one by one. Figure 12 shows the first two outputs of the first iteration, representing the useful signal of the first source and the blending noise generated by the other seismic sources, respectively. Despite the complexity of the blended data, the useful signal from the first source is successfully separated in the first iteration. The blending noise in Figure 12b still contains the useful signals from the other seismic sources and is used as input for the second iteration; this input excludes the useful signals separated in the previous iteration. Since the useful signals of the first source have been separated, the signal of the second source is now located on the leftmost side. Figure 13 shows the deblending results and blending noise from the second iteration.
After two iterations, the remaining blending noise contains signals from only two sources. The results of the third iteration are shown in Figure 14: the useful signals from the third and fourth sources are successfully separated. Within three iterations, the framework extracts the useful signal of each source in the blended data individually, and the signals are recovered with high quality. The effects of the blending noise are largely eliminated, which demonstrates the capability of this iterative deblending framework on complex blended seismic data.
To further validate the auxiliary effect of self-supervised learning, we remove the third output of WNETR and retrain the remaining network. The training data and training details are consistent with those of WNETR. Figure 15b,d show the deblending results obtained by this network for the blended data in Figure 11. While the useful signals from each seismic source are generally separated, the finer details of these signals are not fully recovered. Comparing these results with the WNETR results in Figure 15a,c verifies the contribution of the third branch in WNETR. The WNETR results demonstrate superior accuracy, particularly in recovering finer waveform details. The self-supervised reconstruction objective enables WNETR's encoder to identify and extract more effective features, facilitating better recovery of the original signal from complex blended data and improving separation accuracy. Thus, with the support of self-supervised learning, WNETR not only effectively separates signals from different sources but also enhances the quality of signal detail recovery.

4. Discussion

In this study, we incorporate WNETR within an iterative framework to address the deblending challenge for blended seismic data. We first rely on the Overthrust model to fully train and optimize WNETR; the diverse training data enable the network to capture the complex features in seismic data. The well-trained WNETR demonstrates strong deblending performance, accurately discriminating and separating useful signals from individual sources within complex blended datasets. Given the diversity and uncertainty of blended data, we introduce an iterative strategy that enables the network to progressively extract useful signals from different sources; embedding WNETR in this framework enhances its adaptability to various blended data scenarios. It is worth noting, however, that errors may accumulate during the iterative deblending process: as the number of iterations increases, small errors from earlier steps may be amplified in later stages, degrading the deblending result. Error accumulation can be mitigated by carefully designing the iterative strategy and further improving the network architecture. Our method is a supervised learning approach and therefore requires a sufficient quantity of good-quality raw unblended data as training labels; obtaining such clean unblended training data in practice is difficult. In addition, we focus on 2D numerical experiments to validate the effectiveness of the proposed method; application to field blended data is left to future research.

5. Conclusions

We propose a deblending method to separate blended seismic data using deep learning. To deal with the uncertainty of the prior information, we design an iterative framework to gradually separate different sources. WNETR extracts local and global information of the blended data by combining a CNN and Transformer to improve the separation accuracy of each source. The self-supervised learning mechanism further aids WNETR in improving the recovery quality of signal details during the deblending process. With minimal preprocessing steps, the trained WNETR effectively recovers useful signals from each source within the blended seismic data. Numerical experiments demonstrate that our proposed method achieves considerable accuracy in deblending. This technique can help to efficiently separate seismic signals and improve the efficiency and accuracy of seismic exploration.

Author Contributions

Data curation, L.M. and P.Z.; Funding acquisition, L.H.; Methodology, L.M.; Software, L.M.; Supervision, L.H.; Validation, P.Z.; Writing—original draft, L.M.; Writing—review and editing, L.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 42130805, No. 42074154, and No. 42374147).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, L.; Wang, Y.; Zheng, Y.; Chang, X. Deblending using a high-resolution radon transform in a common midpoint domain. J. Geophys. Eng. 2015, 12, 167–174. [Google Scholar] [CrossRef]
  2. Wang, K.X.; Mao, W.J.; Zhang, Q.C.; Li, W.Q.; Zhan, Y.; Sun, Y.S. A direct inversion method for deblending simultaneous-source data. Oil Geophys. Prospect. 2020, 55, 17–28. (In Chinese) [Google Scholar] [CrossRef]
  3. Yan, J.; Zhang, L.S.; Hong, H.T.; Huang, Q.C.; Lei, Y.H.; Liao, X.L.; Gui, L.B.; Yang, X. Application of Ambient Noise and Dense Seismic Array Imaging Techniques in Goaf Detection Beneath Coal Mines at Haerwusu. CT Theory Appl. 2023, 32, 461–470. (In Chinese) [Google Scholar] [CrossRef]
  4. Varotsos, P.K.; Sarlis, N.V. Green’s Function, Earthquakes, and a Fast Ambient Noise Tomography Methodology. Appl. Sci. 2024, 14, 697. [Google Scholar] [CrossRef]
  5. Berkhout, A.J.G. Changing the mindset in seismic data acquisition. Lead. Edge 2008, 27, 924–938. [Google Scholar] [CrossRef]
  6. Blacquière, G.; Berkhout, G.; Verschuur, E. Double illumination in blended acquisition. In SEG Technical Program Expanded Abstracts 2011; Society of Exploration Geophysicists: Houston, TX, USA, 2011; pp. 11–15. [Google Scholar] [CrossRef]
  7. Beasley, C.J.; Dragoset, B.; Salama, A. A 3D simultaneous source field test processed using alternating projections: A new active separation method. Geophys. Prospect. 2012, 60, 591–601. [Google Scholar] [CrossRef]
  8. Mahdad, A.; Doulgeris, P.; Blacquiere, G. Separation of blended data by iterative estimation and subtraction of blending interference noise. Geophysics 2011, 76, Q9–Q17. [Google Scholar] [CrossRef]
  9. Li, C.; Mosher, C.C.; Morley, L.C.; Ji, Y.; Brewer, J.D. Joint source deblending and reconstruction for seismic data. In SEG Technical Program Expanded Abstracts 2013; Society of Exploration Geophysicists: Houston, TX, USA, 2013; pp. 82–87. [Google Scholar] [CrossRef]
  10. Kumar, R.; Wason, H.; Herrmann, F.J. Source separation for simultaneous towed-streamer marine acquisition—A compressed sensing approach. Geophysics 2015, 80, WD73–WD88. [Google Scholar] [CrossRef]
  11. Chen, Y.; Zu, S.; Wang, Y.; Chen, X. Deblending of simultaneous source data using a structure-oriented space-varying median filter. Geophys. J. Int. 2020, 222, 1805–1823. [Google Scholar] [CrossRef]
  12. Moore, I.; Dragoset, B.; Ommundsen, T.; Wilson, D.; Ward, C.; Eke, D. Simultaneous source separation using dithered sources. In SEG Technical Program Expanded Abstracts 2008; Society of Exploration Geophysicists: Houston, TX, USA, 2008; pp. 2806–2810. [Google Scholar] [CrossRef]
  13. Zu, S.; Cao, J.; Qu, S.; Chen, Y. Iterative deblending for simultaneous source data using the deep neural network. Geophysics 2020, 85, V131–V141. [Google Scholar] [CrossRef]
  14. Hampson, G.; Stefani, J.; Herkenhoff, F. Acquisition using simultaneous sources. Lead. Edge 2008, 27, 918–923. [Google Scholar] [CrossRef]
  15. Zu, S.; Zhou, H.; Chen, Y.; Qu, S.; Zou, X.; Chen, H.; Liu, R. A periodically varying code for improving deblending of simultaneous sources in marine acquisition. Geophysics 2016, 81, V213–V225. [Google Scholar] [CrossRef]
  16. Sun, J.; Slang, S.; Elboth, T.; Larsen Greiner, T.; McDonald, S.; Gelius, L. A convolutional neural network approach to deblending seismic data. Geophysics 2020, 85, WA13–WA26. [Google Scholar] [CrossRef]
  17. Xue, Y.; Chen, Y.; Jiang, M.; Duan, H.; Niu, L.; Chen, C. Unsupervised seismic data deblending based on the convolutional autoencoder regularization. Acta Geophys. 2022, 70, 1171–1182. [Google Scholar] [CrossRef]
  18. Wang, B.; Li, J.; Han, D. Iterative deblending using MultiResUNet with multilevel blending noise for training and transfer learning. Geophysics 2022, 87, V205–V214. [Google Scholar] [CrossRef]
  19. Chen, X.; Wang, B. Self-supervised multistep seismic data deblending. Surv. Geophys. 2024, 45, 383–407. [Google Scholar] [CrossRef]
  20. Wang, K.; Hu, T.; Zhao, B.; Wang, S. An Unsupervised Deep Learning Method for Direct Seismic Deblending in Shot Domain. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5914012. [Google Scholar] [CrossRef]
  21. Li, X.G.; Wu, X. Progresses of artificial intelligence on seismic data processing and interpretation reviewed from SEG annual meetings. World Pet. Ind. 2020, 27, 27–35. [Google Scholar]
  22. Wang, N.; Shi, Y.; Ni, J.; Fang, J.; Yu, B. Enhanced Seismic Attenuation Compensation: Integrating Attention Mechanisms With Residual Learning in Neural Networks. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5927011. [Google Scholar] [CrossRef]
  23. Wang, D.L.; Zhou, H.L.; Wang, Y.J.; Lv, F.; He, P.Y. Seismic data denoising method based on feedforward denoising convolution neural network. Comput. Tech. Geophys. Geochem. Explor. 2023, 45, 17–27. (In Chinese) [Google Scholar] [CrossRef]
  24. Tao, L.; Ren, H.; Gu, Z. Acoustic Impedance Inversion from Seismic Imaging Profiles Using Self Attention U-Net. Remote Sens. 2023, 15, 891. [Google Scholar] [CrossRef]
  25. Zhao, H.; Zhou, Y.; Bai, T.; Chen, Y. A U-Net Based Multi-Scale Deformable Convolution Network for Seismic Random Noise Suppression. Remote Sens. 2023, 15, 4569. [Google Scholar] [CrossRef]
  26. Feng, Q.; Han, L.; Pan, B.; Zhao, B. Microseismic source location using deep reinforcement learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–9. [Google Scholar] [CrossRef]
  27. Arkin, E.; Yadikar, N.; Xu, X.; Aysa, A.; Ubul, K. A survey: Object detection methods from CNN to transformer. Multimed. Tools Appl. 2023, 82, 21353–21383. [Google Scholar] [CrossRef]
  28. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
  29. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  30. Feng, Q.; Han, L.; Ma, L.; Li, Q. High-precision microseismic source localization using a fusion network combining convolutional neural network and transformer. Surv. Geophys. 2024, 45, 1527–1560.
  31. Li, F.; Liu, H.; Wang, W.; Ma, J. Swin Transformer for Seismic Denoising. IEEE Geosci. Remote Sens. Lett. 2024, 21, 7501905.
  32. Zhang, Z.; Chen, R.; Ma, J. Improving Seismic Fault Recognition with Self-Supervised Pre-Training: A Study of 3D Transformer-Based with Multi-Scale Decoding and Fusion. Remote Sens. 2024, 16, 922.
  33. Aminzadeh, F.; Burkhard, N.; Long, J.; Kunz, T.; Duclos, P. Three dimensional SEG/EAEG models—An update. Lead. Edge 1996, 15, 131–134.
  34. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  35. Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. UNETR: Transformers for 3D medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 574–584.
  36. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Part III, pp. 234–241.
  37. Yuan, F.; Zhang, Z.; Fang, Z. An effective CNN and Transformer complementary network for medical image segmentation. Pattern Recognit. 2023, 136, 109228.
  38. Luo, X.; Hu, M.; Song, T.; Wang, G.; Zhang, S. Semi-supervised medical image segmentation via cross teaching between CNN and transformer. In Proceedings of the International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, 6–8 July 2022; pp. 820–833.
  39. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
  40. Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lungren, M.P.; et al. TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med. Image Anal. 2024, 97, 103280.
  41. Yu, J.; Wu, B. Attention and Hybrid Loss Guided Deep Learning for Consecutively Missing Seismic Data Reconstruction. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5902108.
  42. Manwar, R.; Li, X.; Mahmoodkalayeh, S.; Asano, E.; Zhu, D.; Avanaki, K. Deep learning protocol for improved photoacoustic brain imaging. J. Biophotonics 2020, 13, e202000212.
  43. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Figure 1. Blended acquisition.
Figure 2. Velocity models. (a) Velocity models used to generate the training set. (b) Velocity model used to generate the test set.
Figure 3. The generation of blended data. The four colored dots indicate seismic sources classified into four groups.
Figure 4. The blended data and their labels. (a) Blended seismic data from four sources. (b) Deblending results as training label 1. (c) Blending noise as training label 2.
Figure 5. The blended data and their labels. (a) Blending noise in Figure 4c serves as the blended seismic data for the three sources. (b) Deblending results as training label 1. (c) Blending noise as training label 2.
Figure 6. The blended data and their labels. (a) Blending noise in Figure 5c serves as the blended seismic data for the two sources. (b) Deblending results as training label 1. (c) Blending noise as training label 2.
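The recursive label-generation scheme of Figures 4–6 can be sketched as follows. This is a minimal illustration, not the authors' simulation code: the record shapes, the `np.roll` time shift standing in for per-source firing delays, and the helper name `make_training_pairs` are all assumptions.

```python
import numpy as np

def make_training_pairs(shot_records, delays):
    """Recursively build (blended input, signal label, noise label) triples.

    At each level, the first source's time-shifted record is training
    label 1 (the deblending result) and the superposition of the
    remaining sources is training label 2 (the blending noise); that
    noise then serves as the blended input one level down, with one
    source fewer, until only a single source remains.
    """
    pairs = []
    while len(shot_records) >= 2:
        shifted = [np.roll(r, d, axis=0) for r, d in zip(shot_records, delays)]
        blended = np.sum(shifted, axis=0)   # blended data with k sources
        signal = shifted[0]                 # training label 1
        noise = blended - signal            # training label 2
        pairs.append((blended, signal, noise))
        # The residual noise is the blended data of the remaining k-1 sources.
        shot_records = shot_records[1:]
        delays = delays[1:]
    return pairs
```

With four source records this yields three training triples, and the noise label of each level equals the blended input of the next, mirroring how Figure 5a reuses the noise of Figure 4c and Figure 6a reuses that of Figure 5c.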
Figure 7. The architecture of WNETR.
Figure 8. The iterative framework for deblending. N represents the maximum iteration number and n represents the current number of iterations.
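The iterative loop of Figure 8 can be sketched as below, under stated assumptions: `predict` stands in for the trained WNETR (here any callable mapping a blended record to a predicted signal and predicted blending noise), and the stopping rule is simply the maximum iteration number N.

```python
import numpy as np

def iterative_deblend(blended, predict, max_iter):
    """Iteratively peel useful signals off blended shot-domain data.

    At iteration n the network separates the current input into a
    useful signal and residual blending noise; the noise is fed back
    as the input of iteration n+1, until n reaches max_iter (N).
    """
    signals = []
    residual = blended
    for n in range(max_iter):
        signal, noise = predict(residual)
        signals.append(signal)      # deblended record of the n-th source
        residual = noise            # remaining blended energy
    return signals, residual
```

Plugging in a trained network for `predict` reproduces the flow of Figures 12–14, where each iteration extracts one source's record from the four-source blend of Figure 11.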
Figure 9. (a) A test sample with a mixing degree of 2. (b) The predicted deblending results of the first source. (c) The predicted deblending results of the second source.
Figure 10. (a) Residuals of the deblending results of the first source and the original unblended data. (b) Residuals of the deblending results of the second source and the original unblended data.
Figure 11. The blended data containing four sources.
Figure 12. The result of the first iteration. (a) Deblending results. (b) Blending noise.
Figure 13. The result of the second iteration. (a) Deblending results. (b) Blending noise.
Figure 14. The result of the third iteration. (a) Deblending results. (b) Blending noise.
Figure 15. Comparison of deblending results. (a) The useful signal of the first source obtained by our method. (b) The useful signal of the first source obtained by a method without self-supervised learning. (c) The useful signal of the second source obtained by our method. (d) The useful signal of the second source obtained by a method without self-supervised learning.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ma, L.; Han, L.; Zhang, P. Iterative Separation of Blended Seismic Data in Shot Domain Using Deep Learning. Remote Sens. 2024, 16, 4167. https://doi.org/10.3390/rs16224167
