Article

An Automated Quantitative Methodology for Computing Gravel Parameters in Imaging Logging Leveraging Deep Learning: A Case Analysis of the Baikouquan Formation within the Mahu Sag

1 State Key Laboratory of Shale Oil and Gas Enrichment Mechanisms and Effective Development, Beijing 102206, China
2 Sinopec Key Laboratory of Shale Oil/Gas Exploration and Production Technology, Beijing 100083, China
3 Petroleum Exploration & Development Research Institute of SINOPEC, Beijing 100083, China
4 Southwest Logging & Control Company, Sinopec Matrix Corporation, Chengdu 610059, China
5 Chengdu North Petroleum Exploration and Development Technology Co., Ltd., Chengdu 610059, China
6 China Petroleum Qinghai Oilfield Exploration and Development Research Institute, Dunhuang 736202, China
7 College of Geophysics, Chengdu University of Technology, Chengdu 610059, China
* Authors to whom correspondence should be addressed.
Processes 2024, 12(7), 1337; https://doi.org/10.3390/pr12071337
Submission received: 20 May 2024 / Revised: 10 June 2024 / Accepted: 18 June 2024 / Published: 27 June 2024
(This article belongs to the Special Issue Coal Mining and Unconventional Oil Exploration)

Abstract

Gravels are widely distributed in the Baikouquan formation of the Mabei area, Junggar Basin. However, conventional logging methods cannot quantitatively characterize gravel development, which limits the identification of lithology, structure, and sedimentary facies in this region. This study proposes a new method for automatically identifying gravels in electric imaging images and calculating gravel parameters using a salient object detection (SOD) network. First, the blank strips in electric imaging images from the Baikouquan formation in the Mahu Sag were filled using a U-Net convolutional neural network model. Sample sets were then prepared: the gravel areas were labeled in the electric imaging images with the Labelme software, combined with image segmentation and human-machine interaction. These sample sets were used to train a SOD network model (U2-Net), enabling the automatic recognition of gravel areas and the segmentation of adhesive gravel regions in the electric imaging images. Based on the segmented gravel results, quantitative evaluation parameters such as particle size and gravel quantity were calculated. The method's validity was confirmed on validation sets and field data. The approach improves the accuracy and processing speed of adhesive-area segmentation while effectively reducing human error. The trained network model achieved a mean absolute error of 0.048 on the test sets, with a recognition accuracy of 83.7%. This method provides algorithmic support for the refined logging evaluation of glutenite reservoirs.

1. Introduction

Glutenite reservoirs commonly feature substantial heterogeneity and a complex lithology. The particle size and roundness of gravels in a formation are key to judging reservoir physical properties and sedimentary facies [1,2,3,4,5]. Therefore, calculating gravel parameters is essential for accurately evaluating glutenite reservoirs. Rock cores provide rich lithological information, but coring is time-consuming, costly, and difficult to carry out over multiple wells and full well sections, which significantly limits detailed research on glutenite reservoirs [6]. Conventional logging data, in turn, are one-dimensional, whereas gravel appears as a two-dimensional feature in images; conventional curves therefore cannot reflect quantitative characteristics such as the grain size and roundness of the gravel, making it impossible to quantitatively calculate gravel parameters or describe the developmental characteristics of the gravel.
Formation micro-resistivity imaging (FMI) logging involves emitting current into the formation through an electrode plate, recording the resulting current data, and processing these data to create a colored resistivity image. The sampling interval of FMI is 0.1 inches, and the vertical resolution is 5 mm [7,8,9]. The FMI resistivity image provides an intuitive reflection of the gravel’s shape, size, and spatial location. Using digital image processing technology, parameters such as particle size, quantity, roundness, and size distribution of gravels can be quantitatively calculated [10].
Traditional methods calculate parameters such as the average gray value, variance, saturation, and entropy of the FMI images of different lithologies in conglomerate reservoirs, exploiting the different gray values of the lithologies. These methods use core data to determine the threshold values corresponding to the different grain sizes of conglomerates and qualitatively describe the development of gravel [11,12,13]. Image processing methods, such as multipoint geostatistical methods or convolutional neural networks, fill the blank strips in FMI images; the gravel regions are then segmented by threshold segmentation, and gravel parameters are calculated for lithology identification. Although threshold segmentation can delineate the gravel regions in FMI images, finding an appropriate threshold is challenging and time-consuming, and the segmentation accuracy is affected by high-resistivity minerals [14,15]. The morphological component analysis algorithm has also been used to fill the blank strips: edge detection extracts the edges of gravel in FMI images, sedimentary roundness constraints separate adhered gravel particles, and the extracted gravel parameters are calculated for lithology identification. However, the edge detection algorithm has a high computational time complexity and is not suitable for processing entire well sections across multiple wells [16].
In the decade since the resurgence of deep learning research with AlexNet, deep neural network (DNN) models such as ResNet, VGG, ResNeXt, and DenseNet have emerged and been widely applied across various fields [17,18,19,20,21]. In petroleum geology, researchers have used DNN models such as DeepLabv3+ and SegNet to automatically identify fracture areas in electric imaging images and calculate parameters such as fracture dip for reservoir evaluation, effectively improving efficiency and reducing the influence of subjective human factors [22,23]. However, research applying deep learning to gravel recognition remains relatively rare. The predominant approach uses FMI glutenite image data and incorporates a Dual Serial Attention Module (DSAM) into the semantic segmentation network DeeplabV3+ to train an FMI gravel recognition model; this method demonstrates good recognition performance and minimizes the impact of human factors on gravel identification [24]. However, the network's depth and large number of parameters make obtaining an optimal trained model time-consuming. To comprehensively improve the accuracy and efficiency of gravel identification in FMI images and to reduce the influence of subjective human factors, new methods for gravel identification and parameter calculation are needed.
Salient object detection (SOD) accurately identifies and locates the most visually significant target areas in an image for segmentation [25,26,27]. The U2-Net salient object detection network is a lightweight model with a parameter size of 4.7 MB; it requires modest hardware for training, has short training times and a quick cycle to obtain the optimal model weights, and delivers fast recognition speeds. Therefore, this paper uses the U-Net network to fill the blank strips in FMI images and applies the U2-Net salient object detection network to extract gravel regions from them. Furthermore, we use a marker-based watershed algorithm to segment adhered gravel regions and quantitatively calculate gravel parameters, effectively improving the accuracy and efficiency of conglomerate reservoir evaluation.

2. Lithological Characteristics

The Mahu Sag is located on the northwest slope of the Junggar Basin; it is a piedmont depression at the Karamay overthrust fault zone whose long axis stretches in the northeast-southwest direction [28,29]. The formations in this area are well developed, with Carboniferous, Permian, Triassic, Jurassic, Cretaceous, Paleogene, Neogene, and Quaternary systems developed from bottom to top [30]. Hydrocarbon source rocks occur mainly in the Carboniferous, Muhe, Fengcheng, and Urho formations [31]. The Baikouquan formation contains large-scale fan bodies that extend far and overlap one another. Glutenite reservoirs are widely distributed in this area, with poor reservoir properties, substantial heterogeneity, and low permeability [32,33]. The formation's lithology primarily consists of gray and brownish-red glutenite, with minor amounts of gray fine sandstone and pebbly sandstone. The formation thickness ranges from 130 to 230 m. Based on particle diameter, the lithology can be categorized into nine types: mudstone, siltstone, medium-fine sandstone, coarse sandstone, fine conglomerate, small to medium conglomerate, large to medium conglomerate, coarse conglomerate, and boulder conglomerate, as shown in Figure 1.

3. Methodology

3.1. Blank Strip Filling

Due to the measurement principles of micro-resistivity imaging logging, the pads cannot cover the entire borehole wall, resulting in blank strips in the electrical imaging logs. In the actual formation, these blank areas contain formation information that needs to be restored using specific algorithms. Filling methods are mainly divided into traditional methods and deep learning methods. Conventional methods, such as Filtersim and structure-based methods, leave noticeable processing traces and are unsuitable for complex gravel formations. Deep learning methods, on the other hand, improve as the sample set grows, offer faster filling, and produce results that better align with visual semantic features [34,35]. In this paper, we use deep learning methods to fill the blank strips.
The U-Net model is characterized by fast processing, strong connectivity in the filled results, minimal interference between regions, and good visual-semantic quality. In this paper, the U-Net model used to fill the blank strips is a U-shaped symmetric structure consisting of two parts: an encoder (En) that extracts local features and a decoder (De) that restores the image to its original resolution. The encoding path comprises eight convolutional layers (with kernel sizes of 7, 5, 5, 3, 3, 3, 3, and 3 and channel sizes of 64, 128, 256, 512, 512, 512, 512, and 512, respectively) together with max pooling layers, all using ReLU as the activation function. The decoding path comprises eight upsampling layers (with channel sizes of 512, 512, 512, 512, 256, 128, 64, and 3), all using LeakyReLU as the activation function. Both the input and the output are 512 × 512 px, 3-channel color images.
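As an illustration of the scale progression through this encoder-decoder, the following sketch (not the authors' code) computes the feature-map sizes for a 512 × 512 input, assuming each encoder stage halves the spatial resolution via stride-2 pooling and each decoder stage doubles it, with one channel entry per stage:

```python
# Illustrative sketch of the strip-filling U-Net's scale progression.
# Kernel/channel lists follow the configuration described in the text;
# the stride-2 halving per stage is an assumption.
ENC_KERNELS  = [7, 5, 5, 3, 3, 3, 3, 3]
ENC_CHANNELS = [64, 128, 256, 512, 512, 512, 512, 512]
DEC_CHANNELS = [512, 512, 512, 512, 256, 128, 64, 3]

def feature_sizes(input_px=512):
    """Return the spatial size after each encoder and decoder stage."""
    enc, size = [], input_px
    for _ in ENC_CHANNELS:      # each encoder stage halves the resolution
        size //= 2
        enc.append(size)
    dec = []
    for _ in DEC_CHANNELS:      # each decoder stage doubles it back
        size *= 2
        dec.append(size)
    return enc, dec

enc, dec = feature_sizes(512)
# encoder: 512 -> 256 -> 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2
# decoder: 2 -> 4 -> ... -> 512, restoring the 512 x 512 input size
```

The eight halvings compress the image to a 2 × 2 bottleneck before the decoder restores the original resolution, which is why eight upsampling layers mirror the eight convolutional stages.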

3.2. Network Structure and Loss Function

The gravel recognition process utilizes the SOD network U2-Net, which incorporates a two-level nested U-shaped network structure. Its core components are residual U-blocks designed to extract multi-scale contextual information from images. As shown in Figure 2, this module primarily consists of three parts: an ordinary convolutional layer for extracting local features, a symmetric encoder–decoder structure similar to U-Net for extracting and encoding multi-scale contextual information, and a residual connection structure that sums the fused local features and multi-scale features to produce the final feature map. The number of layers in the RSU residual blocks can be adjusted based on the resolution of the input feature maps at each layer.
Figure 3 illustrates the structure of the U2-Net network. At the top level, the network forms a large U-shaped structure by combining 11 RSU residual blocks of different depths. Specifically, six encoders (with RSU depths of 7, 6, 5, 4, 4, and 4) gradually reduce the spatial resolution of the image through a series of convolutional and pooling layers, extracting both low-level and high-level features. Five decoders (with RSU depths of 4, 4, 5, 6, and 7) then restore the feature map to the original image size through upsampling and related operations. During this process, each decoder concatenates the fused intermediate features with the corresponding encoder features to obtain rich semantic and detail information, yielding a significance probability map at each stage. The stage-wise significance probability maps are then upsampled to the input image size and fused by concatenation. Finally, a 1 × 1 convolutional layer and a sigmoid activation function generate the final significance probability map, i.e., the gravel area in the electric imaging image. Because the input feature maps of the fifth and sixth encoder stages and the fifth decoder stage have low resolution, further downsampling would lose contextual information, so dilated convolution is used there instead of downsampling.
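The stage-wise fusion described above can be sketched in NumPy as follows. This is an illustrative simplification, not the authors' code: nearest-neighbour upsampling stands in for the network's upsampling, and the 1 × 1 convolution over the concatenated maps is reduced to a per-map weighted sum before the sigmoid:

```python
import numpy as np

def upsample_nearest(m, size):
    """Nearest-neighbour upsampling of a square 2-D map to (size, size)."""
    rep = size // m.shape[0]
    return np.repeat(np.repeat(m, rep, axis=0), rep, axis=1)

def fuse_side_outputs(side_logits, weights, size):
    """Upsample each side logit map to the input size, apply a
    1x1-convolution-equivalent weighted sum, then a sigmoid."""
    stacked = np.stack([upsample_nearest(m, size) for m in side_logits])
    fused = np.tensordot(weights, stacked, axes=1)   # 1x1 conv == weighted sum
    return 1.0 / (1.0 + np.exp(-fused))              # final probability map

# toy example: two side outputs at 4x4 and 8x8, fused at 16x16
sides = [np.zeros((4, 4)), np.ones((8, 8))]
prob = fuse_side_outputs(sides, weights=np.array([0.5, 0.5]), size=16)
```

Thresholding the resulting probability map then gives the predicted gravel area for the input image.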
The loss function for training is defined as follows:
$$ L = \sum_{m=1}^{M} w_{side}^{(m)} \, \ell_{side}^{(m)} + w_{fuse} \, \ell_{fuse} $$
where $\ell_{side}^{(m)}$ ($m = 1, \dots, M$, with $M$ the number of side-output stages) is the loss of each side significance probability map $S_{side}^{(m)}$ (Figure 3), $\ell_{fuse}$ is the loss of the final fused significance probability map, and $w_{side}^{(m)}$ and $w_{fuse}$ are the loss weights for each stage. Each loss term $\ell$ is computed with the binary cross entropy:
$$ \ell = -\sum_{(r,c)}^{(H,W)} \left[ P_G(r,c) \log P_S(r,c) + \bigl(1 - P_G(r,c)\bigr) \log\bigl(1 - P_S(r,c)\bigr) \right] $$
where $(r, c)$ is the coordinate of each pixel, $(H, W)$ are the height and width of the image, and $P_G(r,c)$ and $P_S(r,c)$ are the pixel values of the label and the predicted significance probability map, respectively. The network is trained to make $L$ as small as possible, and the weight parameters at the smallest value of $L$ are saved as the final network model.
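A minimal NumPy sketch of this loss, assuming (as in the original U2-Net paper) that all stage weights default to 1:

```python
import numpy as np

def bce(pred, label, eps=1e-7):
    """Pixel-wise binary cross entropy (the loss term l above)."""
    p = np.clip(pred, eps, 1 - eps)          # avoid log(0)
    return -np.sum(label * np.log(p) + (1 - label) * np.log(1 - p))

def total_loss(side_preds, fused_pred, label, w_side=None, w_fuse=1.0):
    """Weighted sum of the side-output losses and the fused-output loss
    (the total loss L above). All weights default to 1."""
    if w_side is None:
        w_side = [1.0] * len(side_preds)
    return sum(w * bce(p, label) for w, p in zip(w_side, side_preds)) \
           + w_fuse * bce(fused_pred, label)
```

A prediction that matches the label drives every term toward zero, which is exactly the minimization target used to select the final model weights.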

3.3. Data Preparation

Electric imaging data with a pad coverage rate of 100%, or at least above 90%, were selected as samples. Methods such as Filtersim (a multiple-point geostatistics method) or the Criminisi algorithm were used to obtain full-wellbore images. The positions of the blank strips were recorded and filled with white pixels to create masks. A total of 5000 original images and mask images with a resolution of 360 × 360 were prepared. The U-Net network was trained with these samples and masks to obtain the optimal model, which was then used to repair the blank strips in the imaging logs, as shown in Figure 4. The filled image is consistent with the original in gravel morphology, and the horizontal connectivity across the former blank strips is preserved, indicating high consistency in both morphology and appearance.
Based on the repaired blank strips, the conductivity data were converted into electric imaging images and scanned with a window length of 0.5 m, giving an image resolution of 360 × 196. The electric imaging images of the target formation in the research area fall mainly into two types: (1) Low gravel content: irregular bright particles are scattered in the electric imaging images, and segmentation is challenging owing to high-resistivity formations and other minerals, as depicted in Figure 5a. (2) High gravel content: bright dots and blocks are uniformly distributed throughout the formation, as illustrated in Figure 5b. We selected high-quality formation data without large areas of white noise and with stable logging speeds. For the first type, we used the Labelme software together with core photos to manually outline the gravel areas, labeling 366 images. For the second type, we labeled 410 images using threshold segmentation combined with human-computer interaction guided by core photos. The pixel values of the gravel and non-gravel regions were set to 255 and 0, respectively (Figure 5c,d). In total, 776 labeled images were created, with 620 used for training and 156 for validation.
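The threshold-based labeling route for the second image type can be sketched as follows. The threshold value here is a hypothetical placeholder; in practice it was tuned interactively against core photos:

```python
import numpy as np

def make_label_mask(image_gray, threshold):
    """Binary label mask matching the training convention: bright
    (high-resistivity, gravel-like) pixels -> 255, background -> 0.
    The threshold is chosen interactively against core photos."""
    mask = np.zeros_like(image_gray, dtype=np.uint8)
    mask[image_gray >= threshold] = 255
    return mask

# toy 2x2 grayscale patch; 128 is an assumed threshold
img = np.array([[10, 200], [180, 30]], dtype=np.uint8)
mask = make_label_mask(img, threshold=128)
```

The resulting 255/0 masks are the label images paired with the electric imaging images in the training set.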

3.4. Segmentation of Adhesive Areas

The watershed algorithm is a classic image segmentation algorithm [36]. Its basic idea is to treat the image as a topographic surface: brightness variations are regarded as peaks and valleys, with high-gray areas as peaks and low-gray areas as valleys. Water injected at seed points flows along the steepest gradient path from high-gray pixels to low-gray pixels. Where flows from two peaks meet, they converge at a low line known as the watershed. The watershed separates different peak areas; on either side of it, water collects in different ponds, each corresponding to a target area. The method is highly effective at detecting image edges. However, because of noise and other factors, using it alone may lead to over-segmentation, so the target areas of the image are generally labeled first and the image is then segmented based on the labels.
The watershed function in the OpenCV library used in this study implements marker-based watershed segmentation and primarily takes two inputs: the electric imaging image and the marker image. The implementation consists of three steps:
(1)
Digital image conversion. The preprocessed electric imaging data are first normalized and then converted into electric imaging images that reflect the formation characteristics using the SLB heated color code.
(2)
Label the gravel areas. The gravel areas and background identified by the U2-Net model are labeled one by one using the eight-connected domain algorithm as the "water injection areas" of the watershed algorithm: gravel areas identified as foreground are labeled 2 to n, unknown areas that may be gravel are labeled 0, and the remaining background areas are labeled 1.
(3)
Adhesive-area segmentation using the watershed algorithm. The electric imaging image and the marker data are input into the watershed function to further segment the gravel areas. The algorithm re-identifies the unknown areas, corrects the gravel boundaries, and separates adhesive gravel areas.
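Step (2) can be sketched in pure Python as follows. The function below is an illustrative eight-connected labeling (breadth-first flood fill), assuming the foreground and unknown maps are disjoint; its output follows the marker convention above (background 1, unknown 0, gravel blobs 2 to n) and would be passed, together with the electric imaging image, to OpenCV's cv2.watershed:

```python
from collections import deque

def label_markers(fg, unknown):
    """Build a watershed marker image: background -> 1, unknown -> 0,
    each 8-connected foreground (gravel) blob -> 2, 3, ..., n.
    Assumes fg and unknown do not overlap."""
    h, w = len(fg), len(fg[0])
    markers = [[0 if unknown[r][c] else 1 for c in range(w)] for r in range(h)]
    next_label = 2
    for r in range(h):
        for c in range(w):
            if fg[r][c] and markers[r][c] < 2:
                # breadth-first flood fill over the 8-neighbourhood
                markers[r][c] = next_label
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                               and fg[ny][nx] and markers[ny][nx] < 2:
                                markers[ny][nx] = next_label
                                q.append((ny, nx))
                next_label += 1
    return markers
```

Because the neighbourhood includes the diagonals, two gravel pixels touching only at a corner are merged into one blob, which matches the eight-connected domain rule used in the paper.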

4. Results and Discussion

4.1. Evaluation Indicators

To evaluate the model's accuracy in identifying gravel areas in electric imaging images, precision, recall, the weighted harmonic mean of precision and recall ($F_\beta$), and the mean absolute error (MAE) were used as evaluation indicators.
Precision indicates the probability of the predicted gravel area being the actual gravel area:
$$ Precision = \frac{T_p}{T_p + F_p} $$
where $T_p$ is the number of positive samples (gravel pixels) correctly predicted as gravel and $F_p$ is the number of negative samples (other areas) incorrectly predicted as gravel.
Recall indicates the proportion of actual gravel areas that are correctly predicted:
$$ Recall = \frac{T_p}{T_p + F_n} $$
where $F_n$ is the number of positive samples incorrectly predicted as non-gravel.
$F_\beta$ balances precision and recall in a single score:
$$ F_\beta = \frac{(1 + \beta^2) \times Precision \times Recall}{\beta^2 \times Precision + Recall} $$
where $\beta$ is a weight parameter; $\beta^2$ was set to 0.3 during model training.
MAE represents the mean absolute error between the predicted value and the actual value:
$$ MAE = \frac{1}{H \times W} \sum_{r=1}^{H} \sum_{c=1}^{W} \left| P(r,c) - G(r,c) \right| $$
where $P$ and $G$ are the significance probability map and the corresponding ground-truth label map, respectively, and $(H, W)$ and $(r, c)$ are the (height, width) of the electric imaging image and the coordinate of each pixel, respectively.
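The four indicators can be sketched together as follows. Binarising the predicted map at 0.5 before counting $T_p$, $F_p$, and $F_n$ is an assumption, as the paper does not state the binarisation rule:

```python
import numpy as np

def saliency_metrics(pred, label, beta2=0.3):
    """Precision, recall, F_beta, and MAE for a predicted significance
    probability map against a binary {0, 1} label map."""
    p_bin = (pred >= 0.5).astype(float)      # assumed binarisation at 0.5
    tp = np.sum(p_bin * label)               # gravel predicted as gravel
    fp = np.sum(p_bin * (1 - label))         # background predicted as gravel
    fn = np.sum((1 - p_bin) * label)         # gravel missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_beta = ((1 + beta2) * precision * recall /
              (beta2 * precision + recall)) if precision + recall else 0.0
    mae = np.mean(np.abs(pred - label))      # pixel-averaged absolute error
    return precision, recall, f_beta, mae
```

With beta2 = 0.3, as in the training setup, precision is weighted more heavily than recall in the combined score.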

4.2. Learning Strategy

Neural networks often exhibit instability during the initial stages of training and typically require a lower learning rate to achieve satisfactory convergence. However, using a lower learning rate prolongs the training time, necessitating the dynamic adjustment of the learning rate. Specifically, during the early stages of network training, the learning rate can be gradually increased from a lower value to a higher one to facilitate the “warm-up” of network training. Subsequently, it can be progressively decreased according to specific rules during the later stages of training.
The formula for dynamically adjusting the learning rate during the increase stage is as follows:
$$ lr = factor \times \left( 1 - \frac{current\_step}{warm\_epochs \times iter\_step} \right) + \frac{current\_step}{warm\_epochs \times iter\_step} $$
The formula for dynamically adjusting the learning rate during the reduction stage is as follows:
$$ lr = \frac{1 + \cos\left( \dfrac{(current\_step - warm\_epochs \times iter\_step) \times \pi}{(epochs - warm\_epochs) \times iter\_step} \right)}{2} \times (1 - end\_factor) + end\_factor $$
where factor is the regulatory factor of the learning rate (set to 0.001 during training in this study), current_step is the current iteration step, warm_epochs is the number of warm-up epochs, iter_step is the number of steps per epoch, epochs is the total number of epochs, and end_factor is the regulatory factor of the learning rate during the reduction stage (set to 0.000001).
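The two-stage schedule can be sketched as a single multiplier function; treating the two formulas as a multiplier applied to the base (maximum) learning rate of 0.0001 is an assumption:

```python
import math

def lr_lambda(current_step, warm_epochs, epochs, iter_step,
              factor=0.001, end_factor=1e-6):
    """Learning-rate multiplier: linear warm-up from `factor` to 1,
    then cosine decay towards `end_factor`."""
    warm_steps = warm_epochs * iter_step
    total_steps = epochs * iter_step
    if current_step < warm_steps:
        # warm-up stage: ramp linearly from `factor` up to 1
        alpha = current_step / warm_steps
        return factor * (1 - alpha) + alpha
    # reduction stage: cosine decay from 1 down to `end_factor`
    progress = (current_step - warm_steps) / (total_steps - warm_steps)
    return (1 + math.cos(progress * math.pi)) / 2 * (1 - end_factor) + end_factor
```

The multiplier starts at factor (0.001), reaches 1 exactly when warm-up ends, and decays smoothly to end_factor at the final step, which matches the warm-up-then-decay behaviour described above.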
The Adam optimizer was used to dynamically adjust the learning rate, with an initial weight decay of 1 × 10−4 and a maximum learning rate of 0.0001. Sixteen images were processed in each step, for a total of 5000 iterations. A hybrid training method was adopted to reduce the model's training time.

4.3. Model Training

(1)
Model Training
By recording the F-measure, mean absolute error (MAE), and loss at each stage of training and plotting their trends, the accuracy of the model training process can be assessed.
As shown in Figure 6, the training loss drops below 0.06 early on and stabilizes after 2700 iterations. The F-measure rises to 0.820 early in training and gradually stabilizes after 3000 iterations, reaching a maximum of 0.837. The MAE decreases rapidly at first with significant fluctuations and stabilizes after 2700 iterations. The MAE corresponding to the highest F-measure is 0.048.
Two types of formation images were used to assess the model's recognition performance. The U2-Net model accurately identified the gravel areas (shown in green) in the electric imaging images, as illustrated in Figure 7a,b. The eight-connected domain method was then used to label the gravel areas recognized by U2-Net; these labeled areas served as input to the watershed algorithm, enabling precise segmentation of the adhesive gravel areas, as indicated by the red dashed lines in Figure 7c.
(2)
Method comparison
As shown in Table 1, we compared the U2-Net model with DeeplabV3+ models using ResNet50 and ResNet101 as backbone networks. The accuracy of the U2-Net model is 83.7%, which is 1.67% higher than that of the DeeplabV3+ model with the ResNet50 backbone (82.03%) and 0.90% higher than that with the ResNet101 backbone (82.8%). Additionally, the parameter size of the U2-Net model is 41.2 MB, which is 36.5 MB smaller than that of the DeeplabV3+ model with the ResNet50 backbone and 60.3 MB smaller than that with the ResNet101 backbone. The U2-Net model thus achieves higher gravel-recognition accuracy with a smaller parameter size (and therefore faster computation), improving the efficiency of gravel parameter calculation.

4.4. Application Validation

Before practical application, the following should be noted: ① Because the longitudinal resolution of the electric imaging instrument is 5 mm, the gravel particle sizes calculated from the electric imaging image are greater than or equal to 5 mm and thus differ somewhat from core particle sizes; for gravel formations with particle sizes below 5 mm, other logging data should be combined for a comprehensive interpretation. ② Considering the sampling interval of the electric imaging data, the formation lithology, and the input image size of the network model, a window length of 0.5 m was used for gravel identification and parameter calculation, with a shift step of 0.125 m (one quarter of the window length). ③ To convert between image size and actual size, the imaging log was compared with the core photos: apparent features were marked with green boxes in the imaging log, as shown in Figure 8a; the image was resized to 360 × 196 to measure the pixel size of the labeled areas, which was compared with the actual size of the corresponding core features (red dashed boxes), as shown in Figure 8b. The actual size corresponding to each pixel in the electric imaging image was thereby calculated to be 2.3 mm.
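Given the 2.3 mm-per-pixel calibration above, converting a segmented gravel region's pixel area into a particle size can be sketched as follows. The area-equivalent-circle definition of particle size and the Wentworth-style class boundaries are assumptions, as the paper does not state them explicitly:

```python
import math

MM_PER_PIXEL = 2.3   # calibrated against core photos, as described above

def equivalent_diameter_mm(pixel_area, mm_per_pixel=MM_PER_PIXEL):
    """Area-equivalent circle diameter (mm) of a segmented gravel region
    whose size is given as a pixel count."""
    area_mm2 = pixel_area * mm_per_pixel ** 2
    return 2.0 * math.sqrt(area_mm2 / math.pi)

def classify(d_mm):
    """Size class per Wentworth-style boundaries (an assumption)."""
    if d_mm < 2:
        return "sand"
    if d_mm < 4:
        return "granule"
    if d_mm < 64:
        return "pebble"
    if d_mm < 256:
        return "cobble"
    return "boulder"
```

Accumulating these per-region diameters over each 0.5 m window yields the gravel counts, contents, and grain size distributions plotted in the later tracks.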
This method was applied to the 3204-3210 m cored section of well XX, as shown in Figure 9. In the figure, the second track shows the natural gamma-ray and caliper curves; the third track shows the compensated density, compensated neutron, and interval transit time curves; the fourth track shows the dual laterolog resistivity curves; the fifth track shows the imaging log with the blank strips filled; the sixth track shows the gravel identification image; the seventh track shows the lithology profile (from left to right: mudstone, granule rock, pebble rock, cobble rock, and sandstone); the eighth track shows the calculated pebble content compared with the core experimental data; the ninth track shows the calculated granule content compared with the core experimental data; and the tenth track shows the grain size distribution. The calculated pebble and granule contents are consistent with the core data. The grain size spectrum peak in the 3206.8-3207.4 m interval is shifted to the right, indicating the dominance of cobbles; the peak in the 3207.5-3208 m interval is centered, indicating the dominance of pebbles; and the peak in the 3208.1-3208.5 m interval is shifted to the left, indicating the dominance of granules. The lithology identification results are consistent with the core data.

5. Conclusions

In this paper, to address the lack of efficient and accurate methods for identifying gravel regions and calculating gravel parameters in electrical imaging logging images, we propose an automatic gravel identification method based on the U2-Net salient object detection network, use a watershed algorithm to segment adhered gravel regions, and calculate gravel parameters. The main conclusions are as follows:
We created 776 labels of 360 × 196 px for training the gravel recognition model and evaluated the training accuracy using the harmonic mean of precision and recall and the mean absolute error. The final model achieved a harmonic mean of precision and recall of 83.7% and a mean absolute error of 0.048. We further validated the model on gravel-bearing formations and gravel layers: the gravel areas were filled with green, and the bright, irregular elliptical gravel areas in the images were accurately identified, matching visual judgment. Comparing the calculated gravel parameters with core particle sizes, the errors were less than 25%, further demonstrating the accuracy of the parameter calculation.
The U2-Net model has 4.7 M parameters, making it lightweight with short training cycles. It exhibits high recognition efficiency, identifying gravel regions at a rate of 0.09 s/m, and is therefore suitable for multi-well processing. This paper is the first to apply a lightweight network model to micro-resistivity imaging logging image processing, achieving good accuracy and efficiency and confirming the feasibility of using lightweight network models for gravel recognition and parameter calculation in micro-resistivity imaging logging.
To further improve gravel recognition, in-depth research can be carried out on the dataset, image preprocessing, and the model. For the dataset, more samples can be prepared and labeled more accurately. For image preprocessing, image enhancement algorithms such as histogram equalization, Gaussian filtering, and wavelet denoising can be applied before gravel recognition to reduce the influence of noise in micro-resistivity imaging logging images. For the model, the backbone network architecture can be changed or attention mechanisms added to further improve performance. For gravel parameter calculation, although the watershed algorithm can effectively divide adhesive gravel areas, labeling the foreground, background, and unknown areas requires continual adjustment to find the optimum, which is labor-intensive and time-consuming; segmentation algorithms for adherent gravel based on the edge morphological characteristics of the gravel can be studied further to improve the accuracy of subsequent gravel parameter calculation.

Author Contributions

Conceptualization and writing—original draft preparation, L.W. and Y.L.; data curation, J.L., B.R. and Y.L.; writing—review and editing, L.W., N.Z. and A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (U2003102, 41974117), the Postdoctoral Fellowship Program of the China Postdoctoral Science Foundation (Grant No. GZC20230328), and the Natural Science Foundation of Sichuan Province, P. R. China (Grant No. 2023NSFSC0260).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Authors Liang Wang and Jing Lu are employed by the Sinopec Key Laboratory of Shale Oil/Gas Exploration and Production Technology; author Jing Lu is also employed by the Petroleum Exploration & Development Research Institute of SINOPEC; author Yang Luo is employed by the Sinopec Matrix Corporation; author Benbing Ren is employed by Chengdu North Petroleum Exploration and Development Technology Co., Ltd.; author Angxing Li is employed by the China Petroleum Qinghai Oilfield Exploration and Development Research Institute. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Mahmic, O.; Dypvik, H.; Hammer, E. Diagenetic influence on reservoir quality evolution, examples from Triassic conglomerates/arenites in the Edvard Grieg field, Norwegian North Sea. Mar. Pet. Geol. 2018, 93, 247–271. [Google Scholar] [CrossRef]
  2. Shi, J.A.; He, Z.; Ding, C.; Gu, Q.; Xiong, Q.R.; Zhang, S.C. Sedimentary Characteristics and Model of Permian System in Ke-Bai Area in the Northwestern Margin of Junggar Basin. Acta Sedimentol. Sin. 2010, 28, 962–968. [Google Scholar]
  3. Wang, J.; Zhou, L.; Zhan, J.; Xiang, B.L.; Hu, W.X.; Yang, Y.; Tang, X. Characteristics of high-quality glutenite reservoirs of Urho Formation in Manan area, Junggar Basin. Lithol. Reserv. 2021, 2021, 33. [Google Scholar]
  4. Zhao, N.; Wang, L.; Tang, Y.; Qu, J.H.; Luo, X.P.; SiMa, L.Q. Logging Identification Method for Lithology: A Case Study of Baikouquan Formation in Wellblock Fengnan, Junggar Basin. Xinjiang Pet. Geol. 2016, 37, 732–737. [Google Scholar]
  5. Peng, M.; Zhang, L.; Tao, J.Y.; Zhao, K.; Zhang, X.H.; Zhang, C.M. Quantitative characterization of gravel roundness of sandy conglomerates of Triassic Baikouquan Formation in Mahu Sag. Lithol. Reserv. 2022, 34, 121–129. [Google Scholar]
  6. Yuan, R.; Zhang, C.M.; Tang, Y.; Qu, J.H.; Guo, X.D.; Sun, Y.Q.; Zhu, R.; Zhou, Y.Q. Utilizing borehole electrical images to interpret lithofacies of fan-delta: A case study of Lower Triassic Baikouquan Formation in Mahu Depression, Junggar Basin, China. Open Geosci. 2017, 9, 539–553. [Google Scholar] [CrossRef]
  7. Tian, J.; Wang, L.; SiMa, L.Q.; Fang, S.; Liu, H.Q. Characterization of reservoir properties and pore structure based on micro-resistivity imaging logging: Porosity spectrum, permeability spectrum, and equivalent capillary pressure curve. Pet. Explor. Dev. 2023, 50, 553–561. [Google Scholar] [CrossRef]
  8. Damiani, P.S.; Prabantara, A.; Shah, R.A.; Budideti, R.A. Porosity and permeability mapping of heterogeneous upper Jurassic carbonate reservoirs using enhanced data processing of electrical borehole images, Onshore Field Abu Dhabi. In Proceedings of the Abu Dhabi International Petroleum Exhibition & Conference, Abu Dhabi, United Arab Emirates, 7–10 November 2016. [Google Scholar] [CrossRef]
  9. Lai, F.Q. Fracture Detecting with Acoustic and Electric Imaging Logging Data and Evaluation; China University of Petroleum: Beijing, China, 2007. [Google Scholar]
  10. Li, D.H.; Yuan, R.; Ding, Z.F.; Xu, R. Automatic calculating grain size of gravels based on micro-resistivity image of well. Arab. J. Geosci. 2021, 14, 1794. [Google Scholar] [CrossRef]
  11. Li, K.S. Lithology Identification of Glutenite Reservoirs Based on the Static Image Logging Data; China University of Petroleum: Beijing, China, 2011. [Google Scholar]
  12. Luo, X.P.; Pang, X.; Su, D.X. Recognition of Complicated Sandy Conglomerate Reservoir Based on Micro-Resistivity Imaging Logging: A Case Study of Baikouquan Formation in Western Slope of Mahu Sag, Junggar Basin. Xinjiang Pet. Geol. 2018, 39, 345–351. [Google Scholar]
  13. Yuan, Y.; Hou, G.Q.; Ma, T.T. An Automatic Lithology Recognition Method of Sand Conglomerate Based on Electric Imaging Logging. Meas. Control Technol. 2021, 40, 30–35. [Google Scholar]
  14. Wang, M.; Wang, Y.S.; Liu, X.F.; Zhang, S.; Guan, L. New method for quantitative estimation of grain size in sand conglomerate reservoir. Prog. Geophys. 2019, 34, 2018–2213. [Google Scholar]
  15. Chen, S.R.; Qu, X.Y.; Qiu, L.W.; Zhang, Y.C.; Tao, D. A Statistical Method for Lithic Content Based on Core Measurement, Image Analysis and Microscopic Statistics in Sand-conglomerate Reservoir. Instrum. Mes. Métrologies 2019, 18, 343–352. [Google Scholar] [CrossRef]
  16. Wang, Q.; Wang, Z.Y.; Fan, Y.J.; Teng, Q.Z.; He, X.H. Image segmentation of cutting grains based on edge flow and region merging. J. Sichuan Univ. (Nat. Sci. Ed.) 2014, 51, 111–118. [Google Scholar]
  17. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  18. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  19. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
  20. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.W.; He, K.M. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
  21. Huang, G.; Liu, Z.; Laurens, V.D.M.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  22. Li, B.T.; Wang, Z.Z.; Kong, C.X.; Jiang, Q.P.; Wang, W.F. A New Intelligent Method of Fracture Recognition Based on Imaging Logging. Well Logging Technol. 2019, 43, 257–262. [Google Scholar]
  23. Zhang, W.; Wu, T.; Li, Z.P.; Liu, S.Y.; Qiu, A.; Li, Y.J.; Shi, Y.B. Fracture recognition in ultrasonic logging images via unsupervised segmentation network. Earth Sci. Inform. 2021, 14, 955–964. [Google Scholar] [CrossRef]
  24. Jiao, Z.F.; Xing, Q.; Zhang, J.Y.; Wang, J.; Wang, Y.L. Gravel Extraction from FMI Based on DSAM-DeepLabV3+ Network. In Proceedings of the 2022 16th IEEE International Conference on Signal Processing (ICSP), Beijing, China, 21–24 October 2022; Volume 1, pp. 405–410. [Google Scholar] [CrossRef]
  25. Zhou, T.; Fan, D.P.; Cheng, M.M.; Shen, J.B.; Shao, L. RGB-D salient object detection: A survey. Comput. Vis. Media 2021, 7, 37–69. [Google Scholar] [CrossRef]
  26. Wang, W.G.; Lai, Q.X.; Fu, H.Z.; Shen, J.B.; Ling, H.B.; Yang, R.G. Salient object detection in the deep learning era: An in-depth survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3239–3259. [Google Scholar] [CrossRef]
  27. Qin, X.B.; Zhang, Z.C.; Huang, C.Y.; Dehghan, M.; Zaiane, O.R.; Jagersand, M. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognit. 2022, 106, 107404. [Google Scholar] [CrossRef]
  28. Zhi, D.M. Discovery and Hydrocarbon Accumulation Mechanism of Quasi-Continuous High-Efficiency Reservoirs of Baikouquan Formation in Mahu Sag, Junggar Basin. Xinjiang Pet. Geol. 2016, 37, 373–382. [Google Scholar]
  29. Li, J.; Tang, Y.; Wu, T.; Zhao, J.Z.; Wu, H.Y.; Wu, W.T.; Bai, Y.B. Overpressure origin and its effects on petroleum accumulation in the conglomerate oil province in Mahu Sag, Junggar Basin, NW China. Pet. Explor. Dev. 2020, 47, 679–690. [Google Scholar] [CrossRef]
  30. Chen, C.; Peng, M.Y.; Zhao, T.; Wang, J.G. Reservoir Comparison and Exploration Enlightenment of Baikouquan Formation in Northern and Western Slopes of Mahu Sag. Xinjiang Pet. Geol. 2022, 43, 18–25. [Google Scholar]
  31. Yin, L.; Xu, D.N.; Le, X.F.; Qi, W.; Zhang, J.J. Reservoir characteristics and hydrocarbon accumulation rules of Triassic Baikouquan Formation in Mahu Sag, Junggar Basin. Lithol. Reserv. 2023, 36, 59–68. [Google Scholar]
  32. Deng, J.X.; Cai, K.W.; Song, L.T.; Liu, Z.H.; Pan, J.G. The influence of diagenetic evolution on rock physical properties of sandy conglomerate of Baikouquan formation. Chin. J. Geophys. 2022, 65, 4448–4459. [Google Scholar]
  33. Wang, T.H.; Xu, D.N.; Wu, T.; Guan, X.; Xie, Z.B. Sedimentary facies distribution characteristics and sedimentary model of Triassic Baikouquan Formation in Shawan Sag, Junggar Basin. Lithol. Reserv. 2023, 36, 98–110. [Google Scholar]
  34. Sun, Q.; Su, N.; Gong, F.; Du, Q. Blank Strip Filling for Logging Electrical Imaging Based on Multiscale Generative Adversarial Network. Processes 2023, 11, 1709. [Google Scholar] [CrossRef]
  35. Du, C.Y.; Xing, Q.; Zhang, J.Y. Blank strips filling for electrical logging images based on attention-constrained deep generative network. Prog. Geophys. 2022, 37, 1548–1558. [Google Scholar]
  36. Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598. [Google Scholar] [CrossRef]
Figure 1. Lithological classification. (a) Conglomerate. (b) Cobble conglomerate. (c) Coarse to medium conglomerate. (d) Small to medium conglomerate. (e) Granule. (f) Gritstone. (g) Fine-medium sand. (h) Siltstone. (i) Mud.
Figure 2. RSU block.
Figure 3. U2-Net architecture.
Figure 4. Comparison before and after blank strip filling: (a) original image; (b) image after blank strip filling.
Figure 5. Original images and labels of different gravel formation types: (a) low gravel content; (b) high gravel content; (c) image label for low gravel content; (d) image label for high gravel content.
Figure 6. Changes in training parameters.
Figure 7. Gravel recognition and segmentation: (a) the identification of gravel-bearing formation; (b) the identification result of gravel formation; (c) the segmentation of adhesive areas.
Figure 8. Conversion of pixel dimensions to actual dimensions: (a) imaging logging feature region; (b) core characterization area.
Figure 9. Comparison of rock particle size and imaging particle size of XX well.
Table 1. Comparison of our method with previous methods.

| Backbone  | Architecture | Accuracy | Size (MB) |
|-----------|--------------|----------|-----------|
| ResNet50  | DeepLabV3+   | 82.03%   | 41.2      |
| ResNet101 | DeepLabV3+   | 82.80%   | 60.3      |
| RSU       | U2-Net       | 83.70%   | 4.7       |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, L.; Lu, J.; Luo, Y.; Ren, B.; Li, A.; Zhao, N. An Automated Quantitative Methodology for Computing Gravel Parameters in Imaging Logging Leveraging Deep Learning: A Case Analysis of the Baikouquan Formation within the Mahu Sag. Processes 2024, 12, 1337. https://doi.org/10.3390/pr12071337


