
Combined Deep-Fill and Histogram Equalization Algorithm for Full-Borehole Electrical Logging Image Restoration

1 DPIC, China Oilfield Services Limited, Langfang 065201, China
2 School of Chemical Engineering, Qingdao University of Science and Technology, Qingdao 266042, China
3 Sichuan Energy Internet Research Institute, Tsinghua University, Chengdu 610218, China
* Author to whom correspondence should be addressed.
Processes 2024, 12(8), 1568; https://doi.org/10.3390/pr12081568
Submission received: 27 June 2024 / Revised: 17 July 2024 / Accepted: 23 July 2024 / Published: 26 July 2024
(This article belongs to the Special Issue Oil and Gas Drilling Processes: Control and Optimization)

Abstract

Electrical borehole imaging tools cannot capture full-borehole images due to structural limitations: gaps always occur between the measurement pads, and these gaps must be filled in before subsequent interpretation. In this paper, an improved model for borehole image restoration and enhancement is established by combining a “Deep-Fill” image repair algorithm based on generative adversarial networks (GANs) with histogram equalization principles. Firstly, resistivity data are converted into images, and the anomalous areas are manually repaired. Then, the manually repaired images undergo iterative training using the “Deep-Fill” model. Finally, the repaired images are further enhanced through histogram equalization. Results show that the overall restoration quality of the model surpasses that of the original GAN-based restoration model, particularly in terms of texture coherence at junctions. This approach not only enhances the quality of repaired images but also improves the interpretability of geological features in electrical imaging logs.

1. Introduction

Electrical imaging logging technology has unique advantages: it can accurately obtain two-dimensional images of the wellbore, providing strong data support for geological exploration. These images are not only intuitive but also highly clear, reflecting the structure and characteristics of the wellbore in detail, including geological features such as bedding, cracks, and pores [1]. Through detailed analysis of these images, we can gain a deeper understanding of the properties, sedimentary environment, and structural changes of underground rocks [2]. In practical applications, the visibility and intuitiveness of borehole image logs greatly assist geological exploration personnel. Many geological problems that are difficult to solve with conventional logging methods can be readily solved with the help of imaging logging technology [3]. For example, in complex formations, conventional logging methods often struggle to accurately determine layer interfaces and intra-layer changes, whereas imaging logging can clearly reveal these subtle geological features [4]. It is worth noting, however, that due to the special characteristics of the wellbore and the structure of electrical imaging tools, electrical imaging logging technology also has certain limitations. During the logging process, the instrument's pads are usually extended to press close to the wellbore wall for scanning [5]. Because of the physical limitations of the instrument in this configuration, some portions of the wellbore wall are not measured during scanning, so coverage never reaches 100% [6]. This lack of coverage manifests as white stripes on electrical logging images. Meanwhile, high-temperature and high-pressure environments can significantly impact electrical imaging logging operations by affecting the response of logging instruments and damaging their electronic components [7]. This can lead to issues such as electrode damage or electrode plate failure, resulting in vertical stripes, moiré patterns, and other anomalies in the logging images, complicating the interpretation of logging data. Therefore, it is necessary to develop new methods to restore electrical imaging logs.
Image restoration technology is a crucial component of image processing. Bertalmio et al. introduced the concept of digital image restoration at an academic conference in 2000 [8]. They used mathematical partial differential equation (PDE) diffusion methods to simulate the manual repair process of professional restorers. This technique propagates information from known regions into damaged areas along isophotes (lines of equal luminance) to complete the restoration process. Over the past few decades, digital image restoration technology has matured, resulting in a variety of restoration algorithms, which can be categorized into three types:
  • Structure-based Methods: These methods primarily solve partial differential equations and are mainly suitable for images with small, damaged areas and relatively simple textures.
  • Texture-based Methods: These methods achieve inpainting through texture feature matching of images. They are mainly used for images with larger target areas and more complex structural textures [9].
  • Machine Learning-based Methods: Image restoration methods based on deep learning have rapidly developed and are favored for their superior restoration effects.
The rapid rise of deep learning in computer vision has allowed image-processing models to significantly surpass the performance of traditional algorithms. In 2016, Pathak et al. proposed context encoders, a deep learning-based image inpainting model that combines an encoder structure with a generative adversarial network (GAN) [10]. In 2018, NVIDIA’s team drew on the UNet (convolutional networks for biomedical image segmentation) structure, utilizing nearest-neighbor upsampling during the decoding phase; to ensure that the content for edge restoration is not affected by invalid values from the surrounding intact information, partial convolution layers are used with appropriate masks when dealing with edges [11]. In the same year, Song et al. introduced an innovative inference mechanism and designed a patch-swap layer [12]. This method uses feature maps as input and covers each neuron in the area to be repaired with the most similar edge patches, reducing the difficulty of network training.
In recent years, deep learning-based image inpainting methods have emerged, leveraging convolutional autoencoders, GANs, and recurrent neural networks (RNNs). These algorithms can learn rich semantic information from large-scale data through iterative training and directly fill in missing information in an end-to-end manner. GANs consist of two parts: a generator and a discriminator [13,14,15,16]. They are trained adversarially until a balance is reached, where the discriminator can no longer distinguish between real and generated data. However, a significant drawback of this model is that the generation process can become too unconstrained, making it difficult to control the results as the resolution increases.
To generate specific types of images, conditional GANs (CGANs) were introduced. They concatenate a one-hot vector with a random noise vector, adding conditional variables to both the generator and discriminator models. Additionally, due to the strong image feature extraction capabilities of convolutional networks, the deep convolutional GAN (DCGAN) was proposed in 2015 [17]. This model replaces pooling layers with convolutions in the discriminator and uses fractional-strided convolutions in the generator. To address issues with initialization and ensure effective gradient propagation, both the generator and discriminator employ batch normalization layers, except at the output layer of the generator and the input layer of the discriminator, to prevent instability caused by applying batch normalization across all layers. Furthermore, DCGAN removes fully connected layers, using Leaky ReLU (leaky rectified linear unit) as the activation function in all discriminator layers and ReLU in all generator layers except the output layer, which uses tanh. The innovation of DCGAN lies in its specialized adjustments to the convolutional network topology to maintain training stability.
In 2018 and 2019, Yu et al. proposed the Deep-Fill algorithm model, which innovatively combines low-level pixel recognition with high-level semantic information in the encoder and integrates the GAN discriminator to ensure pixel consistency between the generated and original images [18,19]. The training process includes a reconstruction loss and two Wasserstein losses, corresponding to the global image information and the local missing information. The proposed gated convolution addresses the issue of regular convolution treating all pixels as valid, by providing a learnable dynamic feature selection mechanism for each spatial position in all layers. Additionally, the work introduces a patch-based GAN loss named SN-PatchGAN, in which spectral normalization stabilizes and speeds up training and addresses the problem that separate global and local discriminators are unsuitable for masks of arbitrary shapes.
Currently, the main algorithms in image inpainting include the Criminisi image inpainting algorithm [20], PatchMatch image inpainting algorithm [21], fast marching method (FMM) image inpainting algorithm [22], partial convolution image inpainting algorithm (PConv) [23], and the Deep-Fill algorithm [24].
The Criminisi inpainting algorithm is groundbreaking in its integration of the strengths of partial differential equation-based and texture synthesis-based image inpainting models. This algorithm simultaneously considers both texture and simple structural features when repairing images, leading to effective results for images with reproducible textures and structures. However, the Criminisi algorithm requires searching for matching patches in the undamaged regions of the image, which can become impractical when repairing images with complex structures and textures: if the content of a hole has no counterpart in the remaining parts of the image, it is challenging to reconstruct the missing area accurately.
The PatchMatch algorithm employs a fast random approximation algorithm to compute nearest neighbor regions between two image areas, thereby significantly enhancing repair efficiency. However, it has certain limitations. For instance, it is most effective for repairing images with low-frequency information and repetitive textures in the background.
The FMM algorithm is advantageous due to its ability to achieve fast computational speeds during image inpainting. However, it has certain drawbacks in terms of visual quality. While it excels in repairing small missing areas, it is less effective for large missing regions and tends to exhibit poor edge preservation.
The PConv algorithm treats all pixels in an image as either invalid or valid, using a binary mask (0 or 1) multiplied with the input to all layers. This mask acts as a single, non-trainable feature gate channel. However, this approach has limitations. One major drawback is that the mask updating mechanism is not optimal. In partial convolution networks, the mask tends to gradually disappear in deeper layers, which can result in color discrepancies in the restoration results. Therefore, despite its innovative use of masks to handle missing data, the PConv algorithm may still exhibit noticeable color differences in the repaired areas.
The Deep-Fill algorithm introduces significant improvements over traditional and partial convolutions with its gated convolution mechanism. This approach enhances partial convolutions by providing a learnable dynamic feature selection mechanism at each spatial position for every channel across all convolutional layers. This innovation addresses the unrealistic assumption of traditional convolutions treating all input pixels as valid, thereby significantly improving the consistency of inpainting results with arbitrary-shaped masks and input colors.
Additionally, the Deep-Fill algorithm introduces a context attention mechanism layer, which further enhances the boundary effects in the repaired regions. The core idea involves dividing the image into foreground and background parts. The foreground represents the area to be restored, while the background comprises the surrounding intact regions. For each pixel in the foreground, the algorithm identifies the most suitable matching background part and uses its content to guide the reconstruction of that pixel. This method improves the edge coherence and overall quality of the restored images by leveraging contextual information effectively.
The Deep-Fill algorithm is well-suited for inpainting electrical imaging logs for two primary reasons: firstly, its gated convolution mechanism represents a significant advancement over traditional convolution and partial convolution methods; secondly, the contextual attention mechanism layer enhances boundary effects in the repaired regions.
In this study, an improved model for borehole image restoration and enhancement is established by combining a “Deep-Fill” image repair algorithm based on GANs with histogram equalization principles. Borehole imaging logs were processed and restored using the proposed model, and the restored images were compared with those restored using the original GAN model. The model provides an improved method for imaging log restoration and aids further geological interpretation.

2. Materials and Methods

2.1. Generative Adversarial Networks (GANs)

A generative adversarial network (GAN) consists of two components: a generator network G and a discriminator network D. The generator takes input information and produces a synthetic sample X_{fake} that closely resembles a real sample. The discriminator then evaluates both the generated sample X_{fake} and real samples X_{real}, distinguishing between them. This iterative process aims to achieve a balance where the generator produces increasingly realistic samples (Figure 1).
The objective function for optimizing the GAN is as follows [25,26]:
$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim P_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
The term $D(G(z))$ represents the probability that generated samples pass through the discriminative network D, while $D(x)$ denotes the discrimination probability of real samples, which is independent of the generator. Here, $\mathbb{E}$ denotes the expectation, $z \sim P_z(z)$ signifies a random vector input, and $x \sim P_{data}(x)$ signifies a real sample data point.
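For concreteness, this minimax objective corresponds to the following pair of alternating losses. The snippet below is a minimal TensorFlow sketch of the standard formulation, not the training code used in this study.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(d_real_logits, d_fake_logits):
    # D maximizes E[log D(x)] + E[log(1 - D(G(z)))]:
    # label real samples 1 and generated samples 0.
    real_loss = bce(tf.ones_like(d_real_logits), d_real_logits)
    fake_loss = bce(tf.zeros_like(d_fake_logits), d_fake_logits)
    return real_loss + fake_loss

def generator_loss(d_fake_logits):
    # G minimizes E[log(1 - D(G(z)))]; the common non-saturating variant
    # instead maximizes E[log D(G(z))], i.e. labels fakes as 1 in the loss.
    return bce(tf.ones_like(d_fake_logits), d_fake_logits)
```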

2.2. Deep-Fill Algorithm

The Deep-Fill algorithm is a generative adversarial network model that incorporates a contextual attention mechanism and gated convolutions for image processing. The network structure is divided into two main parts: a generator network and a discriminator network, both fundamental to GANs. The generator network consists of coarse and fine generators, as illustrated in Figure 2. The coarse generator focuses on learning semantic information from the image to create a preliminary, blurred semantic map that guides subsequent repair processes. In contrast, the fine generator utilizes the output of the coarse generator to generate realistic textures, contours, and structures. The discriminator network includes local and global discriminators. The local discriminator evaluates the realism of generated content, textures, and colors locally, while the global discriminator conducts semantic analysis across the entire image, ensuring overall structural coherence and semantic integrity.
The optimization objective of this network is the Wasserstein distance, augmented with gradient penalties to facilitate easier training and reduce the risk of mode collapse. A notable innovation of this algorithm model is the introduction of a contextual attention mechanism, which effectively identifies matching parts in the background space for the foreground. This mechanism guides the inpainting process based on these identified regions, enhancing the targeted repair of the image.
The training process of the entire network proceeds as follows:
  • Input Preparation: Three images are provided as input: the original image, a mask indicating damaged areas, and a contour map (lines). Before computation begins, a simple preprocessing step is applied where each pixel value in these images is divided by 255 to normalize the range of all pixels to [0, 1].
  • Mask Application: Multiply the original image element-wise with the mask. This operation masks out the damaged areas, leaving intact regions.
  • Contour Processing: Multiply the contour channel (lines channel) with the inverse of the mask. This operation isolates the contours within the damaged areas.
  • Image Formation: Combine the masked image and the contour map using the mask. This creates an image that has the damaged regions removed and outlines preserved. Add a noise channel to this combined image, resulting in a total of six channels.
  • Input to Coarse Generator: Feed these six channels as input into the coarse generator network in tensor form.
  • Coarse Generator Operation: Process the input through the coarse generator network.
  • Fine Generator Operation: Use the output from the coarse generator as input to the fine generator network.
  • Output: The result of the fine generator’s computation is the final repaired image output.
This completes one forward pass through the generator network, where inputs are processed to generate the desired inpainted image as output.
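As a concrete illustration of the input-preparation steps above, the following NumPy sketch assembles the generator input. The exact channel layout (masked image, contours, mask, noise) and the mask convention (1 = intact, 0 = damaged, inferred from the mask-application step) are our reading of the description, so treat both as assumptions rather than the authors' exact implementation.

```python
import numpy as np

def prepare_generator_input(image, mask, lines):
    """Assemble the six-channel generator input (illustrative sketch).

    image: HxWx3 RGB array (uint8); mask: HxW with 1 = intact, 0 = damaged;
    lines: HxW contour map (uint8).
    """
    image = image.astype(np.float32) / 255.0   # normalize pixels to [0, 1]
    lines = lines.astype(np.float32) / 255.0
    mask = mask.astype(np.float32)

    masked_image = image * mask[..., None]     # mask out the damaged areas
    contours = lines * (1.0 - mask)            # keep contours inside damaged areas
    noise = np.random.uniform(size=mask.shape).astype(np.float32)

    # Stack image (3) + contours (1) + mask (1) + noise (1) = 6 channels;
    # the paper states six channels, so including the mask itself is assumed.
    return np.concatenate(
        [masked_image, contours[..., None], mask[..., None], noise[..., None]],
        axis=-1,
    )
```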
Next, the final result from the generator network is combined with the original image and input into the discriminator network to compute the loss function. This constitutes one forward pass through the discriminator network. Subsequently, based on the loss value computed by the discriminator network, backpropagation is performed. This involves updating the parameters of the discriminator network to improve its ability to differentiate between real and generated images. Once the discriminator network parameters are updated, the parameters of the generator network are also adjusted accordingly. This completes one full iteration of the training process.

2.2.1. Coarse Generation Network

In the coarse generation network (Figure 3), the upper orange section denotes the input. Moving downward:
  • The light blue sections represent the partial convolution layers.
  • The green sections indicate downsampling layers based on partial convolution.
  • The dark blue sections signify dilated convolution layers based on partial convolution.
  • The bright yellow sections denote concatenation layers.
  • The final red layer signifies the output layer.
Additionally, in this network, concatenation layers are utilized to pass low-level image features extracted by shallow networks to higher-level networks. Furthermore, conventional convolution layers in the coarse generation network are substituted with partial convolution layers. These partial convolution layers have been enhanced to optimize the structure of dilated convolution layers, effectively mitigating grid artifacts.
(1) Spatial Pyramid Pooling (SPP) Layer
The concatenation layers in this network are inspired by the UNet architecture, and a similar approach is adopted in this paper. In the coarse generation network, the encoder performs two downsampling operations and the decoder performs two upsampling operations. During the first downsampling, the network layers are still shallow, so lower-level image features are extracted; as the layers deepen through the second downsampling, more abstract, higher-level image features are extracted.
In the decoder, the second upsampling is critical because it approaches the final image output, and the deeper layers risk losing the lower-level features extracted in earlier stages. To mitigate this issue, this paper introduces concatenation layers before upsampling: the first downsampling layer of the encoder is concatenated with the last upsampling layer of the decoder, and the second downsampling layer is concatenated with the second-to-last upsampling layer.
This approach enriches the decoder with lower-level image features from the encoder before proceeding with further convolutional decoding operations.
Adding the concatenation layer resulted in a significant improvement in the training process of both the discriminator and generator networks, as illustrated by the loss curve shown below (Figure 4).
In the above figure, the left side displays the loss curve of the discriminator (D), while the right side shows the loss curve of the generator (G). The light-colored lines depict the training process of the networks before adding the concatenation layer, whereas the darker lines represent the training process after adding the concatenation layer. It is evident that both the generator and discriminator achieved reduced loss during training following the addition of the concatenation layer.
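A minimal Keras sketch of this skip-connection arrangement is shown below; the layer widths, kernel sizes, and activations are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_coarse_generator_skeleton(input_shape=(256, 256, 6)):
    """Encoder-decoder skeleton with the two concatenation (skip) layers
    described above; widths and kernels are illustrative."""
    inputs = tf.keras.Input(shape=input_shape)

    down1 = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inputs)
    down2 = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(down1)

    bottleneck = layers.Conv2D(128, 3, padding="same", activation="relu")(down2)

    # second downsampling layer -> second-to-last upsampling
    up1 = layers.Concatenate()([bottleneck, down2])
    up1 = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(up1)

    # first downsampling layer -> last upsampling
    up2 = layers.Concatenate()([up1, down1])
    up2 = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(up2)

    outputs = layers.Conv2D(3, 3, padding="same", activation="tanh")(up2)
    return tf.keras.Model(inputs, outputs)
```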
(2) Dilated Convolution Layer
Wang et al. proposed that the design of hybrid dilated convolution (HDC) structures should satisfy three characteristics [27]: firstly, the dilation rates of consecutive layers in the dilated convolution must have 1 as their greatest common divisor; secondly, the dilation rates should follow a repeated zigzag pattern, such as [a, b, c, a, b, c]; finally, to avoid grid artifacts, the HDC design structure needs to satisfy the following equation:
$$M_i = \max\left[\,M_{i+1} - 2r_i,\; M_{i+1} - 2\left(M_{i+1} - r_i\right),\; r_i\,\right]$$
where $M_i$ denotes the maximum distance between two nonzero values at layer $i$; for a total of $n$ layers, $M_n = r_n$, and $r_i$ represents the dilation rate of layer $i$. Grid artifacts are avoided when $M_2$ does not exceed the convolution kernel size.
In the original network, there are four layers of dilated convolutions with dilation rates [2, 4, 8, 16], and each convolution kernel size is 3 × 3. Although such dilated convolutions increase the receptive field, they often skip many pixels, leading to parameter gridding issues. This is because the dilation rates [2, 4, 8, 16] set in the network do not satisfy any of the three characteristics required by HDC design.
Due to these reasons, the model has adjusted the dilation rates from [2, 4, 8, 16] to [1, 2, 5, 1, 2, 5]. This modification adheres to the three characteristics mentioned earlier, mitigates parameter gridding issues, and enhances the restoration effectiveness.
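As a quick check of this criterion, the following sketch computes $M_i$ by the recursion above for the 3 × 3 kernels used here; it is a verification helper we added, not part of the model code.

```python
def hdc_max_gap(rates):
    """Compute M_i (Wang et al.) for a list of dilation rates; gridding is
    avoided when M_2 <= kernel size (3 for the 3x3 kernels used here)."""
    n = len(rates)
    M = [0] * n
    M[-1] = rates[-1]                               # M_n = r_n
    for i in range(n - 2, -1, -1):
        M[i] = max(M[i + 1] - 2 * rates[i],
                   M[i + 1] - 2 * (M[i + 1] - rates[i]),
                   rates[i])
    return M

print(hdc_max_gap([2, 4, 8, 16]))  # -> [2, 4, 8, 16]: M_2 = 4 > 3, gridding
print(hdc_max_gap([1, 2, 5]))      # one zigzag group: M_2 = 2 <= 3, no gridding
```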

2.2.2. Fine Generation Network

The structure of the fine generation network is analogous to that of the coarse generation network, with a key difference: it incorporates a branch that bifurcates into two pathways right after receiving the input. One pathway mirrors the coarse generation network without substantial modifications. The other pathway integrates an attention mechanism and substitutes conventional convolution layers with a gated network. The structure diagram of the fine generation network is depicted in Figure 5.
(1) Contextual Attention Mechanism Layer
The model incorporates a contextual attention layer [28]. The fundamental concept behind this mechanism is to partition the image into foreground and background segments. The foreground denotes the area slated for restoration, while the background encompasses the remaining intact regions. For every pixel within the foreground, this layer identifies the most suitable corresponding segment in the background. It then utilizes this information to guide the reconstruction process of that pixel, thereby enhancing the inpainting accuracy and coherence.
(2) Gated Convolution
The core idea of gated convolution is to autonomously learn masks for each layer in the network, addressing the challenge of irregular masks in restoration tasks [29,30]. The computation of this convolution is as follows:
$$\mathrm{Gate} = W_g \ast X,$$
$$\mathrm{Feature} = W_f \ast X,$$
$$Y = \varphi(\mathrm{Feature}) \odot \sigma(\mathrm{Gate}),$$
where $X$ is the input, $Y$ is the output, $\mathrm{Gate}$ is the gate mask generated by convolving $X$, and $\mathrm{Feature}$ is the feature map extracted by convolving $X$. $W_g$ and $W_f$ are the convolutional kernel parameters used for generating the gate mask and the feature map, respectively. Finally, $\varphi$ denotes any activation function, $\sigma$ specifically refers to the Sigmoid activation function, and $\odot$ denotes element-wise multiplication.
At each convolutional layer, the input undergoes two convolutions. One convolution produces the regular convolutional feature map, while the other convolution uses the same kernel size, number of channels, stride, padding, etc. However, it applies a Sigmoid activation function to constrain the output of this convolutional operation between 0 and 1, thereby generating a gate mask. Finally, the feature map and the gate mask are multiplied element-wise to obtain the final output.
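A compact Keras sketch of such a gated convolution layer follows; the ELU feature activation is an assumption, since any $\varphi$ is admissible per the definition above.

```python
import tensorflow as tf
from tensorflow.keras import layers

class GatedConv2D(layers.Layer):
    """Gated convolution sketch: two parallel convolutions with identical
    hyperparameters; one yields features, the other a soft gate in (0, 1)."""

    def __init__(self, filters, kernel_size, strides=1, dilation_rate=1,
                 activation=tf.nn.elu, **kwargs):
        super().__init__(**kwargs)
        self.activation = activation
        self.feature_conv = layers.Conv2D(filters, kernel_size, strides=strides,
                                          dilation_rate=dilation_rate, padding="same")
        self.gate_conv = layers.Conv2D(filters, kernel_size, strides=strides,
                                       dilation_rate=dilation_rate, padding="same")

    def call(self, x):
        feature = self.feature_conv(x)         # Feature = W_f * X
        gate = tf.sigmoid(self.gate_conv(x))   # sigma(Gate), Gate = W_g * X
        return self.activation(feature) * gate # Y = phi(Feature) (.) sigma(Gate)
```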

2.2.3. Discriminator Network

The discriminator network adopts WGAN-div (Wasserstein Divergence for GANs) [31], and its optimization objective is as follows:
$$\min_G \max_D \; \mathbb{E}_{G(z) \sim P_g}\left[D(G(z))\right] - \mathbb{E}_{x \sim P_r}\left[D(x)\right] - k\, \mathbb{E}_{\hat{x} \sim P_u}\left[\left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert^p\right],$$
where $z$ is random noise, $x$ is the real data, and $\hat{x}$ is sampled as a linear combination of real and fake data points. $P_g$, $P_r$, and $P_u$ are Radon probability measures.
According to this formula, the loss function of the generator network can be derived as follows:
$$\mathrm{Loss}_g = D(G(z)),$$
The loss function of the discriminator network is as follows:
$$\mathrm{Loss}_d = D(x_{gt}) - D(G(z)) - k \left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert^p,$$
where $x_{gt}$ denotes the ground-truth (real) image.
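The following TensorFlow sketch mirrors these two losses; the defaults k = 2 and p = 6 are the values recommended in the WGAN-div paper [31], while the batching and averaging details are our assumptions, not the paper's training code.

```python
import tensorflow as tf

def wgan_div_losses(discriminator, real, fake, k=2.0, p=6.0):
    """Discriminator and generator losses following the formulas above
    (illustrative sketch)."""
    d_real = discriminator(real)
    d_fake = discriminator(fake)

    # x_hat: random linear combination of real and fake samples
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1, 1])
    x_hat = eps * real + (1.0 - eps) * fake

    with tf.GradientTape() as tape:
        tape.watch(x_hat)
        d_hat = discriminator(x_hat)
    grad = tape.gradient(d_hat, x_hat)
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grad), axis=[1, 2, 3]) + 1e-12)

    loss_d = tf.reduce_mean(d_real) - tf.reduce_mean(d_fake) \
             - k * tf.reduce_mean(grad_norm ** p)
    loss_g = tf.reduce_mean(d_fake)
    return loss_d, loss_g
```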

2.3. Histogram Equalization Algorithm

Histogram equalization, also referred to as global histogram equalization, stands as a widely adopted technique in image processing for enhancing image contrast. Due to its straightforward principles, ease of implementation, and real-time performance, it has emerged as one of the most mature and popular methods for improving image contrast [32]. Its utility extends to enhancing contrast in well logging images, where it is commonly applied.
Histogram equalization operates on the basis of an image’s histogram. The histogram of an image depicts the distribution of its grayscale values, revealing characteristics such as brightness, contrast, and dynamic range. For example, darker images generally exhibit histograms concentrated towards the left, indicating lower grayscale values, while brighter images tend to have histograms skewed towards the right, representing higher grayscale values. Low-contrast images typically feature histograms with a narrow dynamic range centered around the middle, whereas high-contrast images span almost the entire range of grayscale levels.
Figure 6 illustrates four types of electrical well logging images and their corresponding histograms. In (a), from top to bottom, the images are a dark image, a bright image, a low-contrast image, and a high-contrast image, respectively. In (b), the histograms corresponding to these four types of images are displayed. As observed, dark images have histograms concentrated towards the left, indicating lower grayscale values, while bright images tend towards the right, indicating higher grayscale values. Low-contrast images have histograms centered within a narrow range, whereas high-contrast images cover nearly the entire grayscale range.
Therefore, altering the shape characteristics of the histogram can effectively change the image contrast. Histogram equalization achieves this by adjusting the shape characteristics of the image histogram to enhance its contrast.
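A minimal NumPy sketch of global histogram equalization for an 8-bit grayscale image is given below; libraries such as OpenCV provide an equivalent built-in (cv2.equalizeHist).

```python
import numpy as np

def histogram_equalization(gray):
    """Global histogram equalization for an 8-bit grayscale image:
    remap each gray level through the normalized cumulative histogram."""
    hist, _ = np.histogram(gray.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)            # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255.0 \
                 / (cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)  # old level -> new level
    return lut[gray]
```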

3. Implementation Route

The implementation of restoration and enhancement for acoustic-electric well logging images specifically involves six modules: the resistivity-to-image conversion module, the massive sample set training module, the missing image restoration module, the secondary repair of abnormal features module, the image enhancement module, and the image-to-resistivity conversion module. The resistivity-to-image conversion module and the image-to-resistivity conversion module are collectively referred to as the resistivity and image mutual conversion auxiliary module. The schematic flowchart is shown in Figure 7.

3.1. Resistivity and Image Conversion Module

The resistivity and image mutual conversion auxiliary module is further divided into two parts: the resistivity-to-image conversion module and the image-to-resistivity conversion module. It utilizes a Schlumberger 16-color scale to convert the resistivity numerical data to images and vice versa. The overall process is outlined as follows (Figure 8):
The purpose of the resistivity-to-image conversion module is to transform resistivity numerical data into a missing image and mask. The method involves using linear interpolation to convert the Schlumberger 16-color scale (SLB16) to a 256-color scale [33]. The normalized resistivity values range from 0 to 255, which are mapped correspondingly to the color scale to generate an RGB (Red, Green, Blue) image.
Conversely, the image-to-resistivity conversion module aims to convert the restored image back into resistivity numerical data. It employs a method that begins by converting the RGB values of each pixel from the 256-color scale into HSV (hue, saturation, value) values. Subsequently, the module calculates the Euclidean distance between these HSV values and each color in the scale. Finally, the module assigns the color with the smallest distance as the representation for that pixel, thereby ensuring preservation of detail even for pixels not originally part of the 256-color scale.
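The two conversion directions can be sketched as follows. The actual SLB16 RGB values are not reproduced here, so scale16 is a placeholder the caller must supply; the HSV nearest-color step follows the description above and is illustrative rather than the module's exact code.

```python
import numpy as np
import matplotlib.colors as mcolors

def expand_scale(scale16):
    """Linearly interpolate a 16-color scale (16x3 RGB floats in [0, 1])
    to a 256-color scale, as in the resistivity-to-image module."""
    x16 = np.linspace(0.0, 1.0, len(scale16))
    x256 = np.linspace(0.0, 1.0, 256)
    return np.stack([np.interp(x256, x16, scale16[:, c]) for c in range(3)], axis=1)

def resistivity_to_rgb(norm_resistivity, scale256):
    """Map normalized resistivity values (0..255) to RGB via the color scale."""
    return scale256[np.clip(norm_resistivity, 0, 255).astype(int)]

def rgb_to_scale_index(rgb_pixel, scale256):
    """Image-to-resistivity direction: nearest color scale entry by
    Euclidean distance in HSV space."""
    hsv_pixel = mcolors.rgb_to_hsv(rgb_pixel)
    hsv_scale = mcolors.rgb_to_hsv(scale256)
    return int(np.argmin(np.linalg.norm(hsv_scale - hsv_pixel, axis=1)))
```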

3.2. Massive Artificial Repair Sample Set Training Module

Fully edited well logging images are randomly sampled for intensive training aimed at filling in missing areas and restoring overall quality. This process culminates in a well-trained set of model files. The number of training iterations is pivotal, as it directly influences restoration quality, and thus serves as a critical parameter in model training. Experimental analysis was carried out using the loss function to assess the impact of varying numbers of iterations, as illustrated in the following figures (Figure 9 and Figure 10).
Based on the restoration results observed across various iteration numbers and the corresponding loss function curves, it is evident that the model achieves its lowest loss and stabilizes between approximately 800,000 and 1,000,000 iterations, demonstrating the best restoration outcomes. Following multiple experiments, the iteration number for the restoration model was fixed at 1,000,000.

3.3. Missing Image Restoration Module

This module leverages effective pixel regions from the image to be restored and learns from a sample training set. It identifies and repairs the missing pixel regions within the image based on this learned information.

3.4. Exceptional Feature Secondary Repair Module

This module first conducts local calibration on the anomalous features found in the previously restored image of the missing regions. These features are then used as masks, and another round of image restoration is performed to eliminate them. The delineation of local anomalous feature areas is performed manually.

3.5. Image Enhancement Module

The repaired and filled well log images are input into the image enhancement module for processing. This algorithm further enhances the restored well log images, emphasizing subtle features that may not be easily discernible. The output is an enhanced image that highlights these features more prominently.

4. Results and Discussion

This study concentrates on restoring partial well log images from 46 wells, encompassing 21 wells with original resistivity data and 25 wells with only image data. The computational resources required for model operation are as follows: GPU = 1 × GTX 1080 Ti, CPU = i5-12600KF, and memory = 32 GB. The software environment for model operation is as follows: TensorFlow-GPU = 1.7.0, Python = 3.7. The settings for specific parameters are listed in Table 1.
The proposed model effectively repairs images containing four types of features: images with missing blank zones, images with vertical stripes, images with snakeskin patterns, and images with fuzzy features. Among these, images with missing blank zones are easier to identify and repair initially. The remaining three types require a two-step approach: first restoring the missing blank zones and then applying targeted enhancement to address the specific abnormal features.

4.1. Repair of Missing Image Bands

In this test, the model, trained through 1,000,000 iterations on a dataset comprising over 32,000 images, was used for image restoration. The results of the restoration are as follows (Figure 11):
To evaluate the restoration effectiveness, a comparison was made with the original GANs restoration model [34]. It was observed that the restored images from the GANs model exhibited discontinuities in connected regions. In contrast, the model proposed in this paper achieved more coherent restoration in these areas. This enhancement aids in better recognition of geological features during subsequent well logging tasks, as depicted in Figure 12.
Further quantitative analysis of the experimental results was carried out by calculating the SIFID (single image Fréchet inception distance), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure) for 100 images and the corresponding images generated by the GANs model and the Deep-Fill model. The average value of each metric was then determined, as shown in Table 2. The results indicate that the model proposed in this paper achieved better results than those generated by the GANs model.
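PSNR and SSIM can be computed with scikit-image as sketched below (the argument names follow recent scikit-image versions); SIFID additionally requires a pretrained Inception network and is omitted from this sketch.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_restoration(reference, restored):
    """PSNR and SSIM between a reference image and its restoration,
    assuming 8-bit RGB arrays of identical shape."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```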

4.2. Enhancement of Vertical Stripe Image Repair

The enhancement and restoration effect on the fine vertical stripes in the images is notably effective, nearly completely eliminating the abnormal features of the vertical stripes, as illustrated in Figure 13.

4.3. Enhanced Restoration of Snake-Skin Pattern Images

Due to the extensive area affected by the snake-skin pattern anomaly, especially when it blends with other normal geological features, it significantly impacts the identification of subsequent geological features. This paper focuses on preserving normal geological features as much as possible while enhancing the restoration of areas affected by the snake-skin pattern. As depicted in Figure 14, the restoration effect on the snake-skin pattern is effective, and the normal geological features remain unchanged.

4.4. Enhanced Restoration of Blurry Feature Images

The restoration and enhancement of images containing fuzzy features are comparable to those affected by the snake-skin pattern. When the fuzzy area is limited, the restoration effect is notably effective, as depicted in Figure 15.
Finally, it should be mentioned that alternative deep learning architectures, advanced generative models, and greater user interaction and guidance may further enhance feature extraction and image inpainting quality; these will be the focus of our future work.

5. Conclusions

In high-temperature and high-pressure geological conditions, issues like bad electrical contacts and faulty electrodes often lead to abnormal patterns such as vertical stripes and snake-skin patterns on acoustic-electric logging (sonic-electric log) images, posing significant challenges for logging interpretation. This study utilized a Deep-Fill image restoration algorithm based on GANs, complemented by image enhancement through histogram equalization, to effectively repair and enhance problematic sonic-electric logging images, achieving satisfactory results.
  • Sample Dataset: The study accumulated over 32,000 manually refined and corrected samples, totaling approximately 11 GB of data. Both theoretical insights and practical applications have demonstrated that higher-quality and larger datasets contribute to better restoration results, aligning more closely with real-world conditions.
  • Restoration and Enhancement Effects: The overall restoration quality of the model in this study surpasses that of the original GAN-based restoration model, particularly in terms of texture coherence at junctions.
  • Computational Efficiency of Restoration and Enhancement: Depending on the GPU model used, computational speeds between 200 and 500 m/min were achieved, demonstrating efficient processing capabilities.
This approach not only enhances the quality of repaired images but also improves the interpretability of geological features in electrical imaging logging, thereby addressing critical challenges in logging interpretation under extreme environmental conditions.

6. Suggestions for Future Work

  • Alternative Deep Learning Architectures: While the paper utilized a contextual attention module, exploring more sophisticated attention mechanisms, such as those based on transformer models, may enhance feature extraction and image inpainting quality.
  • Advanced Generative Models: Investigating novel variants of GANs that can generate higher quality images and more effectively address the mode collapse issue.
  • User Interaction and Guidance: Interactive inpainting, i.e., developing more interactive systems in which users provide real-time feedback to the model, allowing iterative refinement of the inpainting results; and multimodal guidance, i.e., integrating forms of guidance beyond sketches, such as color palettes, texture maps, or natural language descriptions of the desired content.

Author Contributions

Conceptualization, J.W. and Z.H.; methodology, Z.Z.; formal analysis, Z.Z. and M.W.; validation and writing, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (No. 52104028) and the Taishan Scholar Foundation of Shandong Province, China (No. tsqn202312208).

Data Availability Statement

Data is unavailable due to privacy or ethical restrictions.

Conflicts of Interest

Junhua Wang, Zhenxue Hou, Zhiqiang Zhang, Meng Wang were employed by the company China Oilfield Services Limited. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Nomenclature

CGANs: Conditional generative adversarial networks
D: Discriminator
DCGAN: Deep convolutional generative adversarial network
D(G(z)): Discriminator output for a generated sample, i.e., when the input comes from the generator's distribution (dimensionless)
E: Expectation (dimensionless)
Feature: Extracted feature map (units depend on the specific application)
FMM: Fast marching method
G: Generator
GAN: Generative adversarial network
Gate: The learned gate mask for each layer (dimensionless)
G(z): Generated sample from the generator given noise z (data points, units depend on data)
HDC: Hybrid dilated convolution
HSV: Hue, saturation, value
Loss_d: The loss function of the discriminator, including the gradient penalty term to ensure smooth gradients (dimensionless)
Loss_g: The loss function of the generator, measuring its performance in the adversarial process (dimensionless)
M_i: The maximum distance between two nonzero values at layer i in the HDC structure (dimensionless)
n: The total number of layers in the HDC structure (dimensionless)
PConv: Partial convolution image inpainting algorithm
P_g: The distribution of data points generated by the generator (dimensionless)
P_r: The distribution of real data points (dimensionless)
P_u: A very loose target distribution, often taken to be the uniform distribution, used as a reference (dimensionless)
ReLU: Rectified linear unit
RGB: Red, green, blue
r_i: The dilation rate of layer i (dimensionless)
RNNs: Recurrent neural networks
SN-PatchGAN: A patch-based generative adversarial network loss with spectral normalization
SLB16: Schlumberger 16-color scale
SPP: Spatial pyramid pooling
UNet: A convolutional neural network architecture with skip connections for precise image segmentation and inpainting
W_f: Convolutional kernel parameters for extracting features (dimensionless)
W_g: Convolutional kernel parameters for generating the gate mask (dimensionless)
WGAN-div: Wasserstein generative adversarial network with divergence regularization
x: Real sample from the data distribution (data points, units depend on data)
X: Input data (units depend on the specific application, e.g., pixel values for images)
Y: Output of the gated convolution (units depend on the specific application)
φ: Any activation function (dimensionless)
σ: Sigmoid activation function (dimensionless)
⊙: Element-wise multiplication (Hadamard product)

References

  1. Hou, Z.; Yu, X.; Li, D.; Wang, W.W.; Li, B.; Cheng, J.J.; Qian, Y.P. Application of new processing technology of electrical imaging logging in reservoir evaluation. Prog. Geophys. 2020, 35, 573–578. [Google Scholar]
  2. Wood, D.A. Expanding role of borehole image logs in reservoir fracture and heterogeneity characterization: A review. Adv. Geo-Energy Res. 2024, 12, 194. [Google Scholar] [CrossRef]
  3. Hang, C.; Xionghui, Z.; Chang, Y. Application of Acoustic and Electrical Imaging Logging Technology in Reservoir Fracture Identification. Audio Eng. 2020, 44, 83–84. [Google Scholar]
  4. Ying, Z.; Baozhi, P.; Changhai, Y.; Peng, W.; Chuanping, L.; Hongjuan, L. Application of imaging logging maps in lithologic identification of volcanics. Geophys. Prospect. Pet. 2007, 46, 288. [Google Scholar]
  5. Xu, F.H.; Wang, Z.W.; Liu, J.H.; Ou, W.M. Application of de-noising method on electrical imaging logging data based on joint EMD and wavelet threshold. J. China Univ. Pet. (Ed. Nat. Sci.) 2020, 44, 56–65. [Google Scholar]
  6. Wang, Z.; Gao, N.; Zeng, R.; Du, X.; Du, X.R.; Chen, S.Y. A gaps filling method for electrical logging images based on a deep learning model. Well Logging Technol. 2019, 43, 578–582. [Google Scholar]
  7. Gao, J.; Jiang, L.; Liu, Y.; Chen, Y. Review and analysis on the development and applications of electrical imaging logging in oil-based mud. J. Appl. Geophys. 2019, 171, 103872. [Google Scholar] [CrossRef]
  8. Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424. [Google Scholar]
  9. Bugeau, A.; Bertalmio, M. Combining Texture Synthesis and Diffusion for Image Inpainting. In Proceedings of the VISAPP 2009-Proceedings of the Fourth International Conference on Computer Vision Theory and Applications, Lisboa, Portugal, 5–8 February 2009; pp. 26–33. [Google Scholar]
  10. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544. [Google Scholar]
  11. Hasan, S.M.K.; Linte, C.A. A modified U-Net convolutional network featuring a nearest-neighbor re-sampling-based elastic-transformation for brain tissue characterization and segmentation. In Proceedings of the 2018 IEEE Western New York Image and Signal Processing Workshop (WNYISPW), Rochester, NY, USA, 5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
  12. Song, Y.; Yang, C.; Lin, Z.; Liu, X.; Huang, Q.; Li, H.; Kuo, C.C. Contextual-based image inpainting: Infer, match, and translate. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  13. Qiang, Z.P.; He, L.B.; Chen, X.; Xu, D. Survey on deep learning image inpainting methods. J. Image Graph. 2019, 24, 447–463. [Google Scholar]
  14. Cao, Y.J.; Jia, L.L.; Chen, Y.X.; Lin, N.; Li, X.X. Review of computer vision based on generative adversarial networks. J. Image Graph. 2018, 23, 1433–1449. [Google Scholar]
  15. Jiang, Y.; Xu, J.; Yang, B.; Xu, J.; Zhu, J. Image inpainting based on generative adversarial networks. IEEE Access 2020, 8, 22884–22892. [Google Scholar] [CrossRef]
  16. Demir, U.; Unal, G. Patch-based image inpainting with generative adversarial networks. arXiv 2018, arXiv:1803.07422. [Google Scholar]
  17. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  18. Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T.S. Free-form image inpainting with gated convolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4471–4480. [Google Scholar]
  19. Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T.S. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5505–5514. [Google Scholar]
  20. Yin, L.; Chang, C. An effective exemplar-based image inpainting method. In Proceedings of the 2012 IEEE 14th International Conference on Communication Technology, Chengdu, China, 9–11 November 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 739–743. [Google Scholar]
  21. Barnes, C.; Shechtman, E.; Finkelstein, A.; Goldman, D.B. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 2009, 28, 24. [Google Scholar] [CrossRef]
  22. Wang, H.; Fang, D.; Wang, C.; Jin, J. An image hole inpainting algorithm with improved FMM for mobile devices. Int. J. Comput. Sci. Math. 2019, 10, 236–247. [Google Scholar]
  23. Qin, Z.; Zeng, Q.; Zong, Y.; Xu, F. Image inpainting based on deep learning: A review. Displays 2021, 69, 102028. [Google Scholar] [CrossRef]
  24. Cui, M.; Jiang, H.; Li, C. Progressive-Augmented-Based DeepFill for High-Resolution Image Inpainting. Information 2023, 14, 512. [Google Scholar] [CrossRef]
  25. Din, N.U.; Javed, K.; Bae, S.; Yi, J. A novel GAN-based network for unmasking of masked face. IEEE Access 2020, 8, 44276–44287. [Google Scholar] [CrossRef]
  26. Yang, T.; Ren, P.; Xie, X.; Zhang, L. Gan prior embedded network for blind face restoration in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 672–681. [Google Scholar]
  27. Wang, P.; Chen, P.; Yuan, Y.; Liu, D.; Huang, Z.; Hou, X.; Cottrell, G. Understanding convolution for semantic segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1451–1460. [Google Scholar]
  28. Wang, T.; Anwer, R.M.; Khan, M.H.; Khan, F.S.; Pang, Y.; Shao, L.; Laaksonen, J. Deep contextual attention for human-object interaction detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5694–5702. [Google Scholar]
  29. Ma, X.; Zhou, X.; Huang, H.; Chai, Z.; Wei, X.; He, R. Free-form image inpainting via contrastive attention network. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 9242–9249. [Google Scholar]
  30. Navasardyan, S.; Ohanyan, M. Image inpainting with onion convolutions. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November 2020. [Google Scholar]
  31. Wu, J.; Huang, Z.; Thoma, J.; Acharya, D.; Van Gool, L. Wasserstein divergence for gans. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 653–668. [Google Scholar]
  32. Bagade, S.S.; Shandilya, V.K. Use of histogram equalization in image processing for image enhancement. Int. J. Softw. Eng. Res. Pract. 2011, 1, 6–10. [Google Scholar]
  33. Mathis, B.; Hailer, D.; Ganem, H.; Standen, E. Orientation and calibration of core and borehole image data. In Proceedings of the SPWLA Annual Logging Symposium, SPWLA, Paris, France, 26–29 June 1995. Paper Number: SPWLA-1995-JJJ. [Google Scholar]
  34. Schlumberger Information Solutions. Geoframe Fundamentals, Training and Exercise Guide, Version 4.0.1; 2004. Available online: https://manualzz.com/doc/o/sox3c/geoframe-fundamentals-geoframe-product-groups (accessed on 22 July 2024).
Figure 1. Schematic diagram of GAN process.
Figure 2. Schematic diagram of the Deep-Fill algorithm.
Figure 3. Schematic diagram of the coarse generation network structure.
Figure 4. Loss function curve.
Figure 5. Schematic diagram of the fine-grained generation network.
Figure 6. Four types of grayscale well log images, histograms, and color-rendered images. (a) Original image. (b) Histogram. (c) Color-rendered image.
Figure 7. Acoustic-electric imaging log restoration technology roadmap.
Figure 8. Overall process flowchart for resistivity-to-image conversion.
Figure 9. Quantitative loss function curve of training model.
Figure 10. Verification figure of training model repair results with random mask iteration count.
Figure 11. Comparison chart of restoration effects on blank strip images.
Figure 12. Comparison chart of restoration effects between the proposed model and the original GANs restoration model.
Figure 13. Vertical stripe anomaly repair effectiveness chart.
Figure 14. Snake-skin pattern anomaly feature restoration effect diagram.
Figure 15. Enhanced restoration results of blurry feature anomalies.
Table 1. Specific parameters used for training the Deep-Fill model.

Hyperparameter     Value
Base GAN model     SNGAN
Train spe          4000
Max iters          100,000,000
Img shapes         [256, 256, 3]
Batch size         16
Table 2. Comparison results of image filling with different models.

Model        SIFID/%    PSNR/dB    SSIM/%
GANs         3.85       26.89      86.81
Deep-Fill    2.21       28.54      89.79

