Article

An Improved GAN-Based Image Restoration Method for Imaging Logging Images

School of Computer and Information Technology, Northeast Petroleum University, Daqing 163318, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9249; https://doi.org/10.3390/app13169249
Submission received: 22 May 2023 / Revised: 8 August 2023 / Accepted: 9 August 2023 / Published: 15 August 2023

Abstract

An improved GAN-based imaging logging image restoration method is presented in this paper to solve the problem of partially missing micro-resistivity imaging logging images. The method uses an FCN as the generative network infrastructure and adds a depth-separable convolutional residual block to learn and retain more effective pixel and semantic information; an Inception module is added to increase the multi-scale perceptual field of the network and reduce the number of parameters; and a multi-scale feature extraction module and a channel attention residual block, which combines the channel attention mechanism with a residual block, are added to achieve multi-scale feature extraction. The global discriminative network and the local discriminative network are designed to play off against the generative network, gradually improving the coherence of content and semantic structure between the restored parts and the whole image. According to the experimental results, the average structural similarity measure of the five sets of imaging logging images with different sizes of missing regions in the test set is 0.903, an improvement of about 0.3 over other similar methods. It is shown that the method in this paper can be used for the restoration of micro-resistivity imaging log images, with good improvement in semantic structural coherence and texture details, thus providing a new deep learning method to ensure the smooth advancement of the subsequent interpretation of micro-resistivity imaging log images.

1. Introduction

In the field of oil and gas exploration and development, complex reservoirs represented by carbonates, volcanic rocks, and metamorphic rocks play an increasingly important role. Their stratigraphic properties are mainly characterised by complex lithology, complex reservoir space, and strong heterogeneity. These properties lead to an ambiguous display of the reservoir interior, which makes it difficult to determine reservoir fluid properties and hard to ensure the accurate calculation of reservoir parameters and the validity of reservoir evaluation. As a result, conventional logging is no longer able to meet the needs of evaluating such complex reservoirs. Imaging logging is a method of imaging the physical parameters of the borehole wall and surrounding objects based on observations of geophysical fields in boreholes. It is rooted in geophysics and mainly includes wellbore imaging, borehole-periphery imaging, and cross-well imaging. Wellbore imaging includes borehole acoustic imaging and formation micro-resistivity scanning imaging. Borehole-periphery imaging mainly involves resistivity imaging, using azimuthal lateral logging and array induction logging. Cross-well imaging includes acoustic, electromagnetic, and resistivity imaging. Imaging logging uses a dedicated logging instrument to measure the borehole and generates an image from the data to restore the true state of the formation, which can clearly characterise formation lithology, fractures, sedimentary structure, tectonic dip, and in situ stress. Compared with conventional logging, imaging logging has the advantages of directional measurement, intuitive images, and high resolution, and can reflect the structure, lithology, physical properties, oil content, and electrical characteristics of the formation. Therefore, imaging logging is of great importance in petroleum exploration and development and has become an indispensable technical tool for logging and evaluating complex inhomogeneous reservoirs [1]. In practice, however, cable elasticity, drag on the downhole instrument, and the complexity of the borehole environment prevent the instrument from measuring at a uniform speed downhole, so the measurement results are distorted or the depth recorded at the wellhead deviates significantly from the true depth of the instrument, resulting in distortions in the electrode curves measured by the pads [2]. On the other hand, imaging log images are often distorted or partially missing because a collapsed well wall prevents complete data from being recorded or because individual pads fail to collect data, which makes the subsequent interpretation work quite difficult. Therefore, in order to provide complete geological information, the imaging log images must be restored.
Image restoration is a way of repairing distorted or missing parts of an image based on pixel information from its intact parts, for which multiple potential solutions may exist, i.e., the missing parts can be filled in using any reasonable hypothesis consistent with the surrounding known regions. Moreover, the missing parts can have complex shapes, further increasing the difficulty of restoration. The traditional exemplar-based method PatchMatch [3], which fills the gaps by iteratively searching for the most appropriate patch, is still limited in capturing high-level semantics and may not be able to generate complex and non-repetitive structures. Yang et al. [4] started from image texture and image content, fusing the two with a multi-scale feature extraction module for joint optimisation to improve the restoration of the boundary area, while during model training the features extracted by the deep network are automatically matched and adapted to generate high-frequency details. However, the method was not effective for stratigraphic structures with wide gaps, and for complex structures, such as sand-and-gravel strata and carbonate formations, the filled area was blurred. Zhang et al. [5] used a multi-point geostatistics-based Filtersim algorithm to fill the gaps, but the continuity of the filled layers was not strong enough to achieve a good filling effect. Radford et al. [6] turned to convolutional neural networks: supervised learning with CNNs has been widely used in computer vision applications, whereas unsupervised learning based on CNNs has received less attention [7]. They therefore proposed deep convolutional generative adversarial networks, a class of deep learning models with certain structural constraints that provide a powerful algorithmic framework for unsupervised learning, and the model was tested with good results from different perspectives. Xiong et al. [8] proposed a foreground-aware image restoration model that explicitly separates structural inference and content filling; although the model predicts reasonable object contours, its restoration of texture details is slightly inadequate. However, most of the existing CNN- and U-Net-based methods usually employ standard convolution, which makes it difficult to treat valid pixels and missing parts differently. As a result, they are limited in dealing with large missing areas and are more likely to produce colour discrepancies and blurred restoration results. As a remedy, several post-processing techniques have been introduced, but they are still not sufficient to solve the problem. The authors of [9] combined a generative adversarial network with a twin neural network used as the discriminator in the GAN, applying the mean square error in the generative network and the contrastive loss in the twin network, but the approach is slightly insufficient in restoring image details.
In order to fill the missing parts of imaging logging images more effectively in practical applications, we propose an image restoration method based on generative adversarial networks (GANs). In the sample preprocessing stage, we use image annotation information to generate masked images, and through continuous training the generative and discriminative networks play off each other to better learn the image features and reach an optimised model state. With end-to-end training, our method can effectively cope with the lack of contextual connectivity and propagation. Taking the imaging logging data of a horizontal well operation area in the Daqing Oilfield as an example, we successfully use the improved GAN-based method to repair the imaging logging images of known horizontal wells and fill in the originally missing regions, demonstrating the method's potential in practical applications.

2. Preparatory Knowledge

2.1. Generating Adversarial Networks

A generative adversarial network (GAN) automatically captures the data distribution of a real sample set, i.e., it learns the mapping between the input and output images. It consists of a generator and a discriminator, which are trained according to the rules of a minimax two-player game. The training process can be represented by the objective function:
$\max_D \min_G V(D, G) = \mathbb{E}_{x \sim f_d(x)}[\log D(x)] + \mathbb{E}_{y \sim f_y(y)}[\log(1 - D(G(y)))]$
where G and D denote the generator and discriminator, respectively. fd(x) denotes the distribution of real data, fy(y) denotes the distribution of data generated by the generator, D(x) denotes the probability that x is real data and not data generated by the generator, and D(G(y)) denotes the probability value when the samples generated by the generator are used as input to the discriminator. From the objective function, it can be seen that the optimisation objective in training the discriminator is to maximise D(x) close to 1 and minimise D(G(y)) close to 0. The optimisation objective in training the generator is to maximise D(G(y)) close to 1. This means that the two networks play off each other to optimise their own performance so that the discriminator cannot tell whether the output image of the generator is true or false.
Based on an in-depth study of the GAN concept and a reproduction of the original GAN model, the overall flow of the GAN algorithm is summarised in Algorithm 1.
Algorithm 1: original GAN model algorithm flow
In each training iteration: initialise the generator parameter to θg and the discriminator parameter to θd.
1. for 1 step do: train D.
2.    sample m images from the dataset, i.e., x1, x2, …, xm, where m is the batch size.
3.    sample m noise vectors from some distribution (e.g., uniform, Gaussian), i.e., z1, z2, …, zm.
4.    get the generated data $\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_m$, where $\tilde{x}_i = G(z_i)$.
5.    update the discriminator parameter θd to maximise:
$\tilde{V} = \frac{1}{m}\sum_{i=1}^{m}\left(\log D(x_i) + \log\left(1 - D(\tilde{x}_i)\right)\right)$
$\theta_d \leftarrow \theta_d + \eta \nabla \tilde{V}(\theta_d)$
6. end.
7. for 2 steps do: train G.
8. sample m noise vectors from some distribution (e.g., uniform, Gaussian), i.e., z1, z2, …, zm.
9. update the generator parameter θg to maximise:
$\tilde{V} = \frac{1}{m}\sum_{i=1}^{m}\log\left(D(G(z_i))\right)$
$\theta_g \leftarrow \theta_g + \eta \nabla \tilde{V}(\theta_g)$
10. end.
The basic structure of a GAN can be better understood with the help of the above algorithmic flow, as shown in Figure 1. The input is a random noise variable y~fy(y) that follows a uniform or other random distribution. The generator G attempts to capture the true data distribution x~fd(x) and disguise the received noise variable as a sample G(y) drawn from the true data distribution. The discriminator receives the real data and the generated data and outputs a probability value ranging from 0 to 1, with 0 indicating that the current image is a fake sample and 1 indicating that it is a real sample; this value represents the probability that the received data is real [10].
During the training process of the GAN model, the generator G aims to generate data sufficient to deceive the discriminator D, while D aims to accurately distinguish between the data generated by G and the real data. The two play off each other to achieve their respective goals, and eventually converge to Nash equilibrium in their co-evolution [11] when the generator G is able to generate data close to the real data.
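To make the adversarial update order concrete, the following minimal PyTorch sketch implements one iteration of Algorithm 1. The generator and discriminator here are small placeholder networks for flat 784-dimensional samples, and the noise dimension, learning rates, and batch size are illustrative assumptions rather than settings used later in this paper.

import torch
import torch.nn as nn

# Placeholder networks; the actual architectures used in this paper are described in Section 3.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_d = torch.optim.SGD(D.parameters(), lr=1e-3)
opt_g = torch.optim.SGD(G.parameters(), lr=1e-3)
m = 64  # batch size

def gan_iteration(real_batch):          # real_batch: (m, 784) tensor of real samples
    # Step 1: update the discriminator parameters by gradient ascent on V~.
    z = torch.randn(m, 100)             # sample m noise vectors
    fake = G(z).detach()                # generated samples x~_i = G(z_i), detached from G
    v_d = (torch.log(D(real_batch)) + torch.log(1 - D(fake))).mean()
    opt_d.zero_grad()
    (-v_d).backward()                   # ascend V~ by descending -V~
    opt_d.step()

    # Step 2: update the generator parameters to maximise log D(G(z)).
    z = torch.randn(m, 100)
    v_g = torch.log(D(G(z))).mean()
    opt_g.zero_grad()
    (-v_g).backward()
    opt_g.step()
    return v_d.item(), v_g.item()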

2.2. Attentional Mechanisms

Attention mechanisms (AMs) have arguably become one of the most important concepts in the field of deep learning and were originally inspired by the human biological visual system. When people visually recognise an object, they tend not to scan the whole scene from beginning to end, but to observe and attend to certain areas as needed. When a particular part of an image repeatedly contains what people need to see, they learn to focus their gaze on that part when a similar image reappears, concentrating on what is useful; this is how people quickly find meaningful information in large amounts of data with limited processing resources. Attention mechanisms have greatly improved the quality and accuracy of cognitive information processing, and with further in-depth research on neural networks they have been applied to image processing, speech processing, and natural language processing [12].
Currently, the most commonly used attention mechanism structures in image restoration and image segmentation tasks are the channel attention mechanism (CA) and spatial attention (SA). Since this paper uses the channel attention mechanism, only the channel attention mechanism will be described.
The CA aims to model the importance of each channel in the network extraction feature map according to the interdependencies between the acquired channels, so that different features can be suppressed or enhanced for different tasks, and its basic structure is shown in Figure 2.
For the input feature map, CA obtains a global receptive field by compressing the spatial information of each channel: a squeeze operation performs global average pooling over each channel, and the process can be represented by Equation (7).
$z_i = F_{sq}(v_i) = \frac{1}{H \times W}\sum_{x=1}^{H}\sum_{y=1}^{W} v_i(x, y)$
In the above equation, H denotes the feature map height, W denotes the feature map width, and zi is the channel descriptor generated by shrinking the spatial dimensions. The excitation is then performed by the non-linear activation function Fex(z, W), and the process can be represented by Equation (8).
$t = F_{ex}(z, W) = s\big(f(z, W)\big) = s\big(W_1 R(W_2 z)\big)$
In the above equation, R denotes the ReLU function and s denotes the Sigmoid function; W2 is the weight matrix that reduces the dimensionality to C/16, and W1 is the weight matrix that restores it to C. The weighted features, obtained by capturing the interrelationships between the global channels through a fully connected network, can be expressed by Equation (9):
$\tilde{x}_c = F_{scale}(v_c, t_c) = v_c \cdot t_c$
where $\tilde{X} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_C]$, and $F_{scale}(v_c, t_c)$ denotes the channel-wise multiplication between the feature map $v_c$ and the scalar $t_c$.
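The channel attention operation of Equations (7)–(9) can be sketched in a few lines of PyTorch, assuming the reduction ratio of 16 mentioned above; the class and variable names are illustrative.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention with reduction ratio r = 16.
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # W2: reduces C to C/16
        self.fc2 = nn.Linear(channels // r, channels)   # W1: restores C/16 to C
    def forward(self, v):
        b, c, h, w = v.shape
        z = v.mean(dim=(2, 3))                                 # squeeze: global average pooling, Eq. (7)
        t = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))  # excitation, Eq. (8)
        return v * t.view(b, c, 1, 1)                          # channel-wise rescaling, Eq. (9)

# usage: ChannelAttention(256)(torch.randn(4, 256, 16, 16)) has the same shape as its input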

2.3. Image Evaluation Indicators

Often we need to evaluate the results of image restoration as a way of verifying the effectiveness of the model. There are generally two types of evaluation, subjective and objective. The subjective evaluation is based on human vision, i.e., the observer scores the image according to their perception of the image, but this method relies heavily on the observer’s subjective emotions, knowledge, and understanding of the image being evaluated, and is therefore not universal [13], so an objective evaluation of the restored image is meaningful and necessary. Objective evaluations such as peak signal-to-noise ratio (PSNR) [14] and mean structural similarity index measurement (MSSIM) are useful.
The PSNR is defined by calculating the mean square error (MSE) between the restored image and the original image, and the value of the PSNR decreases as the MSE of the image increases. The PSNR is calculated as follows.
$MSE = \frac{1}{H \times W}\sum_{i=0}^{H-1}\sum_{j=0}^{W-1}\left\| I(i, j) - O(i, j) \right\|^2$
$PSNR = 10 \times \log_{10}\frac{(2^n - 1)^2}{MSE}$
In Equation (10), H × W denotes a monochrome image with height H and width W, I denotes the original image, and O denotes the restored image. In Equation (11), n denotes the number of bits per image pixel, generally taken as 8; since the restored imaging logging image is a colour image, the MSE of the three channels must be averaged when calculating the MSE.
The structural similarity metric (SSIM) [15] is a numerical measure of the structural similarity between the original image and the restored image, in which a Gaussian template is used to weight a pixel region, from which the regional pixel mean, variance, and covariance are calculated. SSIM compares the similarity of two images in terms of luminance l, contrast c, and structure s. The relevant calculation formulae are as follows.
$l(I, O) = \frac{2\mu_I \mu_O + C_1}{\mu_I^2 + \mu_O^2 + C_1}$
$c(I, O) = \frac{2\sigma_I \sigma_O + C_2}{\sigma_I^2 + \sigma_O^2 + C_2}$
$s(I, O) = \frac{\sigma_{IO} + C_3}{\sigma_I \sigma_O + C_3}$
$SSIM(I, O) = l(I, O)\, c(I, O)\, s(I, O)$
In the above formulae, the means of images I and O are denoted by μI and μO, the standard deviations by σI and σO, and the covariance by σIO. C1, C2, and C3 are constants introduced to avoid a zero denominator, usually taken as C1 = λ1²L², C2 = λ2²L², and C3 = λ3²L², with λ1 = 0.01, λ2 = 0.03, and L = 255.
$MSSIM(I, O) = \frac{1}{N}\sum_{i=1}^{N} SSIM(I_i, O_i)$
From the above equation, the average structural similarity measure can be calculated from the structural similarity measure: the image is divided into N blocks using a sliding window, the window is moved pixel by pixel, the SSIM value of each block is calculated, and finally the average is taken as the average structural similarity measure of the two images. The value range is [0, 1]; the more similar the structures of the two images are, the closer the MSSIM value is to 1.
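The two objective metrics can be computed directly from the definitions above. The NumPy sketch below assumes 8-bit single-channel images and, for brevity, a uniform 11 × 11 window moved one window at a time (rather than the Gaussian template and pixel-wise stride described above); the window size and the choice of C3 = C2/2 are assumptions.

import numpy as np

def psnr(original, restored, n_bits=8):
    # For colour images the per-channel MSE would be averaged, as noted above.
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return 10 * np.log10(((2 ** n_bits - 1) ** 2) / mse)

def ssim_block(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2, c3=None):
    if c3 is None:
        c3 = c2 / 2                      # common convention; lambda_3 is not specified in the text
    mu_a, mu_b = a.mean(), b.mean()
    sig_a, sig_b = a.std(), b.std()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    l = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)      # luminance term
    c = (2 * sig_a * sig_b + c2) / (sig_a ** 2 + sig_b ** 2 + c2)  # contrast term
    s = (cov + c3) / (sig_a * sig_b + c3)                          # structure term
    return l * c * s

def mssim(original, restored, win=11):
    a = original.astype(np.float64); b = restored.astype(np.float64)
    scores = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            scores.append(ssim_block(a[i:i + win, j:j + win], b[i:i + win, j:j + win]))
    return float(np.mean(scores))        # average SSIM over the N blocks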

3. Improved Imaging Logging Image Restoration Model for GAN

3.1. Image Generation Networks

By studying the characteristics of imaging logging images, a generation module and a multi-level feature extraction module are designed in the generative network to strengthen the links between image features across scales and to address the content blurring and local colour differences that appear near the boundary of the repaired area. The generation module learns the correlation between the missing and intact areas in order to obtain more continuous and valid pixel information, and fuses image features from the multi-level network while extracting deeper features. The multi-scale feature extraction module consists of two parts, a medium-spatial-resolution part that extracts medium-frequency feature information and a low-spatial-resolution part that extracts low-frequency feature information, jointly enhancing the ability to represent multi-scale features. Increasingly complex geology has challenged traditional techniques and created a need for more powerful denoising methodologies [16]. The overall architecture of the generative network is shown in Figure 3.

3.1.1. Generating Module

In order to better match the multi-scale feature extraction module, a fully convolutional network (FCN) was chosen as the underlying architecture of the generation module. The FCN improves the model's ability to extract image feature information, which is very effective for restoring imaging logging images of complex strata. Depthwise separable convolution (DSConv) is combined with residual connections (RC) [17], batch normalization (BN) [18], and the LeakyReLU (LReLU) activation function in the generation module to form a depth-separable convolutional residual block (DSCR), which is inserted into the FCN. DSConv learns continuous feature information at the pixel level in both the spatial and channel dimensions, and reduces the number of network parameters while retaining the representational capability of the convolutional kernels, thus improving the restoration efficiency of the model. The RC enables the model to learn more effective boundary and semantic structure information for image restoration by fusing the image features passed in from downsampling with the deeper features extracted at the current layer. This accelerates the convergence of the network and improves the quality of image restoration. The structure of the depth-separable convolutional residual block is shown in Figure 4.
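A possible PyTorch realisation of the DSCR block is sketched below; the kernel size and the assumption that the input and output channel counts are equal are illustrative, since the exact configuration is given only in Figure 4.

import torch
import torch.nn as nn

class DSCR(nn.Module):
    # Depthwise separable convolution + BN + LeakyReLU wrapped in a residual connection.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)  # per-channel spatial filtering
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)          # 1x1 channel mixing
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.LeakyReLU(0.2, inplace=True)
    def forward(self, x):
        out = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return x + out   # residual connection fuses the incoming features with the newly extracted ones

# usage: DSCR(512)(torch.randn(1, 512, 32, 32)) keeps the input shape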
In order to reduce the impact of overfitting caused by the large number of parameters that comes with increasing the number of layers and channels per layer, the Inception module was introduced into the generation module. It widens the network using convolutional kernels of size 1 × 1, 3 × 3, and 5 × 5, and adds a 1 × 1 convolutional layer after Max Pooling (MP) to reduce the number of parameters and increase the multi-scale perceptual field of the network. The overall structure of the Inception module is shown in Figure 5.
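The Inception-style block can be sketched as four parallel branches whose outputs are concatenated; the per-branch channel counts and the 1 × 1 reducers placed in front of the larger kernels are assumptions, since only the kernel sizes and the pooling branch are specified above.

import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)                                 # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.Conv2d(branch_ch, branch_ch, 3, padding=1))   # 3x3 branch
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.Conv2d(branch_ch, branch_ch, 5, padding=2))   # 5x5 branch
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, 1))                  # max pooling + 1x1 to limit parameters
    def forward(self, x):
        # concatenating the branches yields a multi-scale perceptual field at unchanged spatial size
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)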
An overview of the network for each layer of downsampling is shown in Table 1. The model structure consists of a convolutional layer, batch normalisation, and an activation function, of which there are two types of activation functions, ReLU and LReLU. The Inception module shown in Figure 5 is introduced into the structure of layers 2 to 6 of the downsampling network, making the model more generalisable in terms of multi-scale feature extraction. After the multi-scale features are extracted, these features need to be fused for learning, so the concat operation is added specifically to the layer 4 and 6 structure for connecting the multi-scale feature extraction modules to achieve feature fusion. As the network levels deepen, DSCR is used at layers 7 and 8 to enhance the extraction of continuous feature information.
An overview of the network for each layer of upsampling is shown in Table 2. In the upsampling operation, the feature map is continuously expanded using bilinear interpolation (BI) [19] to gradually recover the dimensions of the image. Layers 1 and 2 use the depth-separable convolutional residual block DSCR corresponding to the downsampling; layers 3 and 5 use concat to connect the features extracted from the medium- and low-resolution branches to achieve the fusion of high-, medium-, and low-scale feature information; and layer 7 applies a Sigmoid activation function at the end to generate the restored image.

3.1.2. Multi-Level Feature Extraction Module

To further extract feature information from the complete region of the image, a multi-scale feature extraction module is introduced into the generative network structure for extracting feature information at different scales. In addition, a channel attention mechanism is introduced into the network with the aim of suppressing feature information that has less impact on the area to be restored in the imaging logging image, while assigning higher weights to important features. In order to focus more on the network features transmitted by the front layer network, a residual block is introduced into the network to reduce the content blurring and artefacts that may occur at the boundary to be repaired due to upsampling. The channel attention mechanism and the residual block are used in conjunction to form the channel attention residual block (CAR). Before the feature information is extracted using the mid- and low-frequency branches, a feature extraction module (FEM) consisting of 7 × 7, 5 × 5, and 4 × 4 convolution, BN, and LReLU is used to extract features from the input image and output a feature map of size 256 × 16 × 16. The overall structure of the FEM module and the CAR module is shown in Figure 6.
The medium-spatial-resolution branch consists of an 8-layer network that, after convolution and batch normalisation, outputs a medium-resolution feature map. The low-spatial-resolution branch consists of a 7-layer network that, after convolution, outputs a low-resolution feature map. Bilinear interpolation (BI) is used between the convolution modules in the medium- and low-spatial-resolution branches to enlarge the feature maps, recover the image size, and increase the resolution. Finally, the feature maps output from the two branches are connected to the corresponding network layers in the generation module to achieve multi-scale feature extraction. An overview of the medium- and low-spatial-resolution branch network structures is shown in Table 3.
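The channel attention residual block (CAR) described above can be sketched as a convolutional block whose output is re-weighted by channel attention before the residual addition; the 3 × 3 kernel and the reduction ratio of 16 are assumptions.

import torch
import torch.nn as nn

class CAR(nn.Module):
    # Channel attention residual block: conv + BN + LReLU, channel re-weighting, residual addition.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.2, inplace=True))
        self.ca = nn.Sequential(                     # squeeze-and-excitation style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 16, channels, 1), nn.Sigmoid())
    def forward(self, x):
        out = self.conv(x)
        out = out * self.ca(out)                     # suppress unimportant channels, emphasise important ones
        return x + out                               # residual link keeps features from earlier layers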

3.2. Image Discriminant Network

Although a multi-scale feature extraction module combining the channel attention mechanism and residual connections has been designed, the restoration effect of a generative network built on the FCN alone is still insufficient to support the evaluation and interpretation of imaging logging images. Through an in-depth analysis of GAN and the work of J. Liu et al. [20], it was found that the discriminative network architecture in their model can significantly improve the restoration effect, so a discriminative network is also designed in this paper with reference to their discriminative module. The discriminative network uses the adversarial idea to play the restored image generated by the generative network off against the real image, and gradually updates the parameters of the generative and discriminative networks according to the results, so that the image generated by the generative network can deceive the discriminative network and has clear and reasonable texture details. In order to improve the restoration quality and enable the model to better focus on the missing areas of the image, the discriminative network structure studied by S. Iizuka et al. [21] was adopted and improved upon, with a local discriminative network used to assist the global discriminative network in discriminating local areas of the image. The overall structure of the discriminative network is shown in Figure 7.
Both the global and the local discriminative networks are built on fully convolutional networks that compress the image into small feature vectors. The two networks are basically similar except for the number of layers, and an overview of their respective compositions is shown in Table 4 and Table 5. The global discriminative network takes as input the whole image rescaled to 256 × 256 pixels; its structure consists of five convolutional layers, all of which use 5 × 5 convolutional kernels, BN, and LReLU, and which reduce the feature map in 2 × 2 pixel steps. The local discriminative network follows the structure of the global discriminative network, except that a 128 × 128 pixel image block centred on the restoration region is used as input and one convolutional layer is removed because of the lower input resolution. Unlike the discriminative network proposed by S. Iizuka et al., which takes an image sample as input and outputs a single judgement of whether it is a real image, the output structure designed in this paper is inspired by SPatchGAN [22]: the global and local discriminative networks downsample the input image to obtain feature maps at each layer of the network, and at the end map the feature map received from the upper layer into a single-channel 8 × 8 matrix.
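The two discriminators can be sketched as stacks of strided 5 × 5 convolutions ending in a single-channel 8 × 8 output, as described above; the channel widths and the final 1 × 1 convolution used to produce the single-channel matrix are assumptions.

import torch
import torch.nn as nn

def disc_block(in_ch, out_ch):
    # 5x5 convolution with stride 2 halves the feature map, followed by BN and LeakyReLU.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 5, stride=2, padding=2),
                         nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2, inplace=True))

class GlobalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        chs = [3, 64, 128, 256, 512, 512]                 # assumed channel widths
        self.body = nn.Sequential(*[disc_block(chs[i], chs[i + 1]) for i in range(5)])
        self.head = nn.Conv2d(chs[-1], 1, 1)              # map to a single-channel 8x8 matrix
    def forward(self, x):                                 # x: (B, 3, 256, 256) whole image
        return self.head(self.body(x))                    # -> (B, 1, 8, 8)

class LocalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        chs = [3, 64, 128, 256, 512]                      # one convolutional layer fewer
        self.body = nn.Sequential(*[disc_block(chs[i], chs[i + 1]) for i in range(4)])
        self.head = nn.Conv2d(chs[-1], 1, 1)
    def forward(self, x):                                 # x: (B, 3, 128, 128) crop centred on the repaired region
        return self.head(self.body(x))                    # -> (B, 1, 8, 8)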

3.3. Overall Model Architecture

The model structure of this paper is inspired by GAN, and a generative network module and a discriminative network module are built. The discriminative network consists of a global discriminative network and a local discriminative network, which play with the generative network to gradually improve the content and semantic structure coherence between the restored part and the overall image. The overall structure of the model is shown in Figure 8.
The model as a whole can be divided into left and right parts. The left side is the generative network, which includes the fully convolutional network (FCN) module and the multi-scale feature extraction module; embedded in these two modules are the depth-separable convolutional residual block, the channel attention residual block, and so on. The right side is the discriminative network, comprising the global discriminative network and the local discriminative network. The relevant implementation details have been given in the previous sections and are not elaborated on here.

4. Training the Improved GAN Model

4.1. Data Pre-Processing

The experimental data were obtained from actual logging data of a working area in the Daqing Oilfield and a working area in the Dagang Oilfield. Since the FCN was chosen as the base structure of the generative network in this paper, more data samples were needed than for a basic GAN-based image restoration model to improve the restoration effect, so the imaging logging data of eight horizontal wells in the same working area were selected to increase the number of samples. Because micro-resistivity imaging logging works well in this area, most of the logs from the eight horizontal wells are micro-resistivity imaging logs, from which image features can be extracted effectively during training, satisfying the experimental conditions. Well sections with good imaging results and practical reference value were selected and cropped, including coarse conglomerate, fault, sand-and-gravel, and mudstone intervals. In order to expand the number of imaging logging data and prevent overfitting of the model, the images were flipped vertically and horizontally to obtain a final dataset of 25,000 imaging logging images of size 256 × 256.
The GAN model designed in this paper requires as input imaging log images with missing regions together with binary mask images, so the original imaging log images need to be pre-processed. A dedicated method implemented with the OpenCV library is used to generate mask images whose missing regions are connected from top to bottom, in order to match the real missing pattern. White pixels (value 255) mark the area to be repaired and black pixels (value 0) mark the valid area, and the dataset is classified according to the size of the missing area to generate five categories of masks with different ratios of missing area to image area.
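A possible OpenCV sketch of this mask-generation step is given below: it draws stripes that remain connected from the top to the bottom of a 256 × 256 image, mimicking missing pad data, and reports the missing-area ratio used to assign the mask to one of the five categories. The number and width of the stripes are illustrative assumptions.

import cv2
import numpy as np

def make_vertical_mask(size=256, n_strips=2, max_width=40, seed=None):
    # White (255) marks the area to be repaired, black (0) the valid area.
    rng = np.random.default_rng(seed)
    mask = np.zeros((size, size), dtype=np.uint8)
    for _ in range(n_strips):
        w = int(rng.integers(8, max_width))                # stripe width
        x = int(rng.integers(0, size - w))                 # stripe position
        cv2.rectangle(mask, (x, 0), (x + w, size - 1), 255, thickness=-1)  # full-height stripe
    return mask

def missing_ratio(mask):
    return float((mask == 255).mean())                     # used to bin the mask into one of the five categories

mask = make_vertical_mask(seed=0)
print(round(missing_ratio(mask), 3))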

4.2. Training Model

Combining real imaging log images from a total of eight horizontal wells in the Daqing and Dagang Oilfields, a dataset of imaging log images with mask regions and corresponding mask binary images was created after data pre-processing. The dataset was divided into a training set (20,000 images), a validation set (2500 images), and a test set (2500 images) according to a ratio of 8:1:1.
The improved GAN network model for imaging logging image restoration experiments was implemented using a 64-bit Windows 10 operating system, the Python 3.7 programming language, torch 1.9 (cuda11) and torchvision 0.10.0, and a 16G NVIDIA GeForce GTX 1650Ti graphics card (NVIDIA, Santa Clara, CA, USA) to speed up the training process. The initial learning rate of the repair network was set to 0.0005, the initial learning rate of the discriminative network was set to 0.00001, and the number of training rounds was set to 200 epochs. The training flow of the improved GAN model algorithm is shown in Algorithm 2.
Algorithm 2: Improved GAN model algorithm training process
1. set the number of training rounds e, the total amount of training data n, and the batch size b
2. for i = 1 to e do:
3.   for j = 1 to n/b do:
4.   inputting a single batch of b defective images and the corresponding mask images into the generative network
5.   training of generative networks:
6.      calculate the content losses Lv and Lh, the perceptual loss Lprec, and the style loss Ls;
7.      update the weight parameters of the generative network G by training with the four loss functions Lh, Lv, Lprec, and Ls
8.   training discriminative networks
9.      input the restored b images and the original images into the discriminative network;
10.    comparing the restored image with the real image and calculating the adversarial loss Ladv;
11.    update the weight parameters of the global discriminant network Dg and the local discriminant network Dl with Ladv
12.    training of the overall improved GAN:
13.    joint training of the generative and discriminative networks using the total loss function L
14. end
15. end
During the training of the model, the generative network is trained to repair the image so as to fool the discriminative network, while the global and local discriminators are trained to identify whether the input image was repaired by the generative network or is real. First, the pre-processing stage generates a defective image and the corresponding mask image matching the real imaging logging defects; the number of training rounds e, the total training data volume n, and the batch size b are set; the model then iterates for e rounds, updating the parameters n/b times per round. In each cycle, the training is optimised in three steps. (I) Calculate the content losses Lh and Lv, the perceptual loss Lprec, and the style loss Ls, and use them to update the weight parameters of the generative network through back-propagation. (II) Fix the parameters of the generative network, obtain the adversarial loss from the error between the restored image and the original image, and update the weight parameters of the global and local discriminative networks by training with the adversarial loss alone. (III) Train the overall model built in this paper using the total loss function. This is repeated for several iterations until the network stabilises and the final model is obtained.

4.3. Loss Function

The overall performance of the imaging logging image restoration model is not only dependent on the network structure, but the selection of the loss function is also critical. During model training, the aim is to use the loss function values as feedback to drive the model to convergence, so the model training goal is to minimise the loss function values. In order to better restore the texture details and semantic structure of the imaging logging images, this paper uses a combination of content loss, perceptual loss, style loss, and adversarial loss to train an improved GAN model.
In this paper, we use a combination of content loss, perceptual loss, stylistic loss, and adversarial loss as a loss function for image restoration, mainly to combine different types of information in order to improve the effectiveness of image restoration. For this, a pixel-level difference metric is used to measure the similarity between the restored image and the original image. By minimising content loss, the restored image can more closely resemble the original image, retaining more of the original detail and structure. Perceptual loss uses a pre-trained deep learning model (VGG network) to extract high-level features of the image, which are used to compare the perceptual differences between the restored image and the original image. Perceptual loss can help ensure that the restored image is more perceptually realistic and avoids generating false details. Style loss is used to compare the restored image with the original image in terms of style. By minimising style loss, the restored image retains the texture and style of the original image, increasing visual consistency. Adversarial loss introduces the concept of an adversarial generative network in which the generator tries to deceive the discriminator, which in turn tries to distinguish between the real image and the generated image. By minimising adversarial loss, the restored image can be more realistic and have higher visual quality. Combining these different types of loss functions can enable image restoration models to produce more realistic and natural-looking restoration results while maintaining structural and content realism. Each loss function is described below.
Content loss: Content loss consists of the loss of intact regions Lv and the loss of missing regions Lh. Since L1 loss ignores the structural correlation between missing and undamaged regions, multi-scale structural similarity MS-SSIM [23] is used to better preserve the contrast of information in high frequency regions of the image. The content loss consists of L1 loss and MS-SSIM, which is calculated as shown in the equation below:
$L_v = (1 - a)\left\| M_{in} \odot (I_g - I_t) \right\|_1 + a\left(1 - M_{SM}\big(M_{in} \odot I_g,\, M_{in} \odot I_t\big)\right)$
$L_h = (1 - a)\left\| (1 - M_{in}) \odot (I_g - I_t) \right\|_1 + a\left(1 - M_{SM}\big((1 - M_{in}) \odot I_g,\, (1 - M_{in}) \odot I_t\big)\right)$
where Min represents the mask region, ⊙ denotes the Hadamard product, MSM denotes MS-SSIM (the formula can be found in Equation (16)), Ig denotes the image generated by the generative network, It denotes the real image, and the constant a balances the MS-SSIM loss and the L1 loss; its value is set empirically to 0.5 in the model.
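A sketch of these two masked content losses is shown below. Any differentiable MS-SSIM implementation can be passed in as msssim_fn; the use of the mean rather than the sum for the L1 term and the exact way the mask is applied inside MS-SSIM are assumptions.

import torch

def content_losses(i_gen, i_true, m_in, msssim_fn, a=0.5):
    # i_gen, i_true: generated and real images; m_in: binary mask (1 = mask region).
    # msssim_fn(x, y) must return a differentiable MS-SSIM value in [0, 1].
    diff_mask = m_in * (i_gen - i_true)
    diff_rest = (1 - m_in) * (i_gen - i_true)
    l_v = (1 - a) * diff_mask.abs().mean() + a * (1 - msssim_fn(m_in * i_gen, m_in * i_true))
    l_h = (1 - a) * diff_rest.abs().mean() + a * (1 - msssim_fn((1 - m_in) * i_gen, (1 - m_in) * i_true))
    return l_v, l_h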
Perceptual loss: Perceptual loss is introduced so that the generated image has a higher level of structural similarity to the real image, rather than merely matching it pixel by pixel. Perceptual loss obtains higher-level structural information by using convolutional layers to perceive the image in higher-level feature spaces that are closer to human visual perception. This paper adopts the perceptual loss defined on a VGG16 network pre-trained on ImageNet, which is calculated as follows:
$L_{prec} = \frac{1}{N}\sum_{i=1}^{N}\left\| F_i(I_t) - F_i(I_g) \right\|_2$
where N denotes the image size and Fi(·) denotes the feature map of the ith pooling layer.
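The following sketch extracts the pooling-layer feature maps of a pre-trained VGG16 with torchvision (the pretrained=True form matches torchvision 0.10 used in this paper) and averages the L2 distances between them; the particular layer indices and the use of a plain per-layer Frobenius norm are assumptions.

import torch
import torchvision

class PerceptualLoss(torch.nn.Module):
    def __init__(self, pool_indices=(4, 9, 16, 23, 30)):   # indices of the MaxPool layers in vgg16.features
        super().__init__()
        vgg = torchvision.models.vgg16(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False                         # the feature extractor stays frozen
        self.vgg = vgg
        self.pool_indices = set(pool_indices)
    def features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.pool_indices:
                feats.append(x)                             # keep the pooling-layer outputs F_i
        return feats
    def forward(self, i_true, i_gen):
        f_t, f_g = self.features(i_true), self.features(i_gen)
        return sum(torch.norm(a - b, p=2) for a, b in zip(f_t, f_g)) / len(f_t)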
Style loss: Style loss measures the similarity in style between two images and is used to guide the learning of image restoration so that the generated content has detailed local texture [24]. The style loss is calculated from the Gram matrices of the image features at different levels of the feature extractor, which measure the difference between the original image and the restored image at those levels. The value at each position of a feature map is computed by the convolutional layer for a fixed position in the image, while each entry of the Gram matrix is obtained as the inner product of two feature maps. The Gram matrix therefore provides a good representation of the style of an image by capturing the correlation between pairs of features, for example which features tend to appear together. Assuming that the size of the feature map Fi is Hi × Wi × Ci, the style loss can be defined as the following equation.
$L_s = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{C_i \times C_i}\left\| F_i(I_t)\big(F_i(I_t)\big)^{T} - F_i(I_g)\big(F_i(I_g)\big)^{T} \right\|_2$
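The Gram-matrix computation behind this loss can be sketched directly; the feature lists are assumed to come from the same VGG16 extractor used for the perceptual loss.

import torch

def gram(f):
    # f: (B, C, H, W) -> (B, C, C) matrix of inner products between channel feature maps
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2))

def style_loss(feats_true, feats_gen):
    # feats_*: lists of feature maps F_i for the real and generated images
    loss = 0.0
    for ft, fg in zip(feats_true, feats_gen):
        c = ft.shape[1]
        loss = loss + torch.norm(gram(ft) - gram(fg), p=2) / (c * c)   # normalise by C_i x C_i
    return loss / len(feats_true)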
Adversarial loss: Adversarial loss has been widely used in image generation and low-level vision to improve the visual quality of generated images. Therefore, in order to better learn the data distribution of real images, the parameters of the generative and discriminative networks are continuously iteratively updated through adversarial training between the generative network and the global and local discriminative networks to improve the visual performance of the restored images. The adversarial loss in this paper sets the adversarial loss Lglobal for the global discriminant network and Llocal for the local discriminant network, and weighs the bias of the two using a constant factor. The calculation formula is shown below:
$L_{adv} = \frac{1}{2}\left(0.7\, L_{global} + 0.3\, L_{local}\right)$
where the global adversarial loss Lglobal and the local adversarial loss Llocal are calculated as shown below.
$L_{global} = \min_G \max_D \; \mathbb{E}_{I_t}\left[\log D_g(I_t)\right] + \mathbb{E}_{I_g}\left[\log\left(1 - D_g\big(G(I_{in}, M_{in})\big)\right)\right]$
$L_{local} = \min_G \max_D \; \mathbb{E}_{I_t}\left[\log D_m(I_t \odot M_{in})\right] + \mathbb{E}_{I_g}\left[\log\left(1 - D_m\big(G(I_{in}, M_{in}) \odot M_{in}\big)\right)\right]$
In the above equations, Iin denotes the image to be repaired, Dg denotes the global discriminative network, whose input is the complete image, and Dm denotes the local discriminative network, whose input is the region with missing content.
Total loss: Based on the above loss function, the total loss function of the model can be expressed by the following equation.
$L = \lambda_1 L_v + \lambda_2 L_h + \lambda_3 L_{prec} + \lambda_4 L_s + \lambda_5 L_{adv}$
In the above equation, the parameters λ1, λ2, λ3, λ4, and λ5 are not fixed values and need to be determined based on experimental tests. This paper sets them empirically as follows: λ1 = 1.0, λ2 = 3.0, λ3 = 0.02, λ4 = 120, and λ5 = 0.1.
In this paper, we design an imaging logging image restoration model based on the improved GAN and introduce different types of loss functions to guide the model training. Among them, λ4 = 120 is the weight used for the style loss, while the other loss functions (content loss, perceptual loss, and adversarial loss) are given relatively small weights. This setting is the result of several experimental validations and aims to balance the impact of the different loss functions during training. The larger weight λ4 = 120 is set to emphasise the importance of the style loss. In the imaging logging image restoration task, texture details play a key role in image quality; by setting λ4 to a larger value, the model pays more attention to maintaining the style of the image during training, which makes the restored image closer to the real image in terms of texture and thus improves the fidelity and visual quality of the restoration. The weights of the other loss functions are smaller, but they still play a key role in the model: content loss and perceptual loss help maintain the consistency of the generated image with the real image in terms of content and structure, while adversarial loss pushes the model to generate more realistic images. By appropriately balancing the weights of these loss functions, the whole model is able to synthesise different types of information and achieve better performance in the restoration task.
In summary, the setting of λ4 = 120 is to emphasise the importance of style loss in image restoration, while the smaller weights of the other loss functions help to maintain the balance and stability of the model. With such a loss function weight setting, the model achieves satisfactory results in the imaging logging image restoration task.

5. Example Application and Analysis of Results

5.1. Comparison of Experimental Results

By training the improved GAN-based imaging logging image restoration model in this paper, the rising curves of mean structural similarity MSSIM and peak signal-to-noise ratio PSNR as well as the falling curves of the loss function for the training and validation sets are shown in Figure 9 (with an imaging logging image missing region ratio of (0.1, 0.2] as an example). The highest value is 0.964 for the training set and 0.972 for the validation set for MSSIM; the highest value is 37.684 for the training set and 37.436 for the validation set for PSNR; and the lowest value is 0.12 for the training set and 0.10 for the validation set from the loss function plot.
The imaging logging image restoration model proposed in this paper draws on the global and local discriminative ideas of the GLCIC algorithm and improves both its generative and discriminative network structures. Therefore, in order to verify the effectiveness of the proposed model, the GLCIC algorithm was selected for the comparison of experimental results. In addition, two recent GAN-based image restoration algorithms with widely recognised restoration results, WGAN [25] and PDGAN [26], were selected. These three algorithms were compared qualitatively and quantitatively with the model proposed in this paper. Among them, WGAN is built on DCGAN: all pooling layers and the fully connected structure are removed from the network, both the generative network G and the discriminative network D use batch normalisation (BN), and neither loss function takes the logarithm in the calculation. G and D differ in their activation functions: D uses LeakyReLU in all layers, while G uses ReLU in the last layer. The biggest advantage of WGAN is that it uses the Wasserstein distance to circumvent the shortcomings of the original GAN and improves the training process, largely solving the unstable training and mode collapse of the original GAN model. Whereas WGAN relies on weight clipping, PDGAN proposes spatial probability diversity normalisation (SPDNorm) to progressively modulate the deeper features of the random noise vector. SPDNorm consists of a Hard SPDNorm and a Soft SPDNorm: the Hard SPDNorm increases the likelihood of obtaining diverse results but reduces their quality, whereas the Soft SPDNorm can be trained stably and dynamically learns the conditions of the prior information but lacks diversity; hence two branches are used to modulate the input features, allowing both diverse results and high-quality restoration. The model-specific loss function is a perceptual diversity loss, which keeps the content constant by introducing a mask into the loss; the perceptual diversity loss is computed in the perceptual space rather than the original pixel space and maximises distance in the highly non-linear network feature space. At the same time, it integrates semantic measures and avoids generating completely black or white solutions. The optimiser uses the Adam algorithm with beta1 = 0.0 and beta2 = 0.99.
Qualitative comparison is shown in Figure 10, which shows that the improved GAN-based imaging logging image restoration model has better results in the restoration of top and bottom connectivity loss and edge detail texture restoration in imaging logging images.
The quantitative comparisons are shown in Table 6. The PSNR values (with a missing ratio of (0.1, 0.2]) of the improved GAN model improved by 4.611 over GLCIC, 3.192 over WGAN, and 1.534 over PDGAN; the MSSIM values (with a missing ratio of (0.1, 0.2]) improved by 0.102 over GLCIC, 0.064 over WGAN, and 0.011 over PDGAN.
The results of the improved GAN model for the ablation experiments are given in Table 7. The configurations are as follows: (I) model(DSCR): the designed depth-separable convolutional residual block is embedded in the underlying FCN architecture; (II) model(Inception): the Inception module is added to the generative network; (III) model(FEM): the multi-scale feature extraction module is added to the generative network; (IV) model(CAR): the channel attention residual block is added to the generative network; and (V) model(Full): the full model of the imaging logging image restoration method using the improved GAN. All ablation experiments used the same dataset with the same ratio of missing regions in the imaging log images, and the designed modules were added incrementally and trained separately.

5.2. Analysis of Experimental Results

Based on the experimental comparison results obtained from the real imaging logging data in Section 5.1, the following theoretical analysis is made. As can be seen from Figure 9c, the bilinear interpolation, normalisation, and ReLU and LeakyReLU activation functions used in this model improve the quality of the images generated by the generative network, and the loss functions of the training and validation sets start to level off at about 20 iterations. Figure 9a,b shows the mean structural similarity measure and the peak signal-to-noise ratio during training, respectively, and the trends of the curves indicate that there is no overfitting in this improved GAN imaging logging image restoration model. In Figure 10, each row of images from left to right shows the original image without defects, the masked image, and the images restored by the different models. It can be observed that the discriminative network of the GLCIC model is well designed to judge the quality of the generated images both globally and locally, but the restored images are partially blurred because its generative network uses dilated convolution without an appropriate attention mechanism. PDGAN uses spatial probability diversity normalisation and the perceptual diversity loss, and considers that repairing the region to be repaired requires giving higher weights to the pixels surrounding the missing region; it therefore pays insufficient attention to the pixels as a whole, so artefacts may appear when missing regions at the edges of the image are repaired. In contrast, the model in this paper designs a depth-separable convolutional residual block, an Inception module, and a multi-scale feature extraction module in the generative network in order to generate higher-quality images with clear content and a coherent semantic structure, while the discriminative network uses global and local discrimination, focusing more on the edge parts from both aspects to further enhance the realism of the images. Subjective observation by invited observers also shows a better restoration effect than the other three restoration models. Table 6 gives the quantitative comparison of the different models on imaging log images with different missing ratios, and it can be seen that the improved model performs well in terms of PSNR and MSSIM. The data in Table 7 show that the PSNR and MSSIM of the model (with a missing ratio of (0.1, 0.2]) improve to different degrees as the designed modules are gradually introduced: the MSSIM rises by 0.082 when the depth-separable convolutional residual block is used in the generative network; it rises by 0.109 when the Inception module is added; the introduction of the multi-scale feature extraction module increases the MSSIM by 0.144; the introduction of the channel attention residual block increases the MSSIM by 0.150; and the final experiment with the complete improved GAN imaging logging image restoration model increases the MSSIM by 0.213 compared with the initial one.
After the analysis of the above results, it can be seen that the restoration model using the improved GAN has a better restoration effect than the same type of GAN model for restoring the edge of the missing regions of the imaging logging images, and can be used to restore the imaging logging images.

6. Conclusions

In this paper, an improved GAN-based imaging logging image restoration model is proposed. The model uses an FCN as the infrastructure of the generative network and introduces a depth-separable convolutional residual block to learn and retain pixel and semantic information more efficiently. In addition, by adding an Inception module, the network is given a multi-scale perceptual field while the number of parameters is reduced. The introduction of the multi-scale feature extraction module and the channel attention residual block further improves the feature extraction capability. The global and local discriminative networks and the generative network play off against each other to gradually improve the consistency between the restored part and the whole image in terms of content and semantic structure. Qualitative and quantitative analyses and ablation experiments verify that the model outperforms the traditional GAN model and the selected GAN variants in terms of restoration effect. This study provides a new approach for the restoration of missing imaging logging images.

Author Contributions

Conceptualization, M.C. and H.X.; methodology, H.F.; software, H.F.; validation, H.F., M.C. and H.X.; formal analysis, M.C.; investigation, M.C.; resources, M.C.; data curation, H.F.; writing—original draft preparation, H.F.; writing—review and editing, M.C.; visualization, H.F.; supervision, H.F.; project administration, H.F.; funding acquisition, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is supported by the Heilongjiang Higher Education Teaching Reform Foundation “Construction and Practice of Geological Data Mining and Fusion Course for Smart Oilfield” (No. SJGY20220253).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to national legal requirements.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, L.; Zhou, C.-C.; Huang, S.-X. Restoration of distorted images from FMI imaging logs and their geological applications. J. Oil Gas 2007, 88–91+2. [Google Scholar] [CrossRef]
  2. Wang, M.; Sun, J.M.; Lai, F.Q.; Li, M.O.B.; Yu, H.W. Recovery techniques for distortion of full-borehole formation microimager logs. J. China Univ. Pet. (Nat. Sci. Ed.) 2010, 34, 47–51+55. [Google Scholar]
  3. Barnes, C.; Shechtman, E.; Finkelstein, A.; Goldman, D.B. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 2009, 28, 24. [Google Scholar] [CrossRef]
  4. Yang, C.; Lu, X.; Lin, Z.; Shechtman, E.; Wang, O.; Li, H. High-resolution image inpainting using multi-scale neural patch synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6721–6729. [Google Scholar]
  5. Hurley, N.F.; Zhang, T. Method to generate full-bore images using borehole images and multipoint statistics. SPE Reserv. Eval. Eng. 2011, 14, 204–214. [Google Scholar] [CrossRef]
  6. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  7. Shan, L.; Liu, C.; Liu, Y.; Kong, W.; Hei, X. Rock CT Image Super-Resolution Using Residual Dual-Channel Attention Generative Adversarial Network. Energies 2022, 15, 5115. [Google Scholar] [CrossRef]
  8. Xiong, W.; Yu, J.; Lin, Z.; Yang, J.; Lu, X.; Barnes, C.; Luo, J. Foreground-aware image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5840–5848. [Google Scholar]
  9. He, J.; Lv, X.; Zhang, J.; Li, J. Research on SN-GAN based algorithm for large-area defect image restoration. J. Inn. Mong. Univ. Sci. Technol. 2022, 41, 180–186. [Google Scholar]
  10. Li, Y. Research on Image Complementation Algorithm Based on Generative Adversarial Network. Master’s Thesis, Beijing University of Architecture, Beijing, China, 2022. [Google Scholar]
  11. Li, P. Some Studies on Nash Equilibrium Problems; Dalian University of Technology: Dalian, China, 2013. [Google Scholar]
  12. Niu, Z.; Zhong, G.; Yu, H. A review on the attention mechanism of deep learning. Neurocomputing 2021, 452, 48–62. [Google Scholar] [CrossRef]
  13. Ning, Z.; Wang, Y.; Zhang, X.; Xu, D. Research progress on single-image super-resolution reconstruction based on deep learning. J. Autom. 2020, 46, 2479–2499. [Google Scholar]
  14. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
  16. Zhang, H.; Wang, W. Imaging Domain Seismic Denoising Based on Conditional Generative Adversarial Networks (CGANs). Energies 2022, 15, 6569. [Google Scholar] [CrossRef]
  17. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  18. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  19. Kirkland, E.J. Bilinear interpolation. In Advanced Computing in Electron Microscopy; Springer: Boston, MA, USA, 2010; pp. 261–263. [Google Scholar]
  20. Liu, J.; Jung, C. Facial image inpainting using multi-level generative network. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 1168–1173. [Google Scholar]
  21. Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Globally and locally consistent image completion. ACM Trans. Graph. (ToG) 2017, 36, 1–14. [Google Scholar] [CrossRef]
  22. Shao, X.; Zhang, W. SPatchGAN: A statistical feature based discriminator for unsupervised image-to-image translation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 6546–6555. [Google Scholar]
  23. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; pp. 1398–1402. [Google Scholar]
  24. Odena, A.; Dumoulin, V.; Olah, C. Deconvolution and checkerboard artifacts. Distill 2016, 1, e3. [Google Scholar] [CrossRef]
  25. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 214–223. [Google Scholar]
  26. Liu, H.; Wan, Z.; Huang, W.; Song, Y.; Han, X.; Liao, J. PD-GAN: Probabilistic diverse GAN for image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9371–9381. [Google Scholar]
Figure 1. Generative adversarial network structure.
Figure 2. Channel attention mechanism.
Figure 3. Overall architecture of the generative network.
Figure 4. Depth-separable convolutional residual block (DSCR).
Figure 5. Structure of the Inception module.
Figure 6. Structure of the FEM and CAR modules.
Figure 7. Structure of the discriminative network.
Figure 8. Overall architecture of the improved GAN restoration model.
Figure 9. Graphs of evaluation metrics for the training and test sets. (a) Average structural similarity metric. (b) Peak signal-to-noise ratio. (c) Loss function value.
Figure 10. Qualitative comparison of imaging logging image restoration models with the improved GAN. (a) Original. (b) Input. (c) GLCIC. (d) WGAN. (e) PDGAN. (f) Ours.
Table 1. Overview of the structural components of the downsampling network.
Layer | Convolution Module | Input | Output
1 | Conv3×3 + BN + ReLU | 3 × 256 × 256 | 32 × 256 × 256
2 | Conv4×4 + BN + LReLU + Inception | 32 × 256 × 256 | 64 × 128 × 128
3 | Conv4×4 + BN + LReLU + Inception | 64 × 128 × 128 | 128 × 64 × 64
4 | Concat + Conv4×4 + BN + LReLU + Inception | 128 × 64 × 64 | 256 × 32 × 32
5 | Conv4×4 + BN + LReLU + Inception | 256 × 32 × 32 | 256 × 16 × 16
6 | Concat + Conv4×4 + BN + LReLU + Inception | 256 × 16 × 16 | 512 × 8 × 8
7 | DSCR | 512 × 8 × 8 | 512 × 4 × 4
8 | DSCR | 512 × 4 × 4 | 512 × 2 × 2
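As an illustration of how Table 1 translates into code, the following is a minimal PyTorch-style sketch of the plain convolutional part of the downsampling path; the skip connections ("Concat"), the Inception modules, and the two DSCR layers are omitted, and the strides, paddings, and LReLU slope are assumptions rather than values taken from the paper.

    # Minimal sketch of the Table 1 downsampling path (assumed hyperparameters).
    import torch.nn as nn

    def conv_block(c_in, c_out, k, s, p, act):
        return nn.Sequential(nn.Conv2d(c_in, c_out, k, stride=s, padding=p),
                             nn.BatchNorm2d(c_out), act)

    downsampling = nn.Sequential(
        conv_block(3, 32, 3, 1, 1, nn.ReLU(inplace=True)),   # 3 x 256 x 256 -> 32 x 256 x 256
        conv_block(32, 64, 4, 2, 1, nn.LeakyReLU(0.2)),      # -> 64 x 128 x 128 (+ Inception in the paper)
        conv_block(64, 128, 4, 2, 1, nn.LeakyReLU(0.2)),     # -> 128 x 64 x 64
        conv_block(128, 256, 4, 2, 1, nn.LeakyReLU(0.2)),    # -> 256 x 32 x 32
        conv_block(256, 256, 4, 2, 1, nn.LeakyReLU(0.2)),    # -> 256 x 16 x 16
        conv_block(256, 512, 4, 2, 1, nn.LeakyReLU(0.2)),    # -> 512 x 8 x 8, followed by two DSCR blocks
    )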
Table 2. Overview of upsampling network structure components.
Layer | Deconvolution Module | Input | Output
1 | DSCR | 1024 × 4 × 4 | 512 × 4 × 4
2 | BI + Deconv3×3 + BN + ReLU | 1024 × 8 × 8 | 512 × 8 × 8
3 | BI + Concat + Deconv3×3 + BN + ReLU | 1024 × 16 × 16 | 256 × 16 × 16
4 | BI + Deconv3×3 + BN + ReLU | 512 × 32 × 32 | 256 × 32 × 32
5 | BI + Concat + Deconv3×3 + BN + ReLU | 512 × 64 × 64 | 128 × 64 × 64
6 | BI + Deconv3×3 + BN + ReLU | 256 × 128 × 128 | 64 × 128 × 128
7 | BI + Deconv3×3 + BN + Sigmoid | 96 × 256 × 256 | 3 × 256 × 256
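Below is a sketch of a single upsampling step from Table 2, assuming each "BI + Deconv3×3" row is realized as bilinear interpolation followed by a 3 × 3 convolution (the upsample-then-convolve recipe commonly used to avoid the checkerboard artifacts discussed in [24]); the optional concatenation corresponds to the "Concat" rows, and all hyperparameters are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UpBlock(nn.Module):
        # One "BI (+ Concat) + Deconv3x3 + BN + ReLU" step; c_in counts channels after concatenation.
        def __init__(self, c_in, c_out):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
            self.bn = nn.BatchNorm2d(c_out)

        def forward(self, x, skip=None):
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
            if skip is not None:                      # encoder feature map for the "Concat" rows
                x = torch.cat([x, skip], dim=1)
            return F.relu(self.bn(self.conv(x)))

Read this way, layer 3 of Table 2 would correspond to UpBlock(1024, 256) applied to a 512 × 8 × 8 feature map together with a 512 × 16 × 16 encoder map.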
Table 3. Overview of the structural components of multi-scale feature extraction networks.
Layer | Convolution Module | Input | Output | Branch
1 | Conv5×5 + BN + LReLU | 256 × 16 × 16 | 256 × 8 × 8 | Middle
2 | Conv3×3 + BN + ReLU + CAR | 256 × 8 × 8 | 256 × 8 × 8 | Middle
3 | BI | 128 × 8 × 8 | 128 × 16 × 16 | Middle
4 | Conv3×3 + BN + ReLU + CAR | 128 × 16 × 16 | 128 × 16 × 16 | Middle
5 | BI | 128 × 16 × 16 | 128 × 32 × 32 | Middle
6 | Conv3×3 + BN + ReLU + CAR | 128 × 32 × 32 | 128 × 32 × 32 | Middle
7 | BI | 128 × 32 × 32 | 128 × 64 × 64 | Middle
8 | Conv3×3 + BN + ReLU + CAR | 128 × 64 × 64 | 128 × 64 × 64 | Middle
1 | Conv5×5 + BN + LReLU | 256 × 16 × 16 | 256 × 8 × 8 | Low
2 | Conv5×5 + BN + LReLU | 256 × 8 × 8 | 128 × 4 × 4 | Low
3 | Conv3×3 + BN + ReLU + CAR | 128 × 4 × 4 | 128 × 4 × 4 | Low
4 | BI | 128 × 4 × 4 | 128 × 8 × 8 | Low
5 | Conv3×3 + BN + ReLU + CAR | 128 × 8 × 8 | 128 × 8 × 8 | Low
6 | BI | 128 × 8 × 8 | 128 × 16 × 16 | Low
7 | Conv3×3 + BN + ReLU + CAR | 128 × 16 × 16 | 128 × 16 × 16 | Low
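The "CAR" entries in Table 3 denote a channel-attention residual module appended to the preceding convolution. A possible sketch, using squeeze-and-excitation style gating, is shown below; the gating layout and the reduction ratio r are assumptions, since the table only names the module.

    import torch.nn as nn

    class CAR(nn.Module):
        # Channel-attention residual: re-weight channels, then add the result back to the input.
        def __init__(self, channels, r=16):
            super().__init__()
            self.attn = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),               # global average pooling -> B x C x 1 x 1
                nn.Conv2d(channels, channels // r, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // r, channels, 1),
                nn.Sigmoid(),                          # per-channel weights in (0, 1)
            )

        def forward(self, x):
            return x + x * self.attn(x)                # residual connection around the re-weighted features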
Table 4. Overview of the structural components of the global discriminative network.
Layer | Convolution Module | Input | Output
1 | Conv5×5 + BN + LReLU | 3 × 256 × 256 | 64 × 128 × 128
2 | Conv5×5 + BN + LReLU | 64 × 128 × 128 | 128 × 64 × 64
3 | Conv5×5 + BN + LReLU | 128 × 64 × 64 | 256 × 32 × 32
4 | Conv5×5 + BN + LReLU | 256 × 32 × 32 | 512 × 16 × 16
5 | Conv5×5 + BN + LReLU | 512 × 16 × 16 | 1 × 8 × 8
Table 5. Overview of the structural components of the local discriminative network.
Layer | Convolution Module | Input | Output
1 | Conv5×5 + BN + LReLU | 3 × 128 × 128 | 128 × 64 × 64
2 | Conv5×5 + BN + LReLU | 128 × 64 × 64 | 256 × 32 × 32
3 | Conv5×5 + BN + LReLU | 256 × 32 × 32 | 512 × 16 × 16
4 | Conv5×5 + BN + LReLU | 512 × 16 × 16 | 1 × 8 × 8
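Both discriminators are plain stacks of strided 5 × 5 convolutions. The sketch below follows the local discriminator of Table 5 (the global discriminator of Table 4 differs only in its 3 × 256 × 256 input and one extra layer); strides and paddings are assumptions, and the final scoring layer is left without BN and LReLU here for simplicity.

    import torch.nn as nn

    def make_local_discriminator():
        # 3 x 128 x 128 patch in, 1 x 8 x 8 map of real/fake scores out (per Table 5).
        def block(c_in, c_out):
            return [nn.Conv2d(c_in, c_out, 5, stride=2, padding=2),
                    nn.BatchNorm2d(c_out),
                    nn.LeakyReLU(0.2, inplace=True)]
        return nn.Sequential(
            *block(3, 128),                            # -> 128 x 64 x 64
            *block(128, 256),                          # -> 256 x 32 x 32
            *block(256, 512),                          # -> 512 x 16 x 16
            nn.Conv2d(512, 1, 5, stride=2, padding=2), # -> 1 x 8 x 8
        )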
Table 6. Quantitative comparison of imaging log image restoration with the improved GAN (PSNR (dB)/MSSIM).
Method | (0.01–0.1] | (0.1–0.2] | (0.2–0.3] | (0.3–0.4] | (0.4–0.5]
PSNR (GLCIC) | 31.726 | 30.215 | 26.836 | 24.537 | 21.368
PSNR (WGAN) | 33.537 | 31.634 | 28.472 | 26.759 | 24.957
PSNR (PDGAN) | 35.735 | 33.292 | 31.583 | 29.834 | 28.573
PSNR (Ours) | 37.654 | 34.826 | 33.685 | 31.253 | 28.247
MSSIM (GLCIC) | 0.875 | 0.835 | 0.774 | 0.746 | 0.762
MSSIM (WGAN) | 0.915 | 0.873 | 0.805 | 0.784 | 0.723
MSSIM (PDGAN) | 0.948 | 0.926 | 0.839 | 0.805 | 0.764
MSSIM (Ours) | 0.963 | 0.937 | 0.905 | 0.862 | 0.827
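For reference, the PSNR values in Table 6 follow the standard definition based on mean squared error; a small NumPy helper is sketched below (this is the textbook formula, not code taken from the paper; MSSIM is the multi-scale structural similarity of [23] and is not reproduced here).

    import numpy as np

    def psnr(reference, restored, peak=255.0):
        # Peak signal-to-noise ratio in dB for images with pixel values in [0, peak].
        mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)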
Table 7. Differences in performance of the improved GAN under ablation experiments (PSNR (dB)/MSSIM).
Method | (0.01–0.1] | (0.1–0.2] | (0.2–0.3] | (0.3–0.4] | (0.4–0.5]
model (original) | 27.347/0.786 | 25.472/0.703 | 22.253/0.657 | 20.352/0.589 | 20.462/0.563
model (DSCR) | 29.527/0.805 | 27.963/0.785 | 23.869/0.687 | 21.735/0.602 | 20.984/0.583
model (Inception) | 30.315/0.824 | 29.153/0.812 | 25.752/0.705 | 23.562/0.658 | 22.074/0.614
model (FEM) | 32.573/0.869 | 30.751/0.847 | 27.147/0.764 | 25.769/0.705 | 24.351/0.673
model (CAR) | 35.584/0.927 | 32.574/0.853 | 29.749/0.806 | 27.734/0.749 | 26.048/0.702
model (Full) | 37.126/0.958 | 35.276/0.916 | 33.153/0.865 | 30.528/0.849 | 27.938/0.785
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
