Article

Non-Local and Multi-Scale Mechanisms for Image Inpainting

School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(9), 3281; https://doi.org/10.3390/s21093281
Submission received: 2 April 2021 / Revised: 30 April 2021 / Accepted: 6 May 2021 / Published: 10 May 2021
(This article belongs to the Section Sensing and Imaging)

Abstract

Recently, deep learning-based techniques have shown great power in image inpainting, especially when dealing with square holes. However, they fail to generate plausible results inside irregular and large missing regions because they lack an understanding of the relationship between missing regions and their existing counterparts. To overcome this limitation, we combine two non-local mechanisms, a contextual attention module (CAM) and an implicit diversified Markov random fields (ID-MRF) loss, with a multi-scale architecture built from several dense fusion blocks (DFB) based on dense combinations of dilated convolutions, guiding the generative network to restore both discontinuous and continuous large masked areas. To prevent color discrepancies and grid-like artifacts, we apply the ID-MRF loss, which improves the visual appearance by comparing similarities of long-distance feature patches. To further capture long-term relationships between different parts of large missing regions, we introduce the CAM. Although the CAM can create plausible results by reconstructing refined features, it depends on the quality of the initial prediction. Hence, we employ the DFB to obtain larger and more effective receptive fields, which helps to predict more precise and fine-grained information for the CAM. Extensive experiments on two widely used datasets demonstrate that our proposed framework significantly outperforms state-of-the-art approaches both quantitatively and qualitatively.

1. Introduction

Image inpainting, which synthesizes semantically reasonable and visually plausible contents in damaged regions from the existing areas, has attracted great attention in recent years. High-quality image inpainting can benefit a wide range of applications such as unwanted object removal [1,2] and photo restoration [3]. Not only is it necessary to reconstruct textures and contents, but it is also crucial to understand the scene and objects to be completed. Despite many years of research, image inpainting remains a challenging task in computer vision, as it is an ill-posed inverse problem [4].
Generally, image inpainting approaches can be classified into three categories: diffusion-based methods [5], patch-based methods [6] and deep-learning-based methods [7]. The first two depend on spreading and copying known information, and hence have an inferior ability to acquire high-level semantic features. Recently, deep-learning-based approaches such as convolutional neural networks (CNN) [8] and generative adversarial networks (GAN) [9] have exhibited a powerful capability of reconstructing target regions from surrounding areas.
Pathak et al. [10] designed a CNN-based model termed the context encoder, which consists of an encoder that captures the context of an image into a compact latent feature representation and a decoder that utilizes this representation to predict the target region. Although it achieves promising results, some exquisite details are ignored because it pays more attention to recovering structural information than to fine details. To tackle this issue, many methods adopt a two-stage architecture. For instance, Yu et al. [11] introduced a coarse-to-fine framework in which a coarse network roughs out the missing contents and a refinement module captures high-level features from known areas. Based on this type of network, Iizuka et al. [12] designed a global discriminator and a local discriminator to distinguish between real and repaired images, which maintains the coherence between missing areas and surrounding regions. The above-mentioned methods mainly focus on rectangular areas and assume that the missing area lies in the middle of the image. However, this assumption rarely holds in practice. Recently, many methods have been investigated to deal with this problem. For example, Liu et al. [13] first put forward partial convolution (PConv), which incorporates re-normalized convolution and a mask-update operation to replace standard convolutional layers. Yu et al. [14] presented a gated convolution for free-form image completion. Furthermore, Ma et al. [15] designed a region-wise network to boost the capability of the generative model to adaptively learn feature representations in different regions. These methods achieve promising performance when the corrupted regions are small, but fall short when the incomplete regions occupy a large proportion of the image, inevitably leading to artifacts such as color discrepancies and blurriness.
To overcome the above-mentioned problems, we mainly focus on arbitrary and large image defects. Our work builds upon the recently proposed region-wise approach [15], which employs a region-wise convolution mechanism and trains the network with a joint loss consisting of reconstruction losses, a style loss, a correlation loss and an adversarial loss. It performs well when the missing region is discontinuous but suffers from obvious grid-shaped artifacts in larger and continuous occluded regions. This phenomenon is probably caused by the correlation loss and the style loss, since both use the Gram matrix, which captures pixel-wise correlations rather than global consistency. Furthermore, the authors proposed an adversarial network to mitigate large-area artifacts; however, this yields no obvious improvement in visual quality. Another limitation is that many results contain over-smoothed and incomplete structures when the damaged area is large. We speculate that the original model does not have enough capacity to learn feature representations of different regions. Based on these observations, our proposed approach addresses these points and achieves more desirable results by focusing on non-local relationships and multi-scale information. More specifically, we retain the original region-wise convolutions and the two-stage structure of [15] for the following reasons. Region-wise convolution can perform different operations for different regions; however, adopting the region-wise convolution framework alone yields blurry results, so it is indispensable to incorporate the refinement network to infer more precise details. On top of this structure, we first utilize a pretrained VGG model combined with implicit diversified Markov random fields to replace Gram-matrix matching, which alleviates the grid-like artifacts. However, the repaired results are still blurred when the missing regions occupy a large proportion. Hence, we introduce the contextual attention module (CAM) to capture features from background patches and propagate the spatial coherency of attention. We then find that the results still contain incorrect textures and incomplete structures, which we attribute to the original cascaded dilated convolutions failing to provide abundant and accurate information for the CAM. To tackle this problem, we introduce several dense fusion blocks (DFB) to replace the original cascaded dilated convolutions and simultaneously extract multi-scale features for the CAM. In summary, the combination of the CAM and the DFB achieves visually authentic and perceptually plausible results.
We evaluate and analyze our proposed method on two standard datasets, CelebA-HQ and Paris StreetView. Meanwhile, we compare our model with state-of-the-art schemes and provide experimental results to verify its effectiveness.
The main contributions of this paper are summarized as follows:
  • We address image inpainting for large, randomly located missing regions and employ an ID-MRF loss to tackle the grid-shaped artifacts and color discrepancies caused by the style and correlation losses.
  • We innovatively combine the CAM with the DFB module to help our network generate precise and fine-grained contents by borrowing features from distant spatial locations and extracting multi-scale features.
  • Experiments on multiple benchmark datasets intuitively show that our method is able to achieve competitive results.

2. Related Work

Recently, a great deal of literature has proposed numerous methods for image inpainting. In this section, we mainly review methods related to non-local and multi-scale mechanisms.

2.1. Non-Local Mechanisms

For discontinuous missing regions, semantic information can be easily inferred from the background. However, repairing large and continuous masked areas is challenging due to the huge gap between the empty missing regions and their possible recovered contents. Attention mechanisms based on the relationship between contextual and missing regions have often been used for image inpainting. For instance, Yu et al. [11] proposed a coarse-to-fine generative adversarial network and appended a contextual attention module to learn feature representations by matching patches from the background. However, once wrong information is captured in the first stage, the error propagates to the refinement stage. On this basis, Sagong et al. [16] proposed a parallel extended-decoder path with a modified contextual attention module to reduce the number of convolution operations while creating higher-quality inpainting results. For capturing long-range spatial dependencies, the self-attention mechanism [17] based on the non-local network [18] has been widely adopted. For instance, Uddin et al. [19] designed a global and a local attention architecture to obtain globally and locally coherent information. Yang et al. [20] exploited a self-attention mechanism and integrated it with structural information. Liu et al. [21] proposed a non-local module to capture deeper relationships between different regions using a self-attention framework. However, the non-local module was originally designed for classification, and this operation alone is not sufficient to significantly improve the performance of our framework. In addition to attention mechanisms, a pre-trained VGG network [22] has been widely adopted to extract non-local features by calculating a style loss. The essence of the style loss is to learn the relationship between existing and unknown regions by using the Gram matrix to measure pixel relevance. Although it preserves high-frequency details, it tends to produce grid-shaped artifacts and contents that are inconsistent with the background. To further generate high-quality images, several studies used patch similarities as loss functions to learn non-local features. For example, Yang et al. [23] proposed a style transfer based on an MRF method to promote feature fusion, structure completion and texture reconstruction. In particular, Wang et al. [24] proposed an MRF-based non-local loss to encourage the network to produce high-quality results by considering content consistency and texture similarity.

2.2. Multi-Scale Mechanisms

Currently, multi-scale methods have been widely developed for image inpainting. For instance, Wang et al. [25] introduced a Laplacian-pyramid model to progressively restore images at different resolutions. Mo et al. [26] introduced several multi-scale discriminators to generate results containing more multi-scale information. Wang et al. [24] proposed a multi-column convolutional neural network that enlarges the receptive fields by applying different convolutional kernel sizes. However, these methods tend to be resource-intensive and introduce additional parameters. A common way to aggregate multi-scale information while limiting resource consumption is to enlarge the receptive field with dilated convolutions. In [12], the channel-wise fully connected layer was replaced by cascaded dilated convolutions to broaden the receptive field. To further learn multi-scale features, Hui et al. [27] utilized dense combinations of dilated convolutions with different dilation rates to obtain larger and more effective receptive fields, which is vital for inferring reasonable structures and contents.

3. Proposed Methods

In this section, we first describe our method, which is built on a generative adversarial network. Then we introduce the details of the contextual attention mechanism and the densely connected dilated convolutions. Finally, the objective loss functions, including the reconstruction loss, the ID-MRF loss and the adversarial loss, are presented in detail. The overall framework of our method is displayed in Figure 1.

3.1. The Architecture of Our Framework

As depicted in Figure 1, we take the architecture proposed by Ma et al. [15] as the backbone of our generator, which is composed of several region-wise convolutions and cascaded dilated convolutions in a coarse-to-fine structure. On this basis, we introduce the contextual attention module (CAM) and use densely connected dilated convolutions as dense fusion blocks (DFB) to replace the original cascaded convolutions. The CAM is not suitable for the coarse stage, as this phase cannot provide accurate and delicate enough information for the CAM to borrow and propagate. Moreover, ordinary cascaded dilated convolutions cannot extract multi-scale features of the image. Inspired by these observations, we integrate the DFB into both the coarse and refinement stages and only employ the CAM in the refinement stage. In addition, we embed only one CAM, as it is sufficient for feature borrowing and reconstruction. Overall, the combination of the DFB and the CAM synthesizes more fine-grained and better results. Moreover, in [11] and [16], the attention module is used in one branch of a parallel network; however, we find this is not suitable for our architecture, since our model depends strongly on skip connections and dilated convolutions. Based on this observation, we design a novel refinement framework to improve the robustness of the inpainting model while synthesizing realistic contents. It is worth emphasizing that the input of the CAM is the concatenation of the feature maps before and after the dilated convolutions. As shown in Figure 1, given a ground-truth image $I_{gt}$ and a binary mask $M$ that denotes the damaged areas (0 for missing regions, 1 for existing counterparts), the corrupted image is
$\bar{I}_{gt} = I_{gt} \odot M$
where $\odot$ denotes element-wise multiplication of two matrices. We feed the concatenation of $\bar{I}_{gt}$ and $M$, rather than $\bar{I}_{gt}$ alone, as input to the coarse network, which helps the network concentrate on valid pixels. The coarse network then produces a predicted image $I_{pred1}$ with the same resolution as the original image. We combine the masked area of $I_{pred1}$ with the known regions of the background to form the composite image of the first stage:
$I_{comp1} = \bar{I}_{gt} + I_{pred1} \odot (1 - M)$
Then $I_{comp1}$ is sent to the refinement network, which yields the refined image $I_{pred2}$ and the corresponding composite image
$I_{comp2} = \bar{I}_{gt} + I_{pred2} \odot (1 - M)$
Moreover, only the locally predicted regions are considered in the adversarial phase. Specifically, the two local predictions $I_{pred1} \odot (1 - M)$ and $I_{pred2} \odot (1 - M)$ are fed together into the discriminator to enhance the capability of the generator. In addition, we adopt the recently proposed spectral normalization [28], which controls the Lipschitz constant of the discriminator.
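To make the masking and compositing steps concrete, the following is a minimal NumPy sketch of the operations above; the function and variable names are ours, not taken from the released code, and the random arrays merely stand in for real images and network outputs.

```python
import numpy as np

def corrupt(i_gt, mask):
    """Apply the binary mask (1 = known, 0 = missing) element-wise."""
    return i_gt * mask

def composite(i_gt_bar, i_pred, mask):
    """Keep the known pixels and fill the holes with the network prediction."""
    return i_gt_bar + i_pred * (1.0 - mask)

# Toy example: a 256 x 256 RGB image with a square hole.
i_gt = np.random.rand(256, 256, 3).astype(np.float32)
mask = np.ones((256, 256, 1), dtype=np.float32)
mask[96:160, 96:160] = 0.0                                 # missing region

i_gt_bar = corrupt(i_gt, mask)
coarse_in = np.concatenate([i_gt_bar, mask], axis=-1)      # input of the coarse network
i_pred1 = np.random.rand(256, 256, 3).astype(np.float32)   # stand-in for the coarse output
i_comp1 = composite(i_gt_bar, i_pred1, mask)               # composite image of the first stage
```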

3.2. Contextual Attention Mechanism

Recently, the attention mechanism has been widely used in image inpainting and exhibits great potential for generating high-quality images. Since a non-local mechanism is strongly needed to deal with continuous and large missing regions, we employ a contextual attention mechanism (CAM) in the refinement network to strengthen the generator and obtain sharper and more pleasing results from the initial prediction. Yu et al. [11] first proposed the contextual attention module, which borrows patches from the background to fill holes. However, it uses cosine similarity to match similar patches, which may affect feature extraction due to the normalization operation. Sagong et al. [16] therefore modified this module by replacing the cosine similarity with the Euclidean distance, which makes it more feasible to match and propagate reasonable contents since the Euclidean distance considers not only the angle between patches but also their magnitude. We follow this method to propagate non-local features. The process of the attention mechanism is as follows:
The first step is to divide the feature maps into background and foreground regions: background indicates known regions and foreground denotes their missing counterparts. We obtain the background area by multiplying the feature maps by the mask. We then extract patches from the different regions and reshape the background patches as convolutional filters. Next, we measure the similarity score $\bar{d}_{(x,y),(x',y')}$ between a foreground patch $f_{x,y}$ and a background patch $b_{x',y'}$ as
$\bar{d}_{(x,y),(x',y')} = \tanh\left( -\dfrac{d_{(x,y),(x',y')} - m\big(d_{(x,y),(x',y')}\big)}{\sigma\big(d_{(x,y),(x',y')}\big)} \right)$
where
$d_{(x,y),(x',y')} = \left\| f_{x,y} - b_{x',y'} \right\|$
where $m$ and $\sigma$ denote constant values.
Finally, the foreground region is reconstructed as a weighted sum of background patches, where the importance of each background patch is given by its similarity score. With the assistance of the tanh function, our model can accurately distinguish background from foreground, so as to better match and propagate features. Moreover, this module plays an important role in alleviating the influence of redundant information and synthesizing satisfactory results by adaptively differentiating and fusing long-range spatial information.
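A small NumPy sketch of this similarity computation is given below. It assumes patches have already been flattened into vectors, and it uses the per-row mean and standard deviation of the distances as stand-ins for the constants $m$ and $\sigma$; the function and variable names are ours.

```python
import numpy as np

def attention_weights(fg_patches, bg_patches):
    """fg_patches: (Nf, D), bg_patches: (Nb, D) flattened feature patches.
    Returns, for every foreground patch, weights over the background patches."""
    # Euclidean distances d_{(x,y),(x',y')} = ||f_{x,y} - b_{x',y'}||
    d = np.linalg.norm(fg_patches[:, None, :] - bg_patches[None, :, :], axis=-1)
    # Normalize and squash with tanh so that larger values mean higher similarity.
    d_bar = np.tanh(-(d - d.mean(axis=1, keepdims=True)) /
                    (d.std(axis=1, keepdims=True) + 1e-6))
    # Softmax over background patches gives the weights of the weighted sum.
    e = np.exp(d_bar)
    return e / e.sum(axis=1, keepdims=True)

fg = np.random.rand(16, 27)            # e.g. 3 x 3 x 3 patches from the hole
bg = np.random.rand(64, 27)            # patches from the known region
w = attention_weights(fg, bg)
reconstructed_fg = w @ bg              # weighted sum of background patches
```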

3.3. Dense Connection of Dilated Convolution

Although the CAM delivers a remarkable performance improvement in the reconstruction of structures and contents, it depends on the accuracy of the initially predicted images. In addition, it has been shown that if the coarse network performs poorly, the refinement phase will use irrelevant information and feature patches to match and attend to [16]. We also find that the skip connections in our framework help the network learn more valid and deeper information. Inspired by these observations, we design a dense connection of dilated convolutions similar to the structure in [27]. As illustrated in Figure 1, the middle layers of the network consist of a series of dense fusion blocks (DFB) based on densely connected dilated convolutions, and the concrete structure of each DFB is presented in Figure 2. Unlike the common cascaded dilated convolutions [11] with various dilation rates, which may restrict the flexibility of the generator, our dense combination has a large receptive field and can adaptively learn more effective information. Specifically, a 3 × 3 convolution is employed to reduce the number of parameters and concentrate on more valid features by decreasing the channels to a quarter of the original count. There are then four dilated convolution branches with dilation rates of 2, 4, 8 and 16, respectively, each using a 3 × 3 kernel. Suppose $x_i\ (i=1,2,3,4)$ denotes the four branches, $C_i(\cdot)$ denotes the convolution operator and $y_i\ (i=1,2,3,4)$ denotes the output after $C_i(\cdot)$. The dense connection can then be written as follows:
$y_i = \begin{cases} x_i & i = 1 \\ C_i(x_{i-1} + x_i) & i = 2 \\ C_i(y_{i-1} + x_i) & 2 < i \le 4 \end{cases}$
We obtain multi-scale information by concatenating all the outputs $y_1, y_2, y_3, y_4$. A 1 × 1 convolution is then adopted to aggregate the features. It is worth noting that all the convolution layers in the DFB have the same structure as their counterparts elsewhere in our architecture. A series of DFBs can preserve multi-scale information and increase the richness of the extracted features by enlarging the receptive fields.
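The sketch below illustrates one possible TensorFlow/Keras implementation of such a block under our reading of the equation above: the channel-reduced feature map is fed to four dilated convolutions whose inputs accumulate the previous branch's output, and all outputs are concatenated and fused by a 1 × 1 convolution. The ELU activation and the exact wiring are assumptions, not the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_fusion_block(x, channels=256):
    """Sketch of a dense fusion block: channel reduction, four densely
    connected dilated convolutions (rates 2, 4, 8, 16), then 1x1 fusion."""
    reduced = layers.Conv2D(channels // 4, 3, padding='same', activation='elu')(x)
    outputs, prev = [], None
    for rate in (2, 4, 8, 16):
        # Each branch sees the reduced input plus the previous branch's output.
        branch_in = reduced if prev is None else layers.Add()([reduced, prev])
        prev = layers.Conv2D(channels // 4, 3, padding='same',
                             dilation_rate=rate, activation='elu')(branch_in)
        outputs.append(prev)
    fused = layers.Concatenate()(outputs)              # aggregate multi-scale features
    return layers.Conv2D(channels, 1, padding='same', activation='elu')(fused)

inputs = tf.keras.Input(shape=(64, 64, 256))
outputs = dense_fusion_block(inputs)                   # same spatial size, 256 channels
model = tf.keras.Model(inputs, outputs)
```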

3.4. Loss Functions

Image inpainting is an ill-posed problem in that many possible results exist. Therefore, it is crucial to use loss functions that select the most reasonable and realistic one. In our experiments, we rely on several loss functions to optimize the networks during training.

3.4.1. Reconstruction Loss

The reconstruction loss is a straightforward way to measure pixel-wise differences between the predicted image and the ground truth. We adopt the $L_1$ distance rather than the $L_2$ distance to calculate the reconstruction loss, as the latter tends to produce blurrier images [29]; we verify this in the ablation studies. In our two-stage model, the overall reconstruction loss is expressed as follows:
$L_{re} = \left\| I_{pred1} - I_{gt} \right\|_1 + \left\| I_{pred2} - I_{gt} \right\|_1$
The pixel reconstruction loss guarantees the consistency of contents between the generated area and the background area. Moreover, it helps to reconstruct the initial structural information, although some high-frequency details are ignored.
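As a minimal sketch, the two-stage $L_1$ term can be computed as below; we average over pixels rather than taking the raw sum, a common normalization choice that the paper does not specify.

```python
import tensorflow as tf

def reconstruction_loss(i_gt, i_pred1, i_pred2):
    """Mean absolute error of both stages against the ground truth."""
    return (tf.reduce_mean(tf.abs(i_pred1 - i_gt)) +
            tf.reduce_mean(tf.abs(i_pred2 - i_gt)))
```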

3.4.2. ID-MRF Loss

The style loss and correlation loss in [15] are prone to cause grid-shaped artifacts and color discrepancies when repairing large continuous masked regions. We also find that the repaired target region becomes very blurry if the loss functions that focus on the relationships between different parts are removed. To address these problems, and motivated by the work in [30] and [24], we adopt an ID-MRF loss to capture complex image layouts and provide plausible contents whose visual pattern and style match the ground truth.
Let $I_{pred}$ and $I_{gt}$ be the predicted image and the ground-truth image, respectively, and let $\Phi_P^l$ be the feature map derived from the $l$-th feature layer of a pretrained VGG model for the predicted image; similarly, $\Phi_{GT}^l$ is the feature map of the original image. Let $m$ and $n$ denote patches extracted from $\Phi_P^l$ and $\Phi_{GT}^l$, respectively. The relative similarity between $m$ and $n$ is defined as
$RS(m, n) = \exp\left( \dfrac{\mu(m, n)}{\max_{\nu \in R_n(\Phi_{GT}^l)} \mu(m, \nu) + \delta} \Big/ s \right)$
where $\mu(\cdot, \cdot)$ denotes the cosine similarity, $R_n(\Phi_{GT}^l)$ denotes all neural patches in $\Phi_{GT}^l$ except $n$, and $\delta$ and $s$ are positive constants. Then we normalize $RS(m, n)$ to
$\overline{RS}(m, n) = RS(m, n) \Big/ \sum_{\nu \in R_n(\Phi_{GT}^l)} RS(m, \nu)$
The ID-MRF loss of the $l$-th feature layer between $\Phi_P^l$ and $\Phi_{GT}^l$ is defined as
$L_M(l) = -\log\left( \dfrac{1}{h} \sum_{n \in \Phi_{GT}^l} \max_{m \in \Phi_P^l} \overline{RS}(m, n) \right)$
where $h$ is a normalization constant. Different from the common cosine similarity, the ID-MRF loss concentrates on relative distances, which helps to find high-quality patches in the neighborhood. By minimizing $L_M(l)$, each patch $m$ in $\Phi_P^l$ is encouraged to seek non-local similar candidates in $\Phi_{GT}^l$, which constrains the network to generate images closer to their real counterparts.
In our model, the predicted image $I_{pred2}$ is projected into a higher-level feature space using a VGG16 network pre-trained on ImageNet. We use $conv3\_2$ and $conv4\_2$ to describe image texture and $conv4\_2$ to describe semantic structures. The ID-MRF loss is defined as:
$L_{mrf} = L_M(conv4\_2) + \sum_{i=3}^{4} L_M(convi\_2)$
In contrast to the correlation loss and style loss, which operate pixel-wise rather than patch-wise, our ID-MRF loss can establish relationships between long-range contents.
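The following NumPy sketch shows how the single-layer term $L_M(l)$ can be evaluated on flattened VGG patches. It is an illustration under our assumptions: the values of $\delta$, $s$ and $h$ are placeholders, and the maximum in the denominator is taken over all patches rather than strictly excluding $n$, a small simplification.

```python
import numpy as np

def id_mrf_layer_loss(pred_patches, gt_patches, delta=1e-5, s=0.5, h=None):
    """pred_patches: (Np, D) patches from Phi_P^l, gt_patches: (Ng, D) from Phi_GT^l."""
    # Cosine similarity mu(m, n) between every predicted and ground-truth patch.
    p = pred_patches / (np.linalg.norm(pred_patches, axis=1, keepdims=True) + 1e-8)
    g = gt_patches / (np.linalg.norm(gt_patches, axis=1, keepdims=True) + 1e-8)
    mu = p @ g.T                                            # shape (Np, Ng)
    # Relative similarity: each mu(m, n) against the best other match of m.
    best = np.max(mu, axis=1, keepdims=True)                # approximates max over nu != n
    rs = np.exp((mu / (best + delta)) / s)
    rs_bar = rs / rs.sum(axis=1, keepdims=True)             # normalized RS
    h = h if h is not None else gt_patches.shape[0]
    return -np.log(np.sum(np.max(rs_bar, axis=0)) / h)      # L_M(l)

loss = id_mrf_layer_loss(np.random.rand(32, 64), np.random.rand(40, 64))
```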

3.4.3. Adversarial Loss

Relying on the generator alone cannot guarantee plausible results. The experiments in [31] confirm that an adversarial network helps to remove grid-like artifacts. Motivated by this research, and aiming to produce pleasing results, we adopt a discriminator to encourage the generator to synthesize visually consistent results. Given $I_{gt}$, $I_{pred1}$ and $I_{pred2}$, we penalize the predicted missing regions rather than the entire image and concatenate these regions with the corresponding mask as inputs to the discriminator. The learning objective for the discriminator in our experiments is formulated as:
$L_{adv} = \alpha\, \mathbb{E}\big[ D(I_{gt} \odot (1 - M), M) \big] + \mathbb{E}\big[ 1 - D(I_{pred1} \odot (1 - M), M) \big] + \mathbb{E}\big[ 1 - D(I_{pred2} \odot (1 - M), M) \big]$
where $D$ and $M$ denote the discriminator and the mask, respectively. We set $\alpha$ to 0.01.
The generator strives to improve the quality of the synthesized results in order to fool the discriminator, while the discriminator learns to judge whether a predicted image is real or fake, until it can no longer distinguish the images produced by the generator from real ones.
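Taking the reconstructed objective above at face value, a hedged sketch of the discriminator loss could look as follows; `d` is assumed to map the concatenation of a hole-only image and the mask to a score per sample, and the spectral normalization of the discriminator is omitted here.

```python
import tensorflow as tf

def discriminator_loss(d, i_gt, i_pred1, i_pred2, mask, alpha=0.01):
    """Penalize only the predicted missing regions, concatenated with the mask."""
    hole = 1.0 - mask
    real = d(tf.concat([i_gt * hole, mask], axis=-1))
    fake1 = d(tf.concat([i_pred1 * hole, mask], axis=-1))
    fake2 = d(tf.concat([i_pred2 * hole, mask], axis=-1))
    return (alpha * tf.reduce_mean(real) +
            tf.reduce_mean(1.0 - fake1) + tf.reduce_mean(1.0 - fake2))
```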

3.4.4. Overall Loss

Finally, we obtain a hybrid loss function that is a linear combination of the reconstruction loss $L_{re}$, the ID-MRF loss $L_{mrf}$ and the adversarial loss $L_{adv}$:
$L = \lambda_1 L_{re} + \lambda_2 L_{mrf} + \lambda_3 L_{adv}$
where $\lambda_1$, $\lambda_2$ and $\lambda_3$ weight the different loss components. The joint loss encourages the generator to produce semantically reasonable and visually realistic results.

4. Experiments

In this section, we present the datasets used in this work and our experimental implementation. We also compare our approach with several state-of-the-art image inpainting methods to evaluate the effectiveness of our model qualitatively and quantitatively. Finally, we conduct ablation studies to examine the effect of different components in our model.

4.1. Datasets and Masks

We validate our method on two public and widely used datasets: CelebA-HQ and Paris StreetView. The former focuses on human faces and contains 30,000 images; the latter, collected from street views of Paris, contains 14,900 training images and 100 test images. We use the original train, test and validation splits for both datasets. Moreover, we adopt an algorithm to generate irregular masks during training and testing, which better matches naturally damaged images and helps avoid over-fitting.
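The paper does not detail its mask-generation algorithm, so the following is a purely hypothetical free-form mask generator in the spirit of the description: random thick strokes are drawn until the hole covers up to the target ratio, and the returned mask follows the convention of Section 3.1 (1 = known, 0 = missing).

```python
import numpy as np
import cv2

def random_irregular_mask(h=256, w=256, max_strokes=8, max_ratio=0.4, rng=None):
    """Hypothetical free-form mask generator (not the authors' algorithm)."""
    rng = rng or np.random.default_rng()
    hole = np.zeros((h, w), dtype=np.uint8)
    target = rng.uniform(0.0, max_ratio)                  # hole ratio in [0, 40%]
    for _ in range(max_strokes):
        if hole.mean() >= target:
            break
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        for _ in range(int(rng.integers(3, 10))):
            nx = int(np.clip(x + rng.integers(-60, 61), 0, w - 1))
            ny = int(np.clip(y + rng.integers(-60, 61), 0, h - 1))
            cv2.line(hole, (x, y), (nx, ny), 1, thickness=int(rng.integers(10, 30)))
            x, y = nx, ny
    return 1.0 - hole.astype(np.float32)                  # binary mask M
```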

4.2. Implementation Details

Our proposed framework is implemented in TensorFlow. The size of the input images is 256 × 256 and the batch size is 4. Our model is optimized by the Adam algorithm with a learning rate of $1 \times 10^{-4}$, $\beta_1 = 0.5$ and $\beta_2 = 0.9$. We train our model on an NVIDIA 2070 GPU (8 GB) and an NVIDIA 2080Ti GPU (11 GB). To stabilize training, we divide it into two steps. Specifically, we train the model without the adversarial loss for the first 20 epochs and add it for the last 10 epochs. We find that a larger weight on the ID-MRF loss causes incorrect contents to propagate, while a smaller weight yields over-smoothed and blurry results. Motivated by this observation, the trade-off parameters $\lambda_1$, $\lambda_2$ and $\lambda_3$ are set to 20, 1 and 0 in the first step. In the second step, we deploy the adversarial network and set $\lambda_1$ to 10, $\lambda_2$ to 1 and $\lambda_3$ to 1. The masked images include missing regions with variable numbers, sizes, shapes and locations in every iteration. In particular, the proportion of arbitrarily damaged areas varies from 0% to 40% during training. In addition, we experimentally set the number of DFBs to 8. It takes 2 s to predict missing regions of any shape and about one day to train 20 epochs on 28,000 high-resolution images.
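A compact sketch of this optimizer setup and the two-step loss schedule is given below; the helper names are ours and the epoch boundaries simply mirror the 20 + 10 split described above.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5, beta_2=0.9)

def loss_weights(epoch):
    """Step 1 (epochs 0-19): no adversarial term; step 2 (epochs 20-29): add it."""
    return (20.0, 1.0, 0.0) if epoch < 20 else (10.0, 1.0, 1.0)

def total_loss(l_re, l_mrf, l_adv, epoch):
    lam1, lam2, lam3 = loss_weights(epoch)
    return lam1 * l_re + lam2 * l_mrf + lam3 * l_adv
```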

4.3. Comparative Experiments

We apply our network in qualitative and quantitative comparisons with several state-of-the-art methods, including contextual attention (CA) [11], partial convolution (PConv) [13], GatedConv (GC) [14], EdgeConnect (EC) [32], Pluralistic Image Completion (PIC) [33] and Region-Wise Conv (RWC) [15]. For a fair evaluation, we carry out the experiments on both discontinuous and continuous missing regions, and every test image has a corresponding mask. For CA, GC and RWC, we train the models on the CelebA-HQ and Paris StreetView datasets with the released code. For EC and PIC, we directly adopt the pre-trained models, as our own training did not perform as well as the released results. As for PConv, we refer to the implementation on GitHub (https://github.com/MathiasGruber/PConv-Keras, accessed on 15 December 2020) and follow the authors' suggestions for training, since no official code is publicly available.

4.3.1. Qualitative Comparison

Figure 3 and Figure 4 present the inpainting results of different state-of-the-art approaches on samples selected from the CelebA-HQ and Paris StreetView datasets; GT indicates the ground-truth image. To make a comprehensive and objective comparison, we explore the effect of mask ratios ranging from 0–40%, which is consistent with the training phase. As seen in Figure 3 and Figure 4, the inpainting results of CA suffer from visible distortions and inconsistency because it was originally designed to restore regular missing holes. Among the remaining algorithms, EC reconstructs images with more accurate and intact structures when the missing region is narrow, but it still exhibits artifacts compared to the ground truth. PIC is designed to generate reasonable and diverse results, hence it has difficulty approximating the true distribution of images. Although PConv and GC are designed to deal with irregular missing regions, they fail to repair some structures, such as the eyes and some details in buildings. RWC is designed to cope with arbitrarily masked images: it successfully repairs correct contents when the missing region is discontinuous, but produces strong grid-like artifacts and incomplete structures in large continuous missing parts. For face images, our model is able to repair missing glasses, synthesize symmetrical eyes and predict vivid results. For the Paris StreetView dataset, our method exhibits superior performance with more intact structures and exquisite details. From the above discussion, we conclude that our model achieves competitive results with fine-grained details, pleasing textures and consistent structures, thanks to the combination of the ID-MRF loss, the dense fusion blocks and the contextual attention module.

4.3.2. Quantitative Comparison

We evaluate our model with five commonly used image quality metrics: the L1 loss, the L2 loss, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM) and the Frechet Inception Distance (FID). Specifically, the PSNR measures the difference in pixel values between two images, and the SSIM measures the similarity between the reconstructed image and the original image; larger PSNR and SSIM values indicate smaller gaps between the generated image and the ground truth. Moreover, the L1 loss reflects how close the generated image is to its real counterpart, and the L2 loss is the mean squared error. FID computes the Wasserstein-2 distance between the distributions of real and generated images using a pre-trained Inception-V3 model [34].
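For reference, the per-image metrics can be computed as in the sketch below (FID requires a separate Inception-V3 feature pipeline and is omitted); whether the paper scales the L1/L2 values exactly this way is an assumption on our part.

```python
import tensorflow as tf

def evaluate_pair(i_gt, i_pred):
    """Per-image metrics for float images in [0, 1], shape (H, W, C)."""
    return {
        'L1 (x1e-3)': float(tf.reduce_mean(tf.abs(i_gt - i_pred))) * 1e3,
        'L2 (x1e-3)': float(tf.reduce_mean(tf.square(i_gt - i_pred))) * 1e3,
        'PSNR': float(tf.image.psnr(i_gt, i_pred, max_val=1.0)),
        'SSIM': float(tf.image.ssim(i_gt, i_pred, max_val=1.0)),
    }
```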
Table 1 and Table 2 list full comparisons of all discussed methods in terms of the five metrics under different ratios of irregular masks. As illustrated in these tables, repairing missing regions on the Paris StreetView dataset is more difficult, since its PSNR and SSIM values are generally lower than those on the CelebA-HQ dataset while its L1, L2 and FID scores are higher. For discontinuous damaged regions, our method shows a clear improvement in all indicators. CA shows a competitive PSNR at the mask ratio of 0–10%, but its performance degrades sharply as the damaged regions grow. PConv, EC and PIC show almost the same performance on the two datasets and obtain inferior scores compared with GC, RWC and ours, since they lack a deep understanding of semantic information and of the correlation between existing regions and their surroundings. The quantitative results demonstrate that our model achieves better scores in most cases. It is worth noting that the average PSNR increases by 1.28 dB for continuous missing regions on the Paris StreetView dataset.

4.4. Ablation Studies

In this section, we analyze the contribution of each component of our proposed model to the final performance by presenting the inpainting results quantitatively and qualitatively. First, we compare the repair results under the constraints of the L1 and L2 reconstruction losses. Then we present the results obtained with the correlation loss (CL), the style loss (SL), the ID-MRF loss (IM) and the combination of IM and the adversarial loss (IM+AD). Subsequently, we use IM+AD as the baseline (BL) and append the attention module to it (BL+AT) to validate its effectiveness. Finally, we replace the cascaded dilated convolutions in BL with several dense fusion blocks (DFB) to identify their role in the whole model, which is denoted as BL(Rcdc)+DFB.
It can be seen from Figure 5 and Figure 6 that the L2 reconstruction loss suffers from more severe blurring and shadow-like artifacts than the L1 loss. Moreover, the CL incurs obvious color discrepancies, and the SL tends to produce high-frequency details that are inconsistent with the non-missing regions, such as grid-like and aliasing artifacts. The reason may be that these losses lack high-level semantic features and a reasonable extraction of spatial information to guide image synthesis. The ID-MRF loss alleviates the grid-like artifacts, although over-smoothed and blurry results are still obtained in large missing regions. The adversarial loss further mitigates the blurriness, but it is still far from meeting the visual requirements. Moreover, pleasing contents and reasonable structures cannot be guaranteed by using either AT or DFB alone. By integrating them, the repaired images show an obvious visual improvement in structure and texture.
As shown in Table 3 and Table 4, the model trained with the L2 loss performs far worse than the one trained with the L1 loss across the five indicators, which is consistent with the visual appearance. Compared with the correlation loss and style loss, most of the metrics improve by a large margin under the ID-MRF loss constraint. This indicates that a loss based on the non-local mechanism is more effective at improving the relevance between hole and background regions than losses based on pixel-wise correlations. By gradually adding the adversarial loss, the attention module and the BL(Rcdc)+DFB module, the performance of our model improves steadily. In particular, the combination of the CAM and the DFB achieves superior performance to using either of them alone. We attribute this to the joint effect of enlarging the receptive field and reconstructing non-local relevant features.

5. Conclusions and Future Work

This paper combines two non-local mechanisms, the ID-MRF loss and the contextual attention module (CAM), with a multi-scale component named the dense fusion block (DFB), which relies on densely connected dilated convolutions. Under the interaction of these mechanisms, our model can repair both large continuous and discontinuous missing regions. Specifically, the ID-MRF loss suppresses the color discrepancies and grid-like artifacts caused by the correlation loss and the style loss in image inpainting. On this basis, we integrate the CAM with the DFB to further predict high-quality results with finer details: the former captures long-range spatial information by borrowing or copying feature information from known background patches, and the latter extracts multi-scale features by enlarging the receptive fields. Experimental results demonstrate that our proposed model achieves superior performance to state-of-the-art methods both quantitatively and qualitatively.
Although the DFBs contribute to the performance of our model, their contribution is limited, and they require considerable computational resources due to the dense connection structure. Hence, further improvements can be achieved by reducing the parameters of the network. In addition, the receptive-field-aware module [35] demonstrates a strong capability of enlarging the receptive field in image segmentation, and combining this technique with our model would be meaningful. Thus, future research will focus on how to apply the receptive-field-aware framework to image inpainting.

Author Contributions

X.H. and Y.Y. proposed the research idea of this paper. X.H. was responsible for the experiments, data analysis and interpretation of the results. Y.Y. was responsible for the verification of the research plan. The paper was mainly written by X.H. and the manuscript was revised and reviewed by Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, Z.; He, H.; Tai, H.-M.; Yin, Z.; Chen, F. Color-Direction Patch-Sparsity-Based Image Inpainting Using Multidirection Features. IEEE Trans. Image Process. 2014, 24, 1138–1152. [Google Scholar] [CrossRef]
  2. Li, Z.; Liu, J.; Cheng, J. Exploiting Multi-Direction Features in MRF-Based Image Inpainting Approaches. IEEE Access 2019, 7, 179905–179917. [Google Scholar] [CrossRef]
  3. Cao, J.; Zhang, Z.; Zhao, A.; Cui, H.; Zhang, Q. Ancient mural restoration based on a modified generative adversarial network. Herit. Sci. 2020, 8, 7. [Google Scholar] [CrossRef]
  4. Liu, Q.; Li, S.; Xiao, J.; Zhang, M. Multi-filters guided low-rank tensor coding for image inpainting. Signal Process. Image Commun. 2019, 73, 70–83. [Google Scholar] [CrossRef]
  5. Biradar, R.L.; Kohir, V.V. A novel image inpainting technique based on median diffusion. Sadhana 2013, 38, 621–644. [Google Scholar] [CrossRef] [Green Version]
  6. Bertalmio, M.; Vese, L.; Sapiro, G.; Osher, S. Simultaneous structure and texture image inpainting. IEEE Trans. Image Process. 2003, 12, 882–889. [Google Scholar]
  7. Yeh, R.A.; Chen, C.; Lim, T.Y.; Schwing, A.G.; Hasegawa-Johnson, M.; Do, M.N. Semantic Image Inpainting with Deep Generative Models. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6882–6890. [Google Scholar]
  8. Li, X.; Hu, G.; Zhu, J.; Zuo, W.; Wang, M.; Zhang, L. Learning Symmetry Consistent Deep CNNs for Face Completion. IEEE Trans. Image Process. 2020, 29, 7641–7655. [Google Scholar] [CrossRef]
  9. Chen, M.; Liu, Z.; Ye, L.; Wang, Y. Attentional coarse-and-fine generative adversarial networks for image inpainting. Neurocomputing 2020, 405, 259–269. [Google Scholar] [CrossRef]
  10. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context Encoders: Feature Learning by Inpainting. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544. [Google Scholar]
  11. Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T.S. Generative Image Inpainting with Contextual Attention. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5505–5514. [Google Scholar]
  12. Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Globally and locally consistent image completion. ACM Trans. Graph. 2017, 36, 1–14. [Google Scholar] [CrossRef]
  13. Liu, G.; Reda, F.A.; Shih, K.J.; Wang, T.-C.; Tao, A.; Catanzaro, B. Image Inpainting for Irregular Holes Using Partial Convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 89–105. [Google Scholar]
  14. Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T. Free-Form Image Inpainting with Gated Convolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 4470–4479. [Google Scholar]
  15. Ma, Y.; Liu, X.; Bai, S.; Wang, L.; Liu, A.; Tao, D.; Hancock, E. Region-wise Generative Adversarial Image Inpainting for Large Missing Areas. arXiv 2019, arXiv:1909.12507. [Google Scholar]
  16. Sagong, M.-C.; Shin, Y.-G.; Kim, S.-W.; Park, S.; Ko, S.-J. PEPSI: Fast Image Inpainting with Parallel Decoding Network. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 11352–11360. [Google Scholar]
  17. Qiu, J.; Gao, Y.; Shen, M. Semantic-SCA: Semantic Structure Image Inpainting with the Spatial-Channel Attention. IEEE Access 2021, 9, 12997–13008. [Google Scholar] [CrossRef]
  18. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
  19. Uddin, S.M.N.; Jung, Y.J. Global and Local Attention-Based Free-Form Image Inpainting. Sensors 2020, 20, 3204. [Google Scholar] [CrossRef]
  20. Yang, J.; Qi, Z.; Shi, Y. Learning to Incorporate Structure Knowledge for Image Inpainting. arXiv 2020, arXiv:2002.04170. [Google Scholar]
  21. Liu, D.; Wen, B.H.; Fan, Y.C.; Loy, C.C.; Huang, T.S. Non-Local Recurrent Network for Image Restoration. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 2–8 December 2018; p. 31. [Google Scholar]
  22. Sun, T.; Fang, W.; Chen, W.; Yao, Y.; Bi, F.; Wu, B. High-Resolution Image Inpainting Based on Multi-Scale Neural Network. Electronics 2019, 8, 1370. [Google Scholar] [CrossRef] [Green Version]
  23. Yang, C.; Lu, X.; Lin, Z.; Shechtman, E.; Wang, O.; Li, H. High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4076–4084. [Google Scholar]
  24. Wang, Y.; Tao, X.; Qi, X.J.; Shen, X.Y.; Jia, J.Y. Image Inpainting via Generative Multi-column Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 2–8 December 2018; p. 31. [Google Scholar]
  25. Wang, Q.; Fan, H.; Sun, G.; Cong, Y.; Tang, Y. Laplacian pyramid adversarial network for face completion. Pattern Recognit. 2019, 88, 493–505. [Google Scholar] [CrossRef]
  26. Mo, J.; Zhou, Y. The image inpainting algorithm used on multi-scale generative adversarial networks and neighbourhood. Automatika 2020, 61, 704–713. [Google Scholar] [CrossRef]
  27. Hui, Z.; Li, J.; Wang, X.; Gao, X. Image Fine-grained Inpainting. arXiv 2020, arXiv:2002.02609. [Google Scholar]
  28. Miyato, T.; Kataoka, T.; Koyama, M.; Yoshida, Y. Spectral Normalization for Generative Adversarial Networks. arXiv 2018, arXiv:1802.05957. [Google Scholar]
  29. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar]
  30. Li, C.; Wand, M. Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2479–2486. [Google Scholar]
  31. Vo, H.V.; Duong, N.Q.K.; Pérez, P. Structural inpainting. In Proceedings of the 2018 ACM Multimedia Conference (Mm′18), Seoul, Korea, 22–26 October 2018; pp. 1948–1956. [Google Scholar]
  32. Nazeri, K.; Ng, E.; Joseph, T.; Qureshi, F.Z.; Ebrahimi, M. EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning. arXiv 2019, arXiv:1901.00212. [Google Scholar]
  33. Zheng, C.; Cham, T.-J.; Cai, J. Pluralistic Image Completion. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 1438–1447. [Google Scholar]
  34. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv 2017, arXiv:1706.08500. [Google Scholar]
  35. Singh, V.K.; Abdel-Nasser, M.; Pandey, N.; Puig, D. LungINFseg: Segmenting COVID-19 Infected Regions in Lung CT Images Based on a Receptive-Field-Aware Deep Learning Framework. Diagnostics 2021, 11, 158. [Google Scholar] [CrossRef]
Figure 1. The overall architecture of our method. Region-wise convolution indicates using different convolution filters for different regions; more details can be found in [15]. In this architecture, 256 × 256 and 32 denote the size and the number of channels of the feature map, respectively.
Figure 2. The framework of the dense fusion block. "Conv-3-2" indicates a 3 × 3 convolution layer with a dilation rate of 2. ⊕ denotes element-wise summation. The output channels of all convolutional layers are 64, except for the last layer, which has 256.
Figure 3. Qualitative comparisons of different methods on discontinuous missing areas.
Figure 4. Qualitative comparisons of different methods on continuous missing areas.
Figure 5. Qualitative results of ablation studies on discontinuous missing regions. (Best viewed with zoom-in.)
Figure 6. Qualitative results of ablation studies on continuous missing regions. (Best viewed with zoom-in.)
Table 1. Quantitative comparisons on discontinuous missing regions, where bold indicates the best performance and underline denotes the sub-optimal results; + indicates that higher is better, while − indicates that lower is better.
| Metric | Mask | CelebA-HQ: CA | PConv | EC | PIC | GC | RWC | Ours | Paris StreetView: CA | PConv | EC | PIC | GC | RWC | Ours |
| PSNR + | 0–10% | 34.89 | 34.24 | 34.58 | 34.69 | 39.28 | 41.76 | 42.25 | 35.30 | 34.20 | 34.56 | 34.02 | 37.85 | 41.52 | 42.19 |
| PSNR + | 10–20% | 27.54 | 31.01 | 31.22 | 31.31 | 32.65 | 34.32 | 34.95 | 28.83 | 30.52 | 30.91 | 30.26 | 30.97 | 34.26 | 35.11 |
| PSNR + | 20–30% | 24.14 | 28.21 | 28.58 | 28.61 | 29.07 | 30.46 | 31.18 | 25.56 | 27.67 | 28.13 | 27.30 | 27.01 | 30.15 | 31.12 |
| PSNR + | 30–40% | 22.83 | 26.95 | 27.35 | 27.56 | 27.92 | 29.45 | 30.18 | 23.85 | 26.31 | 26.70 | 25.83 | 25.96 | 29.08 | 30.04 |
| SSIM + | 0–10% | 0.972 | 0.938 | 0.945 | 0.944 | 0.984 | 0.990 | 0.991 | 0.972 | 0.948 | 0.950 | 0.947 | 0.983 | 0.989 | 0.990 |
| SSIM + | 10–20% | 0.904 | 0.908 | 0.917 | 0.912 | 0.950 | 0.964 | 0.966 | 0.902 | 0.909 | 0.915 | 0.904 | 0.937 | 0.961 | 0.965 |
| SSIM + | 20–30% | 0.838 | 0.872 | 0.879 | 0.876 | 0.908 | 0.930 | 0.935 | 0.835 | 0.869 | 0.871 | 0.850 | 0.873 | 0.918 | 0.927 |
| SSIM + | 30–40% | 0.763 | 0.839 | 0.847 | 0.843 | 0.873 | 0.905 | 0.913 | 0.754 | 0.810 | 0.829 | 0.799 | 0.828 | 0.892 | 0.904 |
| L1 (10^-3) | 0–10% | 5.40 | 13.40 | 12.94 | 13.00 | 4.10 | 1.63 | 1.56 | 5.06 | 13.34 | 13.19 | 13.48 | 4.26 | 2.50 | 1.61 |
| L1 (10^-3) | 10–20% | 18.09 | 16.22 | 16.00 | 6.03 | 8.68 | 5.67 | 5.37 | 12.49 | 17.13 | 16.79 | 17.56 | 9.92 | 6.42 | 5.24 |
| L1 (10^-3) | 20–30% | 24.43 | 21.03 | 20.33 | 20.18 | 16.82 | 10.30 | 9.63 | 21.46 | 22.17 | 21.78 | 23.64 | 18.20 | 12.11 | 10.34 |
| L1 (10^-3) | 30–40% | 32.98 | 24.12 | 23.29 | 23.08 | 18.40 | 13.59 | 12.67 | 29.93 | 27.09 | 25.99 | 28.37 | 23.22 | 15.47 | 13.43 |
| L2 (10^-3) | 0–10% | 0.61 | 0.54 | 0.50 | 0.48 | 0.19 | 0.13 | 0.11 | 0.42 | 0.55 | 0.52 | 0.58 | 0.29 | 0.16 | 0.13 |
| L2 (10^-3) | 10–20% | 2.25 | 1.12 | 0.98 | 0.95 | 0.69 | 0.49 | 0.42 | 1.51 | 1.19 | 1.13 | 1.27 | 1.09 | 0.59 | 0.49 |
| L2 (10^-3) | 20–30% | 4.64 | 1.94 | 1.75 | 1.70 | 1.58 | 1.13 | 0.96 | 3.28 | 2.33 | 2.10 | 2.54 | 2.73 | 1.45 | 1.18 |
| L2 (10^-3) | 30–40% | 6.19 | 2.55 | 2.23 | 2.11 | 1.95 | 1.39 | 1.17 | 4.68 | 3.15 | 2.80 | 3.38 | 3.37 | 1.83 | 1.49 |
| FID | 0–10% | 1.97 | 1.58 | 1.47 | 1.37 | 0.37 | 0.23 | 0.18 | 8.32 | 5.52 | 4.47 | 6.12 | 3.06 | 1.55 | 1.13 |
| FID | 10–20% | 8.95 | 2.77 | 2.61 | 2.29 | 1.37 | 0.84 | 0.70 | 25.37 | 13.12 | 10.33 | 14.06 | 12.66 | 6.24 | 4.58 |
| FID | 20–30% | 13.98 | 4.26 | 3.74 | 3.20 | 2.40 | 1.62 | 1.38 | 36.14 | 17.53 | 15.67 | 21.94 | 24.58 | 13.68 | 9.90 |
| FID | 30–40% | 27.06 | 6.18 | 5.40 | 4.36 | 3.55 | 2.14 | 1.82 | 57.93 | 30.06 | 26.26 | 34.20 | 34.18 | 17.00 | 12.98 |
Table 2. Quantitative comparisons on continuous missing regions, where bold indicates the best performance and underline denotes the sub-optimal results; + indicates that higher is better, while − indicates that lower is better.
| Metric | Mask | CelebA-HQ: CA | PConv | EC | PIC | GC | RWC | Ours | Paris StreetView: CA | PConv | EC | PIC | GC | RWC | Ours |
| PSNR + | 0–10% | 32.11 | 32.02 | 32.70 | 32.91 | 36.43 | 37.70 | 38.30 | 32.83 | 31.48 | 33.09 | 32.30 | 33.98 | 36.18 | 37.69 |
| PSNR + | 10–20% | 25.33 | 27.33 | 28.05 | 28.00 | 29.00 | 29.54 | 30.02 | 26.39 | 27.14 | 28.74 | 27.18 | 27.28 | 29.32 | 30.58 |
| PSNR + | 20–30% | 22.86 | 24.59 | 25.11 | 24.81 | 25.63 | 26.10 | 26.61 | 23.40 | 24.47 | 26.02 | 24.26 | 24.33 | 26.13 | 27.50 |
| PSNR + | 30–40% | 20.21 | 21.32 | 21.99 | 21.58 | 22.31 | 22.80 | 23.39 | 20.69 | 21.73 | 23.24 | 21.60 | 21.79 | 23.24 | 24.31 |
| SSIM + | 0–10% | 0.971 | 0.936 | 0.945 | 0.942 | 0.979 | 0.982 | 0.982 | 0.962 | 0.901 | 0.933 | 0.926 | 0.960 | 0.977 | 0.981 |
| SSIM + | 10–20% | 0.905 | 0.898 | 0.906 | 0.901 | 0.944 | 0.944 | 0.947 | 0.900 | 0.880 | 0.895 | 0.881 | 0.903 | 0.928 | 0.940 |
| SSIM + | 20–30% | 0.863 | 0.845 | 0.866 | 0.853 | 0.896 | 0.897 | 0.903 | 0.837 | 0.833 | 0.844 | 0.828 | 0.847 | 0.872 | 0.891 |
| SSIM + | 30–40% | 0.790 | 0.790 | 0.801 | 0.792 | 0.835 | 0.837 | 0.849 | 0.756 | 0.757 | 0.782 | 0.748 | 0.770 | 0.808 | 0.836 |
| L1 (10^-3) | 0–10% | 7.15 | 15.38 | 14.26 | 14.21 | 4.95 | 2.84 | 2.65 | 6.28 | 15.10 | 14.14 | 14.83 | 5.78 | 4.15 | 2.83 |
| L1 (10^-3) | 10–20% | 17.26 | 20.25 | 19.77 | 19.91 | 11.05 | 8.85 | 8.26 | 15.60 | 22.67 | 19.55 | 22.42 | 14.49 | 10.84 | 8.75 |
| L1 (10^-3) | 20–30% | 27.49 | 28.12 | 26.93 | 27.69 | 18.98 | 16.76 | 17.19 | 27.03 | 28.85 | 26.23 | 31.54 | 24.46 | 19.22 | 15.85 |
| L1 (10^-3) | 30–40% | 45.36 | 41.68 | 39.75 | 41.24 | 31.91 | 29.80 | 27.72 | 42.51 | 40.24 | 36.98 | 45.10 | 37.66 | 31.28 | 26.97 |
| L2 (10^-3) | 0–10% | 1.23 | 0.96 | 0.84 | 0.64 | 0.54 | 0.46 | 0.41 | 0.82 | 1.14 | 0.73 | 0.97 | 0.72 | 0.52 | 0.39 |
| L2 (10^-3) | 10–20% | 3.93 | 2.55 | 2.15 | 1.04 | 1.49 | 1.58 | 1.46 | 3.08 | 3.03 | 1.92 | 3.12 | 2.81 | 1.78 | 1.46 |
| L2 (10^-3) | 20–30% | 6.92 | 4.87 | 4.00 | 1.61 | 3.07 | 3.21 | 2.95 | 6.00 | 5.56 | 3.42 | 5.79 | 5.23 | 3.56 | 2.75 |
| L2 (10^-3) | 30–40% | 12.99 | 9.21 | 8.01 | 2.08 | 6.38 | 6.66 | 5.98 | 10.60 | 9.23 | 6.35 | 9.77 | 8.91 | 6.53 | 5.44 |
| FID | 0–10% | 1.09 | 1.63 | 1.56 | 1.48 | 0.53 | 0.49 | 0.43 | 7.52 | 9.14 | 6.72 | 8.39 | 6.21 | 4.78 | 3.83 |
| FID | 10–20% | 3.28 | 3.06 | 2.84 | 2.38 | 1.47 | 1.49 | 1.36 | 15.44 | 14.03 | 12.09 | 13.32 | 15.79 | 12.45 | 11.79 |
| FID | 20–30% | 8.02 | 4.15 | 4.02 | 3.31 | 2.46 | 3.48 | 2.45 | 27.66 | 23.45 | 20.11 | 24.53 | 27.41 | 21.56 | 19.81 |
| FID | 30–40% | 12.44 | 6.23 | 5.86 | 4.63 | 4.13 | 6.20 | 4.01 | 38.53 | 36.23 | 28.43 | 37.37 | 38.85 | 30.45 | 30.53 |
Table 3. Quantitative results of ablation studies on discontinuous missing regions, where + indicates that higher is better, − indicates that lower is better, and bold indicates the best performance.
| Metric | Mask | CelebA-HQ: L2 | L1 | CL | SL | IM | IM+AD (BL) | BL+AT | BL(Rcdc)+DFB | All Model | Paris StreetView: L2 | L1 | CL | SL | IM | IM+AD (BL) | BL+AT | BL(Rcdc)+DFB | All Model |
| PSNR + | 0–10% | 39.40 | 41.49 | 41.47 | 41.45 | 41.46 | 41.60 | 41.55 | 41.61 | 42.25 | 40.49 | 41.63 | 41.31 | 41.18 | 41.69 | 41.74 | 42.14 | 42.01 | 42.19 |
| PSNR + | 10–20% | 32.87 | 34.76 | 34.13 | 34.05 | 34.75 | 34.88 | 34.85 | 34.93 | 34.95 | 33.66 | 34.57 | 34.23 | 34.01 | 34.63 | 34.70 | 35.00 | 34.90 | 35.11 |
| PSNR + | 20–30% | 29.35 | 30.95 | 30.26 | 30.19 | 30.95 | 31.09 | 31.04 | 31.18 | 31.18 | 29.92 | 30.57 | 30.20 | 30.03 | 30.54 | 30.69 | 31.03 | 30.99 | 31.12 |
| PSNR + | 30–40% | 28.35 | 30.02 | 29.26 | 29.18 | 30.02 | 30.14 | 30.12 | 30.22 | 30.18 | 28.82 | 29.51 | 29.16 | 28.96 | 29.63 | 30.03 | 29.92 | 29.87 | 30.04 |
| SSIM + | 0–10% | 0.983 | 0.990 | 0.990 | 0.990 | 0.989 | 0.989 | 0.989 | 0.989 | 0.991 | 0.987 | 0.989 | 0.989 | 0.989 | 0.989 | 0.989 | 0.990 | 0.990 | 0.990 |
| SSIM + | 10–20% | 0.947 | 0.966 | 0.962 | 0.962 | 0.965 | 0.965 | 0.965 | 0.965 | 0.966 | 0.954 | 0.962 | 0.960 | 0.960 | 0.961 | 0.962 | 0.964 | 0.963 | 0.965 |
| SSIM + | 20–30% | 0.907 | 0.935 | 0.925 | 0.925 | 0.933 | 0.933 | 0.933 | 0.934 | 0.935 | 0.909 | 0.922 | 0.916 | 0.916 | 0.920 | 0.921 | 0.925 | 0.923 | 0.927 |
| SSIM + | 30–40% | 0.872 | 0.913 | 0.898 | 0.899 | 0.911 | 0.912 | 0.912 | 0.912 | 0.913 | 0.878 | 0.896 | 0.889 | 0.889 | 0.896 | 0.897 | 0.902 | 0.899 | 0.904 |
| L1 (10^-3) | 0–10% | 4.54 | 3.80 | 1.73 | 1.70 | 3.77 | 3.76 | 3.78 | 3.76 | 1.56 | 2.11 | 1.73 | 2.59 | 2.54 | 1.74 | 1.73 | 1.64 | 1.64 | 1.61 |
| L1 (10^-3) | 10–20% | 8.15 | 7.00 | 6.03 | 5.92 | 6.91 | 6.85 | 6.92 | 6.86 | 5.37 | 7.03 | 5.68 | 6.59 | 6.51 | 5.78 | 5.67 | 5.36 | 5.35 | 5.24 |
| L1 (10^-3) | 20–30% | 14.70 | 11.16 | 11.12 | 10.80 | 10.96 | 10.80 | 10.92 | 10.76 | 9.63 | 13.75 | 11.69 | 12.39 | 12.22 | 11.56 | 11.18 | 10.60 | 10.59 | 10.34 |
| L1 (10^-3) | 30–40% | 19.32 | 13.75 | 14.61 | 14.25 | 13.50 | 13.32 | 12.73 | 13.29 | 13.67 | 18.06 | 14.67 | 15.87 | 15.68 | 14.54 | 14.54 | 13.78 | 13.76 | 13.34 |
| L2 (10^-3) | 0–10% | 0.19 | 0.14 | 0.13 | 0.14 | 0.14 | 0.14 | 0.14 | 0.13 | 0.11 | 0.16 | 0.14 | 0.16 | 0.16 | 0.14 | 0.13 | 0.13 | 0.13 | 0.13 |
| L2 (10^-3) | 10–20% | 0.64 | 0.45 | 0.51 | 0.52 | 0.45 | 0.44 | 0.44 | 0.43 | 0.42 | 0.60 | 0.53 | 0.58 | 0.60 | 0.52 | 0.52 | 0.49 | 0.50 | 0.49 |
| L2 (10^-3) | 20–30% | 1.39 | 1.02 | 1.19 | 1.20 | 1.01 | 0.99 | 1.00 | 0.97 | 0.96 | 1.42 | 1.31 | 1.44 | 1.49 | 1.29 | 1.28 | 1.23 | 1.23 | 1.18 |
| L2 (10^-3) | 30–40% | 1.72 | 1.23 | 1.46 | 1.48 | 1.22 | 1.20 | 1.20 | 1.18 | 1.17 | 1.81 | 1.63 | 1.79 | 1.87 | 1.65 | 1.61 | 1.54 | 1.51 | 1.49 |
| FID | 0–10% | 0.51 | 0.30 | 0.27 | 0.24 | 0.24 | 0.22 | 0.22 | 0.21 | 0.18 | 2.39 | 2.01 | 1.96 | 1.80 | 1.39 | 1.37 | 1.26 | 1.32 | 1.13 |
| FID | 10–20% | 1.84 | 1.11 | 1.07 | 0.93 | 0.84 | 0.77 | 0.78 | 0.74 | 0.70 | 8.68 | 7.47 | 7.31 | 6.67 | 5.47 | 4.91 | 4.68 | 4.85 | 4.58 |
| FID | 20–30% | 3.53 | 2.28 | 2.19 | 1.79 | 1.67 | 1.51 | 1.53 | 1.45 | 1.38 | 20.77 | 18.59 | 16.22 | 14.37 | 12.17 | 10.23 | 9.98 | 9.76 | 9.90 |
| FID | 30–40% | 4.81 | 3.02 | 3.01 | 2.39 | 2.21 | 1.93 | 1.95 | 1.87 | 1.82 | 28.51 | 25.04 | 27.71 | 19.62 | 15.82 | 13.40 | 13.97 | 13.20 | 12.98 |
Table 4. Quantitative results of ablation studies on continuous missing regions, where + indicates that higher is better, − indicates that lower is better, and bold indicates the best performance.
| Metric | Mask | CelebA-HQ: L2 | L1 | CL | SL | IM | IM+AD (BL) | BL+AT | BL(Rcdc)+DFB | All Model | Paris StreetView: L2 | L1 | CL | SL | IM | IM+AD (BL) | BL+AT | BL(Rcdc)+DFB | All Model |
| PSNR + | 0–10% | 34.70 | 37.65 | 37.00 | 37.03 | 37.17 | 37.29 | 37.44 | 38.19 | 38.30 | 34.39 | 36.92 | 35.49 | 35.67 | 37.16 | 37.26 | 37.47 | 37.58 | 37.69 |
| PSNR + | 10–20% | 27.95 | 29.81 | 29.08 | 28.98 | 29.72 | 29.84 | 29.93 | 29.95 | 30.03 | 28.22 | 29.94 | 28.90 | 28.95 | 30.13 | 30.22 | 30.47 | 30.32 | 30.58 |
| PSNR + | 20–30% | 24.77 | 26.39 | 25.71 | 25.58 | 26.31 | 26.44 | 26.44 | 26.53 | 26.61 | 25.31 | 27.01 | 25.79 | 25.83 | 27.04 | 27.11 | 27.28 | 27.38 | 27.50 |
| PSNR + | 30–40% | 21.65 | 23.14 | 22.54 | 22.42 | 23.02 | 23.15 | 23.30 | 23.34 | 23.39 | 22.81 | 23.93 | 22.95 | 22.83 | 24.01 | 24.19 | 24.23 | 24.33 | 24.31 |
| SSIM + | 0–10% | 0.975 | 0.982 | 0.980 | 0.980 | 0.980 | 0.981 | 0.981 | 0.983 | 0.982 | 0.974 | 0.979 | 0.976 | 0.976 | 0.979 | 0.980 | 0.980 | 0.980 | 0.981 |
| SSIM + | 10–20% | 0.931 | 0.947 | 0.938 | 0.939 | 0.944 | 0.945 | 0.946 | 0.947 | 0.947 | 0.924 | 0.935 | 0.926 | 0.925 | 0.935 | 0.936 | 0.937 | 0.937 | 0.940 |
| SSIM + | 20–30% | 0.881 | 0.904 | 0.887 | 0.889 | 0.900 | 0.901 | 0.901 | 0.903 | 0.903 | 0.868 | 0.886 | 0.869 | 0.868 | 0.886 | 0.887 | 0.889 | 0.889 | 0.891 |
| SSIM + | 30–40% | 0.821 | 0.851 | 0.824 | 0.824 | 0.846 | 0.846 | 0.850 | 0.850 | 0.849 | 0.811 | 0.830 | 0.803 | 0.800 | 0.830 | 0.832 | 0.833 | 0.834 | 0.836 |
| L1 (10^-3) | 0–10% | 6.62 | 2.86 | 3.13 | 3.04 | 5.15 | 5.09 | 5.03 | 2.69 | 2.65 | 4.43 | 3.96 | 4.33 | 4.20 | 3.71 | 2.97 | 2.95 | 2.92 | 2.83 |
| L1 (10^-3) | 10–20% | 14.89 | 8.85 | 9.69 | 9.42 | 10.80 | 10.61 | 10.38 | 8.39 | 8.26 | 13.37 | 10.44 | 11.53 | 11.22 | 9.91 | 9.13 | 8.92 | 9.04 | 8.75 |
| L1 (10^-3) | 20–30% | 23.84 | 16.61 | 18.18 | 17.76 | 18.22 | 17.88 | 17.60 | 17.50 | 17.19 | 23.63 | 17.88 | 20.30 | 19.97 | 17.37 | 16.68 | 16.82 | 16.93 | 15.85 |
| L1 (10^-3) | 30–40% | 42.68 | 29.01 | 31.61 | 30.93 | 30.57 | 30.24 | 28.71 | 27.72 | 27.72 | 37.10 | 29.85 | 33.05 | 32.72 | 28.58 | 27.81 | 27.57 | 27.36 | 26.97 |
| L2 (10^-3) | 0–10% | 0.66 | 0.43 | 0.52 | 0.52 | 0.46 | 0.45 | 0.45 | 0.43 | 0.41 | 0.60 | 0.45 | 0.53 | 0.55 | 0.43 | 0.42 | 0.42 | 0.40 | 0.39 |
| L2 (10^-3) | 10–20% | 2.12 | 1.51 | 1.77 | 1.77 | 1.55 | 1.52 | 1.52 | 1.47 | 1.46 | 2.08 | 1.58 | 1.86 | 1.94 | 1.53 | 1.52 | 1.45 | 1.51 | 1.46 |
| L2 (10^-3) | 20–30% | 4.14 | 3.08 | 3.58 | 3.62 | 3.15 | 3.06 | 3.01 | 2.96 | 2.95 | 3.78 | 2.85 | 3.60 | 3.76 | 2.86 | 2.91 | 2.81 | 2.81 | 2.75 |
| L2 (10^-3) | 30–40% | 8.32 | 6.34 | 7.24 | 7.30 | 6.50 | 6.35 | 6.08 | 6.00 | 5.98 | 6.61 | 5.67 | 6.92 | 7.18 | 5.53 | 5.42 | 5.51 | 5.38 | 5.44 |
| FID | 0–10% | 0.91 | 0.61 | 0.58 | 0.52 | 0.53 | 0.49 | 0.47 | 0.45 | 0.43 | 7.58 | 5.93 | 5.32 | 4.95 | 4.72 | 3.88 | 3.59 | 3.79 | 3.83 |
| FID | 10–20% | 2.99 | 1.96 | 1.89 | 1.61 | 1.68 | 1.51 | 1.46 | 1.45 | 1.36 | 19.02 | 15.08 | 16.61 | 13.73 | 14.01 | 12.67 | 12.31 | 12.31 | 11.79 |
| FID | 20–30% | 6.18 | 3.69 | 3.51 | 2.84 | 3.02 | 2.67 | 2.63 | 2.63 | 2.45 | 30.95 | 28.48 | 28.86 | 23.89 | 25.29 | 21.49 | 20.55 | 20.21 | 19.81 |
| FID | 30–40% | 11.15 | 6.53 | 5.98 | 4.60 | 4.98 | 4.48 | 4.10 | 4.33 | 4.01 | 44.30 | 40.31 | 40.80 | 32.89 | 37.71 | 34.80 | 32.60 | 32.61 | 30.53 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
