Article

Enhancement and Restoration of Scratched Murals Based on Hyperspectral Imaging—A Case Study of Murals in the Baoguang Hall of Qutan Temple, Qinghai, China

1 School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, No. 15 Yongyuan Road, Beijing 102616, China
2 Beijing Key Laboratory for Architectural Heritage Fine Reconstruction & Health Monitoring, No. 15 Yongyuan Road, Beijing 102616, China
3 The Dunhuang Academy, Dunhuang 736200, China
4 Department of Civil Engineering, Toronto Metropolitan University, 350 Victoria Street, Toronto, ON M5B 2K3, Canada
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9780; https://doi.org/10.3390/s22249780
Submission received: 31 October 2022 / Revised: 23 November 2022 / Accepted: 6 December 2022 / Published: 13 December 2022

Abstract

Environmental changes and human activities have caused serious degradation of murals around the world. Scratches are one of the most common issues in these damaged murals. We propose a new method for virtually enhancing and removing scratches from murals, which can provide an auxiliary reference and support for actual restoration. First, principal component analysis (PCA) was performed on the hyperspectral data of a mural after reflectance correction, and high-pass filtering was performed on the selected first principal component image. Principal component fusion was used to replace the original first principal component with the high-pass filtered version, which was then inverse PCA transformed with the other original principal component images to obtain an enhanced hyperspectral image. The linear information in the mural was thereby enhanced, and the difference between the scratches and the background improved. Second, the enhanced hyperspectral image of the mural was synthesized as a true colour image and converted to the HSV colour space. The brightness component of the image was estimated using a multi-scale Gaussian function and corrected with a 2D gamma function, solving the problem of localised darkness in the murals. Finally, the enhanced mural images were fed into the triplet domain translation network pretrained model. The local branch of the translation network performs overall noise smoothing and colour recovery of the mural, while the partial nonlocal block extracts information from the scratches; the mapping process for virtual removal of the scratches is learned in the latent space. In addition, we added a Butterworth high-pass filter at the end of the network to generate a final restoration result with a clearer visual effect and richer high-frequency information. We verified and validated these methods on murals in the Baoguang Hall of Qutan Temple. The results show that the proposed method outperforms the restoration results of the total variation (TV) model, curvature-driven diffusion (CDD) model, and Criminisi algorithm. Moreover, the proposed combined method produces better recovery results and improves the visual richness, readability, and artistic expression of the murals compared with direct recovery using the triplet domain translation network.

1. Introduction

Murals are a precious part of the world’s cultural heritage and have enormous historical and research value. They are the spiritual home of modern man and a symbol of world civilisation, reflecting the social, political, economic, religious, cultural, and artistic development of countries around the world [1]. However, murals are not long-lasting, and only a few survive in good condition, as they have been subjected to long-term natural erosion and man-made deterioration, in addition to a slew of other issues. Scratches have emerged on many murals, significantly reducing their aesthetic value and appreciation.
Virtual restoration has attracted research attention in the field of cultural heritage conservation in recent years because of advancements in computer vision technology. New technologies such as computer image processing, graphics, virtual reality, and hyperspectral imaging are gradually being applied to the field of cultural relic protection and restoration [2]. For example, Pei et al. [3] proposed a virtual restoration algorithm for ancient paintings, based on colour contrast enhancement, missing texture synthesis, and the Markov random field model, to repair the stains and cracks in ancient paintings and murals. Baatz et al. [4] proposed a binary image restoration method based on the Cahn–Hilliard equation to restore the binary structure of paintings and derived a general grey-scale image restoration method to repair the paintings. Cornelis et al. [5] extracted information on cracks from oil paintings using three methods: filters, top-hat transform, and K-SVD; these methods improved pre-existing patch-based repair techniques to eliminate the detected cracks. Hou et al. [6] proposed a new virtual restoration method for stains based on the maximum noise fraction (MNF) transformation with hyperspectral imaging. This method can fade or eliminate speckles in the image and restore the style of ancient paintings to a large extent without incurring large data losses. Purkait et al. [7] proposed a semi-automatic mural restoration system based on coherent texture synthesis and high-frequency enhanced diffusion, which realised the restoration of colour murals in Indian temples. Mol et al. [8] proposed an integrated texture and structure reconstruction technique for ancient wall paintings; the method outperformed other reconstruction techniques in terms of image quality and computational efficiency. Wang et al. [9] used structural information collected from the guidance of painters and line drawings to study mural image restoration and proposed a structure-guided global and local feature weighting method to repair the murals. Cao et al. [10] proposed a method of restoring sooty murals based on the dark channel prior and the Retinex method with hyperspectral imaging. This approach can effectively reduce the effects of soot on the frescoes, provide additional details that reveal the original appearances of the frescoes, and improve their visual quality.
With the development of artificial intelligence, deep learning is gradually being applied in the field of digital preservation of cultural heritage. Pathak et al. [11] first proposed an unsupervised visual feature learning method based on contextual pixel prediction using a neural network (NN) approach, which laid the foundation for many subsequent approaches. Nogales et al. [12] developed a deep-learning model based on GANs for the automatic digital reconstruction of Greek temples; the method automatically repairs Greek temples based on a rendering of the ruins obtained from the 3D model. Gupta et al. [13] proposed a hybrid model that employs R-CNN-based automatic mask generation and image inpainting with partial convolution and automatic mask update using a U-Net architecture. The results show that the proposed method is quite effective in the virtual restoration of digitized artworks. Huang et al. [14] solved the mural degradation detection problem with a multi-path convolutional neural network (CNN) and designed an eight-path CNN; the effectiveness and efficiency of the method were verified by extensive experiments. Wang et al. [15] proposed a Thangka mural restoration method based on multi-scale adaptive partial convolution and stroke-like masks for Tibetan Thangka murals. Li et al. [16] proposed a generator–discriminator network model based on artificial intelligence algorithms for digital image restoration of damaged ancient wall paintings. In adversarial learning, the discriminator network model was optimised, and the proposed algorithm effectively restored wall paintings with point-like damage and complex texture structures.
Scratches are different from small defects such as cracks and punctate losses: they often appear as large areas of structural damage and are irregular in shape. This poses difficulties for the restoration of wall paintings. The majority of the existing literature has focused on the restoration of punctate loss, fading, cracks, and other defects in mural images; to date, there are few methods for recovering large areas of scratch damage. We observed that the scratches in the murals are similar to the creases in old photographs: both are large, elongated, and white in appearance. In our study, we used pretrained models [17] for old photographs to remove scratches from the murals. However, there are still differences between murals and old photographs, and through extensive experiments we found that direct scratch removal using the triplet domain translation network pretrained model did not produce the best results.
Therefore, we opted to use spectral information for line enhancement, a 2D gamma function to enhance local dark information, and a triplet domain translation network pretrained model combined with a Butterworth filter to virtually restore the scratches. First, after radiometric correction, the mural’s hyperspectral data were subjected to principal component analysis (PCA) and high-pass filtering, producing improved hyperspectral images and thereby enhancing the linear information and the contrast between the scratches and the background of the mural. Second, the mural’s improved hyperspectral image was synthesised into a true colour image and converted to the HSV colour space. A multi-scale Gaussian function was used to estimate the image’s lighting component, and a 2D gamma function was then used to correct the brightness and overcome the problem of poor lighting in the murals. Finally, the enhanced mural images were fed into the triplet domain translation network pretrained model. The local branch of the network performs overall image quality restoration to address fading and noise, a partial nonlocal block recovers structured defects to resolve the scratches, and a Butterworth filter is applied to make the final result clearer. This methodological approach provides a sharper and more comprehensive view of the murals.
In this study, we have used the murals of the Baoguang Hall at Qutan Temple as an example of virtual restoration of large scratch lesions on their surfaces. The main contributions of this study are summarized as follows.
(1)
A method combining linear information enhancement with a triplet domain translation network pretrained model is proposed to recover scratch lesions in mural images. Hyperspectral data are subjected to principal component analysis, the first principal component is enhanced by high-pass filtering, the enhanced component replaces the original first principal component through principal component fusion, and the data dimensionality is recovered by inverse PCA to produce an improved hyperspectral mural image. A triplet domain translation network pretrained model is then used to repair the scratch lesions. In addition, we added a Butterworth high-pass filter after the pretrained model restoration to produce sharper mural restoration results of higher visual quality. As such, this study fills a gap in the existing literature on the virtual restoration of mural scratches.
(2)
A 2D gamma function correction algorithm for unevenly illuminated images is applied to the murals to solve the problem of locally low luminance, enhance the information in dark areas, and provide more accurate restoration results.
(3)
The proposed method can provide auxiliary reference and support for the actual restoration of murals. It is helpful to provide conservators with a scratch-free appearance of the murals before the restoration process begins. In addition, the work in this study is an attempt to provide novel ideas for the digital conservation of wall paintings for World Heritage sites.
The rest of the paper is organized as follows. Section 2 describes the experiments designed for the enhancement and restoration of scratched murals, including the experimental materials and acquisition techniques used and the workflow of the experiments. Section 3 describes our experimental results and visual comparisons of the murals before and after restoration. In Section 4, the results of restoration by omitting one of the steps proposed in this paper are discussed in detail, and the comparison of this method with other existing virtual restoration techniques is presented. Section 5 presents the conclusions of this study.

2. Materials and Methods

2.1. Materials

2.1.1. Murals

A scratch is a mark produced by an external force that damages mural patterns [18]. Mural patterns are frequently ruined by scratches, significantly lowering their artistic value. The mural data used in this study were from the murals on the east wall of Baoguang Hall, Qutan Temple, Ledu District, Haidong City, Qinghai Province, China. According to historical records, Qutan Temple is a Tibetan Buddhist monastery founded in 1392. The large colourful murals in the temple were created by the court painters of the Ming and Qing dynasties, as shown in Figure 1a. Owing to their refined painting techniques and striking ideas, these murals are considered exceptional works of art. However, several of them have been extensively damaged, their magnificent patterns are no longer intact, and their aesthetic and decorative values have diminished. The hyperspectral data of the murals were collected and analysed to eliminate the influence of scratches and restore their original appearance. In this study, two experimental areas were selected, as shown in Figure 1b,c. The images are all true colour images synthesized from hyperspectral images using bands at wavelengths of 460.20, 549.79, and 640.31 nm.
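For illustration, a minimal sketch of how such a true colour composite can be produced from a hyperspectral cube is given below. The (rows, cols, bands) array layout and the availability of a band-centre wavelength list are assumptions about the data, not details specified by the acquisition software.

```python
import numpy as np

def true_colour_composite(cube, wavelengths, rgb_nm=(640.31, 549.79, 460.20)):
    """Build a true colour image by selecting the hyperspectral bands
    nearest the requested red, green, and blue wavelengths (in nm).

    cube: (rows, cols, bands) reflectance array (assumed layout);
    wavelengths: 1-D array of band-centre wavelengths in nm.
    """
    idx = [int(np.argmin(np.abs(np.asarray(wavelengths) - w))) for w in rgb_nm]
    rgb = cube[:, :, idx].astype(np.float64)
    # Linear stretch of each channel to [0, 1] for display.
    lo = rgb.min(axis=(0, 1))
    hi = rgb.max(axis=(0, 1))
    return (rgb - lo) / (hi - lo + 1e-8)
```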

2.1.2. Data Acquisition

Data from the experimental area were collected with the THEMIS-VNIR/400H hyperspectral image analysis system from Themis Vision Systems, USA, producing images of 1392 × 1000 pixels with a spectral sampling interval of 0.6 nm. The spectral resolution was 2.8 nm, and the images were collected in 1040 bands ranging from 377.45 nm (visible light) to 1033.10 nm (near-infrared). During data collection, the distance between the hyperspectral camera and the mural was about 1 m, and two halogen lamps were used as light sources.

2.2. Methods

Figure 2 shows the overall framework of the enhancement and restoration method for the scratched murals, which includes four main steps:
(1)
Data denoising using radiometric correction;
(2)
Mural line information enhancement based on principal component transformation, high-pass filtering, and principal component fusion;
(3)
Enhancement of local dark information in the mural using multiscale Gaussian and 2D gamma functions;
(4)
Extraction and repair of scratched murals using a triplet domain translation pretrained network model and Butterworth high-pass filter.
Figure 2. Overall restoration process.

2.2.1. Data Preprocessing

Hyperspectral techniques allow extraction of the maximum amount of information from murals without damaging them owing to their non-contact, non-destructive detection characteristics [19]. Hyperspectral images generally have many bands, a wide spectral range, spectral resolution of the order of nanometres, and a wealth of spectral information. Thus, they can help in the restoration of murals.
During data acquisition with hyperspectral imaging systems, data can be affected by ambient light and the instrument’s dark current noise. Reflectance correction can be used to reduce this type of noise with the following correction formula:
$$R = \frac{R_{raw} - R_{dark}}{R_{white} - R_{dark}} \times 99\%, \tag{1}$$
where $R$ is the reflectance, $R_{raw}$ is the collected hyperspectral data, $R_{dark}$ is the dark current data, and $R_{white}$ is the standard reflector data; the reflectance of the standard reflector is 99%.
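A minimal NumPy sketch of Equation (1), assuming the raw cube and the dark and white reference frames are already co-registered float arrays of the same shape:

```python
import numpy as np

def reflectance_correction(raw, dark, white, eps=1e-8):
    """Equation (1): convert raw counts to reflectance using dark-current
    and white-reference frames. The 0.99 factor is the 99% reflectance
    of the standard reflector panel."""
    raw, dark, white = (np.asarray(a, dtype=np.float64) for a in (raw, dark, white))
    return 0.99 * (raw - dark) / (white - dark + eps)
```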

2.2.2. Line Information Enhancement

We synthesized true colour images from the red, green, and blue bands at wavelengths of 640.31, 549.79, and 460.20 nm, respectively, to meet the memory requirements of the network restoration model while maintaining the integrity of the scratched murals; direct restoration of the unenhanced images would be inaccurate and would cause the lines in the murals to fade. Before network restoration, we therefore used the hyperspectral data of the murals to improve the line information and produce better restoration outcomes. PCA was applied to the hyperspectral images of the murals to be restored to compress or combine the image information from multiple bands into one image [20], maximising the information contribution from each band in the new image. The first principal component was selected, as it contained most of the information from all bands, and high-pass filtering was performed on it. High-pass filtering reduces blur by enhancing the high-frequency components of an image while suppressing its low-frequency components; it is often used to enhance texture and edge information [21]. The high frequencies extracted by the default high-pass filter in the ENVI 5.3 software from Exelis Visual Information Solutions, USA, were added back to the first principal component image of the original mural to obtain a clearer image. High-pass filtering is normally carried out by applying a convolution kernel with a high central value surrounded by negative weights. Using a 3 × 3 template, we applied the following kernel:
$$H(x, y) = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}, \tag{2}$$
where $H(x, y)$ is the high-pass filtering convolution template.
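The add-back sharpening described above can be sketched as follows; treating ENVI’s default filter as the 3 × 3 kernel of Equation (2) applied with an add-back step is our reading of the text, not a documented equivalence.

```python
import numpy as np
from scipy.ndimage import convolve

# 3 x 3 high-pass kernel from Equation (2): centre weight 8, neighbours -1.
HIGH_PASS_KERNEL = np.array([[-1, -1, -1],
                             [-1,  8, -1],
                             [-1, -1, -1]], dtype=np.float64)

def high_pass_boost(pc1):
    """Extract high frequencies from the first principal component image
    and add them back to sharpen lines and edges."""
    high_freq = convolve(pc1, HIGH_PASS_KERNEL, mode="reflect")
    return pc1 + high_freq
```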
Principal component fusion is a PCA transformation of the n -band spectral images to obtain the n principal components based on the vector eigenvalues. The high-resolution panchromatic image is histogram-matched to the first principal component to ensure the grey mean and variance of the panchromatic image agree with those of the first principal component image; the matched panchromatic image is then directly replaced by the first principal component image. Finally, the high-resolution spectral fusion image is obtained by PCA inverse transform processing; this image retains the high-frequency information of the original image. Through this processing, the detailed features of the target are more clearly defined, and spectrally richer images are obtained [22]. The first principal component map after high-pass filtering is used as the panchromatic image to replace the original first principal component image. They are inversely transformed by PCA to recover the dimensionality of the original hyperspectral images with enhanced line information, thus enhancing the identification of scratched areas.
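The whole fusion step can be sketched as below. This is a simplified illustration: mean–variance matching stands in for the full histogram matching described above, `high_pass_boost` is the helper from the previous sketch, and in practice the number of retained components may be reduced for speed.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_line_enhancement(cube):
    """Enhance line information: forward PCA, high-pass boost of the
    first principal component, replacement via principal component
    fusion, then inverse PCA to recover the original dimensionality.

    cube: (rows, cols, bands) reflectance array.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    pca = PCA(n_components=bands)
    scores = pca.fit_transform(pixels)
    pc1 = scores[:, 0].reshape(rows, cols)
    sharp = high_pass_boost(pc1)
    # Match mean and variance of the sharpened image to the original
    # first principal component (a stand-in for histogram matching).
    sharp = (sharp - sharp.mean()) / (sharp.std() + 1e-8)
    sharp = sharp * pc1.std() + pc1.mean()
    scores[:, 0] = sharp.ravel()
    enhanced = pca.inverse_transform(scores)
    return enhanced.reshape(rows, cols, bands)
```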

2.2.3. Enhancement of Local Darkness

A 2D gamma function correction algorithm for light inhomogeneity was applied to the murals to solve the problem of localised low brightness in their images [23]. According to Retinex theory, the brightness component of a real scene is mainly present in the low-frequency part of the image with smooth overall changes, while the reflection component is mainly present in the high-frequency parts of the image, such as edges and textures, with more intense changes [24].
As the multiscale Gaussian function could effectively compress the dynamic range and accurately estimate the brightness component of the scene [25,26], it was applied to extract the brightness component of the image before performing luminance correction, with the following mathematical expression:
$$G(x, y) = \lambda \exp\left( -\frac{x^2 + y^2}{c^2} \right), \tag{3}$$
where $c$ is the scale factor and $\lambda$ is the normalisation constant chosen so that $\iint G(x, y)\, dx\, dy = 1$.
The convolution of the Gaussian function with the original image yielded an estimate of the light component with the following mathematical expression:
$$I(x, y) = \sum_{i=1}^{N} \omega_i \left[ F(x, y) * G_i(x, y) \right], \tag{4}$$
where $I(x, y)$ is the light component value extracted and weighted by Gaussian functions at different scales at the point $(x, y)$; $F(x, y)$ is the input image; $G_i(x, y)$ is the Gaussian function at scale $i$; $*$ denotes convolution; $\omega_i$ denotes the weight; and $N$ is the number of scales used ($N = 1$ for single scale, $N > 1$ for multiple scales).
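A sketch of Equation (4) using the scale factors and equal weights reported in Section 3.2. Note that `scipy.ndimage.gaussian_filter` is parameterised by a standard deviation σ, which we treat here as standing in for the scale factor $c$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illumination(v, scales=(15, 80, 250), weights=None):
    """Equation (4): weighted multi-scale Gaussian estimate of the
    illumination component of the V channel."""
    v = np.asarray(v, dtype=np.float64)  # V channel in [0, 255]
    if weights is None:
        weights = [1.0 / len(scales)] * len(scales)
    return sum(w * gaussian_filter(v, sigma=c) for w, c in zip(weights, scales))
```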
The 2D gamma function can effectively correct the brightness of an image without changing its overall magnitude [27]. The image is converted from the RGB space to the HSV colour space, and the V (luminance) component is adjusted according to the distribution of the image’s light component: light values are raised in dark regions and reduced in bright regions.
After correction, the image is converted back from HSV to RGB (red, green, blue) colour space, yielding a mural image whose localised darkness has been corrected. The mathematical expression is as follows:
$$O(x, y) = 255 \left( \frac{F(x, y)}{255} \right)^{\gamma}, \qquad \gamma = \left( \frac{1}{2} \right)^{\frac{m - I(x, y)}{m}}, \tag{5}$$
where $O(x, y)$ is the light value of the corrected image, $\gamma$ is the exponent used for luminance correction, and $m$ is the mean value of the luminance of the light component.
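Equation (5) reduces to a pixel-wise power law; a minimal sketch is given below (the RGB–HSV conversions, e.g. via OpenCV’s `cv2.cvtColor`, are omitted).

```python
import numpy as np

def gamma_correct(v, illum):
    """Equation (5): pixel-wise 2D gamma correction of the V channel.

    Where the illumination is below its mean, gamma < 1 and the pixel
    is brightened; above the mean, gamma > 1 and it is attenuated.
    """
    v = np.asarray(v, dtype=np.float64)
    m = illum.mean()
    gamma = np.power(0.5, (m - illum) / m)
    return 255.0 * np.power(v / 255.0, gamma)
```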

2.2.4. The Pretrained Model and Butterworth High-Pass Filter for Recovery

As described above, the scratches in the murals resemble the creases in old photographs. We therefore applied the pretrained model of the triplet domain translation network [17], originally used to restore old photographs, to the mural restoration task. The network is described as follows.
The triplet domain translation network model consists of two variational autoencoders ($VAE$s) and a mapping network $\mathcal{T}$, each of which can be considered a separate module. In Figure 3, the mural hyperspectral data are denoted as $r$, the synthetic data as $x$, and the ground-truth data corresponding to the synthetic data as $y$. Here, the synthetic data are damaged photos formed by degrading intact photos, and the ground-truth data are the intact photos. The real mural hyperspectral data, the synthetic data, and the corresponding ground-truth data are placed in three different domains, which are interconverted within this network model. The real mural hyperspectral data domain is denoted as $R$, the synthetic data domain as $X$, and the ground-truth data domain as $Y$. The scratches in the mural hyperspectral data are recovered by interconversion and learning among the three domains. $Z_X$, $Z_Y$, and $Z_R$ are the latent spaces corresponding to the synthetic data, the ground-truth data, and the mural hyperspectral data, respectively. $E_{R,X}$ and $E_Y$ are the encoders, and $G_{R,X}$ and $G_Y$ are the decoders that form the $VAE$s. The images from the three domains are mapped to the corresponding latent spaces by the $VAE$s, and the latent spaces of the mural hyperspectral data and the synthetic data are aligned as closely as possible. The recovery of the mural hyperspectral data $r$ is achieved by learning a mapping from the latent space $Z_X$ of the synthetic data $x$ to the latent space $Z_Y$ of the corresponding ground-truth data $y$. A local branch in the network performs global noise removal and colour restoration for non-structural defects such as noise and fading. Another branch consists of a partial nonlocal block and several residual blocks; it is aimed primarily at structural defects such as scratches. It takes a mask as input to prevent pixels in damaged areas of the mural image from being used to repair those areas; this mask prediction network is a U-Net.
Here, $VAE_1$ consists of an encoder $E_{R,X}$ and a decoder $G_{R,X}$, which encode the mural $r$ and the synthetic image $x$ into their corresponding latent spaces $Z_R$ and $Z_X$, respectively, and then recover them, while constraining both latent encodings to conform to a Gaussian distribution. $VAE_2$ is trained on the ground-truth data $y$. The objective function for the mural $r$ to be restored is expressed as follows:
$$\mathcal{L}_{VAE_1}(r) = \mathrm{KL}\left( E_{R,X}(z_r \mid r) \,\|\, \mathcal{N}(0, I) \right) + \alpha\, \mathbb{E}_{z_r \sim E_{R,X}(z_r \mid r)} \left[ \left\| G_{R,X}(r_{R \to R} \mid z_r) - r \right\|_1 \right] + \mathcal{L}_{VAE_1,\mathrm{GAN}}(r), \tag{6}$$
where the first term is the KL regularisation term constraining the distribution of the latent encoding to be close to a Gaussian distribution, and $E_{R,X}(z_r \mid r)$ denotes the prior probability distribution of $z_r$ obtained through $E_{R,X}$ at input $r$. The second term denotes the loss between the result recovered by the $VAE$ encoding and the input data $r$, constraining the latent encoding to capture the main information of the image; the third term is the least-squares generative adversarial network loss, constraining the $VAE$ output to be more detailed. As the mural data $r$ share a $VAE$ with the synthetic data $x$, an adversarial network is included to further align the latent spaces of the two, with loss defined as
$$\mathcal{L}_{VAE_1,\mathrm{GAN}}^{\mathrm{latent}}(r, x) = \mathbb{E}_{x \sim X} \left[ D_{R,X}\left( E_{R,X}(x) \right)^2 \right] + \mathbb{E}_{r \sim R} \left[ \left( 1 - D_{R,X}\left( E_{R,X}(r) \right) \right)^2 \right], \tag{7}$$
Combined with the latent adversarial loss, the total objective function for $VAE_1$ becomes
$$\min_{E_{R,X},\, G_{R,X}} \max_{D_{R,X}} \; \mathcal{L}_{VAE_1}(r) + \mathcal{L}_{VAE_1}(x) + \mathcal{L}_{VAE_1,\mathrm{GAN}}^{\mathrm{latent}}(r, x), \tag{8}$$
As the mural data to be recovered and the synthetic data are already well aligned in the latent space, the mapping from the latent space $Z_X$ to the latent space $Z_Y$ learned through the paired data $(x, y)$ also generalises well to the mural recovery. At this stage, the two $VAE$s are fixed, and the mapping network $\mathcal{T}$ between the two latent spaces is learned. The loss function of this mapping network $\mathcal{T}$ is expressed as follows:
$$\mathcal{L}_{\mathcal{T}}(x, y) = \lambda_1 \mathcal{L}_{\mathcal{T}, \ell_1} + \mathcal{L}_{\mathcal{T},\mathrm{GAN}} + \lambda_2 \mathcal{L}_{\mathrm{FM}}, \tag{9}$$
In Equation (9), the first term is the latent space loss, $\mathcal{L}_{\mathcal{T}, \ell_1} = \mathbb{E} \left\| \mathcal{T}(z_x) - z_y \right\|_1$; the second term is the adversarial loss $\mathcal{L}_{\mathcal{T},\mathrm{GAN}}$, which encourages the final translated synthetic image to look real; and the third term is the perceptual loss derived using the VGG network [28].
One of the triplet domain translation network datasets is the Pascal VOC dataset [29]; the other is a collection of old photographs. The network adopts the Adam solver [30] with $\beta_1 = 0.5$ and $\beta_2 = 0.999$. The learning rate is set to 0.0002 for the first 100 epochs, with linear decay to zero thereafter. Here, $\alpha = 10$, $\lambda_1 = 60$, and $\lambda_2 = 10$ in Equations (6) and (9).
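As a rough PyTorch sketch of the latent mapping term in Equation (9), only the $\ell_1$ term is shown; the adversarial and VGG perceptual terms and the network definitions are omitted, and the function name is our own.

```python
import torch
import torch.nn.functional as F

def mapping_loss_l1(T, z_x, z_y):
    """Latent-space l1 term of Equation (9): E || T(z_x) - z_y ||_1.

    T: mapping network between latent spaces Z_X and Z_Y; z_x, z_y:
    latent codes from the frozen VAE encoders for a paired synthetic
    image and its ground truth.
    """
    return F.l1_loss(T(z_x), z_y)
```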
To produce clearer results, we apply a Butterworth high-pass filter to the restored output of the network, yielding visually sharper restorations of the mural images. The transfer function of the Butterworth high-pass filter is shown in Equation (10):
$$H(u, v) = \frac{1}{1 + \left[ D_0 / D(u, v) \right]^{2n}}, \tag{10}$$
where $D_0$ is the cut-off frequency, $D(u, v) = \sqrt{u^2 + v^2}$ is the distance from the point $(u, v)$ to the origin of the frequency plane, and $n$ is the order of the filter.
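A frequency-domain sketch of Equation (10) with the parameters chosen in Section 3.3 ($n = 2$, $D_0 = 30$). How the filtered high-frequency component is recombined with the restored image (e.g. added back for sharpening) is not spelled out in the text, so this sketch simply returns the filtered result.

```python
import numpy as np

def butterworth_highpass(image, d0=30.0, n=2):
    """Equation (10): Butterworth high-pass filter applied in the
    frequency domain to a single-channel image."""
    rows, cols = image.shape
    # Frequency-index grids aligned with the unshifted FFT layout.
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d = np.hypot(u, v)
    h = 1.0 / (1.0 + (d0 / np.maximum(d, 1e-8)) ** (2 * n))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * h))
```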

3. Results

3.1. Enhancement of Mural Line Information

As shown in Figure 4a, the mural is badly damaged by scratches. Figure 4b shows the high-pass filtered first principal component image of experimental region 1. As seen in Figure 4c, this filtered image replaces the original first principal component via principal component fusion, and the principal components are then inverse transformed to obtain a mural image with enhanced line information. Linear information enhancement prevents distortion of the mural colours and enhances the lines and details in the background. As observed in the enlarged views in Figure 4d–g, the enhanced background black lines are clearer and the colours more realistic, improving the distinction between the scratches and the background. These enhancements help the subsequent network pretrained model identify scratches and mitigate the blurred results caused by direct recovery with the pretrained model.

3.2. Enhancement of Local Darkness Information for Murals

As shown in Figure 5a,d, the surrounding corners of the murals are dark, and their original fine patterns and colours cannot be seen. Localised darkness in the mural was enhanced before the virtual restoration of the scratches to achieve the best post-restoration visual effect. The light components of the mural were first extracted using a multi-scale Gaussian function with the number of scales N set to 3, scale factors c of 15, 80, and 250, and the weight of the light component extracted at each scale set to 1/3. The results are shown in Figure 5b,e. Based on the distribution characteristics of the extracted light components, a 2D gamma function was used for correction; the results are shown in Figure 5c,f. Visually, the correction cleared the otherwise unreadable patterns around the perimeter, restoring the dark parts of the mural and enhancing the overall visual impact of the restored mural.

3.3. Restoration of Mural Scratches

We used Python to build the experimental environment for the pretrained model; the images of the mural to be restored were fed into the pretrained model as a test set. The details of the experimental environment are given in Table 1.
The enhanced mural images obtained in the previous step were fed into the network model as test sets to repair the scratches. The first step was the full recovery of the unstructured degradation of the murals using the local branch. Thereafter, for scratches, the image was segmented using the U-Net [31] in the partial nonlocal block; detected scratch pixels were set to 1 and all others to 0. The resulting mask file was generated, and the trained triplet domain translation network restoration model was invoked to restore the image as a whole and process the mask. The repair model was invoked where scratches were present, and the scratches were filled by bilinear interpolation and a global wide-area search. Finally, the Butterworth high-pass filter was applied to enrich the high-frequency detail and sharpen the content of the mural images; the order n = 2 and the cut-off frequency D0 = 30 were chosen as filter parameters. With n = 2, there is no significant ringing effect, whereas higher values of n produce blurring. The higher the cut-off frequency D0, the more low-frequency components are filtered out, and the more high-frequency components are lost; we therefore chose the middle value of 30 for D0. Figure 6 shows the scratch extraction and repair results for Areas 1 and 2.

3.4. Visual Comparison

Figure 7 shows the visual comparison of the scratched murals before and after enhancement and restoration. The results showed that the proposed method enhanced the line information of the mural using principal component transformation and high-pass filtering. It also restored the partial darkness of the mural using optical component extraction and 2D gamma function correction. Further, it achieved the automatic restoration of the scratch damage of the mural by combining the triplet domain translation network model and Butterworth high-pass filter. We successfully restored the pattern information of the scratched murals and provided a reference for the subsequent conservation and restoration of other ancient painted murals.

4. Discussion

4.1. Combination of Different Steps

To show that the direct use of pretrained models to recover scratch lesions is not ideal, we omitted, in turn, the mural line information enhancement, the local dark enhancement, and the Butterworth high-pass filtering steps, while keeping the other processes unchanged, to verify the feasibility of the combined enhancement and restoration method proposed in this work. Taking Area 1 as an illustrative example, the results are shown in Figure 8. As can be seen in Figure 8, omitting the mural line information enhancement and the Butterworth high-pass filtering reduced the accuracy of the network model in detecting and filling scratches, so that parts of the mural that were not scratched were incorrectly restored, and the image clarity suffered after restoration. When the local dark enhancement and the Butterworth high-pass filtering were omitted, the darker areas remained unclear even after restoration, compromising the overall viewing of the mural. Similarly, when only the Butterworth high-pass filter was omitted, some pixels in the restoration result looked less clear than in the original image. To validate the significance of the methods presented in this paper, we introduced the image evaluation metrics of average gradient, edge strength, and spatial frequency to evaluate the effectiveness of the restoration more objectively. In terms of objective quality, the results in Table 2 show that the average gradient, edge strength, and spatial frequency of the proposed complete process were greater than those of the truncated processes.
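For reference, common definitions of two of these metrics are sketched below; the paper does not give explicit formulas, so these standard forms are assumptions.

```python
import numpy as np

def average_gradient(image):
    """Average gradient: mean magnitude of the local intensity gradient,
    a common sharpness measure (standard definition assumed)."""
    gy, gx = np.gradient(np.asarray(image, dtype=np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(image):
    """Spatial frequency: root of summed squared row and column first
    differences (standard definition assumed)."""
    img = np.asarray(image, dtype=np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))
```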

4.2. Comparison of Scratched Mural Repair Methods

The total variation (TV) model, curvature-driven diffusion (CDD) model, and Criminisi algorithm are all commonly used approaches for image restoration and can be applied to the virtual restoration of scratched murals; a comparative analysis against the method in this study was therefore carried out. The TV model [32] and CDD model [33] are based on partial differential equations: they mainly use the pixel diffusion principle to find structural information near the scratched area and rely on the shortest straight line to connect this information and achieve restoration. The Criminisi algorithm is based on priority-ordered sample filling: it drives sampling along iso-illumination lines and achieves restoration by searching for the best-matching blocks for texture replication [34]. In this study, the enhanced and corrected scratched murals were restored using the methods described above, and all of the traditional methods used for comparison employed the same enhancement steps as the proposed method, to ensure a fair comparison. The subjective visual effects of the different restoration methods for Areas 1 and 2 are shown in Figure 9.
As shown in Figure 9, the TV and CDD models are diffusion models: information from the intact area is diffused into the area to be repaired, which tends to produce blurring and leave repair marks in areas with severe scratches. The Criminisi algorithm repairs small defective areas well but cannot completely repair areas affected by larger and longer scratches, because it relies on fast matching and replication of texture structure; when many pixels are randomly missing, no effective block match can be formed, resulting in unsatisfactory repairs. The proposed method gives the most satisfactory subjective results for the virtual restoration of scratch-damaged murals. To better assess subjective quality, we used a scoring method to compare the proposed method with the others. We collected the opinions of 15 researchers, experts, and students from the fields of heritage conservation and image processing, who were asked to rate the results of the different methods according to the quality of the mural restoration. Each result was scored out of 10 according to the degree of satisfaction, and the average score was taken as the final score. The results, shown in Table 3, indicate that the average score of our method is higher than those of the other methods, demonstrating its clear advantage.

4.3. Applicability of the Proposed Method

To demonstrate the generality of the proposed method, we selected other scratched mural images from the Baoguang Hall of Qutan Temple (a1, b1, c1, d1, e1, and f1 in Figure 10). In addition, we restored two scratched murals on the west wall of Guanyin Hall in Heilongmiao Village, using data collected in July 2017 (g1 and h1). The restoration results are shown in Figure 10.

5. Conclusions

In historical conservation applications, repairing scratched murals is a challenging task. We applied a pretrained model for old photographs to the restoration of mural scratches. However, there are some differences between mural images and old photographs, and the direct use of this model for mural restoration can over-smooth some of the results, causing slight blurring or even incorrect restoration in some areas. In response, we proposed a combined enhancement and restoration approach that goes some way towards improving the outcome of mural restoration. Linear information enhancement was used to improve the discrimination between scratches and the background in the murals. A combination of light component extraction and 2D gamma function correction was used to enhance local dark information. The triplet domain translation network pretrained model and a Butterworth high-pass filter were then used to restore the scratches. The results produced good aesthetic outcomes, and the repair of scratched murals by different methods was evaluated objectively. In addition, because murals are non-reproducible cultural artefacts, scratched murals have no intact counterparts to serve as ground truth. In future work, we will create mockups of murals as ground truth and artificially carve scratches into them as synthetic scratched murals. In this way, enough mural images can be collected to build a mural dataset and train a network model specifically for mural recovery, which will help develop more appropriate restoration techniques and produce better virtual restorations of scratched murals.

Author Contributions

Conceptualization: P.S., M.H., and S.L. (Shuqiang Lyu). Data curation: P.S., W.W., S.L. (Shuyang Li), and J.M. Methodology: P.S., M.H., S.L. (Shuqiang Lyu), and S.L. (Songnian Li). Validation: P.S., M.H., S.L. (Shuqiang Lyu), and S.L. (Songnian Li). Formal analysis: P.S., M.H., S.L. (Shuqiang Lyu), and S.L. (Shuyang Li). Resources: W.W., M.H., and S.L. (Shuqiang Lyu). Writing—original draft: P.S., M.H., S.L. (Shuqiang Lyu), and S.L. (Songnian Li). Writing—review: all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 42171356), the Great Wall Scholars Training Program Project of Beijing Municipality Universities (CIT&TCD20180322), and Research on the basic problems of knowledge modeling of large and complex cultural relics for virtual restoration (No. 42171444).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The research project Protection of Murals in Qutan Temple was initiated and organized by the Dunhuang Academy. The author would like to thank the staff of Dunhuang Academy and Qutan Temple.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bertrand, L.; Janvier, P.; Gratias, D.; Brechignac, C. Restore world’s cultural heritage with the latest science. Nature 2019, 570, 164–165. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Pietroni, E.; Ferdani, D. Virtual Restoration and Virtual Reconstruction in Cultural Heritage: Terminology, Methodologies, Visual Representation Techniques and Cognitive Models. Information 2021, 12, 167. [Google Scholar] [CrossRef]
  3. Pei, S.; Zeng, Y.-C.; Chang, C.-H. Virtual Restoration of Ancient Chinese Paintings Using Color Contrast Enhancement and Lacuna Texture Synthesis. IEEE Trans. Image Process. 2004, 13, 416–429. [Google Scholar] [CrossRef] [PubMed]
  4. Baatz, W.; Fornasier, M.; Markowich, P.A.; Schönlieb, C.B. Inpainting of ancient Austrian frescoes. In Bridges Leeuwarden: Mathematics, Music, Art, Architecture, Culture; The Bridges Organization: London, UK, 2008; pp. 163–170. [Google Scholar]
  5. Cornelis, B.; Ružić, T.; Gezels, E.; Dooms, A.; Pizurica, A.; Platisa, L.; Martens, M.; De Mey, M.; Daubechies, I. Crack detection and inpainting for virtual restoration of paintings: The case of the Ghent Altarpiece. Signal Process. 2013, 93, 605–619. [Google Scholar] [CrossRef]
  6. Hou, M.; Zhou, P.; Lv, S.; Hu, Y.; Zhao, X.; Wu, W.; He, H.; Li, S.; Tan, L. Virtual restoration of stains on ancient paintings with maximum noise fraction transformation based on the hyperspectral imaging. J. Cult. Herit. 2018, 34, 136–144. [Google Scholar] [CrossRef]
  7. Purkait, P.; Ghorai, M.; Samanta, S.; Chanda, B. A Patch-Based Constrained Inpainting for Damaged Mural Images, Digital Hampi: Preserving Indian Cultural Heritage; Springer: Singapore, 2017; pp. 205–223. [Google Scholar] [CrossRef]
  8. Mol, V.R.; Maheswari, P.U. The digital reconstruction of degraded ancient temple murals using dynamic mask generation and an extended exemplar-based region-filling algorithm. Herit. Sci. 2021, 9, 137. [Google Scholar] [CrossRef]
  9. Wang, H.; Li, Q.; Jia, S. A global and local feature weighted method for ancient murals inpainting. Int. J. Mach. Learn. Cybern. 2019, 11, 1197–1216. [Google Scholar] [CrossRef]
  10. Cao, N.; Lyu, S.; Hou, M.; Wang, W.; Gao, Z.; Shaker, A.; Dong, Y. Restoration method of sootiness mural images based on dark channel prior and Retinex by bilateral filter. Herit. Sci. 2021, 9, 30. [Google Scholar] [CrossRef]
  11. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context Encoders: Feature Learning by Inpainting. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544. [Google Scholar]
  12. Nogales, A.; Delgado-Martos, E.; Melchor, Á.; García-Tejedor, J. ARQGAN: An evaluation of generative adversarial network approaches for automatic virtual inpainting restoration of Greek temples. Expert Syst. Appl. 2021, 180, 115092. [Google Scholar] [CrossRef]
  13. Gupta, V.; Sambyal, N.; Sharma, A.; Kumar, P. Restoration of artwork using deep neural networks. Evol. Syst. 2021, 12, 439–446. [Google Scholar] [CrossRef]
  14. Huang, R.; Feng, W.; Fan, M.; Guo, Q.; Sun, J. Learning multi-path CNN for mural deterioration detection. J. Ambient Intell. Humaniz. Comput. 2020, 11, 3101–3108. [Google Scholar] [CrossRef]
  15. Wang, N.; Wang, W.; Hu, W.; Fenster, A.; Li, S. Thanka Mural Inpainting Based on Multi-Scale Adaptive Partial Convolution and Stroke-Like Mask. IEEE Trans. Image Process. 2021, 30, 3720–3733. [Google Scholar] [CrossRef] [PubMed]
  16. Li, J.; Wang, H.; Deng, Z.; Pan, M.; Chen, H. Restoration of non-structural damaged murals in Shenzhen Bao’an based on a generator–discriminator network. Herit. Sci. 2021, 9, 6. [Google Scholar] [CrossRef]
  17. Wan, Z.; Zhang, B.; Chen, D.; Zhang, P.; Chen, D.; Liao, J.; Wen, F. Bringing Old Photos Back to Life. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 2744–2754. [Google Scholar] [CrossRef]
  18. GB/T 30237-2013; Ancient Wall Painting Deterioration and Legends. Chinese National Standard for the Protection of Cultural Relics. Cultural Relics Press: Beijing, China, 2008.
  19. Stevens, J.R.; Resmini, R.G.; Messinger, D.W. Spectral-Density-Based Graph Construction Techniques for Hyperspectral Image Analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5966–5983. [Google Scholar] [CrossRef]
  20. Lazcano, R.; Madroñal, D.; Salvador, R.; Desnos, K.; Pelcat, M.; Guerra, R.; Fabelo, H.; Ortega, S.; Lopez, S.; Callico, G.; et al. Porting a PCA-based hyperspectral image dimensionality reduction algorithm for brain cancer detection on a manycore architecture. J. Syst. Arch. 2017, 77, 101–111. [Google Scholar] [CrossRef]
  21. Azimbeik, M.; Badr, N.S.; Zadeh, S.G.; Moradi, G. Graphene-based high pass filter in terahertz band. Optik 2019, 198, 163246. [Google Scholar] [CrossRef]
  22. Das, S.; Krebs, W. Sensor fusion of multispectral imagery. Electron. Lett. 2000, 36, 1115–1116. [Google Scholar] [CrossRef]
  23. Liu, Z.; Wang, D.; Liu, Y.; Liu, X. Adaptive correction algorithm for illumination inhomogeneousimages based on 2D gamma function. J. Beijing Univ. Technol. 2016, 36, 191–196. [Google Scholar] [CrossRef]
  24. Land, E.H. An alternative technique for the computation of the designator in the retinex theory of color vision. Proc. Natl. Acad. Sci. USA 1986, 83, 3078–3080. [Google Scholar] [CrossRef] [Green Version]
  25. Fuwen, L.; Weiqi, J.; Wei, C.; Yang, C.; Xia, W.; Lingxue, W. Global color image enhancement algorithm based on Retinex model. J. Beijing Univ. Technol. 2010, 8, 947–951. [Google Scholar] [CrossRef]
  26. Banic, N.; Loncaric, S. Light Random Sprays Retinex: Exploiting the Noisy Illumination Estimation. IEEE Signal Process. Lett. 2013, 20, 1240–1243. [Google Scholar] [CrossRef] [Green Version]
  27. Lee, S.; Kwon, H.; Han, H.; Lee, G.; Kang, B. A Space-Variant Luminance Map based Color Image Enhancement. IEEE Trans. Consum. Electron. 2010, 56, 2636–2643. [Google Scholar] [CrossRef]
  28. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 694–711. [Google Scholar]
  29. Everingham, M.; Eslami, S.M.A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vis. 2015, 111, 98–136. [Google Scholar] [CrossRef]
  30. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. Comput. Sci. 2014. [Google Scholar] [CrossRef]
  31. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar] [CrossRef] [Green Version]
  32. Shen, J.; Chan, T.F. Mathematical Models for Local Nontexture Inpaintings. SIAM J. Appl. Math. 2001, 62, 1019–1043. [Google Scholar] [CrossRef] [Green Version]
  33. Chan, T.F.; Shen, J. Nontexture Inpainting by Curvature-Driven Diffusions. J. Vis. Commun. Image Represent. 2001, 12, 436–449. [Google Scholar] [CrossRef]
  34. Criminisi, A.; Perez, P.; Toyama, K. Object removal by exemplar-based inpainting. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003. [Google Scholar] [CrossRef]
Figure 1. Mural images and two scratch study areas: (a) Buddha 2 and 3 from the southeast wall of the Baoguang Hall at Qutan Temple, (b) image of the first study area, and (c) image of the second study area.
Figure 3. Triplet domain translation network.
Figure 4. Area 1 mural enhancement results: (a) Area 1 true colour image; (b) Area 1 first principal component after high-pass filtering; (c) results of linear information enhancement; (d–g) local magnification before and after enhancement of study areas 1 and 2.
Figure 5. Local darkness enhancement results: (a) Area 1 after linear enhancement, (b) Area 1 light component extraction, and (c) Area 1 dark enhancement result; (d) Area 2 after linear enhancement, (e) Area 2 light component extraction, and (f) Area 2 dark enhancement result. (The yellow boxes mark the areas of most obvious change.)
Figure 6. Scratch extraction and repair results: (a) Area 1 enhancement result, (b) Area 1 scratch extraction result, and (c) Area 1 restoration result; (d) Area 2 enhancement result, (e) Area 2 scratch extraction result, and (f) Area 2 restoration result.
Figure 7. Visual comparisons between murals before and after restoration: (a) original and (b) restored maps of Area 1; (c) original and (d) restored maps of Area 2.
Figure 8. Area 1: (a) recovery without linear information enhancement and Butterworth high-pass filtering (the yellow box marks an erroneous restoration), (b) recovery without local dark enhancement and Butterworth high-pass filtering, (c) recovery without Butterworth high-pass filtering, and (d) restoration with the proposed method.
Figure 9. Visual effects of four different methods of repairing scratched murals. (a1–a4, b1–b4) are the TV, CDD, and Criminisi restoration results and the result of the proposed method for study areas 1 and 2, respectively.
Figure 10. Recovery results of supplementary data. (a1,b1,c1,d1,e1,f1,g1,h1) are original images; (a2,b2,c2,d2,e2,f2,g2,h2) are the restoration results obtained using the proposed method.
Table 1. Experimental environment.

Environment | Parameters
Systems | Windows 10 (Microsoft, Redmond, WA, USA)
GPU | NVIDIA RTX 2080 (NVIDIA, Santa Clara, CA, USA)
CPU | i7-9700K, CPU @ 3.60 GHz (8 CPUs) (Intel, Santa Clara, CA, USA)
RAM | 16 GB
Table 2. Objective evaluations of different step combinations.

Study Area | Evaluation Indicator | No Linear Information Enhancement and Butterworth High-Pass Filtering | No Local Dark Enhancement and Butterworth High-Pass Filtering | No Butterworth High-Pass Filtering | Complete Method
Area 1 | Average gradient | 10.1506 | 10.8247 | 14.1354 | 28.0683
Area 1 | Edge strength | 96.8508 | 99.6460 | 131.4485 | 168.5789
Area 1 | Spatial frequency | 23.3682 | 37.0902 | 53.5409 | 56.3064
Area 2 | Average gradient | 7.9620 | 10.5215 | 17.4626 | 30.0952
Area 2 | Edge strength | 75.4469 | 94.9784 | 157.6848 | 179.4221
Area 2 | Spatial frequency | 16.6812 | 43.2561 | 44.5785 | 48.2505
Table 3. The average scoring results of the subjective evaluation of the repair effect.

Method | Average Score of Area 1 | Average Score of Area 2
TV | 7.01 | 6.89
CDD | 7.54 | 7.27
Criminisi | 5.23 | 5.03
Proposed Method | 9.12 | 9.07