Article

Virtual Restoration of Ancient Mold-Damaged Painting Based on 3D Convolutional Neural Network for Hyperspectral Image

1 The Palace Museum, Beijing 100009, China
2 China-Greece Belt and Road Joint Laboratory on Cultural Heritage Conservation Technology, Beijing 100009, China
3 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
4 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 2882; https://doi.org/10.3390/rs16162882
Submission received: 26 June 2024 / Revised: 4 August 2024 / Accepted: 4 August 2024 / Published: 7 August 2024

Abstract: Painted cultural relics hold significant historical value and are crucial in transmitting human culture. However, mold is a common problem for paper- or silk-based relics: it not only affects their preservation and longevity but also conceals texture, pattern, and color information, hindering the transmission of their cultural value. Current virtual restoration of painted relics primarily involves filling in RGB images based on neighborhood information, which can cause color distortion and other problems. Another approach treats mold as noise and employs maximum noise separation for its removal; however, eliminating the mold components and applying the inverse transformation often leads to further loss of information. To achieve effective virtual mold removal from ancient paintings, the spectral characteristics of mold were analyzed. Based on these spectral features and the cultural relic restoration philosophy of maintaining originality, a 3D CNN artifact restoration network is proposed. This network learns features in the near-infrared (NIR) spectral and spatial dimensions to reconstruct the reflectance of the visible spectrum, achieving virtual mold removal for calligraphic and painted relics. Using an ancient painting from the Qing Dynasty as a test subject, the proposed method was compared with the Inpainting, Criminisi, and inverse MNF transformation methods across three regions. Visual analysis, quantitative evaluation (root mean squared error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE)), and a classification application were used to assess restoration accuracy. The visual results and quantitative analyses demonstrate that the proposed 3D CNN method effectively removes or mitigates mold while restoring the artwork to its authentic color against various backgrounds.
Furthermore, the color classification results indicated that the images restored with 3D CNN had the highest classification accuracy, with overall accuracies of 89.51%, 92.24%, and 93.63%, and Kappa coefficients of 0.88, 0.91, and 0.93, respectively. This research provides technological support for the digitalization and restoration of cultural artifacts, thereby contributing to the preservation and transmission of cultural heritage.

1. Introduction

The cultural relics of calligraphy and painting, as precious heritage of the millennia-old Chinese tradition, not only encapsulate profound historical and cultural values but also contribute significantly to the cultural legacy of China and indeed all mankind [1,2,3]. Nevertheless, due to prolonged natural aging, improper preservation, and adverse environmental conditions, these venerable relics frequently confront various pathological threats, with mold being one of the most prevalent and destructive. (In scientific classification, mildew tends to describe a specific type of light-colored surface mold, particularly the order Erysiphales. In contrast, mold is a broader term that encompasses molds of various colors and types, including fungi such as Penicillium, Aspergillus, and Rhizopus. In the professional field of art conservation and restoration, mold is more commonly used to describe fungi that have a destructive impact on artworks.) The growth of mold not only impairs the aesthetic quality of these artifacts, reducing their artistic merit, but also leads to the degradation of fibers in paper- or silk-based materials, resulting in irreversible structural damage and substantially escalating the complexity and expense of cultural relic restoration [4,5,6]. Consequently, it is imperative to explore effective strategies for the restoration of calligraphic and pictorial artifacts, especially methods that aim to restore the original appearance of the artifacts without compromising their original material. This holds important significance for the protection and transmission of cultural heritage.
Traditional methods for mold remediation predominantly encompass physical interventions and chemical treatments [7,8]. Physical interventions, such as drying of the paper or silk, dehumidification, cryogenic freezing, and the application of ultraviolet light, do not cause chemical reactions in the materials of cultural artifacts [8,9]. However, these measures are typically effective only at the initial stages of mold infestation and cannot fundamentally eliminate the mold. Conversely, chemical treatments, which include the use of disinfectants such as ethanol and sodium hypochlorite, can effectively exterminate mold [1]. Nonetheless, these agents may inflict irreversible damage upon the coloration and fibrous structure of the artifacts, thus increasing the fragility and vulnerability of the relics. Consequently, there is a pressing need to develop and implement innovative technological methodologies that enable the restoration of artworks to their original condition effectively and safely, without introducing further damage.
In recent years, the emergence of digital technology, particularly the development of virtual restoration techniques, has brought new opportunities for the digitization and preservation of artifacts [10,11]. The greatest advantage of virtual restoration is that it operates on images rather than the physical artifacts themselves, substantially mitigating any interference with the original artworks and minimizing the risks associated with their conservation [12,13,14]. Furthermore, this approach underpins the conservation and restoration of physical artifacts with a robust scientific foundation and provides a controlled experimental setting. With high-resolution digital scans, researchers can meticulously analyze mold damage on the surfaces of artifacts without physical contact and perform virtual restorations through digital simulation. This not only facilitates the evaluation of potential restoration strategies but also enhances the refinement of actual restoration methods, significantly improving the accuracy and efficiency of restoration work.
According to the method of collecting data for the virtual restoration of painting and calligraphy artifacts, approaches can be divided into two main types. The first is based on traditional digital cameras that capture the surface color information of the painted artworks, followed by the application of computerized image-processing techniques for restoration. The basic principle of this approach rests on the assumption that the regions of the image requiring restoration exhibit geometric characteristics similar to those present elsewhere in the image [15,16,17]; that is, the missing or damaged parts are inferred and filled in from the existing image information, so that the repaired image maintains an appearance and features similar to the original. This class of approaches can be divided into four basic categories: equation-solving-based methods, transform-domain-based methods, sample-block-based methods, and deep learning [18]. Equation-solving methods typically use mathematical equations to describe the features and structure of an image and achieve restoration by solving these equations. For example, Poisson editing restores an image by solving the Poisson equation. This algorithm can make the transition between the restored region and its surroundings more natural in color and brightness, avoiding conspicuous boundaries or unnatural transitions, and is one of the most commonly used algorithms in digital image restoration [19,20]. Additionally, the image restoration model based on higher-order partial differential equations proposed by Bertalmio et al. [21] can simulate complex structural changes in images, transforming the restoration problem into a variational problem of finding the global functional extremum and effectively repairing damaged images. The Curvature-Driven Diffusion (CDD) model devised by Chan and Shen [22] additionally incorporates a curvature factor, which adjusts the diffusion intensity according to both the gradient magnitude and the curvature, significantly improving the precision and effect of image restoration. Despite their adaptability and capability to manage diverse complex scenarios, these algorithms may distort texture detail when addressing extensive data loss, which in turn accentuates blurriness in the images. Transform-domain-based restoration methods transfer the image from the spatial domain to another mathematical domain (such as the frequency or wavelet domain), where the restoration is performed; the restored image is then inversely transformed back into the spatial domain. Common transformations include the Fourier transform [23,24], wavelet transforms [25,26,27], and sparse-coding dictionary learning [28,29,30,31]. However, these methods may introduce unnatural structures or artifacts, especially when dealing with highly damaged areas or when using inappropriate parameters or thresholds. Sample-block-based restoration algorithms repair damaged regions by identifying and replicating similar sample blocks from undamaged regions of the image. The fundamental principle is to exploit the inherent redundancy within the image, wherein the visual content of certain regions resembles or repeats that of others; examples include the PatchMatch [13,32,33] and Criminisi [2,34,35] algorithms. However, such algorithms struggle to accurately capture the overall structure of images with complex structures or abundant detail, resulting in unsatisfactory repair results.
Recently, deep learning methods have also been used to predict the content of missing areas; for example, Generative Adversarial Networks (GANs) have been used to automatically generate missing image content for Indian temple murals [12].
The second approach involves hyperspectral imaging, which captures spectral data alongside images. This non-invasive and non-destructive imaging technique records the spectral characteristics of objects at high resolution. Hyperspectral imaging excels at detecting subtle differences in the spectral information of target materials, effectively circumventing the phenomenon whereby identical materials exhibit different spectral profiles [36]. Previous studies have successfully combined hyperspectral techniques with image-processing algorithms to restore cultural relics. These methods can be categorized into spatial-information-based and spectral-information-based restoration methods. For example, based on hyperspectral images, Zhou et al. [37] proposed a classified linear regression method, which corrects the true-color spectral bands that are strongly affected by stains using the spectral bands that are less affected; however, this method easily causes incoherence in the texture structure. Hou et al. [10] employed hyperspectral imaging to perform the maximum noise fraction (MNF) transformation, removed the components containing stains, and inverse-transformed the remaining components for restoration; however, this method is prone to altering the colors of other pigments. Zhou et al. [13] proposed a patch-based color-constrained Poisson editing method that takes advantage of the spectral characteristics of stains, solving the problem of color distortion by gradient-threshold fusion. Hou et al. [2] applied principal component analysis (PCA) to hyperspectral data, repaired the mold regions of the first three (most informative) principal components using the Criminisi algorithm, and finally applied the inverse transformation; however, this algorithm can easily distort the detailed structure. Yang et al. [38] developed a virtual color restoration approach for mural pigments based on bandpass energy integration. Overall, there are few studies on the restoration of mold-affected cultural relic information, and the existing algorithms do not make full use of the spectral features of mold, resulting in distortion of the details and colors of the restored relics.
To effectively remove mold from ancient Chinese paintings and based on the conservation principle of ‘restoring the old as the old’, this paper proposes a virtual restoration method based on 3D Convolutional Neural Networks (3D CNN). Our research indicates that the mold information diminishes with increasing wavelength and disappears at approximately 720 nm. In consideration of this characteristic, we devised a method to extrapolate the visible spectrum reflectance from the near-infrared (NIR) spectral reflectance data to remove mold information. The main contributions of this paper include the following:
(1) Analysis of spectral characteristics in mold patch restoration: The spectral response of mold was studied to analyze the differences in spectral characteristics between molded and non-molded regions. It was observed that as the wavelength increased, the information related to mold gradually diminished. Based on this spectral characteristic, virtual restoration of the affected mold regions was accomplished.
(2) A restoration system combining mold spectral characteristics with CNNs: Utilizing a 3D CNN model, we combined NIR spectral and spatial features to accurately reconstruct the visible-spectrum reflectance. This approach significantly improves the accuracy of spectral data recovery, providing technical support for the digitization and restoration of cultural relics.
(3) Evaluation of virtual restoration methods based on classification: From the perspective of cultural relic analysis, the primary focus is on color classification. An application-driven evaluation of the mold virtual restoration method was conducted by examining the impact of the Inpainting, Criminisi, and Inverse MNF transformation methods and the proposed 3D CNN on pixel-level color classification. The color classification of the data after virtual restoration was performed through the support vector machine method. The overall accuracy and Kappa coefficient of the classification results of different restoration methods were compared to evaluate the effect of virtual restoration.
Although hyperspectral data comprise hundreds of bands, this study focuses on recovering the spectral reflectance values of the RGB bands (R: 650.02 nm, G: 539.59 nm, B: 477.88 nm). Since the RGB bands correspond to the three primary color perception channels of the human eye (red, green, and blue), the restored images become more visually appealing and practical, which provides a basis for the display and study of artworks. Additionally, the RGB bands are the key to achieving accurate color reproduction, helping to ensure that the digitized images are consistent with the original artwork colors.
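To illustrate how the RGB bands can be pulled out of a hyperspectral cube, the following sketch simply selects the band whose center wavelength is nearest each target; the function name and the toy cube are our own illustrative assumptions, while the target wavelengths (650.02, 539.59, 477.88 nm) come from the text.

```python
import numpy as np

def extract_rgb(cube, wavelengths, targets=(650.02, 539.59, 477.88)):
    """Pick the band closest to each target wavelength (R, G, B)
    from a hyperspectral cube shaped (rows, cols, bands)."""
    idx = [int(np.argmin(np.abs(np.asarray(wavelengths) - t))) for t in targets]
    return cube[:, :, idx]

# Toy cube: 2 x 2 pixels, 10 bands spanning 400-1000 nm
wl = np.linspace(400, 1000, 10)
cube = np.random.rand(2, 2, 10)
rgb = extract_rgb(cube, wl)
print(rgb.shape)  # (2, 2, 3)
```

With the real 370-band VNIR data, the selected bands would of course lie much closer to the nominal wavelengths than in this coarse toy example.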

2. Materials and Methods

2.1. Materials

We selected three regions with substantial mold damage from the ancient painting 'Shen Qinglan Hua Tie Luo' in the Palace Museum as our experimental data (Figure 1). The artwork is a silk painting featuring the prominent Chinese traditional auspicious pattern 'huá fēng sān zhù'. The painting was created between the first and tenth years of the reign of Emperor Jiaqing, dating to the period from 1760 to 1770 AD. Region 1 ('Lady's picture') mainly depicts a lady whose face and hands were affected by mold; in Region 2 ('Clothes' picture'), the mold damage is mainly on the right side of the clothing; and in Region 3 ('Branch's picture'), the mold damage is primarily on the background in the center.
In previous research [39], the primary pigments in the painting were analyzed. It was found that the painting employs a wide variety of pigments, primarily cinnabar, azurite, malachite, lead white, and carbon black, with most of the effects resulting from mixtures of various pigments. The bright red on the characters’ clothing and headgear, as well as the orange on their clothing, are mainly cinnabar, while the screen area predominantly features pure azurite; the white on the faces, primarily made of lead white; and the green on the figure’s clothing, mainly made of malachite [39]. These mold-affected regions significantly reduce the painting’s lifespan and artistic value.
The experimental data were captured by the HS-VN/SW2500CR Heritage Spectral Imaging System, which includes a visible and near-infrared-shortwave infrared (VNIR-SWIR) imaging camera provided by Headwall Corporation (Boston, MA, USA) (see Figure 2). The VNIR camera captures images with 370 wavelength bands ranging from 400 nm to 1000 nm with a spectral resolution of 1.6 nm, while the SWIR camera captures 169 bands from 950 nm to 2500 nm with a spectral resolution of 9.6 nm. The system’s parameters are shown in Table 1.

2.2. Spectral Characteristics of Silk-Based Mold

The mold spots on ancient paintings and calligraphy involve fungi such as Aspergillus niger, Aspergillus flavus, and Cladosporium [4,8,40]. Sandalwood- and grass-based Xuan paper is especially susceptible to mold growth owing to its hygroscopic and dust-absorbing characteristics [7,41]. In silk paintings, meanwhile, the silk fibers are composed mainly of proteins (fibroin and sericin); these organic substances provide a rich source of nutrients for mold, facilitating its growth and reproduction. Metabolites produced by mold, such as water, organic acids, and cellulase, can disrupt the fiber structure and significantly affect the preservation and display of the artifacts [42]. As shown in Figure 3, it is evident that as the wavelength increases, especially beyond 720 nm, the mold-spot information in the image significantly diminishes, becoming almost undetectable. Based on this observation, we developed an image restoration strategy that utilizes the reflectance data of the NIR spectrum to reconstruct the reflectance of the visible spectral regions, thereby effectively eliminating mold-related information from the image.
To further validate these mold characteristics and acquire the spectral characteristics of the mold, 300 pixels containing mold and 300 non-mold pixels were selected for each study area, and the spectral average value was calculated as the spectral reflectance feature. Envelope removal was carried out to enhance the comparative analysis of spectral features, aiding in the identification and differentiation of spectral characteristics between mold and non-mold pixels.
Comparing the average spectra of the mold areas and healthy backgrounds (Figure 4) reveals several key differences. Firstly, the distinct absorption valley at 942 nm is primarily caused by the vibrational groups of water molecules [43]. Secondly, in the wavelength range of 450–650 nm, the absorption spectra differ significantly between healthy backgrounds and mold areas. The spectral valleys in healthy regions typically occur between 468 and 485 nm, whereas in mold areas they fall between 560 and 620 nm, accompanied by a wavelength shift from the blue toward the red edge. This red shift indicates a shift in spectral features from shorter to longer wavelengths, which is an important indicator for identifying mold. Significantly, in both mold patches and unaffected background areas, the spectrum tends to peak after 720 nm and remains relatively stable in the range of 720 nm to 942 nm. This occurs because Aspergillus niger and Aspergillus flavus exhibit different absorption wavelengths owing to their distinct spatial molecular structures; previous research has shown that their absorption rates are nearly zero around 720 nm, indicating very low absorption in this range [44,45]. Consequently, the spectral information underlying the mold coverage can be effectively revealed in the NIR band. These spectral characteristics provide crucial physical evidence for the detection and removal of mold.
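The pixel-sampling-and-averaging step described above can be sketched as follows. The cube, mask, and reflectance values below are synthetic stand-ins for illustration only, not the painting data; the darkening of the "mold" pixels in all bands is an artificial simplification.

```python
import numpy as np

def mean_spectrum(cube, mask):
    """Average the spectra of all pixels selected by a boolean mask.
    cube: (rows, cols, bands); mask: (rows, cols) boolean."""
    return cube[mask].mean(axis=0)

# Synthetic example: pretend the top-left 10x10 block is mold
rng = np.random.default_rng(0)
cube = rng.uniform(0.4, 0.6, size=(50, 50, 100))
mask = np.zeros((50, 50), dtype=bool)
mask[:10, :10] = True
cube[mask] *= 0.5  # mold pixels reflect less (illustrative only)

mold_mean = mean_spectrum(cube, mask)
clean_mean = mean_spectrum(cube, ~mask)
print(mold_mean.shape)  # (100,)
```

In the study itself, 300 mold and 300 non-mold pixels per region were averaged this way before envelope (continuum) removal.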

2.3. Data Processing before Virtual Restoration

Before the virtual mold removal, the mold regions of the paintings were extracted using the Random Forest (RF) algorithm. The RF algorithm comprises data sampling, decision tree construction, and model ensembling. For classification problems, majority voting is usually used; that is, a sample is assigned to the category receiving the most votes. This ensemble approach can significantly improve the accuracy and stability of the model [46]. The decision tree is thus the fundamental unit of RF; the core advantage of a random forest is its ability to form a stronger classifier by combining multiple decision trees (regarded as weak classifiers). In addition, since each tree sees a different subset of the dataset when it is built, the ensemble is highly robust. More detailed information on and theoretical support for RF can be found in [46]. With this method, we can effectively extract the regions affected by mold in paintings.
To optimize the extraction results, morphological operations were applied: opening and dilation with a 3 × 3 structuring element. These processes remove small noise points in the image and expand the mold areas, making their edges smoother and more coherent.
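A minimal sketch of this extraction-plus-cleanup pipeline, assuming per-pixel spectra as features, scikit-learn's RandomForestClassifier, and SciPy binary morphology. All data here are synthetic; in the study, the training labels came from the painting itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy import ndimage

# Hypothetical training data: per-pixel spectra labelled mold (1) / clean (0)
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0.25, 0.02, (200, 50)),   # "mold" spectra
                     rng.normal(0.55, 0.02, (200, 50))])  # "clean" spectra
y_train = np.r_[np.ones(200, int), np.zeros(200, int)]
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Classify every pixel of a (rows, cols, bands) cube, then clean the mask
cube = rng.normal(0.55, 0.02, (30, 30, 50))
cube[5:10, 5:10] = rng.normal(0.25, 0.02, (5, 5, 50))     # implanted mold patch
mask = rf.predict(cube.reshape(-1, 50)).reshape(30, 30).astype(bool)

# 3x3 opening removes isolated noise; dilation smooths and expands the edges
selem = np.ones((3, 3), dtype=bool)
mask = ndimage.binary_dilation(ndimage.binary_opening(mask, selem), selem)
print(mask.sum())
```

Opening first (erosion then dilation) is what suppresses isolated false positives before the final dilation grows the mold region slightly, matching the order described in the text.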

2.4. Reconstruction of Mold Region Using 3D CNN

In this paper, we propose a 3D CNN for the virtual restoration (mold removal) of paintings based on hyperspectral images. The framework is shown in Figure 5. The structure of this model is similar to U-Net: the left contracting pathway extracts high-dimensional spatial and spectral features, while the right symmetric expanding pathway recovers information. By including batch normalization (BN) and activation functions after each convolutional layer, the model attains faster convergence and greater stability. Finally, flatten and dense layers are appended at the end of the model for prediction.
Unlike the traditional U-Net model, this model uses 3D CNNs for feature extraction. Processing HSI data with 3D CNNs involves the following steps: block extraction, convolution, pooling, batch normalization, feature vector flattening, and prediction [47]. Initially, small 3D data blocks are extracted from the complete HSI dataset, typically sized h × w × b, where h and w denote the height and width of the spatial dimensions, respectively, and b represents the spectral depth. Ensuring that the target pixel resides at the center of these blocks allows the effective capture of the surrounding spatial and spectral features. These blocks then pass through convolutional layers, whose kernels of various sizes extract features encompassing local characteristics such as spatial textures and spectral signatures. Pooling operations applied after convolution reduce the spatial dimensions of the features, mitigating computational complexity and overfitting while retaining crucial feature information. Batch normalization follows pooling to standardize the convolutional layer activations, accelerating network convergence and enhancing training stability. After multiple convolution and pooling layers, the feature maps are flattened into one-dimensional vectors for processing by fully connected layers. Finally, a fully connected layer maps the extracted high-dimensional features to the final regression values, and the network predicts the target pixel's attributes, completing the regression task.
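The block-extraction step above can be sketched as follows. The 7 × 7 window and the reflect-padding at image edges are our own assumptions for illustration; the paper does not state its window size or edge handling.

```python
import numpy as np

def extract_patches(cube, coords, h=7, w=7):
    """Cut (h, w, bands) blocks centred on each (row, col) in coords.
    Edges are handled by reflect-padding the cube first."""
    ph, pw = h // 2, w // 2
    padded = np.pad(cube, ((ph, ph), (pw, pw), (0, 0)), mode="reflect")
    # After padding, original pixel (r, c) sits at (r + ph, c + pw),
    # so the h x w window starting at (r, c) is centred on it.
    return np.stack([padded[r:r + h, c:c + w, :] for r, c in coords])

cube = np.random.rand(20, 20, 30)  # rows x cols x spectral bands
patches = extract_patches(cube, [(0, 0), (10, 10)])
print(patches.shape)  # (2, 7, 7, 30)
```

Each patch carries both the spatial neighborhood and the full spectral depth of its center pixel, which is exactly what the 3D convolution kernels consume.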
Using BN layers to normalize the input data of each batch enables the model to utilize larger learning rates, reduces the dependence of gradients on parameters or their initial values, enhances the stability of input data for each hidden layer, and accelerates the training process of deep neural networks [48]. The calculation formula is as follows:
$$R_{nor}^{k} = \frac{R^{k} - E[R^{k}]}{\sqrt{Var[R^{k}]}}, \qquad y^{k} = \gamma^{k} R_{nor}^{k} + \beta^{k}$$
where $R_{nor}^{k}$ is the normalized value, $E[R^{k}]$ is the mean, $Var[R^{k}]$ is the variance, and $\gamma^{k}$ and $\beta^{k}$ represent the scaling and translation parameters, respectively.
The ReLU activation function uses a zero-setting strategy to enhance the sparsity of neurons, accelerating the convergence of deep learning models and avoiding gradient-vanishing issues [49]. It is calculated as follows:
$$\mathrm{ReLU}(x) = \begin{cases} x, & \text{if } x > 0 \\ 0, & \text{if } x \le 0 \end{cases}$$
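A NumPy sketch of the batch normalization and ReLU computations described above. The small epsilon guarding against division by zero is our addition; the formula in the text omits it.

```python
import numpy as np

def batch_norm(R, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise a batch per feature, then scale by gamma and shift by beta."""
    R_nor = (R - R.mean(axis=0)) / np.sqrt(R.var(axis=0) + eps)
    return gamma * R_nor + beta

def relu(x):
    """Zero out negative activations: max(0, x)."""
    return np.maximum(x, 0.0)

batch = np.array([[1.0, -2.0], [3.0, 4.0], [5.0, -6.0]])
y = relu(batch_norm(batch))
print(y)
```

In training, gamma and beta are learned per channel rather than fixed scalars; they are shown here as constants only to keep the sketch minimal.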
To take full advantage of the known spectral reflectance, we randomly selected 50% of the non-mold pixels as training samples and the remaining 50% as validation samples to construct the model.

2.5. Quantitative Analysis

To evaluate the performance of the restoration models, the average value of 60 pixels around the mold was taken as the true value of the mold region [2,13], and the following three metrics were calculated: the root mean squared error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE), defined as follows:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(R_{i} - R'\right)^{2}}$$
$$MAPE = \frac{100}{N}\sum_{i=1}^{N}\frac{\left|R_{i} - R'\right|}{R'}$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|R_{i} - R'\right|$$
where N is the total number of recovered samples, R<sub>i</sub> represents the reflectance value of a recovered mold-affected pixel, and R′ represents the average reflectance of adjacent pixels in the non-mold-affected area.
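The three metrics translate directly into NumPy; the sample values below are illustrative, not measurements from the painting.

```python
import numpy as np

def rmse(r, r_ref):
    return np.sqrt(np.mean((r - r_ref) ** 2))

def mape(r, r_ref):
    return 100.0 * np.mean(np.abs(r - r_ref) / r_ref)

def mae(r, r_ref):
    return np.mean(np.abs(r - r_ref))

restored = np.array([0.50, 0.52, 0.48])
reference = np.full(3, 0.50)  # mean reflectance of surrounding clean pixels
print(rmse(restored, reference), mape(restored, reference), mae(restored, reference))
```

Note that the reference R′ is a single scalar (the neighborhood average), so MAPE is well defined as long as the surrounding reflectance is nonzero.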
Pigment classification is a central concern in the analysis of painted artifacts and the understanding of painting techniques. We therefore also conducted an application-driven evaluation of the virtual restoration methods, assessing their effectiveness through classification. This evaluation allows us to gauge the accuracy and effectiveness of different restoration methods, providing a scientific basis for the conservation and restoration of artworks. Based on the knowledge of painting and calligraphy experts, regions of different colors were outlined, from which 60% of random points were used as training samples and the remaining 40% as test samples. An SVM trained on these samples was used to classify the virtual restoration results, and the overall classification accuracy and Kappa coefficient on the test samples were compared across the different virtual restoration methods.
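A minimal sketch of this evaluation protocol with scikit-learn. The pigment samples below are synthetic stand-ins; in the study, the real samples were outlined by painting and calligraphy experts on the restored images.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical pigment samples: per-pixel RGB reflectance with class labels
rng = np.random.default_rng(2)
centers = np.array([[0.8, 0.2, 0.2],   # red
                    [0.2, 0.3, 0.8],   # blue
                    [0.9, 0.9, 0.9]])  # white
X = np.vstack([rng.normal(c, 0.03, (100, 3)) for c in centers])
y = np.repeat([0, 1, 2], 100)

# 60% training / 40% test split, as in the evaluation protocol
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.6, random_state=0, stratify=y)
pred = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```

Reporting both overall accuracy and Cohen's Kappa, as the paper does, guards against accuracy being inflated by imbalanced class sizes.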

3. Results

The extracted mold regions are shown in Figure 6b, Figure 7b and Figure 8b. As the results show, the mold regions were completely and accurately extracted. The virtual restoration results based on 3D CNN-Unet are shown in Figure 6f, Figure 7f and Figure 8f; we also compared them with the Inpainting method proposed by Telea [50], the Criminisi method proposed by Criminisi et al. [34], and the inverse MNF method proposed by Hou et al. [10], shown in Figure 6, Figure 7 and Figure 8c–e, respectively.
Comparison of Figure 6c,d shows that both the Inpainting and Criminisi methods damaged the internal details of the image when dealing with mold: noticeable detail distortions appear in areas such as the lady's earlobes, forehead, and hands, specifically structural losses in the hands and earlobes and inconsistencies between the forehead and its surroundings. This is due to their reliance on surrounding pixel information for restoring missing areas: Criminisi focuses primarily on texture synthesis, extracting and synthesizing texture information from the surrounding region, while the Inpainting method fills missing regions based on local information, inferring pixel values for repair. The result of the inverse MNF, shown in Figure 6e, indicates that its visual mold removal effect is the weakest. Because the mold information is not effectively concentrated in a single component, the inverse MNF method performs unsatisfactorily.
As shown in Figure 7, the results for Region 2 (the Clothes' picture) show that where the structural texture is relatively simple, the Inpainting and Criminisi methods achieve better virtual restoration. This is because both methods utilize the information surrounding the missing regions for restoration; with a relatively simple texture structure, they can more accurately infer the content of the missing regions from the surrounding pixel information. Moreover, the simplicity of the texture implies less texture information in the vicinity of the missing regions, so the Inpainting and Criminisi methods are less susceptible to interference from complex textures during restoration and can more effectively exploit the surrounding information. Unfortunately, these methods still lose detailed texture information, as shown by the contents of the yellow circle in Figure 7. Meanwhile, the inverse MNF alters the color of the face, because the components carrying the mold information also contained substantial color information of the face.
Similarly, in the results for Region 3 (the Branch's picture), owing to the complexity of the color and texture, the Inpainting and Criminisi methods modified the original texture structure, especially in the region within the yellow circle, so the repair is visually poor. At the same time, compared with the Inpainting and Criminisi methods, the inverse MNF result significantly changed the original colors of the painting, particularly the red and cyan pigments. Overall, our method significantly outperforms the other methods in visual mold removal effectiveness.

4. Discussion

4.1. Quantitative Analysis

For the quantitative analysis, Table 2, Table 3 and Table 4 show the performance of the image restoration methods in different regions. Larger values of RMSE, MAPE, and MAE indicate greater errors between the restored values and the surrounding values, implying poorer restoration performance.
In Region 1 (Lady’s picture), the proposed method performs best according to the metrics. For the white pigment on the face, the Inverse MNF ranks second, and Criminisi third; while for the pigment on the hand, the Inverse MNF ranks second, and Inpainting third. This is because the Inpainting and Criminisi methods, when inferring values in the mold-affected regions, rely on the most similar surrounding values, which may introduce other influences and result in a loss of structural details, consistent with visual observations. In Region 2 (Clothes’ picture), Table 3 shows that Criminisi performs best in the light red region, followed by the Inpainting method, with the proposed method ranking third. However, for the dark red region, our method performs the best. This may be because in the light red region, details are relatively simple, and Criminisi and Inpainting methods can rebuild based on surrounding values, while in the dark red area, our method performs better. In Region 3 (Branch’s picture), our proposed method performs best in the R (650.02 nm) and G (539.59 nm) bands, while the Inverse MNF performs best in the B band (477.88 nm), but it alters the overall color, contradicting the principle of restoring the image to its original state.
In conclusion, our proposed 3D CNN method achieves the best overall restoration performance, balancing the need to repair damaged regions with the retention of the original colors and textures, making it highly effective in tasks that require preserving the image’s original integrity.
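As a minimal sketch of how the three error metrics used in Tables 2–4 can be computed per band, the snippet below evaluates RMSE, MAPE, and MAE between a restored patch and a reference patch (the patch values are illustrative toy numbers, not data from the paper):

```python
import numpy as np

def band_errors(restored, reference):
    """Per-band RMSE, MAPE (%), and MAE between two patches,
    each shaped (H, W, bands) -- here the bands are R, G, B.
    Assumes the reference contains no zero reflectance values."""
    diff = restored - reference
    rmse = np.sqrt(np.mean(diff ** 2, axis=(0, 1)))
    mape = 100.0 * np.mean(np.abs(diff) / np.abs(reference), axis=(0, 1))
    mae = np.mean(np.abs(diff), axis=(0, 1))
    return rmse, mape, mae

# Toy 2x2 patch with three bands and a uniform 0.05 restoration offset
ref = np.full((2, 2, 3), 0.5)
res = ref + 0.05
rmse, mape, mae = band_errors(res, ref)
print(rmse, mape, mae)  # each band: RMSE ~= 0.05, MAPE ~= 10 %, MAE ~= 0.05
```

Lower values on all three metrics correspond to restored pixels that are closer to the surrounding reference values.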

4.2. Classification of Pigments

Pigment classification was performed with an SVM on the original RGB image and the restored RGB images, as shown in Figures 9–11. The accuracy of the classification results was evaluated with manually inspected samples. As shown in Figure 9, in Region 1 (Lady’s picture), the colors were mainly divided into red, blue, yellow, black, white, brown, light red, cyan, and other colors (mainly mold colors). Overall, the classification results based on the 3D CNN restoration outperformed the other methods, with lower misclassification and omission rates. In terms of overall accuracy and Kappa coefficient (Table 5), all restoration methods reached an overall accuracy above 85%. The 3D CNN method achieved the highest classification accuracy, with an overall accuracy of 89.51% and a Kappa coefficient of 0.88, which are 0.95 percentage points and 0.01 higher than those of the original image, respectively. The Criminisi method yielded the lowest classification accuracy, with an overall accuracy of 87.28% and a Kappa coefficient of 0.85, which are 2.23 percentage points and 0.03 lower than those of the 3D CNN method, respectively. In addition, the classification accuracies of the inverse MNF, Inpainting, and Criminisi methods were lower than that of the original data. This is because most of the mold information was not well repaired by the inverse MNF method, while the Inpainting and Criminisi methods incorrectly repaired the texture structure of the earlobe, and the resulting texture distortion caused the white earlobe region to be misclassified as yellow, brown, and other colors.
In Region 2 (Clothes’ picture) of Figure 10, the colors were mainly divided into red, dark red, white, black, yellow, off-white, cyan, brown, black-red, dark teal, gray, and other colors (primarily mold colors). Overall, the classification results of the images restored with the 3D CNN were superior to those of the other methods. As assessed by overall accuracy and Kappa coefficient (Table 5), all restoration methods reached an overall accuracy above 90%. The classification accuracy of the 3D CNN-restored image was almost the same as that of the original data, with overall accuracies of 92.24% and 92.25%, respectively, and a Kappa coefficient of 0.91 for both. The Inpainting-restored image had the lowest accuracy, with an overall accuracy of 90.69% and a Kappa coefficient of 0.90. In general, the accuracies of these methods were relatively close, as the texture of this region is relatively simple, so all methods achieved good restoration effects.
As shown in Figure 11, in Region 3 (Branch’s picture), the colors primarily included red, black, light gray, gray, brown, white, background color, cyan, yellow, and other colors (mainly mold colors). The 3D CNN-restored image again performed best, with lower misclassification and omission rates. According to overall accuracy and Kappa coefficient (Table 5), all restoration methods reached an overall accuracy above 90%. The 3D CNN-restored image achieved the highest classification accuracy (93.63% and 0.93), which is 2.5 percentage points and 0.03 higher than the original image. The inverse MNF-restored image achieved the second highest accuracy, with an overall accuracy of 93% and a Kappa coefficient of 0.92, which are 1.87 percentage points and 0.02 higher than the original image, respectively. Conversely, the Criminisi-restored image showed the lowest classification accuracy, with an overall accuracy of 90.45% and a Kappa coefficient of 0.89, a reduction of 3.18 percentage points and 0.04 compared with the 3D CNN-restored image. The accuracies of the images restored by the Inpainting and Criminisi methods were also lower than those of the original data. This is because the texture structure of this region is relatively complex, and the Inpainting and Criminisi methods made errors in restoring texture structures, misclassifying the colors of the tree-branch background as black, brown, and other colors.
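Both accuracy measures reported in Table 5 are derived from the classification confusion matrix. As a minimal sketch (using a hypothetical two-class confusion matrix, not data from the paper), overall accuracy and Cohen’s Kappa can be computed as:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n  # observed agreement = overall accuracy
    # Expected chance agreement from the row/column marginals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Hypothetical 2-class confusion matrix over 100 samples
cm = [[45, 5],
      [10, 40]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(round(oa, 2), round(kappa, 2))  # 0.85 0.7
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy throughout Table 5.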
The above analysis shows that the proposed 3D CNN method, combined with the spectral characteristics of mold, can effectively leverage both spectral and spatial information by predicting the visible reflectance from the NIR reflectance data, and it performed well in the virtual mold-removal restoration of paintings in three different study regions. However, only the RGB information of the mold regions was restored in this study. Structural spectral unmixing will be considered in follow-up work, so that the full hyperspectral information of the mold regions can be recovered, providing stronger technical support for the digitization and protection of cultural relics.
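The paper’s full network is not reproduced here, but the core operation that distinguishes a 3D CNN from a 2D one can be illustrated: the kernel slides along the spectral axis as well as the two spatial axes, so each output value mixes joint spectral–spatial neighborhoods. A naive NumPy sketch follows; the averaging kernel and the cube size are illustrative assumptions, not the network’s learned filters:

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Naive 'valid'-mode 3D cross-correlation over a hyperspectral
    cube shaped (bands, H, W). The kernel slides jointly along the
    spectral and the two spatial axes -- the operation that lets a
    3D CNN learn combined spectral-spatial features."""
    kb, kh, kw = kernel.shape
    b, h, w = cube.shape
    out = np.empty((b - kb + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(cube[i:i+kb, j:j+kh, k:k+kw] * kernel)
    return out

# Toy NIR cube: 5 bands of a 4x4 patch, filtered by an averaging 3x3x3 kernel
rng = np.random.default_rng(0)
cube = rng.random((5, 4, 4))
kernel = np.full((3, 3, 3), 1.0 / 27.0)
features = conv3d_valid(cube, kernel)
print(features.shape)  # (3, 2, 2)
```

In a deep-learning framework the same operation is a single 3D convolution layer with learned kernels; stacking such layers over the NIR bands yields the spectral–spatial features from which the visible reflectance is regressed.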

5. Conclusions

To address the problem of mold on ancient paintings and calligraphy, the spectral characteristics of mold were analyzed in this study. It was found that the reflectance of regions affected by mold spots stabilized after 720 nm and remained relatively stable from 720 nm to 942 nm. Based on these findings, a 3D CNN restoration framework was proposed. This method estimates the visible-spectrum reflectance from the NIR reflectance to achieve virtual mold-removal restoration of paintings. Experiments were conducted in three regions with differently colored backgrounds and compared with the inverse MNF, Inpainting, and Criminisi restoration methods. Both visual inspection and quantitative analysis demonstrated that the 3D CNN restoration outperformed the other methods. Specifically, this method removed the mold information while preserving the original texture and colors of the paintings and calligraphy, whereas the Inpainting and Criminisi methods altered the texture structure and the inverse MNF method changed other colors. Furthermore, color classification of the repaired images was performed with the same samples and the SVM method. The 3D CNN-restored images obtained the highest classification accuracies in all three regions, with overall accuracies of 89.51%, 92.24%, and 93.63% and Kappa coefficients of 0.88, 0.91, and 0.93, respectively, confirming the effectiveness of the method. However, only the RGB information of the mold regions was recovered in this study; future work will explore spectral unmixing of mixed pixels to achieve a more comprehensive restoration of the mold regions’ spectral information and provide more effective technical support for the protection and restoration of ancient paintings and calligraphy.
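The reported NIR stability of mold spectra between 720 nm and 942 nm suggests a simple per-pixel flatness measure over that window. The sketch below is purely illustrative, with made-up toy spectra; it is not the detection rule used in the paper:

```python
import numpy as np

def nir_flatness(reflectance, wavelengths, lo=720.0, hi=942.0):
    """Standard deviation of each pixel's reflectance within the
    720-942 nm window, where mold spectra are reported to be
    relatively stable. Low values indicate a flat NIR response.
    Illustrative only -- not the paper's mold-detection method."""
    sel = (wavelengths >= lo) & (wavelengths <= hi)
    return reflectance[..., sel].std(axis=-1)

# Toy example: two pixels sampled every 10 nm from 700 to 950 nm
wl = np.arange(700.0, 951.0, 10.0)
flat_pixel = np.full(wl.size, 0.4)             # stable NIR response
sloped_pixel = np.linspace(0.2, 0.6, wl.size)  # steadily rising response
pixels = np.stack([flat_pixel, sloped_pixel])
scores = nir_flatness(pixels, wl)
print(scores[0] < scores[1])  # True: the flat pixel varies less in the NIR
```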

Author Contributions

Conceptualization, S.W. and Y.C. (Yi Cen); methodology, S.W., Y.C. (Yi Cen) and L.Q.; validation, S.W., Y.C. (Yao Chen) and G.L.; formal analysis, G.L., Y.C. (Yi Cen), S.W. and L.Z.; data curation, G.L. and Y.C. (Yao Chen); writing—original draft preparation, S.W. and Y.C. (Yao Chen); writing—review and editing, S.W., Y.C. (Yi Cen), G.L., L.Z. and L.Q.; visualization, S.W.; supervision, L.Q., Y.C. (Yi Cen) and L.Z.; project administration, L.Q. and Y.C. (Yi Cen); funding acquisition, L.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program of China (Project No. 2022YFF0904400), the Peach and Plum Program of the Palace Museum, and the Vanke Foundation.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to thank Bi Xiaohui from the Palace Museum for his professional knowledge of painting.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The original true-color visualizations of the hyperspectral data of (a) Lady’s picture, (b) Clothes’ picture, and (c) Branch’s picture (R: 650.02 nm, G: 539.59 nm, B: 477.88 nm; 400 × 400 pixels; the region inside the yellow box is where mold was mainly concentrated; in (d–f), red dots indicate selected mold pixels and black dots indicate selected non-mold pixels in Section 2.2).
Figure 2. HS-VN/SW2500CR Heritage Spectral Imaging System.
Figure 3. The reflectance figures of (a–d) Lady’s picture, (e–h) Clothes’ picture, and (i–l) Branch’s picture at 534 nm, 720 nm, 830 nm, and 930 nm (the region inside the yellow box is where mold was mainly concentrated).
Figure 4. Spectral characteristics (a,c) and envelope-removed reflectance (b,d) of mold and background in Region 1 (Lady’s picture), Region 2 (Clothes’ picture), and Region 3 (Branch’s picture).
Figure 5. Flowchart of the 3D CNN network for restoration of the mold regions.
Figure 6. The results of Region 1 (Lady’s picture): (a) original, (b) mold region, (c) Inpainting, (d) Criminisi, (e) Inverse MNF, and (f) the proposed 3D CNN.
Figure 7. The results of Region 2 (Clothes’ picture): (a) original, (b) mold region, (c) Inpainting, (d) Criminisi, (e) Inverse MNF, and (f) the proposed 3D CNN.
Figure 8. The results of Region 3 (Branch’s picture): (a) original, (b) mold region, (c) Inpainting, (d) Criminisi, (e) Inverse MNF, and (f) the proposed 3D CNN.
Figure 9. The classification results of Region 1 (Lady’s picture): (a) original, (b) Inverse MNF, (c) Inpainting, (d) Criminisi, and (e) the proposed 3D CNN methods.
Figure 10. The classification results of Region 2 (Clothes’ picture): (a) original, (b) Inverse MNF, (c) Inpainting, (d) Criminisi, and (e) the proposed 3D CNN methods.
Figure 11. The classification results of Region 3 (Branch’s picture): (a) original, (b) Inverse MNF, (c) Inpainting, (d) Criminisi, and (e) the proposed 3D CNN methods.
Table 1. Instrument parameters of the HS-VN/SW2500CR used to record the images.

| Parameter | VNIR | SWIR |
|---|---|---|
| Spectral range/nm | 400–1000 | 950–2500 |
| Spectral sampling/nm | 1.6 | 9.6 |
| Sensor type | CMOS | Stirling-cooled MCT |
| Digitizing bits | 16 bit | 16 bit |
| Slit width/µm | 20 | 25 |
| Light source | halogen lamp | halogen lamp |
Table 2. Quantitative analysis of Region 1 (Lady’s picture).

| Material | Method | RMSE (R/G/B) | MAPE (R/G/B) | MAE (R/G/B) |
|---|---|---|---|---|
| White material on the face | Original | 0.0648/0.0667/0.1238 | 8.4878/13.5048/29.5463 | 0.0476/0.0541/0.1183 |
| | Inverse MNF transformation | 0.0600/0.0616/0.1182 | 7.6019/12.2222/28.1356 | 0.0427/0.0489/0.1126 |
| | Inpainting | 0.0949/0.0811/0.1348 | 11.4468/14.4198/31.4747 | 0.0642/0.0577/0.1260 |
| | Criminisi | 0.0882/0.0731/0.1305 | 10.1655/11.9957/29.2393 | 0.0570/0.0480/0.1170 |
| | 3D CNN | 0.0503/0.0510/0.0989 | 6.3177/9.8145/22.4118 | 0.0355/0.0393/0.0897 |
| White material on the hand | Original | 0.0799/0.0868/0.0668 | 9.8863/14.7794/14.0087 | 0.0592/0.0664/0.0503 |
| | Inverse MNF transformation | 0.0722/0.0790/0.0594 | 8.8873/13.4263/12.3301 | 0.0532/0.0604/0.0442 |
| | Inpainting | 0.0756/0.0631/0.0559 | 7.6091/9.2662/10.4840 | 0.0456/0.0417/0.0376 |
| | Criminisi | 0.1130/0.0908/0.0792 | 9.9254/11.8401/13.8018 | 0.0594/0.0532/0.0495 |
| | 3D CNN | 0.0576/0.0477/0.0425 | 6.9609/7.9137/8.9051 | 0.0417/0.0356/0.0319 |
Table 3. Quantitative analysis of Region 2 (Clothes’ picture).

| Material | Method | RMSE (R/G/B) | MAPE (R/G/B) | MAE (R/G/B) |
|---|---|---|---|---|
| Light red material | Original | 0.0626/0.0517/0.0316 | 8.6691/13.3147/11.8176 | 0.0493/0.0423/0.0260 |
| | Inverse MNF transformation | 0.0433/0.0328/0.0254 | 6.0930/8.5844/9.3905 | 0.0347/0.0272/0.0207 |
| | Inpainting | 0.0361/0.0286/0.0235 | 4.8210/6.9725/8.3940 | 0.0274/0.0221/0.0185 |
| | Criminisi | 0.0343/0.0288/0.0232 | 4.6273/7.3410/8.5941 | 0.0263/0.0233/0.0189 |
| | 3D CNN | 0.0383/0.0337/0.0276 | 5.1616/8.2030/10.0719 | 0.0294/0.0260/0.0222 |
| Crimson material | Original | 0.0845/0.0565/0.0302 | 12.6938/17.7171/13.3757 | 0.0650/0.0459/0.0239 |
| | Inverse MNF transformation | 0.0548/0.0415/0.0342 | 8.3783/12.9686/15.1888 | 0.0429/0.0336/0.0271 |
| | Inpainting | 0.0590/0.0385/0.0239 | 8.2551/11.6485/10.7408 | 0.0423/0.0301/0.0192 |
| | Criminisi | 0.0585/0.0511/0.0359 | 9.0721/16.4108/16.3552 | 0.0465/0.0425/0.0292 |
| | 3D CNN | 0.0547/0.0374/0.0242 | 8.1121/11.1807/10.0496 | 0.0416/0.0289/0.0179 |
Table 4. Quantitative analysis of Region 3 (Branch’s picture).

| Material | Method | RMSE (R/G/B) | MAPE (R/G/B) | MAE (R/G/B) |
|---|---|---|---|---|
| Brown background | Original | 0.0983/0.1111/0.0949 | 17.5595/26.6170/27.2211 | 0.0766/0.0884/0.0738 |
| | Inverse MNF transformation | 0.0839/0.0766/0.0766 | 14.9242/18.2449/22.2645 | 0.0651/0.0606/0.0603 |
| | Inpainting | 0.1195/0.1052/0.0927 | 19.3230/22.8916/25.3922 | 0.0843/0.0760/0.0688 |
| | Criminisi | 0.1178/0.1032/0.0957 | 19.9626/23.4313/27.1039 | 0.0871/0.0778/0.0735 |
| | 3D CNN | 0.0776/0.0739/0.0832 | 13.1059/16.2916/23.2589 | 0.0572/0.0541/0.0630 |
Table 5. Classification accuracy of different restoration methods. Per-class values are Prod. accuracy (%)/User accuracy (%).

Region 1 (Lady’s picture)

| Class | Original | Inverse MNF transformation | Inpainting | Criminisi | 3D CNN |
|---|---|---|---|---|---|
| Red | 98.33/100 | 98.33/100 | 98.33/100 | 98.33/100 | 96.67/100 |
| Black | 93.33/93.33 | 95/93.44 | 91.67/91.67 | 91.67/91.67 | 98.33/96.72 |
| Light red | 59.26/96.97 | 51.67/96.88 | 55/100 | 55/100 | 63.33/100 |
| Blue | 98/92.45 | 98/98 | 98/94.23 | 98/94.23 | 100/90.91 |
| White | 84.13/70.67 | 85.29/67.44 | 86.76/70.24 | 82.35/69.14 | 88.24/75 |
| Cyan | 90/88.24 | 96/90.57 | 92/86.79 | 92/86.79 | 88/86.27 |
| Yellow | 92/85.19 | 92/83.64 | 92/85.19 | 92/85.19 | 92/88.46 |
| Brown | 94/90.38 | 92/88.46 | 94/88.68 | 94/83.93 | 92/86.79 |
| Overall accuracy | 88.56% | 87.95% | 87.95% | 87.28% | 89.51% |
| Kappa coefficient | 0.87 | 0.86 | 0.86 | 0.85 | 0.88 |

Region 2 (Clothes’ picture)

| Class | Original | Inverse MNF transformation | Inpainting | Criminisi | 3D CNN |
|---|---|---|---|---|---|
| Red | 100/90.48 | 100/82.19 | 100/77.92 | 100/77.92 | 100/85.71 |
| Deep red | 100/100 | 96.67/100 | 96.67/100 | 96.67/100 | 96.67/100 |
| White | 92.5/97.37 | 97.5/92.86 | 92.5/100 | 92.5/100 | 92.5/100 |
| Black | 87.5/94.59 | 92.5/97.37 | 80/96.97 | 80/96.97 | 90/97.3 |
| Yellow | 90/100 | 90/100 | 87.5/100 | 87.5/100 | 90/100 |
| Off-white | 95/95 | 88.33/98.15 | 95/96.61 | 95/96.61 | 93.33/94.92 |
| Cyan | 90/94.74 | 90/100 | 90/85.71 | 90/85.71 | 90/97.3 |
| Brown | 90/79.41 | 93.33/83.58 | 91.67/83.33 | 91.67/83.33 | 91.67/79.71 |
| Black-red | 98.08/91.07 | 90/88.52 | 90/88.52 | 91.67/88.71 | 98.33/90.77 |
| Dark teal | 93.33/93.33 | 93.33/93.33 | 93.33/93.33 | 93.33/93.33 | 93.33/93.33 |
| Gray | 76.67/86.79 | 78.33/85.45 | 76.67/88.46 | 76.67/90.2 | 76.67/88.46 |
| Overall accuracy | 92.25% | 91.72% | 90.69% | 90.86% | 92.24% |
| Kappa coefficient | 0.91 | 0.91 | 0.90 | 0.90 | 0.91 |

Region 3 (Branch’s picture)

| Class | Original | Inverse MNF transformation | Inpainting | Criminisi | 3D CNN |
|---|---|---|---|---|---|
| Red | 90.16/98.21 | 95.08/89.23 | 96.72/83.1 | 98.36/81.08 | 91.8/98.25 |
| Gray | 98.33/93.65 | 98.33/96.72 | 98.33/89.39 | 98.33/95.16 | 98.33/93.65 |
| White | 95/100 | 93.33/100 | 95/100 | 95/100 | 93.33/100 |
| Black | 91.67/94.83 | 93.33/90.32 | 88.33/100 | 93.33/96.55 | 91.67/100 |
| Brown | 88.33/77.94 | 96.67/87.88 | 95/79.17 | 88.33/81.54 | 95/89.06 |
| Background | 94.12/87.27 | 100/92.31 | 93.33/91.89 | 95/91.94 | 96.67/86.57 |
| Light gray | 60/62.07 | 76.67/82.14 | 43.33/72.22 | 46.67/60.87 | 80/72.73 |
| Cyan | 96/100 | 88/100 | 96/100 | 96/100 | 96/100 |
| Yellow | 93.33/100 | 80/100 | 83.33/100 | 73.33/100 | 93.33/100 |
| Overall accuracy | 91.13% | 93.00% | 90.66% | 90.45% | 93.63% |
| Kappa coefficient | 0.90 | 0.92 | 0.89 | 0.89 | 0.93 |

Share and Cite

Wang, S.; Cen, Y.; Qu, L.; Li, G.; Chen, Y.; Zhang, L. Virtual Restoration of Ancient Mold-Damaged Painting Based on 3D Convolutional Neural Network for Hyperspectral Image. Remote Sens. 2024, 16, 2882. https://doi.org/10.3390/rs16162882
