1. Introduction
The temple mural is one of the dominant types of Chinese murals, usually painted on the walls of Buddhist and Taoist temples. However, due to human activities and environmental changes, these murals have been damaged, sometimes beyond recognition [1]. A very common practice in China involves burning incense and praying in temples, making the murals vulnerable to sootiness. This paper proposes a series of methods aimed at addressing problems related to the discovery of hidden information in murals (mainly including hidden patterns [2], text [3,4], underdrawings [5,6,7], smeared information [8], and restoration marks [9]) and the understanding of the artistic methods used to create them, in order to provide evidence for the identification and restoration of murals.
Traditional methods often used to reveal hidden information, such as X-ray imaging and UV-induced fluorescence photography [10,11], have major drawbacks related to logistics, the need to obtain permission from public administrations, and the protection of operators. Therefore, traditional information extraction methods either cause certain damage to the cultural relics or do not effectively extract the implicit information.
New technologies have been used for the digital protection of murals, including hyperspectral imaging, a remote-sensing technology developed in the 1980s. It has been widely applied in many fields, such as terrestrial military applications [12], mineral exploration [13], geology [14], precision agriculture [15], and food engineering [16]. Hyperspectral imaging has a number of advantages [17,18]: it is considered a safe detection technique because it is non-contact and non-destructive; it offers high spectral resolution with many narrow bands; and it provides spectral information for each pixel in an image, i.e., a continuous spectral response curve for every pixel in the painting, which provides the basis for analyzing and processing hidden information. Because of these unique features, hyperspectral imaging has been introduced into the study of cultural relics and has become the focus of related research [19,20,21].
Hyperspectral technology is widely used for visual enhancement and to reveal hidden details in ancient manuscripts and paintings [22]. In 2008, Padoan et al. [3] studied the plant-leaf watermark in the 17th-century Atlas manuscript, which became visible after spectral image enhancement restored the areas affected by ink. In 2017, Wu et al. [9] identified mineral pigments and extracted traces of historical paintings based on hyperspectral short-wave infrared imaging. These two studies proved that hyperspectral imaging is a viable approach in the study of cultural relics. In 2011, Kim et al. [23] used hyperspectral imaging to assist in the visual enhancement of old documents. Goltz et al. [24] used principal component analysis to reduce the effects of iron and ink stains on old documents and enhance document readability. In 2017, Guo et al. [25] removed background information using principal component analysis and were able to enhance the smeared information around the crowns of the Qing emperor and officials; however, no hidden information could be extracted with their method. In 2003, Balas et al. [26] described in detail the imaging characteristics and advantages of hyperspectral imaging, restoring scripts that had been erased and covered in old manuscripts. In 2007, Salerno et al. [4] applied principal component analysis and independent component analysis to enhance information in Archimedes' manuscript that had been erased from ancient parchment, which provided an important basis for modern historians. However, the information on murals is much richer and more complicated than that in documents. Pan et al. [27] extracted faint leaf-like patterns in tomb murals based on the normalization and density slicing of hyperspectral images; however, this method is ineffective when there is no obvious absorption feature. Overall, there are still challenges that require new methods and/or practical solutions.
The main objective of this study is to propose an effective method to enhance the visual value of the patterns of ancient murals using hyperspectral imaging and to extract hidden information from under the sootiness. One of the internal wall paintings in the Guanyin Temple was analyzed by hyperspectral imaging in order to extract the information covered by sootiness. After processing with forward and inverse Minimum Noise Fraction (MNF) transforms, the patterns under the sootiness were greatly enhanced by image subtraction; the information was then extracted by density slicing. Hence, the proposed case study demonstrates a data elaboration method able to extract hidden information from under sootiness and enhance the visual value of the patterns of ancient murals.
2. Methods
2.1. The Case Study
The mural selected in this study is on the western wall of the Guanyin Temple, located in Hei Long Temple Village in Yanqing District of Beijing, China. According to records, the temple was built in the Daoguang period of the Qing Dynasty and has a history of nearly two hundred years. The lower half of the mural consists of nine portraits of Arhats and the upper half is a variation of Guan Shiyin Bodhisattva. The figures are vivid and exquisite. As shown in Figure 1a, the mural's preservation status is not very good, but it is still discernible. The lower right corner is contaminated by sootiness, which covers the patterns of the mural so that they cannot be recognized by the naked eye. In order to reveal the information covered by this sootiness, the hyperspectral data of the mural were captured and analyzed.
2.2. Analytical Method
In this study, the data of the experimental areas were captured by a Themis Vision Systems LLC VNIR400H scanning hyperspectral imaging camera, which integrates an automatic scanner, a spectrometer, and an image sensor into one unit and was spectrally calibrated. The images obtained by the VNIR400H camera measured 1392 × 1000 pixels. The spectral range of the camera was 400–1000 nm, the spectral resolution was 2.8 nm, and the sampling interval was 0.6 nm.
The data were recorded as a hypercube, with two-dimensional spatial images and the wavelength bands as the third dimension; hundreds of spectral bands were extracted for each pixel to form a relatively continuous spectral curve. As such, the hyperspectral data provided abundant and reliable spatial and spectral information for the murals. Since some electrical interference occurred while the hyperspectral camera was in operation, some noise bands appeared in the original images. Therefore, bands 100–850 (433–910 nm) of the 1040 bands covering 400–1000 nm were selected manually so as to reduce the interference from noise bands.
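As an illustration of this band-subsetting step, the following sketch assumes the calibrated cube is available as a NumPy array and that the 1040 bands are evenly spaced over 400–1000 nm; the file name and spacing are hypothetical simplifications.

```python
# Illustrative sketch of the band subsetting described above.
import numpy as np

cube = np.load("mural_cube.npy")                 # hypothetical cube: (rows, cols, 1040)
wavelengths = np.linspace(400.0, 1000.0, cube.shape[-1])

keep = slice(99, 850)                            # 0-based slice for bands 100-850
cube_subset = cube[:, :, keep]                   # roughly 433-910 nm
wl_subset = wavelengths[keep]
print(cube_subset.shape, wl_subset[0], wl_subset[-1])
```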
Figure 1a shows the complete picture of the mural, which is 5.10 m wide and 2.27 m high. The hyperspectral imaging camera used had certain limitations regarding the imaging range and resolution: when the instrument was about 0.8 m from the mural, an area measuring about 70 × 50 cm could be scanned. Therefore, the data of the entire mural were collected in sections. Since the sootiness in the lower right corner of the mural was the most serious, two experimental areas were selected there, with area 1 being 650 × 600 pixels and area 2 being 555 × 440 pixels; these are marked with red boxes in
Figure 1a. The images of the two study areas, shown in
Figure 1b,c, are color images synthesized from the original hyperspectral data using the 460 nm, 549 nm, and 640 nm bands.
2.3. Overall Workflow
Figure 2 shows the overall workflow of the proposed method for the extraction of hidden information from murals, which comprises three main steps: (1) data preprocessing, (2) visual enhancement using MNF, feature band selection, and image subtraction, and (3) extraction using density slicing. The details of each step are discussed in the following subsections.
The MNF transform in the above figure is used to determine the inherent dimensionality of the image data, separate the noise in the data, and reduce the computational demand of the subsequent processing. The DN (digital number) value in the above figure is the brightness value of a remote-sensing image pixel, i.e., the recorded gray value of the feature.
2.4. Data Preprocessing
During the process of camera acquisition, the data were affected by uneven intensity distribution and dark current noise. Therefore, before further analysis, the original hyperspectral data were radiometrically corrected to reflectance using the following formula:

$R = \dfrac{I_{\mathrm{raw}} - I_{\mathrm{dark}}}{I_{\mathrm{white}} - I_{\mathrm{dark}}}$ (1)

In Formula (1), $R$ is the corrected reflectance image, $I_{\mathrm{raw}}$ is the original image of the mural, $I_{\mathrm{white}}$ is the reference image obtained from the whiteboard, and $I_{\mathrm{dark}}$ is the dark current image acquired with the light source off and the lens covered. The reflectance of the standard whiteboard is 99%.
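A minimal sketch of this flat-field correction is given below, assuming the raw, whiteboard, and dark-current cubes are NumPy arrays of identical shape; the variable names are illustrative and not taken from the acquisition software.

```python
import numpy as np

def flat_field_correction(raw, white, dark, panel_reflectance=0.99):
    """Convert raw DN values to reflectance using whiteboard and dark-current references."""
    raw = raw.astype(np.float64)
    denom = white.astype(np.float64) - dark.astype(np.float64)
    denom[denom == 0] = np.finfo(np.float64).eps   # guard against division by zero
    reflectance = panel_reflectance * (raw - dark) / denom
    return np.clip(reflectance, 0.0, 1.0)
```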
If the whole hyperspectral image were processed directly, the effectiveness and accuracy of extracting the hidden information would suffer from interference by other content. Therefore, in order to reduce this interference, the image needed to be cropped.
2.5. Visual Enhancement
For the sootiness in the mural, direct feature extraction is usually unable to achieve good, precise results (see
Section 4); therefore, visual enhancement is an essential step.
The visual enhancement was done in three steps (see Figure 2). First, the MNF and inverse MNF transforms were applied to enhance the features of the pattern and to reduce the features of the scratches and sootiness in the mural's background. This transformation performs two principal component analyses, respectively transforming the noise covariance matrix of the data and the noise-whitened data, and finally retains the components of the hyperspectral image with a large signal-to-noise ratio (SNR). In this process, the MNF transform was performed first, so that the hyperspectral image was represented by several components; the components containing pattern information were selected and the inverse MNF transform was performed on them. Then, the mean spectra of the pattern and the background were obtained, and the feature bands in the range where they differ most were selected. Next, image subtraction was applied to further enhance the pattern information. The formula used in this study was B1 − B2, with B1 and B2 representing the bands at the maximum and minimum of the feature band range, respectively. This helped to enhance the pattern information, and a single-band grayscale image was obtained after this operation.
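The sketch below illustrates this enhancement step with plain NumPy, under the assumption that the noise covariance can be estimated from differences of horizontally adjacent pixels; the number of retained components and the two feature band indices are placeholders that would in practice be chosen from the mean spectra as described above.

```python
import numpy as np

def mnf_denoise(cube, n_keep=10):
    """Forward MNF, keep the n_keep highest-SNR components, then inverse MNF."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    Xc = X - mu

    # Estimate the noise covariance from differences of horizontally adjacent pixels.
    diffs = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
    noise_cov = np.cov(diffs, rowvar=False)
    signal_cov = np.cov(Xc, rowvar=False)

    # Noise whitening followed by PCA of the whitened data (the two PCAs of MNF).
    d, E = np.linalg.eigh(noise_cov)
    d = np.maximum(d, 1e-12)
    W = E / np.sqrt(d)                       # noise-whitening matrix
    cov_w = W.T @ signal_cov @ W
    _, P = np.linalg.eigh(cov_w)
    P = P[:, ::-1]                           # order components by decreasing SNR
    A = W @ P                                # forward MNF transform matrix

    Y = Xc @ A                               # forward MNF
    X_hat = Y[:, :n_keep] @ np.linalg.pinv(A)[:n_keep, :] + mu   # inverse MNF
    return X_hat.reshape(rows, cols, bands)

def band_subtraction(cube, b1, b2):
    """B1 - B2 subtraction of the two feature bands, returning a grayscale image."""
    return cube[:, :, b1] - cube[:, :, b2]

denoised = mnf_denoise(cube_subset, n_keep=10)            # cube_subset from the earlier sketch
enhanced = band_subtraction(denoised, b1=400, b2=150)     # illustrative feature band indices
```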
2.6. Extraction of Hidden Information
After the image subtraction, the digital number (DN) value range of the image was arbitrary. In order to make the target more consistent with human vision and facilitate the subsequent selection of the threshold and extraction, the DN value of each pixel in the subtracted hyperspectral image was transformed by the following formula:
$DN' = DN_{\max} - DN$ (2)

In Formula (2), for the image obtained after subtraction, $DN'$ is the transformed DN value, $DN_{\max}$ is the maximum of all of the DN values, and $DN$ is the DN value of each pixel.
Finally, density slicing was performed on the grayscale image obtained after the DN value transformation in order to extract the patterns under the sootiness. Density slicing assumes that a given kind of substance occupies a certain range of values in the grayscale image, and the pixels in this range are separated from the image to form a class. It uses thresholds for image segmentation, so that all pixels with a gray value greater than or equal to a certain threshold are classified as the target substance. The threshold was determined by examining the extraction accuracy under different values. This approach achieves better extraction results when there is a strong contrast between the target and the background.
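Putting the DN transformation and the density slicing together, a hedged sketch might look as follows; Formula (2) is applied here as a max-minus-DN inversion, and the threshold value is purely illustrative.

```python
import numpy as np

def transform_dn(image):
    """Formula (2): DN' = DN_max - DN, inverting the subtracted image for thresholding."""
    return image.max() - image

def density_slice(gray, threshold):
    """Classify every pixel with a transformed DN value >= threshold as the target pattern."""
    return (gray >= threshold).astype(np.uint8)

gray = transform_dn(enhanced)                                      # 'enhanced' from the previous sketch
pattern_mask = density_slice(gray, threshold=0.35 * gray.max())    # threshold tuned by trials
```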
4. Comparison with Other Methods
Interactive histogram stretching is a commonly used image enhancement method; the main stretching methods are linear, equalization, Gaussian, and square-root stretching. In order to compare the image enhancement effect of the method in this paper with that of histogram stretching, each of these stretches was applied to the original hyperspectral image of area 1. By adjusting the stretching range, the enhanced image with the best visual effect was obtained; the results are shown in
Figure 9.
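For reference, two of the four stretches can be sketched as follows; the linear stretch limits (here the 2nd and 98th percentiles) are assumptions, and the Gaussian and square-root stretches follow the same pattern.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linearly map the chosen percentile range of the band to [0, 255]."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    out = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

def equalization_stretch(band, n_bins=256):
    """Histogram equalization: map values through the cumulative distribution."""
    hist, edges = np.histogram(band.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    eq = np.interp(band.ravel(), edges[:-1], cdf).reshape(band.shape)
    return (eq * 255).astype(np.uint8)
```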
The original hyperspectral image without MNF transform was directly subjected to image subtraction, and the result is shown in
Figure 10. It can be seen that the removal of scratches and noise bands by the MNF transform had a significant effect on image enhancement.
In addition, blind source separation technologies, such as principal component analysis (PCA) and independent component analysis (ICA), have also proved able to improve the readability of degraded images and reveal hidden information [23]. In order to compare the image enhancement effect of the method in this paper with that of PCA and ICA, these methods were applied to the original hyperspectral image of area 1. The components with the most enhanced visual effect after the PCA and ICA transforms were selected, and the results are shown in
Figure 11.
The results show that the method mentioned in this paper achieved better image enhancement effects.
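A sketch of how the PCA and ICA baselines can be produced with scikit-learn is given below; the number of components is an assumption, and the cube is reshaped to a (pixels, bands) matrix before decomposition.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rows, cols, bands = cube_subset.shape                # cube_subset from the earlier sketch
X = cube_subset.reshape(-1, bands)

pca_images = PCA(n_components=10).fit_transform(X).reshape(rows, cols, 10)
ica_images = FastICA(n_components=10, max_iter=1000).fit_transform(X).reshape(rows, cols, 10)
# Each component image is then inspected visually and the most enhanced one is kept.
```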
Image classification is a common method for feature extraction and recognition. For hyperspectral images, the Spectral Angle Mapper (SAM) and the Support Vector Machine (SVM) are two commonly used classification methods. To compare the results obtained by the proposed method with those of SAM and SVM, both classifiers were applied to the data. For SAM classification, the spectral curves of each area needed to be calculated first. The similarity between two spectral curves is determined by calculating the generalized angle between them; the smaller the angle, the more similar the two curves. The threshold was adjusted over multiple trials, and the visual effect was best when the thresholds were 0.025 for area 1 and 0.012 for area 2, respectively. The result of SAM is shown in
Figure 12b,e. As for SVM, an efficient machine learning algorithm, the basic idea is to find an optimal classification hyperplane that maximizes the margin between the two classes of areas. The result of SVM is shown in
Figure 12c,f.
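The two baselines can be sketched as follows; the reference spectrum, training spectra, and labels are assumed to be supplied by the analyst, and only the SAM thresholds (0.025 and 0.012) come from the study itself.

```python
import numpy as np
from sklearn.svm import SVC

def sam_classify(cube, reference, threshold):
    """Label pixels whose spectral angle to the reference spectrum is below the threshold."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    cos = (X @ reference) / (np.linalg.norm(X, axis=1) * np.linalg.norm(reference) + 1e-12)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return (angles < threshold).reshape(rows, cols).astype(np.uint8)

def svm_classify(cube, train_spectra, train_labels):
    """Train an RBF-kernel SVM on labeled pixel spectra and predict a class map for the cube."""
    rows, cols, bands = cube.shape
    clf = SVC(kernel="rbf").fit(train_spectra, train_labels)
    return clf.predict(cube.reshape(-1, bands)).reshape(rows, cols)
```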
Compared to the two methods above, the proposed method in this paper provides better visual effects with higher accuracy. As shown in
Table 4, for area 1, the extraction accuracy of the SAM classification was 84.72% and that of the SVM classification was 80.01%. For area 2, the accuracies of the SAM and SVM methods were 57.46% and 71.74%, respectively. These accuracies are lower than that achieved by the proposed method, which demonstrates its effectiveness. In this study, the target area was enhanced first and the sootiness-covered pattern was then extracted, which effectively improved the accuracy of the extraction. Therefore, the proposed method offers clear advantages for this task.
5. Conclusions
Extracting hidden information from damaged murals is a challenging task in cultural heritage conservation. The objective of this study was to develop a new method of hidden information detection using hyperspectral imaging techniques. Hyperspectral images of a Guanyin Temple mural were used as the case study. A series of spectral analysis approaches was integrated and applied to enhance the visual expression of the hidden information, and the pattern under the sootiness was extracted effectively from the hyperspectral images. This was done by applying the MNF transform to reduce the features of the scratches and sootiness in the mural background, and by using spectral feature analysis and image subtraction to enhance the characteristics of the ancient mural, which highlighted the blurred patterns under the sootiness and increased the readability and artistic expressiveness of the mural. Finally, the pattern covered by the sootiness was extracted by density slicing. By adjusting the threshold, the overall accuracy reached 88.97%.
However, as mentioned above, there are many kinds of hidden information, such as hidden patterns and text, underdrawings and manuscripts, smeared information, and restoration marks. The proposed method is not suitable for all situations, because the spectral curves of different substances differ from each other; to enhance the target information, a certain difference in reflectance between the target and the other substances is required. Furthermore, the input of the image subtraction needs to be chosen according to the specific extraction object. Therefore, further studies are needed to deal with different types of occlusion and to adapt the method to different extraction targets.