Article

Advances in Automated Pigment Mapping for 15th-Century Manuscript Illuminations Using 1-D Convolutional Neural Networks and Hyperspectral Reflectance Image Cubes

by Roxanne Radpour 1,2,*, Tania Kleynhans 3, Michelle Facini 1, Federica Pozzi 4, Matthew Westerby 1 and John K. Delaney 1,*

1 National Gallery of Art, Washington, DC 20565, USA
2 Departments of Art Conservation and Electrical & Computer Engineering, College of Arts and Sciences, College of Engineering, University of Delaware, Newark, DE 19716, USA
3 Hydrosat, Washington, DC 20036, USA
4 Centro per la Conservazione ed il Restauro dei Beni Culturali “La Venaria Reale”, 10078 Venaria Reale (Turin), Italy
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 6857; https://doi.org/10.3390/app14166857
Submission received: 1 July 2024 / Revised: 29 July 2024 / Accepted: 30 July 2024 / Published: 6 August 2024
(This article belongs to the Special Issue Advances in Analytical Methods for Cultural Heritage)

Abstract:
Reflectance imaging spectroscopy (RIS) is invaluable in mapping and identifying artists’ materials in paintings. The analysis of the RIS image cube first involves classifying the cube into spatial regions, each having a unique reflectance spectrum (endmember). Second, the endmember spectra are analyzed for spectral features useful for identifying the pigments present, in order to create labeled classes. The analysis process for paintings remains semi-automated because of the complex diffuse reflectance spectra that result from the artist’s use of intimate pigment mixtures and optically thin paint layers. As a result, even when a group of related paintings is analyzed, each RIS cube is analyzed individually, which is time-consuming. New approaches are needed to analyze RIS cubes of related paintings more efficiently, given the growing interest in studying related paintings within a group of artists or artistic schools. This work builds upon prior investigations of 1-D spectral convolutional neural networks (CNNs) to address this need in two ways. First, an expanded training set was used—ten illuminated manuscripts created by artists stylistically grouped under the notname “Master of the Cypresses” (15th-century Seville, Spain). Second, two 1-D CNN models were trained from the RIS cubes: one on reflectance spectra and one on their first derivative. The results showed that the first derivative-trained CNN generally performed better than the reflectance-trained CNN in creating accurate labeled material maps for these illuminated manuscripts.

1. Introduction

Investigating artists’ painting methods, including the identification and distribution of pigments, paint application methods, and compositional aesthetics, can reveal the facture, or working methods, of a single artist or a group of artists [1,2,3]. Understanding artists’ pigment selection and unique paint application methods, especially for certain pictorial elements within a composition, is essential evidence for art conservators, curators, and historians. Furthermore, identifying paint materials and their applications among artists from the same school can reveal differences and changes in working methods over time. Specific pigments can also act as markers in providing a historical context for a work of art [4,5,6]. Thus, while identifying a single instance of a colorant is informative, the ability to discuss how that material was mixed and applied in the broader context of an entire painting is much more instructive as it gives insight into an artist’s working methods.
The adoption of chemical imaging techniques on the macroscale to acquire large-scale maps of paintings and other 2D artworks provides novel insights into their study [7,8,9]. In particular, reflectance imaging spectroscopy (RIS) has emerged as a powerful imaging modality in conservation science to help identify and map pigments on polychrome works of art [10,11,12]. Each pixel in the recorded image contains a diffuse reflectance spectrum representing the localized interaction of light with the painting surface [13]. This results in a 2D spatial image with a third dimension, the diffuse reflectance spectra, in the form of a data-rich 3D image cube [14]. The reflectance spectrum encodes information about electronic transitions and vibrational features, which can aid in the identification of the pigments present. The analytical objective is to find the pigments used by the artist based on representative spectra extracted from the data cube (a set of endmembers) and to create a labeled material map that shows the spatial distribution of these endmember spectra across the image scene, which gives direct insight into the painting techniques employed by the artist [15,16]. Various approaches can be used to create material maps that demonstrate the variability and distribution of the pigments present in an artwork. Since the image cubes are large, computational approaches require varying amounts of processing time.
The increased use of RIS in conservation science laboratories has been driven by the wider availability of lower-cost hyperspectral cameras [17]. However, the analysis of the reflectance image data cubes, especially from paintings, remains laborious and time-intensive, even for an expert. Thus, processing of the image cubes remains the bottleneck in the overall RIS workflow. Currently, the processing of these image cubes is performed using multivariate statistical software tools developed for remote sensing applications. One of the software tools used for the analysis of reflectance cubes of paintings is the ENVI Spectral Hourglass Wizard, which first involves reducing the spectral dimensionality, through principal component transforms, to ease the manual process of finding endmembers. While the ENVI Spectral Hourglass Wizard has been found to be robust, it works with a reduced set of reflectance spectra from the image cube and is semi-manual, requiring an experienced user [18,19]. Creating a good classification map from the endmembers can take hours to a few days.
Other multivariate statistical methods that use convex hull geometries, such as MaxD [20] or Sequential Maximum Angle Convex Cone (SMACC) [21], have proven robust in the reflectance remote sensing community. These workflows use all of the spectra in the image cube and are automatic, taking seconds to run. However, when applied to reflectance image cubes from paintings, they only find about 70 percent of the spectral endmembers present [20]. This is likely because these methods assume a linear mixing model, which is not representative of pigment mixtures in paintings. Artists produced paints by grinding pigments together; as a result, the pigment particles are in intimate contact with each other in the paint layer (also referred to as intimate mixtures). The reflectance spectrum of such a paint is not simply the sum of the reflectance spectra of the individual pigments. Rather, the reflectance spectrum has a non-linear dependence on the concentration of the pigment particles and their wavelength-dependent absorption and scattering coefficients [22,23]. Thus, while such methods offer a good starting point for finding spectral endmembers, manual methods are required to find the remaining endmembers.
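The non-linearity of intimate mixtures can be illustrated with the single-constant Kubelka–Munk model: averaging two pigments in K/S space and converting back to reflectance gives a different (darker) result than the linear reflectance average that convex-geometry methods assume. A minimal sketch, with hypothetical reflectance values:

```python
import numpy as np

def ks(r):
    # Single-constant Kubelka-Munk transform for an optically thick layer
    return (1.0 - r) ** 2 / (2.0 * r)

def ks_to_r(q):
    # Inverse transform: diffuse reflectance from a K/S ratio
    return 1.0 + q - np.sqrt(q * (q + 2.0))

# Hypothetical reflectances of two pure pigments at one wavelength
r1, r2 = 0.8, 0.2

# Linear (areal) mixing, as assumed by convex-geometry endmember finders
areal = 0.5 * r1 + 0.5 * r2

# Intimate mixing: average in K/S space, then convert back to reflectance
intimate = ks_to_r(0.5 * ks(r1) + 0.5 * ks(r2))
# intimate is noticeably darker than areal: the absorbing pigment dominates
```

Because the intimate-mixture spectrum does not lie on the line segment between the two pure-pigment spectra, convex hull methods can miss such mixtures as endmembers.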
Various researchers have explored the use of the Kubelka–Munk model to address the non-linearity in RIS data resulting from the intimate mixture of pigments. Here, the reflectance cube is converted into a ratio of the absorption and scattering spectra (denoted K and S, respectively), assuming the paint layer is optically thick. The K/S spectra in the RIS image cube are then linearly fitted with combinations of K/S spectra from a library of K/S spectra of pigments [24]. The approach works for paintings where an optically thick paint layer of intimately mixed pigments is present, but it has trouble providing correct pigment mixtures, as other pigment combinations can, on occasion, be found to also fit the painting spectra. Recently, a neural network was explored to select the pigments used in a given mixture to avoid this problem, and the results from this study offer a way around the uniqueness problem [25]. Still, artists often use non-optically thick paint layers and thick glazes to achieve the wide range of colors in paintings; thus, further work is needed to find a robust solution to labeling pigments using the Kubelka–Munk model.
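The fitting step described above amounts to a linear least-squares problem once the data are in K/S space. The sketch below uses a hypothetical two-pigment library over four bands and recovers the mixture coefficients of a synthetic "painting" spectrum:

```python
import numpy as np

def ks(r):
    # Kubelka-Munk K/S ratio for an optically thick paint layer
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical library: reflectance spectra of two pure pigments (4 bands)
library_r = np.array([
    [0.70, 0.60, 0.30, 0.20],
    [0.15, 0.25, 0.55, 0.65],
])
library_ks = ks(library_r)

# Synthesize a "painting" spectrum as a 30/70 intimate mixture in K/S space
true_c = np.array([0.3, 0.7])
paint_ks = true_c @ library_ks

# Linear least-squares fit of the paint's K/S spectrum against the library
coeffs, *_ = np.linalg.lstsq(library_ks.T, paint_ks, rcond=None)
```

In practice the uniqueness problem arises because different subsets of a large library can fit the same K/S spectrum almost equally well, which is what the neural network pigment selection in [25] is meant to address.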
As an alternative approach to a physics-based model to automatically produce labeled pigment maps from reflectance image cubes, we developed a workflow which involves building representative labeled spectral libraries from the reflectance spectra of paints found in artworks produced by a school, workshop, or group of affiliated artists. Then, these libraries are used to train a 1-D convolutional neural network to apply to reflectance image cubes of other paintings from the same workshop or school. The central idea is that the artist within a given school will use similar pigment mixtures and paint layering to achieve a range of colors in their work. Differences between their artworks point to changes in working methods, which is of interest. This workflow was first tested using four illuminations from the Laudario of Sant’Agnese (c. 1340), with three used for the training and one held for testing [26]. The labeled map results were promising, only failing if a pigment or pigment mixture was not included in the training set.
In this paper, we further explore this workflow by expanding the number of artworks in the training and testing phases, as well as training a second neural network using first-derivative-of-reflectance (calculated with respect to wavelength) libraries and test cubes. The first-derivative neural network was trained to see whether the enhanced spectral features that result from the first derivative would make the workflow more robust. Working with first-derivative reflectance image cubes has, in general, been found to produce more robust endmember maps [27,28] once the endmembers are found. For this study, a collection of 15th-century Sevillian manuscript illuminations—cuttings of miniatures that originally were part of a large choir book—was selected. These works have been attributed to the “Master of the Cypresses”, a notname for an anonymous artist that has been used to describe a group of miniaturists producing illuminations in choir books in the middle to late 15th century, often featuring cypress trees in the backgrounds of painted scenes [29]. This label is debated by art historians. Under careful observation, the facture, or working methods, of multiple artists can be discerned across the collection of illuminations grouped under this label. However, since the paintings were produced for discrete series of choir books and appear to employ similar materials, pigment application choices, and stylistic traits, the collection provides an excellent opportunity to utilize a larger number of artworks from an interconnected group of painter-illuminators, providing sufficient variability and data to train the neural networks. In this study, a second classification approach, the spectral angle mapper applied with the same training libraries, is also utilized to provide a point of comparison for the neural network results.

2. Materials and Methods

2.1. Illuminations

The illuminations used in this research to train and test the two convolutional neural network (CNN) models are those currently attributed to the so-called “Master of the Cypresses”. There are thirteen illuminations in the Rosenwald Collection at the National Gallery of Art (Washington, D.C.), which are cuttings from the same set of choir books, produced for the monastery of San Isidoro del Campo (Santiponce, Spain). All of these cuttings have historiated initials that are figural (e.g., featuring scenes with King David, a saint, or monks performing the liturgy) or that have designs involving only flowers and animals. Ten of these were used to build the spectral libraries used to train the two CNN models. Three illuminations were held in reserve to test the two CNN models: Initial I with David (1964.8.1226, dim. 15.2 cm × 11.5 cm) (Figure 1a), Initial D (1964.8.1223, dim. 15.2 cm × 16.3 cm) (Figure 1b), and Initial A (1964.8.1220, dim. 18.9 cm × 20.9 cm). Model results from the first two initials are presented here and are compared to a truth classification map.

2.2. RIS Image Cube Collection and Pre-Processing

2.2.1. Reflectance Imaging Spectroscopy

Visible to near-infrared (VNIR, 400–1000 nm) reflectance imaging spectroscopy (RIS) was used to collect image cubes of the illuminations. These miniatures were imaged using a modified hyperspectral camera (SOC-730, Surface Optics Corporation, San Diego, CA, USA) with a V10E transmission grating spectrometer (Specim Corporation, Oulu, Finland) and a high-sensitivity backside electron-multiplying (EM) CCD detector array (ProEM-1024, Princeton Instruments, Trenton, NJ, USA), with a per-line exposure time of 150 milliseconds [30]. The camera features 2.4 nm spectral sampling, 280 spectral channels, and 1024 spatial pixels along the slit of the spectrometer. Two 50 watt Solux lamps (4700 K, Tailored Lighting, Inc., Rochester, NY, USA) powered by a DC power supply were used to illuminate the miniatures at ∼45 degrees from the painting normal (Figure 2). An image cube of a diffuse white panel (Labsphere, Inc., North Sutton, NH, USA) was collected to correct for non-uniformity in the lighting and to calibrate the spectral data to apparent reflectance. This was carried out by dividing each illumination’s dark noise-corrected image cube by the white panel’s dark noise-corrected image cube. The reflectance image cubes were collected with a spatial sampling of 0.2 mm per pixel.
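The calibration described above (dividing the dark-corrected scene cube by the dark-corrected white-panel cube) can be sketched as follows; the array shapes and values are hypothetical:

```python
import numpy as np

def to_apparent_reflectance(scene, scene_dark, white, white_dark):
    """Convert a raw image cube to apparent reflectance.

    Sketch of the calibration in the text: subtract the dark-noise cubes
    from both the scene and the diffuse white panel, then divide. All
    inputs are (rows, cols, bands) arrays; numpy broadcasting also lets a
    single white-panel measurement correct every image row.
    """
    return (scene - scene_dark) / (white - white_dark)

# Hypothetical uniform cubes: raw counts 0.6, dark 0.1, white panel 1.1
scene = np.full((2, 2, 3), 0.6)
dark = np.full((2, 2, 3), 0.1)
white = np.full((2, 2, 3), 1.1)
refl = to_apparent_reflectance(scene, dark, white, dark)  # 0.5 everywhere
```

A real pipeline would also divide by the white panel's certified reflectance factor (typically near 99%), which is omitted here for brevity.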
Each of the RIS image cubes was initially pre-processed by applying masks across the image scene to exclude some of the areas with gold foil (due to issues with specular reflectance) as well as background content not relevant to the artistic decoration (e.g., bulk areas of parchment). These masks were created using the region of interest (ROI) tool in ENVI (Environment for Visualizing Images, L3Harris Technologies, Melbourne, FL, USA), an image processing software package for processing and analyzing remote sensing data. While the spectra from the reflectance image cubes have some noise (approximately <0.3% RMS in reflectance), it was necessary to reduce the noise further before taking the first derivative of the image cubes. This was carried out using the Minimum Noise Fraction (MNF) transform function within ENVI [31], which produces a set of eigenimages that represent relevant image content as well as noise in the image cube. To de-noise the image cube and bring it back to the original diffuse reflectance space, we applied an inverse MNF transform, keeping only the set of eigenimages with image content (i.e., observable painting features). The first-derivative RIS cubes were then calculated from the MNF de-noised diffuse reflectance image cubes using derivative filtering implemented in a Matlab script. The derivative filtering uses a top hat filter whose width was set to 5 channels. Because of the width of the filter, the spectral range of the derivative cubes was reduced from 400–950 nm to 414–900 nm.
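The derivative filtering step can be approximated by a central difference over a 5-channel window; the exact top-hat filter coefficients used in the Matlab script are not reproduced here, so this is only a sketch of the idea, including the edge trimming that shrinks the spectral range:

```python
import numpy as np

def first_derivative(cube, wavelengths, width=5):
    # Central difference over a `width`-channel window along the spectral
    # (last) axis. Edge channels that the window cannot cover are trimmed,
    # which is why the usable spectral range of the derivative cube shrinks.
    half = width // 2
    dlam = wavelengths[2 * half:] - wavelengths[:-2 * half]
    deriv = (cube[..., 2 * half:] - cube[..., :-2 * half]) / dlam
    return deriv, wavelengths[half:-half]

# Hypothetical 21-band cube whose spectra rise linearly with wavelength
w = np.linspace(400.0, 448.0, 21)
cube = np.broadcast_to(0.001 * w, (2, 2, 21))
deriv, w_trim = first_derivative(cube, w)  # constant slope of 0.001
```

For a linear spectrum the derivative is constant, which makes for an easy sanity check of any filter implementation.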

2.2.2. Spectral Training Libraries

Two spectral libraries of material-labeled classes were built from the 10 illuminations reserved for training (see Table 1) to train the Reflectance and Derivative CNN classification models. The materials needed to build classes are those present in the RIS image cube: the paints, gold leaf, and parchment. The classes for the Reflectance CNN were manually built using the region of interest (ROI) tool in ENVI to extract spatial areas of high spectral similarity from the reflectance image cube (Figure 3a), and, hence, the same material composition. The spectral library for the Derivative CNN was created using the exact same ROIs used for the Reflectance CNN but drawn directly from the first-derivative RIS cube. See Figure 3b, where the false color image of illumination 1964.8.1218’s derivative cube shows that the extracted ROIs correspond exactly to those in the reflectance cube in Figure 3a.
The most complex part of building the library was to create the pigment-labeled classes representing the colorants in the illuminations. The objective was to create labeled classes encompassing a wide range of color tones and hues involving a primary colorant or mixture/layers of colorants. For a single colorant, this would involve collecting reflectance spectra having a range of hues and tones. Each class in the library spans the range of hues and tones the artistic school is achieving with these pigments. For example, in defining the pigment malachite class, several green colored areas having the same spectral features specific to malachite in the VNIR were manually selected from the dragon heads with a light blue-green ROI (Figure 3a). The spectra in each class were then plotted to make sure that no clear spectral outliers (e.g., other materials) were accidentally included in the training class. Care was also taken to ensure the spectra for a given class encompassed the range of variation in hue and tone present. This was carried out for all malachite applications across the reserved training illuminations; extracted spectra with similar profiles across different illuminations were placed into the same training class, “Malachite”. The same process was applied to create the other labeled paint classes. In the end, twenty-seven training classes were created for each spectral library.
To confirm or supplement the pigment assignments by an analysis of the RIS spectra, results from single-spot spectroscopic techniques fiber-optic reflectance spectroscopy (FORS, 350–2500 nm, ASD Fieldspec 3, Malvern PANalytical, Malvern, UK) and X-ray fluorescence spectroscopy (XRF, in-house design: Rh X-ray source, 50 kV, 0.75 mA (XOS, NY) with a silicon drift detector (Vortex-90EX, Hitachi High-Technologies Science America, Inc., Schaumburg, IL, USA)) were used.
The labels for classes representing paint applications were assigned by the pigment or pigments that dominate the RIS spectral signature. For example, a class label of “Ultramarine” meant that absorptions due to ultramarine’s molecular structure dominated the spectral features of the reflectance spectra in that class. The reflectance values for ultramarine’s absorption centered at 600 nm are low, below 20% (see Figure 3c). Reflectance values increase in areas that have generous amounts of white pigment mixed into paints. If another pigment had a significant contribution, the paint was noted as either a mixed pigment class (“with”) or a layered pigment class (“over”) with two pigment names, e.g., “Azurite with red lake” or “Red lake over red lead”. Areas of layered pigments were examined and confirmed under the microscope. The training pigment classes identified in total comprised natural and synthetic, as well as inorganic and organic, pigments (Table 2): azurite (Cu₃(CO₃)₂(OH)₂), ultramarine ((Na,Ca)₈Al₆Si₆O₂₄(S,SO₄)), malachite (Cu₂CO₃(OH)₂), red lead (Pb₂²⁺Pb⁴⁺O₄), vermilion (HgS), an insect-based red lake, a goethite (FeO(OH))-rich ochre, a hematite (Fe₂O₃)-rich ochre, lead white (hydrocerussite, Pb₃(CO₃)₂(OH)₂), lead–tin yellow, folium (chrozophoridin), and a carbon-based black material. Some class labels are simply the observed color, as we were not able to determine the exact pigment composition: for example, “Brown”, which has spectral features of an iron earth pigment.
For the identification of folium, Surface-Enhanced Raman Spectroscopy (SERS) analysis was carried out on one micro-sample taken from a purple area in the medallion of the illumination Initial D (1964.8.1223) at the Department of Scientific Research of The Metropolitan Museum of Art using a Bruker Senterra Raman spectrometer equipped with a CCD detector and an Olympus 20× long working distance microscope objective. Samples were treated with hydrofluoric acid vapor in a polyethylene micro-chamber for 5 min [32] and then covered with a 2 μL drop of silver nanoparticles, followed by 0.5 μL of 0.5 M potassium nitrate as an aggregating agent. Silver colloids were prepared following a previously published synthesis [33]. Excitation at 488 and 633 nm was provided by a Spectra Physics Cyan solid state laser and a Melles Griot He–Ne laser, respectively. An output laser power of 4 mW was employed for the analysis, with two integrations of 30 s.
Some training classes ended up containing a very large number of spectra because the corresponding materials were used extensively throughout all of the illuminations (e.g., “Red lake” and “Malachite”), while some classes had fewer spectra because their pigments were used less frequently, such as “Red earth”. To limit the impact of imbalanced class sizes on the neural network’s training, a random per-class sub-selection was made to reduce the size of the larger classes down to 6000 spectra each while maintaining sufficient spectral variability within the classes (Figure 3c,d). The resultant two training spectral libraries, each containing all twenty-seven classes, were then used to test the different classification methods described in the following section.
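The per-class sub-selection can be sketched as follows; the 6000-spectrum cap comes from the text, while the toy data in the usage example are hypothetical:

```python
import numpy as np

def balance_classes(spectra, labels, cap=6000, rng=None):
    # Randomly subsample any class with more than `cap` spectra down to
    # `cap`, leaving smaller classes untouched.
    rng = np.random.default_rng(rng)
    keep = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        if len(idx) > cap:
            idx = rng.choice(idx, size=cap, replace=False)
        keep.append(idx)
    keep = np.concatenate(keep)
    return spectra[keep], labels[keep]

# Hypothetical tiny library: 10 spectra of class 0, 3 of class 1, cap of 5
spectra = np.arange(26.0).reshape(13, 2)
labels = np.array([0] * 10 + [1] * 3)
spectra_b, labels_b = balance_classes(spectra, labels, cap=5, rng=0)
```

Because the selection within each class is uniform at random, the spectral variability of the class is preserved in expectation.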

2.3. Pigment Classification Models

2.3.1. Library Spectra Spectral Angle Mapping (LS-SAM)

To determine the added value of the CNN, the spectral angle mapper (SAM), a simple classification algorithm, was used with the training libraries on the test paintings. SAM measures the degree of spectral similarity between each spectrum in the image cube and each spectrum in a reference spectral library. Using an inner product to calculate the angle between two vectors, one representing a “known” material’s spectrum and the other a spectrum from a pixel in the image scene, SAM assigns to each pixel the label of the library spectrum with the smallest angle. Performing this calculation for all of the pixels in the image scene produces a material map of the data cube where each unique color represents a different reference material. In this approach, every pixel is classified, with no user-defined threshold.
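The spectral angle computation described above reduces to an inner product per pixel–library pair; a minimal sketch with hypothetical two-band data:

```python
import numpy as np

def sam_classify(cube, library):
    """Assign each pixel the label (row index) of the library spectrum with
    the smallest spectral angle.

    cube: (rows, cols, bands) reflectance image cube
    library: (classes, bands) reference spectra
    """
    pixels = cube.reshape(-1, cube.shape[-1])
    # cosine of the angle between every pixel and every library spectrum
    num = pixels @ library.T
    denom = (np.linalg.norm(pixels, axis=1, keepdims=True)
             * np.linalg.norm(library, axis=1))
    angles = np.arccos(np.clip(num / denom, -1.0, 1.0))
    # every pixel gets the closest class: no rejection threshold
    return np.argmin(angles, axis=1).reshape(cube.shape[:2])

# Hypothetical 1x2 cube and two-class library
library = np.array([[1.0, 0.0], [0.0, 1.0]])
cube = np.array([[[0.9, 0.1], [0.1, 0.9]]])
labels = sam_classify(cube, library)
```

Because the angle depends only on spectral shape, not magnitude, SAM is relatively insensitive to overall brightness differences between the library and the painting.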

2.3.2. ENVI Spectral Hourglass Wizard (ENVI-SHW)

In ENVI, the Spectral Hourglass Wizard (ENVI-SHW) is a semi-automated approach to creating a library of reference spectra (endmembers) by a manual selection of clustered data in an n-dimensional (n-D) visualizer. The main advantage of this method is that, even though the endmembers are not truly linearly unmixable, one can cluster the data in n-D space and select clusters visually. Final pigment classification is performed using SAM (described in the previous section). Here, spectral angle tolerance values are assigned to each endmember, determined by the expert user by adjusting the histogram of angles per endmember and analyzing the spectral features of spectra in the classified areas to evaluate the degree of similarity [12,20]. Thus, pixels whose spectral angle values do not fall within the tolerance values of any endmember are left unclassified. The ENVI-SHW was used in this study to create the truth material maps, which were compared to the maps produced automatically by the different classification methods proposed here using the two spectral libraries built from the training cubes.

2.3.3. Convolutional Neural Network (CNN) Classification Models

Building on the success of pigment identification with neural networks [26], the same 1-D spectral convolutional neural network was used to automatically create material maps for a variety of illuminations. See Figure 4 for the design architecture. Although the architecture is the same, the training data differed from the previous research in that derivative spectra were used to create the Derivative CNN. The training data described in Section 2.2.2 were used to train and validate two classification models: the first was trained using the reflectance spectra as input and the second using the derivative data. Both models had the same architecture, with two hidden layers, a rectified linear unit (ReLU) activation function [34], and the softmax function in the last output layer. The softmax function converts the network’s final-layer outputs into a probability distribution over the classes in a multi-class problem. Since this was a multi-class classification, categorical cross-entropy was used as the loss function to optimize the models. The data were split 80/20 for training and validation, respectively. Initially, a 90/10 split was used, but because one class was very small, this was changed so that every validation set had at least a handful of spectra to evaluate. Training batches of 50 samples were input to the model, and training ended after 100 epochs (cycles through the full training dataset), with a learning rate of 0.01 and the Adam optimizer. The Adam optimizer, used to train neural networks by minimizing their loss function, is widely used since it is fast and robust across networks. The outputs of the models are the probabilities of each pixel belonging to each class. The classification threshold was set at 90% to reduce the number of false positives. See [26] for a detailed explanation of the neural network parameters.
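The final softmax-plus-threshold step can be sketched in a few lines; the 90% threshold is from the text, while the logit values in the usage example are hypothetical:

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_with_threshold(logits, threshold=0.90, unclassified=-1):
    # Keep a pixel's prediction only if the winning class probability
    # exceeds the threshold; otherwise mark it unclassified to reduce
    # false positives.
    p = softmax(logits)
    labels = p.argmax(axis=-1)
    labels[p.max(axis=-1) < threshold] = unclassified
    return labels

# Hypothetical logits for two pixels: one confident, one ambiguous
out = classify_with_threshold(np.array([[10.0, 0.0, 0.0],
                                        [1.0, 0.9, 0.8]]))
```

The first pixel's softmax probability is nearly 1 for class 0, so it keeps its label; the second pixel's probabilities are all near 1/3, so it is left unclassified.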

3. Results and Discussion

Two 1-D convolutional neural network (CNN) models, one using the reflectance data (Reflectance CNN) and another the first derivative of reflectance data (Derivative CNN), were trained using spectral libraries generated from ten illuminated manuscripts produced for the same choir book. The training data comprised twenty-seven material classes (pigment mixtures and/or layered paints) identified in the combined large data set (Table 2). The CNNs were first evaluated by mean-per-class accuracy metrics and confusion matrices to obtain a measure of the models’ prediction capabilities. The CNNs and the LS-SAM classification models were then applied to the two test illuminations held in reserve, and the results are discussed in detail.

3.1. Neural Network Classification Models’ Evaluations

Two methods were used to evaluate the accuracy of each CNN and LS-SAM classification model. The first was a validation of the CNN models using k-fold cross-validation [35], a method often used to evaluate machine learning models. This was carried out by calculating the average mean-per-class accuracy using 5-fold cross-validation. For this cross-validation, the training data were divided into five parts: the model was trained on four parts and tested on the fifth. For the next round, a different part was used for the test set and the remaining four parts were used for training. After five iterations, the accuracy results of the five models were averaged. Note that 5-fold was used because of the 80/20 train/validation split explained in Section 2.3.3. The per-class accuracy results were averaged to produce a less biased estimate of the model’s performance. The Reflectance CNN model produced a mean-per-class accuracy of 98.252% and the Derivative CNN model produced a mean-per-class accuracy of 99.07%. The second method of evaluating the models involved visually comparing the truth ENVI-SHW labeled material maps to the models’ results to determine whether they predicted the correct assignment.
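The mean-per-class accuracy used here is the average of the per-class recalls, which keeps large classes from dominating the score. A minimal sketch with hypothetical labels:

```python
import numpy as np

def mean_per_class_accuracy(y_true, y_pred):
    # Average of the per-class recalls; insensitive to class imbalance,
    # unlike overall accuracy.
    classes = np.unique(y_true)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class))

# Hypothetical labels: class 0 recall is 3/4, class 1 recall is 1/1
y_true = np.array([0, 0, 0, 0, 1])
y_pred = np.array([0, 0, 0, 1, 1])
mpca = mean_per_class_accuracy(y_true, y_pred)  # (0.75 + 1.0) / 2 = 0.875
```

In a 5-fold cross-validation this quantity would be computed on each held-out fold and the five values averaged, as described above.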
In addition, a confusion matrix was created for each CNN model to visualize the results. Figure 5 displays both confusion matrices; the off-diagonal cells are darkened for better visualization. The strong diagonal indicates good agreement between the predicted and truth data. The Reflectance CNN confusion matrix suggests that the model will struggle somewhat to distinguish (and thus label) between red lake, red lake over vermilion, and red lake with white. Interestingly, this appears not to be the case for the Derivative CNN.

3.2. Results from Applying CNN and LS-SAM Classification Models to Image Cubes of Test Illuminations Held in Reserve

For each of the two reserved RIS illumination image cubes, all four classification models (Reflectance CNN, Derivative CNN, Reflectance LS-SAM, Derivative LS-SAM) were applied to produce labeled classification maps to be compared with a “truth”-labeled classification map produced using the ENVI-SHW tool to find spectral endmembers (Figure 6).
The first test illumination RIS image cube was Initial I with David. The semi-automatic ENVI-SHW model was used to find the spectral endmembers, and the SAM model was used to make the material “truth” map (Figure 6b). Thirteen spectral endmembers were found that described the artwork (see Table S1 in Supplementary Materials for specific spectral assignments used in the identification and Figure S1 for corresponding truth map with endmembers). No single endmember was found that mapped all of the gold leaf in the background. However, the endmember that mapped to the gold application on the central pillar of the letter I was kept. Most of the endmembers are dominated by a single pigment, although the color tone varies; in several cases, some endmembers were made of two different pigments layered (e.g., lead–tin yellow over malachite, red lake over red lead, and red lake over vermilion) or in two other cases, mixed pigments (lead white with ultramarine or red lake). Overall, the truth material map shows that the robe of King David was painted using a red lake with varying amounts of lead white to denote the highlighted and shadowed portions of the robe. The assignment of red lake is made from the spectral shape and presence of two small absorption sub-bands seen in the endmember spectra and the FORS spectra at ∼530 and 570 nm, consistent with insect-derived red lakes such as kermes [36]. The central column of the letter I plus the leaves in the marginalia were painted with ultramarine, with varying amounts of lead white to create visual highlights and shadows. Ultramarine was identified by a symmetric absorption at 600 nm. 
The top and bottom horizontal lines of the letter I were painted with a copper green pigment, malachite (identified by the broad near-infrared charge transfer absorption band and the short-wave infrared carbonate feature [37] (Figure S2)), on which lead–tin yellow (identified by the presence of lead and tin in the XRF spectra as well as the first-derivative reflectance spectra) brushstrokes were applied as highlights. The green belt is also painted with malachite. The orange-red leaves on the horizontal lines of the letter and emerging from the blue leaves were painted with red lead (identified by the sharp reflectance inflection at 567 nm), and in the shadowed areas red lake was added to darken it. Finally, lead–tin yellow was used to paint the bright yellow V at the top and bottom of the column of the letter I in a field of yellow-brown ochre. The same two pigments were used to denote the decorative central band of the column. The flesh of King David was painted with red and some yellow ochre along with a white pigment (likely lead white), and in the cheeks vermilion was used. Thus, a colorful palette was used to paint the illuminated initial.
When the two trained CNN and the two LS-SAM classification models were applied to the respective reflectance and first-derivative reflectance image cubes of Initial I with David, four labeled material class maps were obtained (Figure 6b), which could be compared with the “truth” map from the ENVI-SHW workflow. An examination of these maps revealed that the CNN and LS-SAM classification models used the same fourteen labeled classes to produce their maps. Moreover, thirteen of these labeled classes matched the pigments found in the endmembers obtained by applying the semi-automatic ENVI-SHW workflow. The differences between the maps of the CNN and LS-SAM library classification models lie in the spatial distribution of these labeled material classes; the same can be said for the differences between these four maps and the “truth” map. Thus, the first observation is that both the CNN and LS-SAM classification models performed well. The differences in these distributions, though, provide insight into how well each of the models performed. The pigment-labeled class areas where these differences are most marked are the light and dark areas of King David’s red robe, the light and dark blue areas of the letter I, the face of David, and the orange-red floral decoration elements. For example, there are differences in how certain labeled classes were used in parts of the shadowed portion of David’s red robe. The LS-SAM classification models, reflectance and derivative, found areas where the “red lake over vermilion” class was a better match to the spectra in the image cube than the correct “red lake” class. This was not the case for the two CNN models, which correctly assigned these areas to the “red lake” class. The “truth” map shows that there is no “red lake over vermilion” in the robe, indicating a misassignment by the LS-SAM classification models relative to the CNN models in these shaded areas.
The region of David’s head highlights further differences in the performance of the CNN and LS-SAM classification models (Figure 7), especially in the subtlety of the flesh tones, which was not a focus of the training given the complexity of how faces are painted. While the “truth” map arguably is not the best classification map, it does show that a small area of vermilion was applied for David’s red cheek and ochres with lead white for the more neutral (“nude”) flesh tone. Interestingly, all four (CNN and LS-SAM) maps show that the best description of David’s face requires two ochre classes. Moreover, all show an improvement over the “truth” map, pointing out a limitation of the ENVI-SHW workflow, which relies on a single endmember spectrum to assign a given endmember class. A clear difference between the Reflectance CNN and the Derivative CNN and LS-SAM models is apparent with regard to the red paint application in David’s cheek: only the Derivative CNN and Derivative LS-SAM models classified the red paint in David’s cheek correctly with the “vermilion” class. Interestingly, the LS-SAM models (reflectance and derivative) performed well with the white paint visible in the beard; the Derivative CNN model also classified some of the white areas of the beard correctly, unlike the Reflectance CNN model, and it was the only model to correctly classify some of the painted lips as vermilion. Thus, the Derivative CNN and Derivative LS-SAM mapped the widest range of materials to classify the facial region, incorporating the red ochre-dominated flesh tone, the nude-colored flesh tone, as well as the vermilion-concentrated cheek and lips. However, the Derivative LS-SAM model did not find the vermilion on the lips and classified some of the grey beard as an ochre, whereas the Derivative CNN model did use all of the correct classes, though not to the extent expected.
In summary, the Derivative CNN model performed better than the others in labeling the head of David.
The other challenging area of the illumination’s RIS image cube is the orange-red floral elements, specifically those at the ends of the blue marginalia decoration and the orange-red petals emerging from the green horizontal vines (Figure 6 and Figure 7). The ENVI-SHW material “truth” map and visual microscopic inspection show that both of these orange-red floral elements were first painted with red lead. To make the shaded portions of the orange-red leaves, the red lead was layered with two applications of red lake: the first to create a mid-tone and a second to render the dark shadows. In the ENVI-SHW truth map, the endmember for the first application of red lake is “red lake over red lead”, and for the deeper red areas simply “red lake”, even though the latter is a layer of red lake applied over the first application of red lake on a layer of red lead. The derivative spectra from the RIS image cube for the orange-red leaves by the head of David confirm the presence of red lead. The bright orange-red paint’s first-derivative reflectance spectrum has a narrow, symmetrical peak at 568 nm whose full width at half maximum (FWHM) is ∼50 nm, indicative of red lead. In an area of the glazed red leaf where only a single application of red lake over red lead is present (as seen under the stereo microscope), the first-derivative peak appears asymmetrical and broad at 603 nm, with a shoulder at 568 nm whose amplitude is ∼0.4 times that of the 603 nm peak. Together, these features confirm the red lake over red lead. The area of the second application of red lake has a peak at 611 nm that is asymmetrical and even broader, with only a weak shoulder at 565 nm. This explains why the red lake endmember mapped consistently to the area of the second application of red lake, as it is close to being an optically thick layer of red lake. Looking at the maps from the CNN models and the LS-SAM models, differences in the class-labeling performance emerge.
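Peak-position and FWHM measurements of the kind described above can be reproduced on a first-derivative spectrum with a few lines of numpy. The sketch below uses a synthetic S-shaped reflectance transition; the 14 nm transition scale and 2.5 nm sampling grid are illustrative assumptions, not measured values:

```python
import numpy as np

def derivative_peak_fwhm(wavelengths, reflectance):
    """Locate the largest peak in the first-derivative spectrum and
    estimate its full width at half maximum (FWHM)."""
    deriv = np.gradient(reflectance, wavelengths)
    i = int(deriv.argmax())
    half = deriv[i] / 2.0
    # Interpolate the half-maximum crossings on each flank of the peak
    left = np.interp(half, deriv[: i + 1], wavelengths[: i + 1])
    right = np.interp(half, deriv[i:][::-1], wavelengths[i:][::-1])
    return wavelengths[i], right - left

# Synthetic sigmoid reflectance with its inflection near 567.5 nm,
# loosely mimicking a red lead transition edge
wl = np.arange(400.0, 801.0, 2.5)
refl = 0.1 + 0.7 / (1.0 + np.exp(-(wl - 567.5) / 14.0))
peak, fwhm = derivative_peak_fwhm(wl, refl)
```

For a sharp, symmetric transition like this one, the derivative peak sits at the inflection wavelength and its FWHM scales with the steepness of the reflectance edge, which is why the narrow ∼50 nm peak is diagnostic of red lead while glazed layers give broader, asymmetric peaks.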
First, all four models correctly classified regions of the orange-red leaves as “red lead” and only partially correctly as “red lake over red lead”. The misclassifications are in the areas of the second application of red lake, which the truth map had labeled as red lake. None of the four models assigned these areas to “red lake”: the Derivative CNN model classified them as “red lake over red lead”, which is physically correct, while the other three classified them as either “red lake over red lead” or “red lake over vermilion”, the latter being incorrect. In summary, examining in detail the small but complex painted areas of the illumination, we observe that the Derivative CNN yielded the most accurate mapping results. Thus, overall, the four methods correctly classified the general regions of pigments in the RIS image cubes, but the Derivative CNN outperformed the others in the most complex areas of paint application.
Given the pigment-labeled classification map results of the four classification methods, the Reflectance and Derivative CNN models were tested on a second illumination from the same choir book, Initial D (1964.8.1223), and compared to the material “truth” map made using the ENVI-SHW tool. The labeled class results from the Derivative CNN model were similar to, if not better than, those from the Reflectance CNN model (see Supplementary Figure S4 for a comparison); thus, only the labeled class maps from the Derivative CNN model versus the ENVI-SHW truth map are presented here (Figure 8c).
A semi-manual analysis with the ENVI-SHW tool (Figure 8b) showed that eleven spectral endmembers were needed to describe the reflectance image cube (see Figure S3); as described earlier, a spectroscopic analysis was performed to label the endmembers with the pigments dominant in the diffuse reflectance spectra. The pigment-labeled classification “truth” map shows that a palette similarly colorful to that found in Initial I with David was selected to create Initial D. Azurite, with varying amounts of black and white, was used to paint the blue dragons and the central blue flower, with additional touches of black and white to emphasize bodily features such as curves and eyes. Consistent with Initial I with David, the green leaf decoration was painted with malachite, highlighted with yellow outlines of a lead–tin yellow overlay, while the bulk of the orange-red flowers were painted with red lead and shaded with an insect-based red lake paint. Three bands of white-, yellow-, and light brown-colored paint outline the interior edges of the dragons’ bodies, which define the initial D shape; the latter two colors are made of lead–tin yellow and yellow-brown ochre, respectively. Similar to Initial I with David, only one endmember was kept for the gold applications in this initial, in the region to the right of the illumination enveloped by the blue bodies of the dragons. No endmembers were found for the background gold foil and the parchment, which were therefore not mapped. One endmember was recovered to classify the two purple central flowers within the initial, and an examination of the endmember reflectance spectra revealed absorption features at ∼526 and 569 nm and a concave rising reflectance beyond 600 nm with a weak broad-band absorption around 650 nm.
While the narrow absorption features show the presence of a red lake (insect-based), the broad absorption at approximately 650 nm indicates the presence of another pigment, for example, a blue pigment such as ultramarine, which explains the purple appearance of the flowers: a paint mixture combining red lake with ultramarine produces a colorful purple hue. This same purple color, featuring similar reflectance spectra, was also found in one of the training illuminations, Initial D with David (1964.8.1230), applied to the gem in King David’s crown as well as to his stockings. The pigment label classification for this class was assigned as red lake and ultramarine with additional unknown(s) (“Red lake/ultramarine/TBD”).
The Derivative CNN model pigment-labeled class map presents eighteen material classes, compared to the eleven classes found in the “truth” map (Figure 8b). Similar to the first case study, the eleven “truth” endmember classes match well with those of the pigment-labeled classes from the Derivative CNN model (see the classes boxed in green within the legend).
The strong spatial overlap of the classifications between the two maps indicates that the Derivative CNN did a good job labeling the bulk of the illumination’s painting materials. Again, the differences in the spatial extent of these classes are what inform us about how the CNN model is performing. For example, the Derivative CNN classified portions of tarnished/discolored gold leaf (not mapped with ENVI-SHW) as the “Gray” training class. We also observed that the transition regions between the two circular paint bands of lead–tin yellow and yellow-brown ochre on the inner edges of the dragons were mapped as “PbSnY with yellow ochre”. These labels appear correct, particularly the latter, given the spatial sampling size at the painting.
Finally, three additional pigment classes were used for the central purple flowers besides the “Red lake/Ultramarine/TBD” class that matches the endmember class from the “truth” map. These additional pigment classes were “red lake”, “ultramarine with white”, and “red lake over vermilion”, and the logical question that arises is as follows: are these misassignments, or logical choices on the CNN’s behalf? Is the use of one endmember to represent a range of hues through the ENVI-SHW workflow too simplistic? A visual examination of the purple flowers reveals areas of deep purple layered with more transparent applications of purple paint over a red paint layer below; light-colored crosshatched marks are also present across the flower petals. The more saturated purple areas were classified as “red lake”, while the lighter/thinner purple applications with some red undertones appearing through were labeled “red lake over vermilion”. The light-colored outlines and crosshatched marks were labeled “ultramarine” and “ultramarine with white”. These assignments do not seem unreasonable, as these areas have less red/purple in them, which means the red lake double-structured absorption is weak, if present at all, and the concave spectral shape beyond 600 nm dominates the spectral profile. The Derivative CNN model thus addresses the inherent inhomogeneity resulting from the more complex paint application present here compared to other areas of the illumination.
The only misclassification was the assignment of the “red lake over vermilion” class within the purple flowers, a class not otherwise present in this illumination. This result may be due to the lack of training on red lake/ultramarine combinations. We sought to understand the nature of the purple colorant better by revisiting the training illumination Initial D—David (1964.8.1230). An examination of King David’s purple crown under optical microscopy helped to confirm that the color is a product of two layered paint mixtures (Figure 9a). The upper layer contains blue pigment particles suspended in a semi-transparent paint appearing to consist of a translucent blue/purple-toned binding medium, applied over a pink-red base paint layer with red particles. The blue/purple paint layer does not appear to have spatially resolved pigment particles contributing to its color, suggesting the contribution of an additional, possibly organic, unidentified colorant.
Revisiting the diffuse reflectance spectra (RIS and FORS) from the blue, pink, and purple painted areas reaffirmed that the purple color could be explained by a combination of red lake (insect) with ultramarine (see Supplementary Figure S5 for spectra and absorption assignments), and not azurite, since hydroxyl and carbonate bands were not observed in the short-wave infrared wavelength range of the FORS spectra. However, without performing detailed model studies or modeling the reflectance spectra using Kubelka–Munk theory, the presence of another pigment could not be ruled out. A microsample was removed from one of the purple flowers (a blotter paper fiber from a prior conservation intervention that had absorbed some of the pigment). The microsample was analyzed using surface-enhanced Raman spectroscopy (SERS) (Figure 9b), which identified folium, a popular medieval purple watercolor pigment derived from the fruits of the plant Chrozophora tinctoria, whose chemical structure was recently solved [38]. Thus, these flowers were painted with (at minimum) three pigments: red lake, ultramarine, and some folium, though it is unclear how much of each was used. As folium has characteristic absorption bands at approximately 545 and 580 nm [39], the lack of direct evidence for these in the reflectance spectra suggests the amount of folium could be small. To reach a firm conclusion on the material nature of the purple, further analysis is required.
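Kubelka–Munk modeling of pigment mixtures, of the kind mentioned above, is commonly done with single-constant mixing, where the K/S ratios of optically thick components combine linearly with concentration. A minimal sketch of that standard approach (not the authors' own modeling code; the three-band spectra are hypothetical stand-ins):

```python
import numpy as np

def ks(R):
    """Kubelka–Munk function K/S for an optically thick layer."""
    return (1.0 - R) ** 2 / (2.0 * R)

def reflectance_from_ks(ks_mix):
    """Invert K/S back to the diffuse reflectance R_inf."""
    return 1.0 + ks_mix - np.sqrt(ks_mix ** 2 + 2.0 * ks_mix)

def mixture_reflectance(R_components, concentrations):
    """Single-constant K-M mixing: the mixture's K/S is the
    concentration-weighted sum of the component K/S curves."""
    ks_mix = sum(c * ks(R) for c, R in zip(concentrations, R_components))
    return reflectance_from_ks(ks_mix)

# Hypothetical three-band spectra standing in for a red lake and ultramarine
R_red = np.array([0.8, 0.3, 0.2])
R_blue = np.array([0.2, 0.25, 0.6])
R_mix = mixture_reflectance([R_red, R_blue], [0.5, 0.5])
```

Fitting measured component spectra to an unknown in this way is one route to testing whether red lake and ultramarine alone can reproduce the purple, or whether a third absorber such as folium is needed.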
In summary, the results of applying the four pigment-labeled classification models to the two test illuminations held in reserve revealed the following. First, all of the major areas of painting (e.g., relatively uniform areas of color) were classified accurately, meaning each model performed well in labeling these types of uniformly decorated areas, as achieved with the semi-automatic ENVI-SHW workflow, which takes at minimum a few hours to perform. This is especially noteworthy since the better alternatives to the ENVI-SHW workflow in the remote sensing community are automatic convex-hull models such as MaxD and SMACC, which produce results in seconds but have been shown to find at best 70 percent of the classes present when applied to paintings [20]. A test using SMACC (see Figure S6, Supplementary Materials) on the RIS derivative image cube of Initial I with David recovered only 50 percent of the endmembers found with the ENVI-SHW.
The main difference among the four models was the classification of complex painting applications such as different flesh tones and areas of layered red lake with varying degrees of saturation. In such cases, the Derivative CNN more often than not outperformed the Reflectance CNN and the two LS-SAM models. The latter outcome indicates that the CNN brings value compared to simply using the libraries alone to classify the illuminations. That the benefit shows up in the most complex areas of these paintings, which are not easily labeled even with the ENVI-SHW workflow, is promising.
A valuable outcome of this study is a reflection on our labeling of the ENVI-SHW map as “truth”. To this point, the ENVI-SHW approach has been our best approximation for creating a classification map of an artwork in which we have confidence in the assignments. However, we recognize that manually mapping and labeling paint applications, even for expert users, still has limitations. We hoped to observe that, for some of these limitations, the more automated methods would supersede the manual “truth” approach, and we believe this to be the case, e.g., in mapping the detailed execution of flesh tones.
The next steps to be explored include examining how the performance of the CNN and LS-SAM models is affected when the training sets are thinned or reduced; the degree of success of the LS-SAM models may have come from the robust training spectral libraries. We will also test whether the performance of the Derivative CNN can be improved by adding a weighting function to the wavelength ranges of the derivative spectra that encode important information for pigment identification.
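One simple way such a wavelength weighting could be implemented is to scale the derivative spectra by Gaussian emphasis windows centered on diagnostic bands before training. In this sketch, the band centers and width are illustrative placeholders, not proposed values:

```python
import numpy as np

def weight_derivative(wavelengths, deriv_spectra, centers, sigma=15.0):
    """Scale first-derivative spectra (n_samples, bands) by a sum of
    Gaussian windows centered on diagnostic wavelengths, so those
    regions contribute more during training."""
    w = np.zeros_like(wavelengths, dtype=float)
    for c in centers:
        w += np.exp(-0.5 * ((wavelengths - c) / sigma) ** 2)
    w = 1.0 + w / w.max()  # baseline weight 1, up to ~2 near band centers
    return deriv_spectra * w  # broadcasts over the sample axis

# Placeholder band centers (e.g., red lake sub-bands, ultramarine absorption)
wl_w = np.arange(400.0, 701.0, 10.0)
weighted = weight_derivative(wl_w, np.ones((1, wl_w.size)),
                             centers=[530.0, 570.0, 600.0])
```

Keeping a nonzero baseline weight preserves information outside the emphasized windows, so the classifier is nudged toward, rather than restricted to, the diagnostic regions.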

4. Conclusions

This study expands on our prior work to test a previously built 1-D reflectance neural network architecture by utilizing a larger number of training paintings and exploring an additional neural network model based on the first derivative of the reflectance spectrum. To test the added utility of the neural network, the results were compared with the maps produced from the training spectral libraries using spectral angle mapping. Several conclusions can be drawn from this study. First, both the Reflectance and Derivative CNN models and the LS-SAM models came close to, or exceeded, the results achieved with the ENVI-SHW workflow. Second, the Derivative CNN model performed better in the most complex painted areas of the illuminations, which are found in small, detailed areas of the image scene such as the faces and multi-layered floral decoration. Finally, the results show that the Derivative CNN does provide an improvement over robust training libraries and the ENVI-SHW workflow, though how transformative this impact is remains to be seen.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app14166857/s1. Figure S1: ENVI-SHW ’truth’ map and endmember list for illumination 1964.8.1226, Initial I with David; Table S1: Spectral assignments and material class labels for endmembers in Figure S1; Figure S2: Diffuse reflectance spectra (400–2500 nm) of parchment and malachite from 1964.8.1226; Figure S3: ENVI-SHW ’truth’ map and endmember list for illumination 1964.8.1223, Initial D; Figure S4: Classification maps for 1964.8.1223 (Initial D); Figure S5: Diffuse reflectance spectra from training illumination 1964.8.1230 (Initial D-David) selected from painted areas of blue, pink, and purple; Figure S6: Sequential Maximum Angle Convex Cone (SMACC) abundance images and endmember results from illumination 1964.8.1226.

Author Contributions

Conceptualization, J.K.D., R.R., T.K. and M.W.; methodology, J.K.D., T.K. and R.R.; software, T.K.; formal analysis, J.K.D., T.K., F.P. and R.R.; investigation, J.K.D., M.F., F.P. and R.R.; writing—original draft preparation, T.K., R.R. and J.K.D.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article can be made available by the authors upon reasonable request.

Acknowledgments

The authors thank and acknowledge the support of the Scientific Research Department at the National Gallery of Art, Washington D.C. to carry out this research. The authors are also grateful to The Metropolitan Museum of Art’s Department of Scientific Research for providing access to their laboratories and resources to conduct SERS analyses.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

RIS: Reflectance imaging spectroscopy
CNN: Convolutional neural network
SAM: Spectral angle map
AN: Accession number(s)
MotC: Master of the Cypresses
VNIR: Visible to near-infrared

References

  1. Schulte, F.; Brzezinka, K.W.; Lutzenberger, K.; Stege, H.; Panne, U. Raman spectroscopy of synthetic organic pigments used in 20th century works of art. J. Raman Spectrosc. 2008, 39, 1455–1463. [Google Scholar] [CrossRef]
  2. Clark, R.J. Pigment identification by spectroscopic means: An arts/science interface. Comptes Rendus Chim. 2002, 5, 7–20. [Google Scholar] [CrossRef]
  3. Ricciardi, P.; Pallipurath, A.; Rose, K. ‘It’s not easy being green’: A spectroscopic study of green pigments used in illuminated manuscripts. Anal. Methods 2013, 5, 3819–3824. [Google Scholar] [CrossRef]
  4. Barnett, J.R.; Miller, S.; Pearce, E. Colour and art: A brief history of pigments. Opt. Laser Technol. 2006, 38, 445–453. [Google Scholar] [CrossRef]
  5. Feller, R.L. Artist’s Pigments: A Handbook of Their History and Characteristics; Archetype Publications: London, UK, 1986; Volume 1. Available online: https://www.nga.gov/content/dam/ngaweb/research/publications/pdfs/artists-pigments-vol1.pdf (accessed on 30 July 2024).
  6. Eastaugh, N.; Walsh, V.; Chaplin, T.; Siddall, R. Pigment Compendium: A Dictionary of Historical Pigments; Routledge: Oxford, UK, 2007. [Google Scholar]
  7. González-Cabrera, M.; Wieland, K.; Eitenberger, E.; Bleier, A.; Brunnbauer, L.; Limbeck, A.; Hutter, H.; Haisch, C.; Lendl, B.; Domínguez-Vidal, A.; et al. Multisensor hyperspectral imaging approach for the microchemical analysis of ultramarine blue pigments. Sci. Rep. 2022, 12, 707. [Google Scholar] [CrossRef] [PubMed]
  8. Rosi, F.; Miliani, C.; Braun, R.; Harig, R.; Sali, D.; Brunetti, B.G.; Sgamellotti, A. Noninvasive analysis of paintings by mid-infrared hyperspectral imaging. Angew. Chem. Int. Ed. 2013, 52, 5258–5261. [Google Scholar] [CrossRef] [PubMed]
  9. Van der Snickt, G.; Legrand, S.; Caen, J.; Vanmeert, F.; Alfeld, M.; Janssens, K. Chemical imaging of stained-glass windows by means of macro X-ray fluorescence (MA-XRF) scanning. Microchem. J. 2016, 124, 615–622. [Google Scholar] [CrossRef]
  10. Vermeulen, M.; Miranda, A.S.O.; Tamburini, D.; Delgado, S.E.R.; Walton, M. A multi-analytical study of the palette of impressionist and post-impressionist Puerto Rican artists. Herit. Sci. 2022, 10, 44. [Google Scholar] [CrossRef]
  11. Dal Fovo, A.; Mattana, S.; Ruberto, C.; Castelli, L.; Ramat, A.; Riitano, P.; Cicchi, R.; Fontana, R. Novel integration of non-invasive imaging techniques for the analysis of an egg tempera painting by Pietro Lorenzetti. Eur. Phys. J. Plus 2023, 138, 71. [Google Scholar] [CrossRef]
  12. Cucci, C.; Delaney, J.K.; Picollo, M. Reflectance hyperspectral imaging for investigation of works of art: Old master paintings and illuminated manuscripts. Acc. Chem. Res. 2016, 49, 2070–2079. [Google Scholar] [CrossRef]
  13. Delaney, J.K.; Dooley, K.A. Visible and Infrared Reflectance Imaging Spectroscopy of Paintings and Works on Paper. In Analytical Chemistry for the Study of Paintings and the Detection of Forgeries; Springer: Berlin/Heidelberg, Germany, 2022; pp. 115–132. [Google Scholar]
  14. Goetz, A.F. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  15. Grabowski, B.; Masarczyk, W.; Głomb, P.; Mendys, A. Automatic pigment identification from hyperspectral data. J. Cult. Herit. 2018, 31, 1–12. [Google Scholar] [CrossRef]
  16. Balas, C.; Epitropou, G.; Tsapras, A.; Hadjinicolaou, N. Hyperspectral imaging and spectral classification for pigment identification and mapping in paintings by El Greco and his workshop. Multimed. Tools Appl. 2018, 77, 9737–9751. [Google Scholar] [CrossRef]
  17. Huang, S.Y.; Mukundan, A.; Tsao, Y.M.; Kim, Y.; Lin, F.C.; Wang, H.C. Recent advances in counterfeit art, document, photo, hologram, and currency detection using hyperspectral imaging. Sensors 2022, 22, 7308. [Google Scholar] [CrossRef] [PubMed]
  18. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  19. Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74. [Google Scholar] [CrossRef]
  20. Kleynhans, T.; Messinger, D.W.; Delaney, J.K. Towards automatic classification of diffuse reflectance image cubes from paintings collected with hyperspectral cameras. Microchem. J. 2020, 157, 104934. [Google Scholar] [CrossRef]
  21. Gruninger, J.H.; Ratkowski, A.J.; Hoke, M.L. The sequential maximum angle convex cone (SMACC) endmember model. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X, Orlando, FL, USA, 12–15 April 2004; Volume 5425, pp. 1–14. [Google Scholar]
  22. Heylen, R.; Parente, M.; Gader, P. A review of nonlinear hyperspectral unmixing methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1844–1868. [Google Scholar] [CrossRef]
  23. Poulet, F.; Erard, S. Nonlinear spectral mixing: Quantitative analysis of laboratory mineral mixtures. J. Geophys. Res. Planets 2004, 109. [Google Scholar] [CrossRef]
  24. Zhao, Y.; Berns, R.S.; Taplin, L.A.; Coddington, J. An investigation of multispectral imaging for the mapping of pigments in paintings. In Proceedings of the Computer Image Analysis in the Study of Art, San Jose, CA, USA, 28–29 January 2008; Volume 6810, pp. 65–73. [Google Scholar]
  25. Rohani, N.; Pouyet, E.; Walton, M.; Cossairt, O.; Katsaggelos, A.K. Nonlinear unmixing of hyperspectral datasets for the study of painted works of art. Angew. Chem. 2018, 130, 11076–11080. [Google Scholar] [CrossRef]
  26. Kleynhans, T.; Schmidt Patterson, C.M.; Dooley, K.A.; Messinger, D.W.; Delaney, J.K. An alternative approach to mapping pigments in paintings with hyperspectral reflectance image cubes using artificial intelligence. Herit. Sci. 2020, 8, 84. [Google Scholar] [CrossRef]
  27. Radpour, R.; Gates, G.A.; Kakoulli, I.; Delaney, J.K. Identification and mapping of ancient pigments in a Roman Egyptian funerary portrait by application of reflectance and luminescence imaging spectroscopy. Herit. Sci. 2022, 10, 8. [Google Scholar] [CrossRef]
  28. Radpour, R.; Delaney, J.K.; Kakoulli, I. Acquisition of high spectral resolution diffuse reflectance image cubes (350–2500 nm) from archaeological wall paintings and other immovable heritage using a field-deployable spatial scanning reflectance spectrometry hyperspectral system. Sensors 2022, 22, 1915. [Google Scholar] [CrossRef] [PubMed]
  29. Westerby, M.J. The Masters of the Cypresses: Archbishops and Observant Jeronymites as Patrons of Illuminated Manuscripts in Seville, c. 1430–1490. 2025; in preparation. [Google Scholar]
  30. Delaney, J.K.; Dooley, K.A.; Van Loon, A.; Vandivere, A. Mapping the pigment distribution of Vermeer’s Girl with a Pearl Earring. Herit. Sci. 2020, 8, 4. [Google Scholar] [CrossRef]
  31. Luo, G.; Chen, G.; Tian, L.; Qin, K.; Qian, S.E. Minimum noise fraction versus principal component analysis as a preprocessing step for hyperspectral imagery denoising. Can. J. Remote. Sens. 2016, 42, 106–116. [Google Scholar] [CrossRef]
  32. Pozzi, F.; Lombardi, J.R.; Bruni, S.; Leona, M. Sample treatment considerations in the analysis of organic colorants by surface-enhanced Raman scattering. Anal. Chem. 2012, 84, 3751–3757. [Google Scholar] [CrossRef] [PubMed]
  33. Leona, M. Microanalysis of organic pigments and glazes in polychrome works of art by surface-enhanced resonance Raman scattering. Proc. Natl. Acad. Sci. USA 2009, 106, 14757–14762. [Google Scholar] [CrossRef] [PubMed]
  34. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical evaluation of rectified activations in convolutional network. arXiv 2015, arXiv:1505.00853. [Google Scholar]
  35. Berrar, D. Cross-Validation; Academic Press: Oxford, UK, 2019; pp. 542–545. [Google Scholar] [CrossRef]
  36. Aceto, M.; Agostino, A.; Fenoglio, G.; Idone, A.; Gulmini, M.; Picollo, M.; Ricciardi, P.; Delaney, J.K. Characterisation of colourants on illuminated manuscripts by portable fibre optic UV-visible-NIR reflectance spectrophotometry. Anal. Methods 2014, 6, 1488–1500. [Google Scholar] [CrossRef]
  37. Clark, R.N.; Swayze, G.A.; Wise, R.A.; Livo, E.; Hoefen, T.M.; Kokaly, R.F.; Sutley, S.J. USGS Digital Spectral Library Splib06a; Technical Report; US Geological Survey: Reston, VA, USA, 2017.
  38. Nabais, P.; Oliveira, J.; Pina, F.; Teixeira, N.; De Freitas, V.; Brás, N.; Clemente, A.; Rangel, M.; Silva, A.; Melo, M. A 1000-year-old mystery solved: Unlocking the molecular structure for the medieval blue from Chrozophora tinctoria, also known as folium. Sci. Adv. 2020, 6, eaaz7772. [Google Scholar] [CrossRef]
  39. Aceto, M.; Calà, E.; Agostino, A.; Fenoglio, G.; Idone, A.; Porter, C.; Gulmini, M. On the identification of folium and orchil on illuminated manuscripts. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2017, 171, 461–469. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Color images of the illuminations used to test the classification algorithms. (a) Illumination 1964.8.1226 (Initial I with King David) and (b) 1964.8.1223 (Initial D) from the Rosenwald collection at the National Gallery of Art in Washington D.C.
Figure 2. Images of the experimental setup: RIS capture of an attributed Master of the Cypresses illumination from the Rosenwald Collection at the National Gallery of Art in Washington D.C.
Figure 3. An example of the regions of interest (ROI) selected to build the Reflectance and Derivative CNN training class spectral libraries. (a) ROI selection from RIS image cube of Initial S—King David as Scribe (1964.8.1218), with areas of similar spectra grouped together into the same class (denoted in the image by the same color ROI). (b) ROI selection from the first-derivative calculated RIS image cube of 1964.8.1218. These areas are the same as those selected in the reflectance cube. (c) After extracting spectra across the ten training illuminations, the final classes resulted in thousands of spectra with a small range of spectral variability due to hue/saturation, as shown in these four example classes. (d) The same four classes in (c) but extracted in derivative-space.
Figure 4. Design schematic of the 1D-CNN used to train the Reflectance and Derivative CNN models. The input spectra graph displays the mean spectrum of each class as an example input batch.
Figure 5. The confusion matrices. (a) Confusion matrix for the reflectance spectra-trained neural network; (b) confusion matrix for the derivative spectra-trained neural network. Please note that “lead–tin yellow” has been abbreviated here to “PbSnY”.
Figure 6. Labeled material maps from the four pigment-trained classification models as compared to the truth material map for the test illumination Initial I with David. (a) Color image of manuscript illumination 1964.8.1226. (b) ENVI-SHW truth material map. (c) Reflectance CNN material map. (d) Reflectance LS-SAM material map. (e) Derivative CNN material map. (f) Derivative LS-SAM material map. Below the maps is the color legend for 1964.8.1226’s material maps. Note that lead–tin yellow has been abbreviated as “PbSnY”. Within the red box is the class that was not used to help create the ENVI-SHW material map.
Figure 7. (a) Details of King David’s face in test illumination Initial I with David (1964.8.1226) and corresponding pigment-labeled class maps from (b) ENVI-SHW; (c) Reflectance CNN; (d) Derivative CNN; (e) Reflectance LS-SAM; (f) Derivative LS-SAM.
Figure 8. Labeled material maps from the four models as compared to the truth material map for the test illumination Initial D. (a) Color image of manuscript illumination 1964.8.1223. (b) ENVI-SHW truth material map. (c) Derivative CNN material map. Below the maps is the color legend for 1964.8.1223’s material maps. Note that lead–tin yellow has been abbreviated as “PbSnY”. Within the green boxes are the endmember classes also used to create the ENVI-SHW map.
Figure 9. (a) Photomicrograph from the crown of King David in illumination Initial D—David (1964.8.1230) demonstrating the unique purple hue present in two of the collection’s cuttings. (b) Surface-enhanced Raman spectrum of a sample from illumination Initial D (1964.8.1223), revealing the presence of folium.
Table 1. Illuminations used in this research. All illuminations are from the Rosenwald Collection, National Gallery of Art, Washington, D.C.
| Illuminations used to train the classification models | Accession number |
|---|---|
| Initial S–King David as Scribe | 1964.8.1218 |
| Initial T–Monks Singing Before an Altar | 1964.8.1219 |
| Initial D | 1964.8.1221 |
| Initial L | 1964.8.1222 |
| Initial U (?) | 1964.8.1224 |
| Initial N (?)–David in Prayer | 1964.8.1225 |
| Initial I–David | 1964.8.1227 |
| Initial C–David (King Saul?) | 1964.8.1228 |
| Initial L–Old Testament Prophet | 1964.8.1229 |
| Initial D–David | 1964.8.1230 |

| Illuminations used to test the classification models | Accession number |
|---|---|
| Initial I–David | 1964.8.1226 |
| Initial D | 1964.8.1223 |
| Initial A | 1964.8.1220 |
Table 2. The spectral library of labeled classes used for both the reflectance and first-derivative-of-reflectance convolutional neural networks.
| Azurite | Parchment | Azurite with red lake |
|---|---|---|
| Black | Red ochre | Azurite with white |
| Brown | Red flesh | Lead–tin yellow over malachite |
| Red lake/Ultramarine/TBD | Red lake | Lead–tin yellow with yellow ochre |
| Gray | Red lead | Red lake over red lead |
| Gold | Ultramarine | Red lake over vermilion |
| Lead–tin yellow | White | Red lake with white |
| Malachite | Vermilion | Red ochre over vermilion |
| Nude flesh | Yellow-brown ochre | Ultramarine with white |