Article

Spectral Reconstruction from RGB Imagery: A Potential Option for Infinite Spectral Data?

by Abdelhamid N. Fsian 1,*, Jean-Baptiste Thomas 1,2, Jon Y. Hardeberg 2 and Pierre Gouton 1

1 Imagerie et Vision Artificielle (ImVIA) Laboratory, Department Informatique, Electronique, Mécanique (IEM), Université de Bourgogne, 21000 Dijon, France
2 Colourlab, Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
* Author to whom correspondence should be addressed.
Sensors 2024, 24(11), 3666; https://doi.org/10.3390/s24113666
Submission received: 22 April 2024 / Revised: 1 June 2024 / Accepted: 1 June 2024 / Published: 5 June 2024
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2024)

Abstract

Spectral imaging has revolutionised various fields by capturing detailed spatial and spectral information. However, its high cost and complexity limit the acquisition of a large amount of data to generalise processes and methods, thus limiting widespread adoption. To overcome this issue, a body of literature investigates how to reconstruct spectral information from RGB images, with recent methods achieving fairly low reconstruction errors. This article explores the modification of information in the case of RGB-to-spectral reconstruction beyond reconstruction metrics, with a focus on assessing the accuracy of the reconstruction process and its ability to replicate full spectral information. In addition, we conduct a colorimetric relighting analysis based on the reconstructed spectra. We investigate the information representation by principal component analysis and demonstrate that, while the reconstruction error of the state-of-the-art reconstruction methods is low, the nature of the reconstructed information is different. While the reconstructed spectra handle illumination very well for colour imaging purposes, the distribution of the information difference between the measured and estimated spectra suggests that caution should be exercised before generalising the use of this approach.

1. Introduction

Spectral imaging systems (SIs) capture the distribution of light in a scene across several spectral bands. As a result, they offer more complete visual data compared to conventional colour cameras, which only operate within three broad spectral bands (red, green, and blue). SIs present numerous advantages for various computer vision applications such as medical imaging [1,2], remote sensing [3,4], and object tracking [5], to name a few. Nonetheless, their utilisation has been constrained by factors such as size, cost, and low spatial resolution. Notably, a key limitation stems from the limited availability and diversity of spectral data.
The computer vision field evolved together with imaging technology from grayscale to colour, and from colour to multi-modal, spectral, or polarisation imaging. Each evolution brings access to new information that overcomes the limitations of the previous modality. Despite remarkable performance with colour images, spectral imaging emerges as a promising avenue specifically tailored to address the limitations of colour imaging, offering a richer and more nuanced understanding of the underlying data [6]. This shift in emphasis underscores the need to explore beyond conventional RGB datasets and highlights spectral imaging as a key approach in overcoming the constraints associated with colour representation.
Existing spectral databases have played a pivotal role in advancing research in computer vision, providing valuable datasets for various applications [7]. However, the current repositories, though valuable, face limitations in terms of diversity, scale, and representation of real-world scenarios. These databases often cover specific domains or scenes, making them less suitable for broader applications. It is worth mentioning that existing databases cannot be concatenated, mostly because of standardisation problems, spectral specificity, and spatial resolution [8].
Furthermore, a richer and more diverse spectral database would enable researchers to explore a wider range of applications beyond the current scope. These include, but are not limited to, fields such as object detection, scene understanding, and autonomous driving, where spectral imaging holds immense potential. Notably, these tasks often demand robust deep learning models, necessitating a substantial volume of high-quality training data. Through spectral reconstruction (SR) (Figure 1), also referred to as spectral uplifting or spectral super-resolution, the practically unlimited supply of RGB images in datasets like ImageNet [9] becomes a source of spectral data.
This article explores the limitations of spectral data reconstructed from RGB by conducting experiments on the spectral distribution and a colorimetric relighting analysis. Three primary initiatives have been considered for obtaining spectral data. The first involves capturing more data using spectral cameras. However, this approach faces challenges related to standardisation, given the varying configurations of spectral cameras [8]. The second entails generating data in computer graphics [10], leveraging tools like Mitsuba [11], but it is often tied to specific reflectance models, and the low number of scenes created limits generalisation. Lastly, the third initiative, which forms the focus of this study, revolves around spectral reconstruction from RGB data [7]. This cutting-edge technique marks a significant advancement in the field, as it enables the transformation of conventional RGB images into highly detailed representations encompassing a more extensive range of spectral information.
This article is structured as follows. In Section 2, we delve into the existing body of work related to spectral reconstruction methods. In Section 3, we present our experimental protocol, discussing both the spectral datasets and the deep learning-based model used, as well as the application of performance-based metrics, along with spectral analysis using principal component analysis (PCA). Section 4 serves as the analysis section, where we delve into the interpretation and commentary on the obtained results. In addition to the analysis of reconstruction accuracy and spectral information representation, Section 4 also includes a colorimetric analysis. Specifically, we compute the Euclidean distance $\Delta E^*_{ab}$ between the reconstructed spectral data and the ground-truth spectral data under various illuminants, providing further insights into how spectral reconstruction quality impacts a specific colour imaging application. Finally, Section 5 presents our conclusions, summarising key insights and implications and proposing potential avenues for future research. In this context, we show, on two independent datasets and with two different spectral reconstruction methods, that the information in the original spectra and in the estimated ones appears very different in a PCA space, which suggests caution in the use of the estimated data. On the other hand, we also show that, from a colorimetric perspective, the estimated spectra are sufficient to perform relighting of the scene or chromatic adaptation.

2. Related Work

2.1. Spectral Image Acquisition

Recent advancements in imaging systems have introduced various sophisticated techniques for capturing spectral images. Despite the progress, these methods still face significant challenges. Traditional scanning techniques, although widely used, are often slow and cumbersome. Technologies such as pushbroom and whiskbroom scanners, commonly employed in remote sensing and other applications, require time-consuming processes and large, non-portable equipment [12,13].
In an effort to address these limitations, innovative solutions like snapshot compressive imaging (SCI) systems have been developed. These systems can compress complex hyperspectral data into a single 2D image, offering a more efficient approach compared to conventional methods [12,14,15,16,17]. Among these, the Coded Aperture Snapshot Spectral Imaging (CASSI) system stands out for its potential to revolutionise the field [16,18].
However, despite their potential, these advanced imaging systems are still limited by high costs and practical challenges. The expense of SCI systems makes them inaccessible for broader use. Additionally, issues such as spectral estimation errors persist, impacting the accuracy and reliability of the captured data [19].
These challenges highlight the critical need for further research in spectral reconstruction. Improving these technologies to be more affordable, efficient, and accurate is essential for their widespread adoption and application in various fields.

2.2. SR from RGB

The first SR techniques sought three-dimensional linear spectrum models. It was then demonstrated that spectra can be precisely retrieved from RGB using a linear transform if such a "3D" linear model is applicable [20,21]. Because a 3D model can only cover a limited portion of the variance in real-world spectra [27,28], simple statistical models such as regression [22,23,24] and Bayesian inference [25,26] have been proposed, which facilitate higher- or full-dimensional spectrum recovery. With the growing quantity of accessible data, novel methods such as deep neural networks (DNNs) [29,30,31,32,33,34], sparse coding [35], and shallow networks [36,37,38] have been built on richer inference algorithms. A comprehensive comparison of the approaches is not yet available, though, because not all early and modern methods have been benchmarked on the same database. Nonetheless, it is reasonable to state that DNNs are acknowledged as the top SR technique.
Regression [22], one of the earliest techniques, has become popular because it is straightforward and fast and admits an accurate closed-form solution. In the most basic "linear regression" [22], RGB values and their spectral estimates are related by a single linear transformation matrix. Moreover, polynomial and root-polynomial regression [23,24] expand the RGB into polynomial/root-polynomial terms, which are subsequently mapped to spectra using a linear transform, in order to add non-linearity. Regressions that minimise the mean squared error (MSE) on the training set are sometimes referred to as "least-squares" regressions. Nevertheless, Lin and Finlayson [39] proposed a "relative-error-least-squares" minimisation strategy for regressions, which further enhances the performance of regression-based SR, because SR methods are, at least recently, more frequently assessed using relative errors [20,29,35,40].
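As an illustration of this regression family, the following is a minimal least-squares sketch in NumPy. The array shapes and function names are ours, and this is the plain MSE-minimising variant, not the relative-error formulation of [39]:

```python
import numpy as np

def fit_linear_sr(rgb, spectra):
    """Closed-form least-squares fit of a linear RGB-to-spectrum map.

    rgb:     (N, 3) training camera responses
    spectra: (N, n) corresponding ground-truth spectra
    Returns M of shape (n, 3) such that spectra_i ~= M @ rgb_i.
    """
    # Solve min_X ||rgb @ X - spectra||_F; lstsq gives the LS solution.
    X, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)  # X: (3, n)
    return X.T

def reconstruct_linear(M, rgb):
    """Apply the fitted map pixel-wise: (N, 3) RGB -> (N, n) spectra."""
    return rgb @ M.T
```

Polynomial or root-polynomial variants would simply expand `rgb` into additional terms (e.g., products or roots of products of the channels) before the same fit.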
Many recent methods for SR rely on DNN architectures, specifically convolutional neural networks (CNNs) or generative adversarial networks (GANs), where large image patches serve as standard inputs. In the NTIRE 2018, 2020, and 2022 Spectral Reconstruction Challenges, DNN-based solutions dominated the top rankings. For instance, the NTIRE 2018 challenge was won by "HSCNN-D", which utilised a densely connected structure, while "AWAN" emerged victorious in the NTIRE 2020 challenge, employing an attention network structure. Notably, the latest winner of the NTIRE challenge (2022 edition), "MST++", proposed a novel approach using a transformer-based model for efficient spectral reconstruction. However, despite these advancements, most DNN evaluations are conducted on optimally captured images, neglecting more challenging real-world conditions such as exposure variations and diverse scene compositions. Comprehensive assessments reveal that DNNs are often susceptible to exposure changes, unfamiliar scenes, and scenes lacking specific image contents [41].
Initially, spectral reconstruction relied on regression-based [42] and sparse-coding techniques [43]. While not completely replacing linear methods, deep learning models [33], which are mostly non-linear procedures, have grown considerably in popularity in recent years. Moreover, community challenges have recently emerged to stimulate research into robust and reliable networks for spectral reconstruction from RGB images [7,35]. Consequently, spectral reconstruction techniques have been extensively explored within the research community, with a multitude of studies contributing to this area [35,43]. For a detailed overview and in-depth information, we direct the interested reader to the comprehensive review of spectral reconstruction methods in the literature [44]. Spectral reconstruction from RGB images has been significantly influenced by the pioneering work of the colour imaging community, as demonstrated by [45]. In the image formation model, the spectral function $r(\lambda)$ represents the intensity distribution across wavelengths, defined as the radiance spectrum. Accordingly, the sensitivities of the R, G, and B sensors are represented as $s_k(\lambda)$, where $k = R, G, B$. The RGB image formation is therefore expressed as the inner product between the spectral sensitivity and the measured radiance [45]:
$$\rho_k = \int_{\lambda \in \omega} s_k(\lambda)\, r(\lambda)\, \mathrm{d}\lambda \qquad (1)$$
where $\omega$ denotes the visible range, which in this article is set to [400, 700] nanometres, and $\lambda \in \omega$. Moreover, the ground-truth spectra are sampled at $n$ equally spaced wavelengths. Equation (1) can therefore be vectorised:
$$S^{T} \underline{r} = \underline{\rho} \qquad (2)$$
where $\underline{\rho} = (R, G, B)^T$ is the 3-value RGB colour, $S = (\underline{s}_R, \underline{s}_G, \underline{s}_B)$ is the $n \times 3$ spectral sensitivity matrix, and $\underline{r} \in \mathbb{R}^n$ is the discrete representation of the spectra. This $\underline{\rho}$ vector denotes the linear colour or raw camera response, which is frequently utilised as the ground-truth RGB for training spectral reconstruction algorithms [29].
Spectral reconstruction methods are employed to map RGB colours to spectral estimates. This mapping is represented by a function $\psi: \mathbb{R}^3 \rightarrow \mathbb{R}^n$, so that SR can be written as follows:
$$\psi(\underline{\rho}) \approx \underline{r} \qquad (3)$$
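To make Equations (1)–(3) concrete, the discretised image formation of Equation (2) can be written in a few lines of NumPy; the Gaussian sensitivities below are illustrative placeholders, not measured camera curves:

```python
import numpy as np

n = 31                                  # number of spectral samples
wavelengths = np.linspace(400, 700, n)  # the visible range omega, in nm

# S: (n, 3) sensitivity matrix (s_R, s_G, s_B); Gaussian stand-ins
# centred at rough R, G, B peak wavelengths.
S = np.stack(
    [np.exp(-0.5 * ((wavelengths - c) / 30.0) ** 2) for c in (600, 550, 450)],
    axis=1,
)

r = np.random.rand(n)  # one sampled radiance spectrum (placeholder)
rho = S.T @ r          # Equation (2): the (R, G, B) camera response

# A spectral reconstruction method then seeks psi such that psi(rho) ~= r.
```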

3. Methodology

Given the maturation of spectral reconstruction methodologies, we advocate for the adoption of state-of-the-art approaches for deriving spectral information from RGB imagery, but we also want to emphasise the limitations of the technique. Therefore, to validate the generalisation capability and replicability of the MST++ [46] and A++ [47] models in spectral reconstruction, and to demonstrate their potential applicability to diverse scenarios, we assess their performance on unseen data.
Our investigation relies on the Spectral Image Database for Quality (SIDQ) [48], a hyperspectral image database consisting of nine scenes. These scenes were meticulously chosen to represent diverse materials such as textile, wood, and skin. The dataset provides spectral reflectance data acquired with a hyperspectral system (HySpex VNIR-1600, manufactured by Neo, Oslo, Norway), with a spectral range spanning from 410 to 1000 nm and 160 spectral bands (of which 85 lie in the visible light spectrum), coded over 16 bits. Importantly, the SIDQ dataset includes not only hyperspectral data but also their RGB counterparts. To ensure comparability of the results obtained from the different models and datasets used in this study, we unified the interval by considering only the bands from 410 to 700 nm when working with the SIDQ dataset. Since we possess the RGB counterparts of the hyperspectral images, our next step leverages the SR models to reconstruct spectra from the RGB images; the resulting spectra lie in the interval [400, 700] nm. Subsequently, we compare the reconstructed spectra obtained through the MST++ and A++ models with the original hyperspectral images from the SIDQ dataset. This comparative analysis provides insights into the accuracy and efficacy of the spectral reconstruction models, offering a robust evaluation of our approach.
Moreover, we used the CAVE dataset [49] in addition to the SIDQ dataset for our analysis. The CAVE dataset contains 32 scenes of spectral reflectance data spanning from 400 to 700 nm, with 31 spectral bands coded over 16 bits, along with their RGB counterparts. Both the CAVE and SIDQ datasets are normalised between 0 and 1. This normalisation, along with the diversity in terms of the number of scenes and spectral bands, allows for a more comprehensive evaluation of the generalisation capability of the models used in this study.
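The band unification and normalisation described above can be sketched as follows; global min-max scaling is our assumption, since the exact normalisation procedure is not specified here:

```python
import numpy as np

def harmonise_cube(cube, band_centres, lo=410.0, hi=700.0):
    """Restrict a hyperspectral cube (H, W, B) to bands whose centre
    wavelength lies in [lo, hi] nm and rescale its values to [0, 1]."""
    keep = (band_centres >= lo) & (band_centres <= hi)
    out = cube[..., keep].astype(np.float64)
    out -= out.min()
    out /= max(out.max(), 1e-12)  # guard against a constant cube
    return out, band_centres[keep]
```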

Evaluation Metrics

To assess the accuracy and fidelity of the spectral reconstruction, we employed quantitative performance metrics, including Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Relative Absolute Error (MRAE), and Entropy Similarity (ES), computed between the reconstructed spectral image $\hat{x}$ and the ground-truth image $x$. These metrics provide a comprehensive evaluation of the reconstructed spectra by quantifying the similarity and deviation from the ground-truth spectral data.
  • Root Mean Square Error:

    $$RMSE = \sqrt{\frac{1}{n} \left\| x - \hat{x} \right\|_2^2}$$

    where $n$ represents the number of spectral bands. Moreover, RMSE is scale-dependent; that is, the overall brightness level in which the compared spectra reside will reflect on the scale of RMSE;
  • Peak Signal-to-Noise Ratio:

    $$PSNR = 20 \times \log_{10}\left(\frac{x_{\max}}{RMSE}\right)$$

    where $x_{\max}$ is the maximum possible value for our images;
  • Structural Similarity Index:

    $$SSIM(x, \hat{x}) = \frac{\left(2\mu_x \mu_{\hat{x}} + C_1\right)\left(2\sigma_{x\hat{x}} + C_2\right)}{\left(\mu_x^2 + \mu_{\hat{x}}^2 + C_1\right)\left(\sigma_x^2 + \sigma_{\hat{x}}^2 + C_2\right)}$$

    where $\mu_x$, $\mu_{\hat{x}}$, $\sigma_x^2$, and $\sigma_{\hat{x}}^2$ are the means and variances of the reference image $x$ and estimated image $\hat{x}$, respectively, while $\sigma_{x\hat{x}}$ is the covariance. The SSIM of all bands is acquired by calculating the SSIM of each channel separately and averaging all SSIMs;
  • Mean Relative Absolute Error:

    $$MRAE = 100 \times \frac{1}{n} \left\| \frac{x - \hat{x}}{x} \right\|_1$$

    where $n$ is the number of spectral channels and the division inside the $L_1$ norm is element-wise. In essence, the MRAE metric calculates the average relative absolute deviation across all spectral channels. This metric is widely recognised as the standard measure for ranking and assessing SR algorithms in the latest benchmark studies [29];
  • Entropy Similarity Metric:

    While commonly employed in fields like molecular spectroscopy [50] for its ability to capture the similarity in entropy distributions between spectra, entropy similarity remains relatively underutilised within the spectral imaging community. Unlike conventional metrics, Entropy Similarity provides a comprehensive assessment of the fidelity of spectral reconstruction by quantifying the agreement between the spectral entropy patterns of the reconstructed and ground-truth spectra:

    $$ES(x, \hat{x}) = 1 - \frac{\left| H(x) - H(\hat{x}) \right|}{\max\left( H(x), H(\hat{x}) \right)}$$

    where $H(x)$ represents the entropy of image $x$. Similarly to SSIM and PSNR, ES is acquired by calculating the ES for each spectral channel separately and averaging all ESs. Since it is a similarity metric, a higher score indicates better alignment, with a score of 1 representing perfect alignment (a code sketch of these metrics is given after this list).
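A minimal NumPy sketch of these scalar metrics follows, assuming images normalised to [0, 1] with the spectral bands on the last axis. The histogram-based entropy estimator (256 bins) is our assumption, since the text does not fix how $H(x)$ is computed; SSIM is omitted, as it requires a windowed implementation (e.g., skimage.metrics.structural_similarity applied per band):

```python
import numpy as np

def rmse(x, x_hat):
    """Root mean square error over all pixels and bands."""
    return np.sqrt(np.mean((x - x_hat) ** 2))

def psnr(x, x_hat, x_max=1.0):
    """Peak signal-to-noise ratio in dB for data in [0, 1]."""
    return 20.0 * np.log10(x_max / rmse(x, x_hat))

def mrae(x, x_hat, eps=1e-12):
    """Mean relative absolute error in percent; eps guards zero pixels."""
    return 100.0 * np.mean(np.abs((x - x_hat) / (x + eps)))

def band_entropy(band, bins=256):
    """Shannon entropy (bits) of one band's intensity histogram."""
    counts, _ = np.histogram(band, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_similarity(x, x_hat):
    """ES averaged over spectral channels (last axis); 1 is perfect."""
    scores = []
    for b in range(x.shape[-1]):
        h, h_hat = band_entropy(x[..., b]), band_entropy(x_hat[..., b])
        scores.append(1.0 - abs(h - h_hat) / max(h, h_hat))
    return float(np.mean(scores))
```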
In addition to the performance metrics, we conducted PCA to examine the variance in the spectral distribution:
  • This analysis involved concatenating the reconstructed spectra (from both MST++ and A++) with the ground-truth spectral image. The concatenated data facilitated the generation of clouds of points, enabling a visual comparison of the spectral distributions. By aligning the reconstructed and original spectral data on the same axis, PCA allowed for a comprehensive exploration of the variance within the spectra;
  • We also performed PCA without concatenation, directly computing the eigenvectors to investigate the spectral distribution of the reconstructed data, generated by both models, against the original spectral data and their RGB counterpart. This approach provided insights into the underlying structures of the spectral data without the influence of concatenation. Through the computation of eigenvectors, we gained a deeper understanding of the spectral variability and the principal components driving the variance within the spectra (a minimal sketch of this eigenvector computation follows the list).
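The eigenvector comparison reduces to a PCA of each image viewed as a (pixels × bands) matrix; a minimal sketch, with illustrative names:

```python
import numpy as np

def leading_eigenvectors(pixels, k=2):
    """First k principal directions of a (num_pixels, num_bands) matrix,
    from an eigendecomposition of the sample covariance."""
    centred = pixels - pixels.mean(axis=0, keepdims=True)
    cov = (centred.T @ centred) / (centred.shape[0] - 1)
    _, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]    # columns: PC1, PC2, ...

# Hypothetical usage, with each cube reshaped to (num_pixels, num_bands):
# ev_gt  = leading_eigenvectors(gt_cube.reshape(-1, n_bands))
# ev_rec = leading_eigenvectors(rec_cube.reshape(-1, n_bands))
# Plotting the columns against wavelength gives the comparison shown
# in the second rows of Figures 5-7.
```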
Furthermore, a relighting analysis was conducted to evaluate the colorimetric performance of the two state-of-the-art SR methods and their capacity to predict colorimetric values under different lights. We specifically considered Illuminants D65 and A, as well as a white LED light, LED-B1 (see Figure 2). The analysis involved taking reflectance factors from the spectral data and multiplying them by the respective illuminant to obtain radiance data. For the reconstructed spectra (from both the A++ and MST++ models), we assumed Illuminant E for the initial RGB images, implying that the colour images were white-balanced, thus approximating a flat spectral distribution. This step ensures that the reconstructed data remain consistent with the assumed illumination conditions. Such an assumption is not needed for the original spectral data provided by the CAVE and SIDQ datasets, as they are already provided as reflectance data. The radiance data were then converted to the CIE 1931 XYZ colour space using the 2° standard observer colour-matching functions. Subsequently, the XYZ values were transformed into the CIELAB colour space to enable perceptually uniform colour comparisons [51]. Finally, the Euclidean distance $\Delta E^*_{ab}$ was computed between the reconstructed spectral data and the ground-truth spectral data in the CIELAB colour space to assess the colour accuracy of the SR methods under different illumination conditions (see Figure 3). This colorimetric analysis provides valuable insights into the robustness and generalisation capabilities of the SR methods across varying lighting scenarios.
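This relighting pipeline reduces to a few matrix operations. The sketch below assumes the reflectance data, the illuminant SPD, and the CIE 1931 2° colour-matching functions are sampled on a common wavelength grid; loading those tables is left out, and the function names are ours:

```python
import numpy as np

def spectra_to_xyz(reflectance, illuminant, cmfs):
    """Relight reflectance spectra and integrate against the CMFs.

    reflectance: (N, n); illuminant: (n,); cmfs: (n, 3) for x-bar, y-bar, z-bar.
    Returns (N, 3) XYZ, normalised so the illuminant white has Y = 100.
    """
    radiance = reflectance * illuminant    # per-band relighting
    k = 100.0 / (illuminant @ cmfs[:, 1])  # normalisation constant
    return k * (radiance @ cmfs)

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b*; `white` is the XYZ of the illuminant (Y = 100)."""
    t = xyz / white
    d = 6.0 / 29.0
    f = np.where(t > d ** 3, np.cbrt(t), t / (3.0 * d ** 2) + 4.0 / 29.0)
    L = 116.0 * f[:, 1] - 16.0
    a = 500.0 * (f[:, 0] - f[:, 1])
    b = 200.0 * (f[:, 1] - f[:, 2])
    return np.stack([L, a, b], axis=1)

def delta_e_ab(lab_rec, lab_gt):
    """Euclidean distance in CIELAB (Delta E*_ab), per pixel."""
    return np.linalg.norm(lab_rec - lab_gt, axis=1)
```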

4. Analysis

4.1. Spectral Analysis

Table 1 provides an overview of the performance metrics associated with the spectral reconstruction models, specifically MST++ and A++, applied to the SIDQ dataset [48]. The metrics include PSNR, SSIM, MRAE, and ES. These metrics offer insights into the quality and fidelity of the reconstructed spectral data across different scenes within the SIDQ dataset. Moreover, Table 2 presents the same performance metrics for spectral reconstruction achieved by both the MST++ and A++ models across all scenes within the CAVE dataset [49]. It is important to note that neither the MST++ nor the A++ model was trained on either of these datasets. We observe that, for conventional metrics such as PSNR, SSIM, and MRAE, the transformer solution (MST++) outperforms the pixel-based solution (the A++ model). While both models perform well, the A++ model outperforms MST++ on the Entropy Similarity (ES) metric, highlighting the importance of exercising caution when evaluating models. In addition, the illustrations in Figure 4 corroborate the findings in Table 1 and Table 2, revealing a notably low error map between the reconstructed spectral image from RGB and the original hyperspectral image across various scenes.
It is important to note that both models behaved consistently across the two datasets tested. However, a difference in magnitude was observed between the results obtained on the two datasets (see Table 1 and Table 2). We observed an overall better performance of the models on the SIDQ dataset than on the CAVE dataset. This observation can be attributed to the differences in scene content between the two datasets. Specifically, the CAVE dataset consists of more complex scenes with a higher prevalence of specular and dark areas, which can pose challenges for spectral image reconstruction algorithms. In contrast, the SIDQ dataset is characterised by smoother and flatter scenes with fewer specular and dark regions, which may facilitate more accurate reconstruction of spectral images. Therefore, the differences in scene complexity and the presence of specular and dark areas could explain the observed performance differences between the two datasets.
However, upon closer examination of the error maps, it becomes apparent that the errors are more pronounced in the specular regions than in other areas for both tested methods. Spectral reconstruction encounters notable challenges in accurately capturing and reproducing these specular reflections, leading to increased errors in these specific regions. In the context of the sample Painting (Figure 4, third row), the white pixels exhibit significantly higher errors, particularly in regions with a saturated appearance. These pronounced errors are closely linked to the over-exposure of certain regions in the images, which poses a significant challenge for the neural-based spectral reconstruction model in accurately representing these regions. The struggle to reconstruct these over-exposed areas correctly contributes to the observed increase in errors. Additionally, it is worth noting that Lin et al. [45] demonstrated that under-exposed spectral images similarly affect neural-based model performance, emphasising the sensitivity of the spectral reconstruction process to both over- and under-exposed conditions.
Furthermore, our investigation (see Figure 5, Figure 6 and Figure 7, first rows) into the spectral information contained in the reconstructed and original spectral images has brought to light discernible disparities between the two spectral images. The reconstructed data for both models, notably, may not faithfully replicate the exact spectral information inherent in the original spectral image. The challenges encountered in accurately representing over-exposed areas, among other factors, highlight a fundamental limitation: spectral reconstruction does not capture the full extent of the spectral information. This underscores the necessity for prudence in interpreting the spectral content of the reconstructed data.
Moreover, to delve deeper into the distribution of the data, we extended our analysis by examining the eigenvectors of the first two principal components obtained from the PCA of the reconstructed spectral data, the original spectra, and their RGB counterpart. The eigenvectors represent the directions of maximum variance within the data. Plotting them enables a direct comparison between the reconstructed spectral data, the original spectral data, and the RGB data. In Figure 5, Figure 6 and Figure 7, second row, we observe that the eigenvectors of the reconstructed spectral image occupy an intermediate position between the RGB eigenvectors and those derived from the original spectral image. This suggests that the reconstructed data capture some, but not all, of the spectral variability present in the original data. The alignment of the reconstructed eigenvectors with the RGB eigenvectors highlights a partial convergence of information between the colour channels and the reconstructed spectral space.

4.2. Colorimetric Analysis

In our analysis of the spectral reconstruction results (see Table 3 and Table 4), we start by looking at the average Euclidean distance values across all scenes under different lighting (D65, A, LED-B1). Table 3 presents the Euclidean distance ($\Delta E^*_{ab}$) between the original spectral data and the reconstructed spectra for the two models (A++ and MST++), under three different illuminants (D65, A, and LED-B1), across all scenes from the SIDQ dataset. Table 4 shows the same for all scenes from the CAVE database.
Surprisingly, these values consistently stay below 1, showing a strong colorimetric match between the predicted and actual data across various lighting conditions [52]. Moreover, the MST++ spectral reconstruction model stands out for its notably better performance in comparison to the A++ model. This indicates its proficiency in accurately reproducing colours even under different illuminants, which significantly impacts image quality. Consistent with our spectral analysis, the heat maps in Figure 8 and Figure 9 corroborate our previous findings that dark and specular regions have lower performance in terms of colour accuracy. This is evident from the higher $\Delta E^*_{ab}$ values observed in these regions compared to other areas in the scenes.

5. Conclusions

In conclusion, spectral reconstruction from RGB imagery holds promise for revolutionising computer vision tasks by providing access to rich and extensive spectral data without the need for expensive and complex data-acquisition campaigns. The adoption of state-of-the-art models, coupled with comprehensive datasets, can yield accurate and effective spectral reconstruction. However, challenges persist, primarily concerning reconstruction errors associated with over- and under-exposed areas, as well as the fidelity of the reconstructed information. While common performance metrics suggest good results, a closer look at the spectral distribution of the reconstructed data reveals some areas for improvement.
The colorimetric analysis of relighting from spectra further emphasises the robustness of spectral reconstruction techniques, particularly the MST++ method, in faithfully reproducing colours across different illuminants. This underscores the potential for enhancing image quality and colour fidelity in practical applications.
Therefore, methods should undergo rigorous testing against real spectral data to validate their applicability in practical settings. Furthermore, an avenue for future research lies in training deep learning models using spectral reconstructed data to investigate their behaviour and potential for achieving superior results compared to using actual spectral data. Future works may also consider the development of quality metrics for RGB-to-spectral methods based on our observations. This article underscores the promising prospect of employing spectral data, rather than RGB data, across diverse computer vision applications.

Author Contributions

Conceptualisation, A.N.F., J.-B.T. and J.Y.H.; Investigation, A.N.F., J.-B.T., J.Y.H. and P.G.; Methodology, A.N.F., J.-B.T. and P.G.; Software, A.N.F.; Supervision, J.-B.T., J.Y.H. and P.G.; Writing—original draft, A.N.F.; Writing—review and editing, J.-B.T., J.Y.H. and P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Backman, V.; Wallace, M.B.; Perelman, L.; Arendt, J.; Gurjar, R.; Müller, M.; Zhang, Q.; Zonios, G.; Kline, E.; McGillican, T.; et al. Detection of preinvasive cancer cells. Nature 2000, 406, 35–36. [Google Scholar] [CrossRef] [PubMed]
  2. Meng, Z.; Qiao, M.; Ma, J.; Yu, Z.; Xu, K.; Yuan, X. Snapshot multispectral endomicroscopy. Opt. Lett. 2020, 45, 3897–3900. [Google Scholar] [CrossRef] [PubMed]
  3. Borengasser, M.; Hungate, W.S.; Watkins, R. Hyperspectral Remote Sensing: Principles and Applications; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  4. Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral image superresolution by transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1963–1974. [Google Scholar] [CrossRef]
  5. Kim, M.H.; Harvey, T.A.; Kittle, D.S.; Rushmeier, H.; Dorsey, J.; Prum, R.O.; Brady, D.J. 3D imaging spectroscopy for measuring hyperspectral patterns on solid objects. ACM Trans. Graph. (TOG) 2012, 31, 1–11. [Google Scholar] [CrossRef]
  6. Glatt, O.; Ater, Y.; Kim, W.S.; Werman, S.; Berby, O.; Zini, Y.; Zelinger, S.; Lee, S.; Choi, H.; Soloveichik, E. Beyond RGB: A Real World Dataset for Multispectral Imaging in Mobile Devices. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 4344–4354. [Google Scholar]
  7. Arad, B.; Ben-Shahar, O.; Timofte, R.; Van Gool, L.; Zhang, L.; Yang, M.; Xiong, Z.; Chen, C.; Shi, Z.; Liu, D.; et al. NTIRE 2018 challenge on spectral reconstruction from RGB Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; p. 1042. [Google Scholar]
  8. Thomas, J.B.; Lapray, P.J.; Derhak, M.; Farup, I. Standard representation space for spectral imaging. In Proceedings of the Color and Imaging Conference, Paris, France, 13–17 November 2023; Volume 31, pp. 1–6. [Google Scholar]
  9. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255. [Google Scholar]
  10. Buzzelli, M.; Tchobanou, M.K.; Schettini, R.; Bianco, S. A general-purpose pipeline for realistic synthetic multispectral image dataset generation. In Proceedings of the Color and Imaging Conference, Paris, France, 13–17 November 2023; Society for Imaging Science and Technology: Springfield, VA, USA, 2023; Volume 31, pp. 155–160. [Google Scholar]
  11. Chen, Q.; Cheung, T.; Westland, S. Physical modelling of spectral reflectance. In Proceedings of the 10th Congress of the International Color Association, Granada, Spain, 9–13 May 2005; pp. 1151–1154. [Google Scholar]
  12. Cao, X.; Yue, T.; Lin, X.; Lin, S.; Yuan, X.; Dai, Q.; Carin, L.; Brady, D.J. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. IEEE Signal Process. Mag. 2016, 33, 95–108. [Google Scholar] [CrossRef]
  13. Poli, D.; Toutin, T. Review of developments in geometric modelling for high resolution satellite pushbroom sensors. Photogramm. Rec. 2012, 27, 58–73. [Google Scholar] [CrossRef]
  14. Du, H.; Tong, X.; Cao, X.; Lin, S. A prism-based system for multispectral video acquisition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 175–182. [Google Scholar]
  15. Llull, P.; Liao, X.; Yuan, X.; Yang, J.; Kittle, D.; Carin, L.; Sapiro, G.; Brady, D.J. Coded aperture compressive temporal imaging. Opt. Express 2013, 21, 10526–10545. [Google Scholar] [CrossRef] [PubMed]
  16. Wagadarikar, A.; John, R.; Willett, R.; Brady, D. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 2008, 47, B44–B51. [Google Scholar] [CrossRef] [PubMed]
  17. Wagadarikar, A.A.; Pitsianis, N.P.; Sun, X.; Brady, D.J. Video rate spectral imaging using a coded aperture snapshot spectral imager. Opt. Express 2009, 17, 6368–6388. [Google Scholar] [CrossRef] [PubMed]
  18. Meng, Z.; Ma, J.; Yuan, X. End-to-end low cost compressive spectral imaging with spatial-spectral self-attention. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 187–204. [Google Scholar]
  19. Yuan, X.; Brady, D.J.; Katsaggelos, A.K. Snapshot compressive imaging: Theory, algorithms, and applications. IEEE Signal Process. Mag. 2021, 38, 65–88. [Google Scholar] [CrossRef]
  20. Maloney, L.T.; Wandell, B.A. Color constancy: A method for recovering surface spectral reflectance. In Readings in Computer Vision; Elsevier: Amsterdam, The Netherlands, 1987; pp. 293–297. [Google Scholar]
  21. Agahian, F.; Amirshahi, S.A.; Amirshahi, S.H. Reconstruction of reflectance spectra using weighted principal component analysis. Color Res. Appl. 2008, 33, 360–371. [Google Scholar] [CrossRef]
  22. Heikkinen, V.; Lenz, R.; Jetsu, T.; Parkkinen, J.; Hauta-Kasari, M.; Jääskeläinen, T. Evaluation and unification of some methods for estimating reflectance spectra from RGB images. JOSA A 2008, 25, 2444–2458. [Google Scholar] [CrossRef] [PubMed]
  23. Lin, Y.T.; Finlayson, G.D. Exposure invariance in spectral reconstruction from rgb images. In Proceedings of the Color and Imaging Conference, Paris, France, 21–25 October 2019; Society for Imaging Science and Technology: Springfield, VA, USA, 2019; Volume 27, pp. 284–289. [Google Scholar]
  24. Connah, D.R.; Hardeberg, J.Y. Spectral recovery using polynomial models. In Proceedings of the Color Imaging X: Processing, Hardcopy, and Applications, San Jose, CA, USA, 17 January 2005; SPIE: Bellingham, WA, USA, 2005; Volume 5667, pp. 65–75. [Google Scholar]
  25. Brainard, D.H.; Freeman, W.T. Bayesian color constancy. JOSA A 1997, 14, 1393–1411. [Google Scholar] [CrossRef] [PubMed]
  26. Morovic, P.; Finlayson, G.D. Metamer-set-based approach to estimating surface reflectance from camera RGB. JOSA A 2006, 23, 1814–1822. [Google Scholar] [CrossRef] [PubMed]
  27. Hardeberg, J.Y. On the spectral dimensionality of object colours. In Proceedings of the Conference on Colour in Graphics, Imaging, and Vision, Poitiers, France, 2–5 April 2002; Society of Imaging Science and Technology: Springfield, VA, USA, 2002; Volume 1, pp. 480–485. [Google Scholar]
  28. Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 193–200. [Google Scholar]
  29. Arad, B.; Timofte, R.; Ben-Shahar, O.; Lin, Y.T.; Finlayson, G.D. NTIRE 2020 challenge on spectral reconstruction from an RGB image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 446–447. [Google Scholar]
  30. Li, J.; Wu, C.; Song, R.; Li, Y.; Liu, F. Adaptive weighted attention network with camera spectral sensitivity prior for spectral reconstruction from RGB images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 462–463. [Google Scholar]
  31. Arun, P.V.; Buddhiraju, K.M.; Porwal, A.; Chanussot, J. CNN based spectral super-resolution of remote sensing images. Signal Process. 2020, 169, 107394. [Google Scholar] [CrossRef]
  32. Fubara, B.J.; Sedky, M.; Dyke, D. RGB to spectral reconstruction via learned basis functions and weights. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 480–481. [Google Scholar]
  33. Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; Wu, F. HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 939–947. [Google Scholar]
  34. Zhao, Y.; Po, L.M.; Yan, Q.; Liu, W.; Lin, T. Hierarchical regression network for spectral reconstruction from RGB images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 422–423. [Google Scholar]
  35. Arad, B.; Ben-Shahar, O. Sparse recovery of hyperspectral signal from natural RGB images. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part VII 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 19–34. [Google Scholar]
  36. Nguyen, R.M.; Prasad, D.K.; Brown, M.S. Training-based spectral reconstruction from a single RGB image. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part VII 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 186–201. [Google Scholar]
  37. Sharma, G.; Wang, S. Spectrum recovery from colorimetric data for color reproductions. In Proceedings of the Color Imaging: Device-Independent Color, Color Hardcopy, and Applications VII, San Jose, CA, USA, 19–25 January 2002; SPIE: Bellingham, WA, USA, 2001; Volume 4663, pp. 8–14. [Google Scholar]
  38. Ribés, A.; Schmit, F. Reconstructing spectral reflectances with mixture density networks. In Proceedings of the Conference on Colour in Graphics, Imaging, and Vision, Poitiers, France, 2–5 April 2002; Society of Imaging Science and Technology: Springfield, VA, USA, 2002; Volume 1, pp. 486–491. [Google Scholar]
  39. Lin, Y.T.; Finlayson, G.D. On the optimization of regression-based spectral reconstruction. Sensors 2021, 21, 5586. [Google Scholar] [CrossRef] [PubMed]
  40. Aeschbacher, J.; Wu, J.; Timofte, R. In defense of shallow learned spectral reconstruction from RGB images. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 471–479. [Google Scholar]
  41. Stiebel, T.; Merhof, D. Brightness invariant deep spectral super-resolution. Sensors 2020, 20, 5789. [Google Scholar] [CrossRef]
  42. Uzair, M.; Mahmood, A.; Mian, A. Hyperspectral face recognition with spatiospectral information fusion and PLS regression. IEEE Trans. Image Process. 2015, 24, 1127–1137. [Google Scholar] [CrossRef] [PubMed]
  43. Parmar, M.; Lansel, S.; Wandell, B.A. Spatio-spectral reconstruction of the multispectral datacube using sparse recovery. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 473–476. [Google Scholar]
  44. Zhang, J.; Su, R.; Fu, Q.; Ren, W.; Heide, F.; Nie, Y. A survey on computational spectral reconstruction methods from RGB to hyperspectral imaging. Sci. Rep. 2022, 12, 11905. [Google Scholar] [CrossRef] [PubMed]
  45. Lin, Y.-T.; Finlayson, G.D. Physically plausible spectral reconstruction. Sensors 2020, 20, 6399. [Google Scholar] [CrossRef]
  46. Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; Pfister, H.; Timofte, R.; Van Gool, L. MST++: Multi-stage spectral-wise transformer for efficient spectral reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 745–755. [Google Scholar]
  47. Lin, Y.T.; Finlayson, G.D. A rehabilitation of pixel-based spectral reconstruction from RGB images. Sensors 2023, 23, 4155. [Google Scholar] [CrossRef] [PubMed]
  48. Le Moan, S.; George, S.T.; Pedersen, M.; Blahová, J.; Hardeberg, J.Y. A database for spectral image quality. In Proceedings of the Image Quality and System Performance XII, San Francisco, CA, USA, 8–12 February 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9396, pp. 225–232. [Google Scholar]
  49. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S. Generalized Assorted Pixel Camera: Post-Capture Control of Resolution, Dynamic Range and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed]
  50. Li, Y.; Kind, T.; Folz, J.; Vaniya, A.; Mehta, S.S.; Fiehn, O. Spectral entropy outperforms MS/MS dot product similarity for small-molecule compound identification. Nat. Methods 2021, 18, 1524–1531. [Google Scholar] [CrossRef] [PubMed]
  51. CIE. Colorimetry, 3rd ed.; Technical Report CIE 15:2004; CIE: Vienna, Austria, 2004. Available online: https://cielab.xyz/pdf/cie.15.2004%20colorimetry.pdf (accessed on 9 May 2024).
  52. Burns, S.A. Chromatic adaptation transform by spectral reconstruction. Color Res. Appl. 2019, 44, 682–693. [Google Scholar] [CrossRef]
Figure 1. Spectral reconstruction from an RGB image, where the spectral reconstruction model estimates the original spectral information from the RGB image.
Figure 2. Spectral power distribution of the used illuminants.
Figure 3. Colorimetric analysis between spectral reconstructed data and the original spectral image.
Figure 4. (Left): Original spectral band at 410 nm. (Middle): Heat map of the absolute difference between the spectral image reconstructed by the A++ model and the original spectral image. (Right): Heat map of the absolute difference between the spectral image reconstructed by MST++ and the original spectral image.
Figure 5. (a) Comparison of the eigenvectors of the first two components from the PCA. (b) PCA distribution between the original spectral image (red), MST++-reconstructed spectral image (blue), and A++-reconstructed spectral image (green), while the black area is a combination of the distributions for sample Orange.
Figure 6. (a) Comparison of the eigenvectors of the first two components from the PCA. (b) PCA distribution between the original spectral image (red), MST++-reconstructed spectral image (blue), and A++-reconstructed spectral image (green), while the black area is a combination of the distributions, for sample Balloons.
Figure 7. (a) Comparison of the eigenvectors of the first two components from the PCA. (b) PCA distribution between the original spectral image (red), MST++-reconstructed spectral image (blue), and A++-reconstructed spectral image (green), while the black area is a combination of the distributions, for sample Painting.
Figure 8. Sample Balloons from the CAVE dataset. (Left): Original RGB image. (Middle): Delta E Map from the A++ reconstruction model. (Right): Delta E Map from the MST++ reconstruction model.
Figure 9. Sample Painting from the SIDQ dataset. (Left): Original RGB image. (Middle): Delta E Map from the A++ reconstruction model. (Right): Delta E Map from the MST++ reconstruction model.
Table 1. Performance-based metrics between the original spectral data and the reconstructed spectral data for the SIDQ dataset. Arrows indicate the performance trend: ↑ denotes that higher values are better, and ↓ denotes that lower values are better. Each cell lists A++ / MST++.

Scene | PSNR ↑ | SSIM ↑ | MRAE ↓ | ES ↑
Cork | 33.46 / 38.21 | 0.9907 / 0.9982 | 0.153 / 0.063 | 0.967 / 0.952
Hat | 27.03 / 30.13 | 0.9829 / 0.9954 | 0.234 / 0.109 | 0.899 / 0.962
Leaves | 36.34 / 37.96 | 0.9932 / 0.9946 | 0.132 / 0.076 | 0.988 / 0.915
Orange | 30.38 / 34.27 | 0.9367 / 0.9971 | 0.266 / 0.090 | 0.937 / 0.931
Painting | 34.36 / 36.54 | 0.9919 / 0.9979 | 0.094 / 0.075 | 0.969 / 0.939
Paper 1 | 22.37 / 28.42 | 0.9658 / 0.9836 | 0.230 / 0.163 | 0.950 / 0.974
Skin 1 | 31.63 / 40.95 | 0.9881 / 0.9987 | 0.197 / 0.061 | 0.939 / 0.984
Skin 2 | 25.84 / 29.17 | 0.9877 / 0.9939 | 0.202 / 0.123 | 0.943 / 0.917
Wood | 35.93 / 39.31 | 0.9947 / 0.9984 | 0.1313 / 0.076 | 0.937 / 0.864
Average | 30.70 / 34.65 | 0.9831 / 0.9953 | 0.182 / 0.092 | 0.943 / 0.936
Table 2. Performance-based metrics between the original spectral data and the reconstructed spectral data for the CAVE dataset. Arrows indicate the performance trend: ↑ denotes that higher values are better, and ↓ denotes that lower values are better. Each cell lists A++ / MST++.

Scene | PSNR ↑ | SSIM ↑ | MRAE ↓ | ES ↑
Balloons | 24.89 / 26.10 | 0.9674 / 0.9927 | 0.4119 / 0.146 | 0.902 / 0.913
Beads | 25.07 / 28.76 | 0.9855 / 0.9952 | 0.2776 / 0.0968 | 0.9457 / 0.7791
CD | 28.49 / 35.15 | 0.946 / 0.997 | 0.575 / 0.072 | 0.9373 / 0.7997
Chart | 22.89 / 27.26 | 0.9674 / 0.9806 | 0.415 / 0.227 | 0.9561 / 0.8665
Clay | 29.94 / 33.46 | 0.9850 / 0.9968 | 0.294 / 0.063 | 0.8992 / 0.8949
Cloth | 25.85 / 29.98 | 0.9771 / 0.9960 | 0.3364 / 0.1208 | 0.9307 / 0.8461
Egyptian Statue | 26.16 / 28.18 | 0.9676 / 0.9942 | 0.4478 / 0.1144 | 0.8114 / 0.6485
Face | 21.66 / 24.59 | 0.9845 / 0.9913 | 0.2943 / 0.1648 | 0.8806 / 0.7649
Beers | 25.86 / 28.75 | 0.9908 / 0.9883 | 0.1749 / 0.1996 | 0.8050 / 0.9753
Food | 29.09 / 32.00 | 0.9906 / 0.9963 | 0.2297 / 0.0854 | 0.8676 / 0.8477
Lemon Slices | 31.85 / 34.14 | 0.9915 / 0.9971 | 0.2195 / 0.092 | 0.8336 / 0.8066
Lemon | 24.36 / 27.79 | 0.9909 / 0.9939 | 0.2224 / 0.1319 | 0.8643 / 0.7872
Peppers | 26.31 / 28.76 | 0.9909 / 0.9945 | 0.2231 / 0.1119 | 0.9549 / 0.8100
Strawberries | 29.65 / 31.59 | 0.9620 / 0.9961 | 0.4762 / 0.1035 | 0.8961 / 0.7992
Sushi | 34.70 / 39.97 | 0.9756 / 0.9985 | 0.3892 / 0.0465 | 0.8892 / 0.8674
Tomatoes | 35.09 / 39.25 | 0.9708 / 0.9984 | 0.4267 / 0.0472 | 0.8034 / 0.8144
Feathers | 22.24 / 26.31 | 0.9800 / 0.9930 | 0.3276 / 0.1273 | 0.8974 / 0.7680
Flowers | 22.93 / 25.62 | 0.9632 / 0.9924 | 0.4645 / 0.1375 | 0.8779 / 0.7484
Glass Tiles | 26.48 / 28.84 | 0.9864 / 0.9950 | 0.2685 / 0.1153 | 0.9486 / 0.7792
Hairs | 24.24 / 25.04 | 0.9909 / 0.99179 | 0.2167 / 0.1490 | 0.8960 / 0.8563
Jelly Beans | 24.42 / 25.21 | 0.9864 / 0.9925 | 0.2645 / 0.1492 | 0.9040 / 0.7114
Oil Painting | 25.08 / 27.05 | 0.9920 / 0.9935 | 0.2004 / 0.1420 | 0.9532 / 0.9113
Paints | 21.26 / 22.23 | 0.9617 / 0.9889 | 0.4419 / 0.1780 | 0.9485 / 0.8869
Photo and Face | 20.48 / 25.73 | 0.9858 / 0.9923 | 0.2865 / 0.1497 | 0.9125 / 0.7178
Pompoms | 23.95 / 25.18 | 0.9600 / 0.9919 | 0.4554 / 0.1491 | 0.9575 / 0.8082
Apples | 29.88 / 31.28 | 0.9639 / 0.9959 | 0.4711 / 0.0973 | 0.8320 / 0.7155
Peppers | 21.66 / 23.93 | 0.9759 / 0.9906 | 0.3579 / 0.1679 | 0.9196 / 0.7983
Sponges | 21.63 / 22.43 | 0.9635 / 0.9828 | 0.4141 / 0.1967 | 0.9675 / 0.8429
Stuffed Toys | 25.08 / 26.87 | 0.9743 / 0.9933 | 0.3747 / 0.1237 | 0.9291 / 0.8464
Superballs | 33.04 / 34.96 | 0.9791 / 0.9973 | 0.3532 / 0.0596 | 0.8943 / 0.8665
Thread Spools | 25.19 / 28.28 | 0.9885 / 0.9942 | 0.2506 / 0.1273 | 0.8529 / 0.7596
Average | 26.45 / 29.12 | 0.9826 / 0.9901 | 0.3521 / 0.1298 | 0.9289 / 0.8874
Table 3. $\Delta E^*_{ab}$ difference between the colour images computed from the reconstructed data and the original data for the SIDQ dataset under the considered lights. Each cell lists A++ / MST++.

Scene | CIE D65 | CIE A | LED-B1
Cork | 0.440 / 0.203 | 0.455 / 0.229 | 0.484 / 0.2
Hat | 0.650 / 0.764 | 0.795 / 0.604 | 0.950 / 0.703
Leaves | 0.503 / 0.310 | 0.578 / 0.365 | 0.523 / 0.346
Orange | 0.306 / 0.654 | 0.569 / 0.462 | 0.481 / 0.599
Painting | 0.287 / 0.293 | 0.301 / 0.305 | 0.310 / 0.287
Paper 1 | 0.592 / 0.547 | 0.785 / 0.497 | 0.599 / 0.516
Skin 1 | 0.385 / 0.240 | 0.812 / 0.254 | 0.874 / 0.226
Skin 2 | 0.270 / 0.217 | 0.737 / 0.256 | 0.721 / 0.217
Wood | 0.370 / 0.299 | 0.683 / 0.334 | 0.614 / 0.303
Average | 0.422 / 0.391 | 0.635 / 0.367 | 0.617 / 0.377
Table 4. $\Delta E^*_{ab}$ difference between the colour images computed from the reconstructed data and the original data for the CAVE dataset under the considered lights. Each cell lists A++ / MST++.

Scene | CIE D65 | CIE A | LED-B1
Balloons | 0.58 / 0.27 | 0.680 / 0.27 | 0.633 / 0.263
Beads | 0.73 / 0.33 | 0.907 / 0.29 | 0.760 / 0.334
CD | 0.46 / 0.22 | 0.603 / 0.26 | 0.449 / 0.247
Chart | 0.32 / 0.21 | 0.419 / 0.24 | 0.354 / 0.229
Clay | 0.68 / 0.28 | 0.929 / 0.25 | 0.731 / 0.279
Cloth | 0.44 / 0.24 | 0.392 / 0.21 | 0.481 / 0.253
Egyptian Statue | 0.31 / 0.21 | 0.422 / 0.23 | 0.329 / 0.204
Face | 0.32 / 0.25 | 0.426 / 0.26 | 0.368 / 0.246
Beers | 0.31 / 0.21 | 0.361 / 0.21 | 0.355 / 0.205
Food | 0.54 / 0.23 | 0.677 / 0.19 | 0.581 / 0.215
Lemon Slices | 0.37 / 0.25 | 0.472 / 0.26 | 0.395 / 0.253
Lemon | 0.52 / 0.28 | 0.804 / 0.27 | 0.523 / 0.285
Peppers | 0.63 / 0.27 | 0.835 / 0.275 | 0.679 / 0.287
Strawberries | 0.43 / 0.29 | 0.541 / 0.271 | 0.456 / 0.268
Sushi | 0.37 / 0.19 | 0.515 / 0.214 | 0.407 / 0.217
Tomatoes | 0.36 / 0.21 | 0.506 / 0.201 | 0.390 / 0.205
Feathers | 0.52 / 0.27 | 0.720 / 0.225 | 0.569 / 0.249
Flowers | 0.43 / 0.27 | 0.583 / 0.245 | 0.501 / 0.249
Glass Tiles | 0.60 / 0.22 | 0.713 / 0.237 | 0.627 / 0.247
Hairs | 0.33 / 0.22 | 0.378 / 0.236 | 0.311 / 0.221
Jelly Beans | 0.51 / 0.26 | 0.571 / 0.264 | 0.521 / 0.261
Oil Painting | 0.47 / 0.26 | 0.601 / 0.261 | 0.474 / 0.259
Paints | 0.39 / 0.20 | 0.498 / 0.212 | 0.445 / 0.212
Photo and Face | 0.29 / 0.26 | 0.398 / 0.274 | 0.327 / 0.255
Pompoms | 0.65 / 0.29 | 0.845 / 0.209 | 0.744 / 0.236
Apples | 0.45 / 0.23 | 0.580 / 0.242 | 0.492 / 0.249
Peppers | 0.60 / 0.30 | 0.999 / 0.287 | 0.669 / 0.306
Sponges | 0.81 / 0.29 | 0.130 / 0.240 | 0.925 / 0.265
Stuffed Toys | 0.45 / 0.24 | 0.545 / 0.202 | 0.474 / 0.242
Superballs | 0.53 / 0.28 | 0.660 / 0.243 | 0.531 / 0.272
Thread Spools | 0.41 / 0.24 | 0.573 / 0.271 | 0.413 / 0.275
Average | 0.477 / 0.259 | 0.589 / 0.243 | 0.513 / 0.251