Article

Spectral Color Management in Virtual Reality Scenes

by Francisco Díaz-Barrancas 1,*, Halina Cwierz 1, Pedro J. Pardo 1, Ángel Luis Pérez 2 and María Isabel Suero 2

1 Department of Computer and Network Systems Engineering, University of Extremadura, E06800 Mérida, Spain
2 Department of Physics, University of Extremadura, E06071 Badajoz, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(19), 5658; https://doi.org/10.3390/s20195658
Submission received: 21 July 2020 / Revised: 17 September 2020 / Accepted: 28 September 2020 / Published: 3 October 2020
(This article belongs to the Special Issue Color & Spectral Sensors)

Abstract
Virtual reality has reached great maturity in recent years. However, the quality of its visual appearance still leaves room for improvement. One of the most difficult features to reproduce in real-time 3D rendered virtual scenes is color fidelity, since many factors influence the faithful reproduction of color. In this paper we introduce a method for improving color fidelity in virtual reality systems based on real-time 3D rendering. We developed a color management system for 3D rendered scenes divided into two levels. At the first level, color management is applied only to the light sources defined inside the virtual scene. At the second level, we applied spectral techniques over the hyperspectral textures of 3D objects to obtain a higher degree of color fidelity. To illustrate the application of this color management method, we simulated a virtual version of the Ishihara test for the detection of color vision deficiencies.

1. Introduction

Technology is constantly evolving, offering possibilities that were previously unthinkable. This is the case for image sensors that allow us to capture the three spatial dimensions and the temporal dimension of the real world and virtually represent that real world through digital images, digital videos, and, in recent years, virtual and augmented reality devices. In all cases, color is an essential part of the virtual representation of the world using digital media. This is because the recipient of that representation is a human being. For humans, color is a fundamental part of the external information they receive from their environment through their senses. The sense of sight provides more than 80% of the information received by the brain through the senses [1].
The color information captured by imaging devices often requires processing techniques to ensure correct color reproduction on various digital devices. First, it is necessary to carry out a correct chromatic characterization of both the capture device and the reproduction device. In this way, it is possible to establish a one-to-one correspondence between the device-dependent color coordinates (typically RGB or CMYK) and the device-independent color coordinates (CIE XYZ or CIE Lab). Over the last few years, a multitude of studies have been carried out on the color characterization of devices, from CRT and TFT technology to near-eye displays [2,3,4,5].
The difference between the calibration and the colorimetric characterization of a color display device is often a source of confusion [6]. Calibration consists of setting the device's state to a known condition. This can be done, for example, by fixing the white point of the medium and the gain and offset of each channel of a cathode ray tube. This process ensures that the device produces consistent results, and it can be completed without any information on the relationship between the device's input coordinates and the colorimetric coordinates of the output. The colorimetric characterization of the device, however, requires this relationship to be known: it establishes the relationship between the device's input coordinates (typically RGB values in displays) and device-independent coordinates (e.g., the CIE 1931 XYZ tristimulus values). Presently, all display devices are digital, and the relation between digital and analog values is handled by a Digital-to-Analog Converter (DAC). Due to the large number of chromatic stimuli that can be shown by any digital device, the direct measurement of this relation is impossible; therefore, a mathematical model is applied, enabling one to reduce the number of measurements.
In addition, it is necessary to implement color management systems that allow the exchange of chromatic information between different types of devices, such as displays and printers, thereby adapting the chromatic information based on different reproduction media, different white point values, and different gamut values. Based on this need, a wide field of study of gamut mapping has been developed and has had great relevance in recent years [7]. Recently, research has been done on the color appearance in optical see-through augmented reality devices [8].
All these developments were designed primarily for producing digital images from real scenes captured by photographic techniques. The generation of synthetic digital images from 3D models, commonly known as 3D rendering, has been left out of this type of color management technique, since color, from beginning to end, has always been defined by native digital values such as RGB. The only color correction currently carried out is the calibration of the display devices to a standard configuration that allows a similar appearance on all displays. However, 3D development environments used in virtual reality have great versatility and computing power thanks to the GPUs, which are used to render at least 90 images per second for each eye [9,10].
There is a large difference between the digital images derived from photographic techniques and the digital images from 3D rendering scenes. In the first case, the image is captured by a sensor, typically CCD or CMOS, located in the image plane of the camera’s optical system, where photons arrive from different parts of the scene. In the case of a digital image obtained by rendering a 3D scene (Figure 1), a process of ray tracing is carried out in the opposite direction to that of traditional optical systems, i.e., from the eye or the camera to the objects constituting the scene, passing through a matrix of points that correspond to the future pixels [11].
From a visual point of view, Virtual Reality (VR) technology is based on the generation of two different images corresponding to two different views of the same three-dimensional scene. One of these images is shown to each eye, thereby covering a large field-of-view (FOV) and thus producing a stereoscopic image and a feeling of depth. The particularity of this stereoscopic view is that it is generated (“rendered”, in the specific language of graphic computing) while considering the position of the observer in real time, with a minimum delay and a high refresh rate of approximately 90 Hz. The position of the head and the body of the user of the VR device are continuously calculated through multiple sensors in such a way that the view of the scene corresponds exactly to the position of the observer. In this way, it is possible to create the visual sensation of immersion in a three-dimensional virtual world [13].
Large multinational companies are introducing virtual reality devices to the consumer market based on Head Mounted Displays (HMDs) with two different types of hardware: devices that do not have their own graphic hardware and need a personal computer for the task [14,15] and others that use a mobile phone or other specific graphic hardware without a personal computer [16,17]. There are significant differences in performance between both types of devices. In this work, we refer exclusively to the first type—devices associated with a personal computer with a dedicated graphics card.
The virtual world generated in VR devices is programmable and can be created in the image and likeness of the real world or not, as needed. Currently, there are two main software platforms for developing virtual reality content: Unreal Engine [18] and Unity Game Engine [19]. In both platforms, mathematical functions are used as basic rules of internal functioning that seek to reflect, to a greater or lesser extent, the real world through physical laws [20]. The geometry of the scene is supplied to the graphics card, and this hardware then projects the geometry and breaks it down into vertices (Figure 2). Then, the vertices are transformed and split into pixels, which obtain a final rendering treatment before they are passed to the screen through the Frame Buffer.
To handle the lighting and shading conditions, the graphics engine usually uses Physically-Based Rendering algorithms that apply a bidirectional reflectance distribution function (BRDF) model with four main components (diffuse, specular, normal, and smoothness). The diffuse component corresponds to the material color under perfectly diffuse illumination, the specular component corresponds to the surface highlight color, and the normal and smoothness components both describe the surface texture. It is, therefore, possible to obtain rendered scenes with a high degree of visual fidelity when treating the light–matter interactions this way [21,22].
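As a rough illustration of how these components combine, the sketch below evaluates a single directional light with a Lambert diffuse term and a Blinn-Phong-style specular lobe. This is only a minimal C# approximation; the actual engine shaders implement considerably more elaborate physically-based models.

```csharp
using UnityEngine;

// Minimal illustrative sketch, NOT the engine's actual shader code:
// combines the diffuse and specular components of a material for one
// directional light, in the spirit of the four-component model above.
public static class SimpleBrdf
{
    public static Color Shade(Color albedo, Color specular, float smoothness,
                              Vector3 normal, Vector3 lightDir, Vector3 viewDir,
                              Color lightColor)
    {
        float nDotL = Mathf.Max(0f, Vector3.Dot(normal, lightDir));

        // Diffuse component: material color under perfectly diffuse illumination.
        Color diffuse = albedo * lightColor * nDotL;

        // Specular component: highlight color whose sharpness grows with smoothness.
        Vector3 h = (lightDir + viewDir).normalized;
        float shininess = Mathf.Pow(2f, 1f + 10f * smoothness);
        float spec = Mathf.Pow(Mathf.Max(0f, Vector3.Dot(normal, h)), shininess) * nDotL;

        return diffuse + specular * lightColor * spec;
    }
}
```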
Physically-Based Rendering (PBR) includes a combination of artwork, physical properties, and material shaders that work together to give consistency to the graphic representation. Using the underlying physical principles of how light and surfaces interact, we can create images that work in all lighting conditions without special cases. The combination of the high computational power of current GPU-based virtual reality systems and the use of a physically-based rendering engine for lighting and shadowing a 3D virtual scene gives us the opportunity to determine whether reliable color reproduction can be obtained on these types of systems by comparing a real and a virtual scene. The starting hypothesis of this work is that it is possible to improve the color reproduction fidelity of real-time 3D rendered scenes in virtual reality systems, and we propose this as the main objective of this work. Notably, the key question is whether the improvement of color fidelity can be achieved in a real-time 3D rendered scene over the lights and 3D objects themselves, rather than over the final 2D images sent to the HMD. To do so, we use two different levels of color fidelity: the first based on color management of only the light sources defined in the 3D scene, retaining the color textures of the 3D virtual objects, and the second using spectral rendering from hyperspectral textures associated with the 3D objects. The secondary goal of this work is to determine whether the first level of color management is sufficient to obtain a reliable reproduction or whether it is necessary to implement both levels. To illustrate the operation of this color management system applied to VR, we will create a virtual version of the well-known Ishihara Test for colorblindness, which will help us assess the fidelity of the reproduction of a real test in a virtual scene.

2. Materials and Methods

The technical equipment used in this work comprises an HTC Vive HMD driven by a custom-made PC with an i7 processor (Intel, Santa Clara, CA, USA), 16 GB of RAM, and a GeForce RTX 2060 Super graphics card (Nvidia Corporation, Santa Clara, CA, USA) running the Windows 10 operating system (Microsoft, Redmond, WA, USA). The measurement instrument employed in this work was a CS-2000 tele-spectroradiometer (Konica Minolta, Tokyo, Japan) with a spectral resolution of 1 nm between 380 and 780 nm, a radiance measurement error below 2%, and a CIE 1931 chromaticity error of x = 0.0015, y = 0.0010 for illuminant A.
The methodology used in this work can be divided into two steps: (1) Colorimetric characterization of the VR display and (2) implementation of a color management system adapted to VR. Each step is explained in detail below.

2.1. Chromatic Characterization of a VR Device

The first step in using a virtual reality system for tasks related to color vision research is the chromatic characterization of the Head Mounted Display (HMD). Each device of this type has its own specific characteristics in terms of the chromaticity of its primary colors and its media white point, as well as the relationship between the digital values of the digital-to-analog converter (DAC) and the associated XYZ tristimulus values.
To chromatically characterize the VR device used in this study, spectroradiometric measurements were made with a tele-spectroradiometer aligned with the optical axis of the lenses with which the HMD is equipped. We measured the display and lens assembly as a whole, leaving the measurement of the screen with and without lenses at different points of the screen for future studies. These lenses allow the user to correctly position their eyes in front of the displays and obtain an image covering a large visual field. On the negative side, the lenses magnify the apparent size of the pixels, making them perceptible to users. The values of chromaticity and the average relative luminance of both displays are shown in Table 1.
The measured spectral power distribution of the RGB primaries is shown in Figure 3. The spectral radiance of each channel reveals the OLED nature of these displays, with a narrow bandwidth for each RGB channel.
The color gamut is the subset of colors that can be accurately represented in a given color space or by a certain output device, such as a display. In this work, we measured the color gamut of our HTC Vive device and compared it with that of other devices, such as an Oculus Rift CV1 and classic CRT and TFT monitors (Figure 4).
We analyzed the relationship between the values of the digital-to-analog converter (DAC) of each RGB channel and their corresponding values of luminance Y (Figure 5). The measurements were made using our tele-spectroradiometer for each of the R, G, and B chromatic channels independently, with a range of DAC values from 0 to 255 and a step of five units.
As a result of this analysis and considering the computational time constraints of VR systems, a linear chromatic characterization model preceded by a gamma linearization stage was used. This simplified color characterization model is widely used in color management [23].
$$R' = R^{\gamma_R}, \qquad G' = G^{\gamma_G}, \qquad B' = B^{\gamma_B} \tag{1}$$

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix}
X_{R_{\max}} & X_{G_{\max}} & X_{B_{\max}} \\
Y_{R_{\max}} & Y_{G_{\max}} & Y_{B_{\max}} \\
Z_{R_{\max}} & Z_{G_{\max}} & Z_{B_{\max}}
\end{pmatrix}
\begin{pmatrix} R' \\ G' \\ B' \end{pmatrix} \tag{2}$$
This model uses a typical linear transformation between the R′G′B′ values and the normalized XYZ tristimulus values with a 3 × 3 matrix (Equation (2)). The R′G′B′ values were obtained after a gamma correction of the normalized RGB values that guaranteed the linearity of the system (Equation (1)). Table 1 shows the gamma value of each RGB channel and the corresponding R² goodness-of-fit statistic.
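The following C# sketch illustrates this two-step model: per-channel gamma linearization (Equation (1)) followed by the 3 × 3 linear transform of Equation (2). The gamma values are those reported in Table 1, while the matrix entries are placeholders standing in for the tristimulus values of each primary measured at its maximum DAC value.

```csharp
using UnityEngine;

// Sketch of the simplified display characterization model of Equations (1)-(2).
// The matrix entries below are illustrative placeholders; in practice they come
// from the spectroradiometric measurements of the HMD primaries.
public static class DisplayModel
{
    static readonly float gammaR = 2.43f, gammaG = 2.38f, gammaB = 2.38f; // Table 1

    // Tristimulus values of each primary at maximum DAC value (placeholders).
    static readonly float[,] M =
    {
        { 0.41f, 0.36f, 0.18f },   // X_Rmax, X_Gmax, X_Bmax
        { 0.21f, 0.72f, 0.07f },   // Y_Rmax, Y_Gmax, Y_Bmax
        { 0.02f, 0.12f, 0.95f }    // Z_Rmax, Z_Gmax, Z_Bmax
    };

    // rgb: normalized digital values in [0,1]; returns normalized XYZ.
    public static Vector3 RgbToXyz(Vector3 rgb)
    {
        float r = Mathf.Pow(rgb.x, gammaR);   // Equation (1)
        float g = Mathf.Pow(rgb.y, gammaG);
        float b = Mathf.Pow(rgb.z, gammaB);

        return new Vector3(                   // Equation (2)
            M[0, 0] * r + M[0, 1] * g + M[0, 2] * b,
            M[1, 0] * r + M[1, 1] * g + M[1, 2] * b,
            M[2, 0] * r + M[2, 1] * g + M[2, 2] * b);
    }
}
```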
To verify the accuracy of the color characterization model, we measured 50 random RGB color samples. These values were compared to those predicted by the mathematical model, obtaining an average color difference of ΔE00 = 1.8. All measurement data, together with the MATLAB script used to obtain the chromatic characterization model, are available as Supplementary Material for this publication.

2.2. Implementation of a Color Management Procedure Adapted to VR Systems

After performing the spectral characterization of the HMD, the next necessary step to obtain a faithful reproduction of the color inside a Virtual Reality system is to introduce a color management procedure into the 3D graphics engine. The different combinations of lighting configurations in the 3D software used are practically unlimited; for example, it is possible to program different ways to perform the rendering using different shaders. The final appearance of the virtual reality scene will depend on the color of the light source used, the color of the material, the gloss, and the interaction of the different elements that form the virtual scene with shading, primary and secondary reflections, etc. For all these reasons, we assumed a series of simplifications that allow us to deal with the problem:
  • We focused on color alone, disregarding glossy objects and deactivating secondary reflections.
  • We limited the 3D software to real-time processing, disabling the Baked and Global Illumination options.
  • We used Unity’s standard shader and configured the player using its linear option with forward rendering activated.
By selecting these options, we aimed to establish a configuration to analyze and compare the results of the implemented color management system.
3D scene rendering engines do not apply any color management system by default. The native format used to define both light sources and object textures in this type of software is the sRGB digital color space with a bit depth of 8 bits per channel. This sRGB space is widely used in computer science and image processing and is characterized by a specific gamut, defined by the chromaticity of its primaries, and by a non-linear transfer function (gamma) of approximately 2.2. The media white point of this color space is D65.
The color management procedure implemented in this work has two levels of accuracy. At the first level, a C# script was implemented that calculates the RGB values of a simulated light source in the VR scene starting from the spectral power distribution of the source and the spectral characterization of the HMD used.
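A condensed sketch of what such a first-level script might look like is shown below: the spectral power distribution of the light source is integrated against the CIE 1931 color-matching functions to obtain XYZ, and the result is mapped to device RGB by inverting the characterization model of Section 2.1. The class and field names, as well as the inverse-matrix entries, are illustrative; the complete script is provided as Supplementary Material.

```csharp
using UnityEngine;

// First-level color management (sketch): derive the RGB color of a virtual
// light source from its spectral power distribution (SPD) and the display
// characterization. Names and inverse-matrix entries are illustrative.
public class LightSourceColor : MonoBehaviour
{
    public Light sceneLight;            // directional light defined in the scene
    public float[] spd;                 // SPD sampled from 380 to 780 nm
    public float[] cmfX, cmfY, cmfZ;    // CIE 1931 color-matching functions, same sampling

    static readonly float[,] Minv =     // inverse of the characterization matrix (placeholder)
    {
        {  3.24f, -1.54f, -0.50f },
        { -0.97f,  1.88f,  0.04f },
        {  0.06f, -0.20f,  1.06f }
    };
    static readonly float invGammaR = 1f / 2.43f, invGammaG = 1f / 2.38f, invGammaB = 1f / 2.38f;

    void Start()
    {
        // Integrate the SPD against the color-matching functions (relative colorimetry).
        float X = 0f, Y = 0f, Z = 0f;
        for (int i = 0; i < spd.Length; i++)
        {
            X += spd[i] * cmfX[i];
            Y += spd[i] * cmfY[i];
            Z += spd[i] * cmfZ[i];
        }
        float k = 1f / Y;               // normalize so that the source has Y = 1
        X *= k; Y *= k; Z *= k;

        // Linear RGB via the inverse characterization matrix, then inverse gamma.
        float r = Minv[0, 0] * X + Minv[0, 1] * Y + Minv[0, 2] * Z;
        float g = Minv[1, 0] * X + Minv[1, 1] * Y + Minv[1, 2] * Z;
        float b = Minv[2, 0] * X + Minv[2, 1] * Y + Minv[2, 2] * Z;

        sceneLight.color = new Color(
            Mathf.Pow(Mathf.Clamp01(r), invGammaR),
            Mathf.Pow(Mathf.Clamp01(g), invGammaG),
            Mathf.Pow(Mathf.Clamp01(b), invGammaB));
    }
}
```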
The second level of precision requires the introduction of the spectral texture of the virtual objects present in the virtual scene. For this part, we developed a C# script that performs all the computational processing of the virtual object texture, thereby generating a different RGB texture for each lighting change. Notably, although in virtual reality systems the rendering is performed with a minimum frequency of 90 Hz, this rendering is done with the same light sources and RGB textures, unless they are changed at run time.
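The sketch below outlines the per-pixel computation that this second-level script performs whenever the light source changes: the spectral reflectance stored in the hyperspectral texture is multiplied band by band by the spectral power distribution of the illuminant, integrated against the color-matching functions, and written back as an ordinary RGB texture. The data layout and helper names are illustrative rather than the exact code of the Supplementary Material.

```csharp
using UnityEngine;

// Second-level color management (sketch): rebuild the RGB texture of a 3D object
// from its hyperspectral texture for the current virtual light source.
// The layout reflectance[pixel, band] and the delegate are illustrative.
public static class SpectralTexture
{
    public static Texture2D Rebuild(float[,] reflectance, int width, int height,
                                    float[] illuminantSpd,
                                    float[] cmfX, float[] cmfY, float[] cmfZ,
                                    System.Func<Vector3, Color> xyzToDisplayRgb)
    {
        int bands = illuminantSpd.Length;

        // Normalization constant so that a perfect diffuser gets Y = 1.
        float k = 0f;
        for (int b = 0; b < bands; b++) k += illuminantSpd[b] * cmfY[b];
        k = 1f / k;

        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int p = y * width + x;
                float X = 0f, Y = 0f, Z = 0f;
                for (int b = 0; b < bands; b++)
                {
                    // Reflected spectral radiance for this pixel and band.
                    float radiance = reflectance[p, b] * illuminantSpd[b];
                    X += radiance * cmfX[b];
                    Y += radiance * cmfY[b];
                    Z += radiance * cmfZ[b];
                }
                tex.SetPixel(x, y, xyzToDisplayRgb(new Vector3(X * k, Y * k, Z * k)));
            }
        }
        tex.Apply();    // upload the recomputed texture to the GPU
        return tex;
    }
}
```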
Figure 6 shows a flux diagram for both levels of the color management procedure developed. The first level is only applied to the virtual light sources defined inside the virtual scene. The second level requires one to apply the first one and calculate the image texture of each 3D object starting from its hyperspectral image. The entire C# script for both procedures is attached as Supplementary Material for this paper.
To analyze the results obtained by introducing both levels of color management, we used a sample of the ColorChecker color chart (X-Rite, USA). This color chart is widely used in color management tasks in both scientific and professional fields. The manufacturer of this color chart provides the sRGB reference values that the color patches must present under D65 lighting. These values are presented in the first three columns of Table 2. The ColorChecker was scanned by a 3D color scanner, which provided the geometry of the object as well as the color texture. The geometry is defined in an OBJ file as a point cloud. The color texture is defined by an 8-bit-per-channel BMP color file obtained under the D50 LED light source with which the scanner is equipped.
Our main objective is to compare the efficiency of different color management methods in real-time 3D virtual environments, such as VR. The great challenge of this work is to perform the rendering in real time, thereby solving the computational complexities that exist. To carry out this comparative study, we require not only the geometry and color texture of the ColorChecker but also the hyperspectral texture—that is, the spectral reflectance of each point of the color chart defined in the color texture file. Since this object is flat, we obtained the hyperspectral texture via a hyperspectral camera, model UHD 285 (Cubert GmbH, Ulm, Germany). In this way, we replaced the RGB color texture obtained from the 3D scanner with a hyperspectral texture defined between 400 and 1000 nm, using the 4 nm steps provided by the hyperspectral camera. Starting from this hyperspectral texture file, we calculated the average RGB values of each color patch of the ColorChecker corresponding to the D65 illuminant and the sRGB color space. We then compared these calculated RGB values with the theoretical RGB values provided by the manufacturer. Table 2 shows the reference values specified by the manufacturer and the values obtained in our calculations.
To study the effects of these two levels of implemented color management, we used four different virtual light sources: a D65 illuminant; a D65 simulator obtained from a commercial lightbooth in our laboratory featuring six-peak LED technology; a theoretical LED source composed of only two spectral peaks, chosen in such a way that the color of this source over a diffuse reflectance target coincides exactly with the color of the D65 illuminant; and a CIE D50 illuminant (Table 3). Figure 7 shows the spectral power distribution of all light sources employed in this work.
Our 3D graphics engine uses sRGB as the native color space, and this space uses a media white point corresponding to the D65 illuminant. The CIE 1931 XYZ tristimulus values of this illuminant, (95.047, 100.0, 108.88), correspond to the digital 8-bit RGB values (255, 255, 255). To prevent the incorrect definition of any light source whose X or Z values are above those corresponding to illuminant D65, we chose to work with normalized sources whose relative luminance is 85% of that of illuminant D65. Therefore, we define the custom illuminant D65 source as XYZ = (80.82, 85.00, 92.54), which corresponds to an RGB value of (237, 237, 237). Table 3 shows the XYZ and RGB values of the virtual light sources used, as well as their chromaticity values and Correlated Color Temperature (CCT).
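As a quick consistency check (assuming the standard sRGB transfer function), a relative luminance of 85% of the media white indeed encodes to the 8-bit digital value 237 used for these normalized sources:

```csharp
using System;

// Quick check: a linear relative luminance of 0.85 encodes, under the standard
// sRGB transfer function, to the 8-bit digital value 237 used above.
class SrgbCheck
{
    static double Encode(double linear) =>
        linear <= 0.0031308 ? 12.92 * linear
                            : 1.055 * Math.Pow(linear, 1.0 / 2.4) - 0.055;

    static void Main()
    {
        double encoded = Encode(0.85);                   // ≈ 0.931
        Console.WriteLine(Math.Round(encoded * 255.0));  // prints 237
    }
}
```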
The color of the objects present in a scene depends on the objects themselves but also on the light source illuminating them. The quality of a light source, in terms of the fidelity of the colors it produces in a scene compared to those obtained by illuminating the scene with a reference light source, can be quantified using the Color Fidelity Index Rf defined by the International Commission on Illumination (CIE) [24]. This value is included in Table 3, which describes the characteristics of the light sources used.
To assess the efficiency of the implemented color management system, a scene was designed in our 3D graphics engine in which only the virtual ColorChecker is illuminated by a single directional light source. The RGB values shown in Table 3 for the different virtual light sources were assigned to this directional source to simulate the appearance of the ColorChecker under different light sources. In this way, the effectiveness of the first level of color management was tested (first row of Figure 8). Subsequently, the second level of color management was enabled, in which the chromatic textures of the 3D objects were recalculated according to the light source used, in addition to the actions performed at the first level (second row of Figure 8).
Figure 8 shows the results obtained when applying the two levels of color management within the 3D scene equipped with a virtual ColorChecker. At first glance, it is difficult to see the difference between the two levels of color management, except in the case of the two-peak source. In this case, the color difference is evident and results from the low Color Fidelity Index of that source, which is composed of only two spectral peaks, one blue and one yellow, making it impossible for this light source to reproduce any reddish or greenish tone. To more accurately evaluate the efficiency of both levels of color management, Table 4 shows the average color differences for each RGB channel. The RGB values measured from the screenshots are compared to those theoretically calculated from the spectral reflectance of each color patch of the ColorChecker using the different light sources. These color differences were calculated on each digital RGB channel since RGB is the native value of the implemented color management system.
In all cases, an improvement in color fidelity can be observed when applying the second level of color management. However, this difference is small, except for the case of the light source composed of two spectral peaks. There is no absolute criterion for deciding when the first level of color management is sufficient and when both levels are necessary, since this depends on the spectral power distribution of the light source used and its interaction with the spectral reflectance of the materials of the objects in the VR scene. In this work, we point out that the Color Fidelity Index can indicate when a light source may require the second level of color management, but setting a reference CFI value requires further research.

3. Results

Color reproduction can be handled at different levels of exigency depending on the need for accurate color reproduction. One of the most demanding cases for color accuracy is testing for color vision deficiencies. To illustrate the results obtained in this work, a reduced version of the well-known Ishihara Test for colorblindness detection [25] was implemented virtually. First, a virtual lightbooth was designed using VR-compatible 3D rendering software, and then hyperspectral captures of the original test plates were introduced into the virtual lightbooth. We measured the color of the virtual version of the test and compared it to the color measured in the real test. Finally, the behavior of the virtual version of the Ishihara Test was validated with real users.

3.1. Design of the Virtual Lightbooth and Diffuse Reflectance Reference Pattern

To analyze the results obtained in this work, a virtual scene was created by simulating a real lightbooth, model LED Color Viewing (Just NormLicht GmbH, Weilheim an der Teck, Germany). For this purpose, the real scale of the lightbooth and the size and position of the LED light bulbs were reproduced. We made the necessary adjustments in the 3D design software to obtain the right illumination angle of the LED spotlights, as well as the lighting range and intensity parameters. We also created a virtual diffuse reflectance standard to perform the system calibration. Since the whole color management system is based on relative colorimetry, we need an element that allows us to determine where to locate the maximum luminance value. For this virtual lightbooth, we defined the same four virtual light sources employed in the previous step with the ColorChecker.
In this way, the final simulation scene was created and prepared to introduce the hyperspectral textures of the Ishihara plates. Figure 9 shows the final virtual scenario with the test plate of the Ishihara test. In this test plate, all observers must recognize the number 12.

3.2. Hyperspectral Textures

One of the most innovative aspects of this work is the introduction of hyperspectral textures associated with virtual 3D objects in a virtual reality graphics engine [26,27,28]. To illustrate this point, we created a virtual version of the Ishihara test for detecting colorblindness. For this purpose, we employed the database provided by the Color Imaging Lab of the University of Granada [29]. This database contains, among other elements, hyperspectral images of each plate of the Ishihara Test. The researchers of this lab also provided us with MATLAB code to process the original hyperspectral cubes corresponding to each Ishihara plate. Although the spectral range of the hyperspectral cubes is wider, the spectral range used to create the hyperspectral textures was 380–780 nm in 4 nm steps.
The virtual version of the Ishihara test consists of a reduced number of Ishihara plates because the only purpose of this test is to show the capabilities of this new color management system for managing hyperspectral images inside a Virtual Reality Scene with faithful color reproduction (see Figure 10). The Ishihara Test uses a reduced number of differently colored points on each slide, making it possible to easily identify the points that will be confusing to different types of defective observers based on the lines of confusion in the CIE 1931 diagram [30]. For example, to check whether correct color management was achieved in our virtual reality system and, therefore, obtain a virtual version of the original Ishihara test, we measured the 10 different colors that make up slide 3 of the virtual Ishihara test and the 10 colors that should be present, according to the colorimetric calculation from the hyperspectral images.
Table 5 shows the calculated chromaticity of 10 different color dots of plate number 3 starting from the hyperspectral image of this plate using CIE D65 as a light source. These 10 dots belong to 10 different colors labelled from P1 to P10, as shown in Figure 11. In the same way, the measured chromaticity of the virtual reality scene for these 10 color dots is shown on the right side of Table 5. The average color difference between the calculated and measured chromaticity of these 10 samples was below 1.0 units using the CIEDE2000 color difference formula. By checking each individual difference, it can be seen that the main difference is due to luminance rather than chromaticity.

3.3. Validation of the Procedure of Color Management in VR

Finally, a validation of the virtual version of the rapid test for detecting color vision deficiencies, based on a reduced version of the Ishihara Test, was conducted. This test was carried out on 10 observers previously assessed in our laboratory for their color perception abilities. Of these 10 observers, six had previously been classified as normal observers using the Farnsworth–Munsell 100 Hue test and the other four as color-deficient observers (one deutan and three protan). Table 6 shows the detailed responses for each plate of the virtual test, as well as the reference responses according to the Ishihara Test instructions. The results obtained with the virtual test match those obtained with the real test and the previous classification of the observers. The purpose of this test was to validate the color management procedure carried out in the virtual reality environment via a psychophysical test, rather than to validate the test itself, which would require a greater number of tests and observers.

4. Conclusions

The results of this work can be measured in terms of the color reproduction fidelity obtained through the two levels of color management implemented. From this point of view, the results indicate that the application of the first level of color management, which only affects the lights and employs 3D scanned textures, may be sufficient in cases where high accuracy color reproduction is not required, provided that the spectrum of the simulated light source does not differ much from the spectrum of the reference illuminant. Under these assumptions, color differences of about 2 RGB digital units are achieved. This is not the case if the simulated light source has a low Rf color rendering index. In this case, it will be necessary to apply both levels of color management using the hyperspectral textures of the objects represented in the virtual reality scene. Applying these two levels of color management in VR will result in a very high color fidelity, with an average color difference below one RGB digital unit on each channel. In this case, then, the only significant color difference will be that coming from the chromatic characterization of the HMD, which will depend on the characteristics of each display used in the HMD.
The first level of color management implemented can be applied to any 3D object captured using a 3D scanner or other 3D capture technology. However, the second level requires the use of a hyperspectral camera to capture the spectral reflectance of the materials that make up the object. In the case of objects with flat geometry, such as the Ishihara test plates used in this work, their use in virtual reality environments is immediate. In the case of objects with less simple geometry, it is necessary to use photogrammetric techniques, such as multi-view 3D reconstruction, to capture a virtual 3D object from a real 3D object. This technique belongs to the category of Structure from Motion (SfM) techniques, in which three-dimensional structures are estimated from two-dimensional image sequences. Traditionally, all these techniques are based on RGB images captured by one or several digital RGB cameras. Our approach to overcoming this limitation consists of substituting the RGB camera with a hyperspectral camera with multiple spectral channels. In this way, we could obtain both the 3D point cloud that defines the geometry of the 3D object and the associated hyperspectral texture. This is a line of research we are currently working on.
Considering the above, we can conclude that the implemented color management methods can be applied in virtual reality scenes to facilitate the correct simulation of scenes where light and color fidelity are important factors. Obviously, these results can also be applied to other, non-VR 3D rendering systems, but the challenge was to define a color management procedure for VR systems because of the high refresh rates such systems require.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/20/19/5658/s1, Source Code S1: Calibration Script. Source Code S2: Color Management Script.

Author Contributions

Conceptualization, F.D.-B., H.C., and P.J.P.; methodology, Á.L.P. and M.I.S.; software, F.D.-B., H.C., and P.J.P.; validation, Á.L.P., M.I.S., and H.C.; formal analysis, P.J.P. and F.D.-B.; investigation, F.D.-B., H.C., and P.J.P.; writing—original draft preparation, F.D.-B., H.C., and P.J.P.; writing—review and editing, F.D.-B., H.C., Á.L.P., M.I.S., and P.J.P.; supervision, P.J.P. and M.I.S.; All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the grants GR18131 and IB16004 of the Regional Government of the Junta de Extremadura and partially financed by the European Regional Development Fund.

Acknowledgments

We would like to thank the Color Imaging Lab at the University of Granada who provided us with their Ishihara hyperspectral database, which we used to develop the virtual test.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. LeGrand, Y.; ElHage, S.G. Physiological Optics; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  2. Berns, R.S. Methods for characterizing CRT displays. Displays 1996, 16, 173–182. [Google Scholar] [CrossRef]
  3. Pardo, P.J.; Pérez, A.L.; Suero, M.I. Validity of TFT-LCD displays for color vision deficiency research and diagnosis. Displays 2004, 25, 159–163. [Google Scholar] [CrossRef]
  4. Penczek, J.; Boynton, P.A.; Meyer, F.M.; Heft, E.L.; Austin, R.L.; Lianza, T.A.; Leibfried, L.V.; Gacy, L.W. 65-1: Distinguished Paper: Photometric and Colorimetric Measurements of Near-Eye Displays. SID Symp. Dig. Tech. Pap. 2017, 48, 950–953. [Google Scholar] [CrossRef]
  5. Penczek, J.; Boynton, P.A.; Meyer, F.M.; Heft, E.L.; Austin, R.L.; Lianza, T.A.; Leibfried, L.V.; Gacy, L.W. Absolute radiometric and photometric measurements of near-eye displays. J. Soc. Inf. Disp. 2017, 25, 215–221. [Google Scholar] [CrossRef]
  6. Suero, M.I.; Pardo, P.J.; Pérez, A.L. Color characterization of handheld game console displays. Displays 2010, 31, 205–209. [Google Scholar] [CrossRef]
  7. Morovič, J. Color Gamut Mapping; Wiley: Hoboken, NJ, USA, 2008; Volume 10. [Google Scholar]
  8. Hassani, N.; Murdoch, M.J. Investigating color appearance in optical see-through augmented reality. Color Res. Appl. 2019, 44, 492–507. [Google Scholar] [CrossRef]
  9. ElBamby, M.S.; Perfecto, C.; Bennis, M.; Doppler, K. Toward Low-Latency and Ultra-Reliable Virtual Reality. IEEE Netw. 2018, 32, 78–84. [Google Scholar] [CrossRef] [Green Version]
  10. Shao, X.; Xu, W.; Lin, L.; Zhang, F. A multi-GPU accelerated virtual-reality interaction simulation framework. PLoS ONE 2019, 14, e0214852. [Google Scholar] [CrossRef] [PubMed]
  11. Ray Tracing on Programmable Graphics Hardware. Available online: https://dl.acm.org/doi/10.1145/566654.566640 (accessed on 29 September 2020).
  12. Ray Trace diagram.svg. Available online: https://en.wikipedia.org/wiki/Ray_tracing_(graphics)#/media/File:Ray_trace_diagram.svg (accessed on 17 September 2020).
  13. Schnack, A.; Wright, M.; Holdershaw, J.L. Immersive virtual reality technology in a three-dimensional virtual simulated store: Investigating telepresence and usability. Food Res. Int. 2019, 117, 40–49. [Google Scholar] [CrossRef] [PubMed]
  14. Oculus Rifts. Available online: https://www.oculus.com/rift-s/?locale=es_ES (accessed on 16 July 2020).
  15. HTC Vive. Available online: https://www.vive.com/mx/product/ (accessed on 16 July 2020).
  16. Daydream. Available online: https://arvr.google.com/daydream/ (accessed on 16 July 2020).
  17. Manual de las Gafas VR ONE Plus. Available online: https://www.zeiss.com/content/dam/virtual-reality/english/downloads/pdf/manual/20170505_zeiss_vr-one-plus_manual_and_safety_upd_digital_es.pdf (accessed on 16 July 2020).
  18. Unreal Engine. Available online: https://www.unrealengine.com/en-US/ (accessed on 16 July 2020).
  19. Unity. Available online: https://unity.com/es (accessed on 16 July 2020).
  20. Pardo, P.J.; Suero, M.I.; Pérez, Á.L. Correlation between perception of color, shadows, and surface textures and the realism of a scene in virtual reality. J. Opt. Soc. Am. A 2018, 35, B130–B135. [Google Scholar] [CrossRef] [PubMed]
  21. Shen, H.; Zhang, Z.; Shang, Z. Fast global rendering in virtual reality via linear integral operators. Int. J. Innov. Comput. Inf. Control 2019, 15, 67–77. [Google Scholar]
  22. Tuliniemi, J. Physically Based Rendering for Embedded Systems. Master’s Thesis, University of Oulu, Oulu, Finland, 2018. [Google Scholar]
  23. Díaz-Barrancas, F.; Pardo, P.J.; Suero, M.I.; Pérez, A.L. Is it possible to apply colour management technics in Virtual Reality devices? In Proceedings of the XIV Conferenza del Colore, Firenze, Italy, 11–12 September 2018.
  24. CIE 2017 Colour Fidelity Index for Accurate Scientific Use. Available online: http://cie.co.at/publications/cie-2017-colour-fidelity-index-accurate-scientific-use (accessed on 29 September 2020).
  25. Anstis, S.; Cavanagh, P.; Maurer, D.; Lewis, T.; MacLeod, D.A.I.; Mather, G. Computer-generated screening test for colorblindness. Color Res. Appl. 1986, 11, S63–S66. [Google Scholar]
  26. Diaz-Barrancas, F.; Cwierz, H.C.; Pardo, P.J.; Perez, A.L.; Suero, M.I. Visual fidelity improvement in virtual reality through spectral textures applied to lighting simulations. Electron. Imaging 2020, 259, 1–4. [Google Scholar] [CrossRef]
  27. Díaz-Barrancas, F.; Cwierz, H.; Pardo, P.J.; Suero, M.I.; Perez, A.L. Hyperspectral textures for a better colour reproduction in virtual reality. In Proceedings of the XV Conferenza del Colore, Macerata, Italy, 5–7 September 2019. [Google Scholar]
  28. Diaz-Barrancas, F.; Cwierz, H.C.; Pardo, P.J.; Perez, A.L.; Suero, M.I. Improvement of Realism Sensation in Virtual Reality Scenes Applying Spectral and Colour Management Techniques. In Proceedings of the 12th International Conference on Computer Vision Systems (ICVS 2019), Riga, Latvia, 5–9 July 2019. [Google Scholar]
  29. Martínez-Domingo, M.Á.; Gómez-Robledo, L.; Valero, E.M.; Huertas, R.; Hernández-Andrés, J.; Ezpeleta, S.; Hita, E. Assessment of VINO filters for correcting red-green Color Vision Deficiency. Opt. Express 2019, 27, 17954–17967. [Google Scholar] [CrossRef] [PubMed]
  30. Wyszecki, G.; Stiles, W.S. Color Science; Wiley: New York, NY, USA, 1982. [Google Scholar]
Figure 1. Simplified graphic representation of a lighting and shadowing method for rendering a 3D scene and how the final 2D image is generated [12].
Figure 2. Sequence of transformations made in a Graphic Processing Unit (GPU) to generate the final 2D images for each display.
Figure 3. Spectral power distribution of the RGB channels measured at the maximum digital-to-analog converter (DAC) value.
Figure 4. Color gamut of HTC Vive displays compared to the color gamut of different types of displays.
Figure 5. Relation between the DAC and luminance values for each RGB-independent channel.
Figure 6. Flux diagram for both levels of the Color Management Procedure.
Figure 7. Spectral power distribution of the light sources employed in this work, adjusted to a luminance value of 85 cd/m².
Figure 8. Screen capture of ColorChecker shown in the VR graphics engine with two levels of color management applied to four light sources: (first row) only applying changes in the color of the light sources and (second row) applying changes in the color of the light sources and recalculating the texture color of the VR object.
Figure 9. Visual appearance of the virtual lightbooth with the Ishihara test plate.
Figure 10. Set of 10 Ishihara plates selected to develop a quick virtual test for colorblindness detection.
Figure 11. Screenshot of Ishihara plate number 3, selected to check the effectiveness of the color management.
Table 1. The average and standard deviation of CIE 1931 chromaticity and the relative luminance of one point of each display of the HMD measured through the lens.
Channel | x | y | Y (relative) | Gamma value | Gamma R²
White | 0.299 ± 0.002 | 0.315 ± 0.002 | 100.0 | — | —
Red | 0.667 ± 0.004 | 0.332 ± 0.003 | 30.3 ± 1.1 | 2.43 | 0.999
Green | 0.217 ± 0.007 | 0.710 ± 0.002 | 75.4 ± 2.4 | 2.38 | 0.999
Blue | 0.139 ± 0.002 | 0.051 ± 0.002 | 7.8 ± 0.5 | 2.38 | 0.998
Black | 0.311 ± 0.01 | 0.307 ± 0.004 | 0.4 ± 0.2 | — | —
Table 2. Reference RGB values for the ColorChecker patches versus those calculated from the hyperspectral image.
Patch Number | Reference R | G | B | Measured R | G | B
1 | 115 | 82 | 68 | 115 | 82 | 67
2 | 194 | 150 | 130 | 195 | 149 | 129
3 | 98 | 122 | 157 | 93 | 123 | 157
4 | 87 | 108 | 67 | 90 | 108 | 65
5 | 133 | 128 | 177 | 130 | 129 | 176
6 | 103 | 189 | 170 | 99 | 191 | 171
7 | 214 | 126 | 44 | 219 | 123 | 45
8 | 80 | 91 | 166 | 72 | 92 | 168
9 | 193 | 90 | 99 | 194 | 85 | 98
10 | 94 | 60 | 108 | 91 | 59 | 105
11 | 157 | 188 | 64 | 161 | 189 | 63
12 | 224 | 163 | 46 | 229 | 161 | 41
13 | 56 | 61 | 150 | 43 | 62 | 147
14 | 70 | 148 | 73 | 72 | 149 | 72
15 | 175 | 54 | 60 | 176 | 49 | 56
16 | 231 | 199 | 31 | 238 | 199 | 24
17 | 187 | 86 | 149 | 188 | 84 | 150
18 | 8 | 133 | 161 | 0 | 136 | 166
19 | 243 | 243 | 242 | 245 | 245 | 240
20 | 200 | 200 | 200 | 200 | 202 | 201
21 | 160 | 160 | 160 | 160 | 161 | 161
22 | 122 | 122 | 121 | 120 | 121 | 121
23 | 85 | 85 | 85 | 83 | 84 | 85
24 | 52 | 52 | 52 | 50 | 50 | 50
Average difference of ΔR = 3.4, ΔG = 1.6, ΔB = 1.9.
Table 3. Numerical characterization of the light sources used in this work: CIE Rf, Color Fidelity Index; Correlated Color Temperature, CCT; CIE 1931 chromaticity coordinates and tristimulus values; and the RGB values corresponding to the sRGB standard.
Light Source | CIE Rf | CCT | CIE 1931 x, y | CIE 1931 XYZ | RGB
CIE D65 | 100 | 6503 | 0.3127, 0.3289 | 80.81, 85.00, 92.57 | 237, 237, 237
D65 Simulator | 88.2 | 6568 | 0.3107, 0.3344 | 79.00, 85.00, 90.31 | 232, 239, 234
D65 Two Peaks | 3.2 | 6501 | 0.3127, 0.3291 | 80.74, 85.00, 92.53 | 237, 237, 237
CIE D50 | 100 | 5000 | 0.3458, 0.3585 | 81.98, 85.00, 70.11 | 255, 235, 205
Table 4. Average and standard deviation of the color differences between the calculated and measured RGB colors for the four light sources employed.
Light Source | 1st Level CMS ΔR | ΔG | ΔB | 2nd Level CMS ΔR | ΔG | ΔB
CIE D65 | 3.1 ± 3 | 1.4 ± 1 | 2.0 ± 2 | 0.2 ± 0.4 | 0.5 ± 0.5 | 0.2 ± 0.4
Simulator D65 | 2.6 ± 2 | 1.7 ± 1 | 3.3 ± 5 | 0.5 ± 0.5 | 0.2 ± 0.4 | 0.4 ± 0.6
D65 Two Peaks | 30 ± 29 | 9.0 ± 8 | 6.5 ± 7 | 0.5 ± 0.5 | 0.4 ± 0.5 | 0.5 ± 0.5
CIE D50 | 3.1 ± 3 | 1.7 ± 2 | 4.6 ± 7 | 0.1 ± 0.3 | 0.4 ± 0.5 | 1.2 ± 0.9
Table 5. Chromatic coordinates calculated from the hyperspectral image and measured from the virtual reality scene for 10 different color dots of Ishihara Test plate number 3.
Color Dot | Calculated x | y | Y | Measured x | y | Y | ΔE00
P1 Green Dark | 0.3560 | 0.3854 | 24.6 | 0.3557 | 0.3850 | 25.4 | 1.00
P2 Green Medium | 0.3602 | 0.3906 | 29.1 | 0.3601 | 0.3903 | 30.0 | 1.05
P3 Green Light | 0.3734 | 0.3986 | 38.4 | 0.3731 | 0.3984 | 41.0 | 0.82
P4 Ochre Dark | 0.3901 | 0.3656 | 27.1 | 0.3895 | 0.3655 | 28.1 | 0.70
P5 Ochre Medium | 0.4022 | 0.3746 | 33.1 | 0.4014 | 0.3743 | 34.3 | 0.76
P6 Ochre Light | 0.3945 | 0.3834 | 41.5 | 0.3941 | 0.3834 | 42.8 | 1.28
P7 Purple Dark | 0.3713 | 0.3282 | 24.2 | 0.3709 | 0.3282 | 25.6 | 1.14
P8 Purple Light | 0.3884 | 0.3576 | 36.1 | 0.3878 | 0.3573 | 38.0 | 0.72
P9 Bluish-Green Dark | 0.3158 | 0.3759 | 23.7 | 0.3159 | 0.3763 | 22.7 | 0.64
P10 Bluish-Green Light | 0.3182 | 0.3881 | 33.3 | 0.3182 | 0.3887 | 31.7 | 1.47
Average ΔE00 = 0.96.
Table 6. Numerical responses of the 10 real observers to the 10 plate virtual version of the Ishihara Test.
Observer | Qualification | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10
Ref1 | Normal | 12 | 8 | 6 | 5 | 3 | 7 | 3 | - | - | 26
Ref2 | Protan | 12 | 3 | 5 | 2 | 5 | - | - | 5 | 8 | 6
Ref3 | Deutan | 12 | 3 | 5 | 2 | 5 | - | - | 5 | 8 | 2
Ob1 | Normal | 12 | 8 | 6 | 5 | 3 | 5 | 7 | - | 8 | 26
Ob2 | Normal | 12 | 8 | 6 | 5 | 3 | 5 | 7 | - | - | 26
Ob3 | Normal | 12 | 8 | 6 | 5 | 3 | 5 | 7 | - | - | 26
Ob4 | Normal | 12 | 8 | 6 | 5 | 3 | 5 | 7 | - | - | 26
Ob5 | Normal | 12 | 8 | 6 | 5 | 3 | 5 | 7 | - | - | 26
Ob6 | Normal | 12 | 8 | 6 | 5 | 3 | 5 | 7 | - | - | 26
Ob7 | Protan | 12 | 3 | 5 | 2 | - | - | - | 5 | 2 | 6
Ob8 | Protan | 12 | 3 | 5 | 2 | 5 | - | - | 5 | 8 | 6
Ob9 | Deutan | 12 | 3 | 5 | 2 | - | - | - | 5 | 8 | 2
Ob10 | Protan | 12 | 3 | 5 | 2 | 5 | - | - | 5 | 8 | 6
Ob10Protan123525--586
