**1. Introduction**

Most human beings are able to perceive color, sensing the frequencies of the light reflected from object surfaces. Color vision deficiency (CVD), however, is a common genetic condition [1]. It is generally not a fatal or serious disease, but it still inconveniences those affected. People with color vision deficiency (commonly called color blindness) cannot fully perceive the colorful world because of damage to the color-receptive nerve cells. Whether caused by genetic factors or chemical injury, the damaged cells are unable to distinguish certain colors. There are a few common types of color vision deficiency, such as protanomaly (red-weak), deuteranomaly (green-weak) and tritanomaly (blue-weak). They can be detected and verified easily with special color patterns (e.g., Ishihara plates [2]) but, unfortunately, cannot be cured by surgery or other treatments. People with color vision deficiency are a minority of the population, and they are sometimes overlooked and restricted by our society.

In many places, colorblind people are not allowed to hold a driver's license. A number of careers in engineering, medicine and related fields impose restrictions on color perception ability. Moreover, the display and presentation of most media, across devices and formats, do not specifically take color vision deficiency into consideration. Although weakness in distinguishing colors does not obviously affect learning and cognition, it remains a challenge in color-related industries. In this work, we propose an approach that helps people with color vision deficiency tell confusable colors apart as much as possible. A simple yet effective technique, "color reprint", is developed to represent CVD-proof colors. The algorithm not only preserves the naturalness and details of the scenes, but also possesses real-time processing capability. It can therefore be implemented on low-cost or portable devices and brought into everyday life.

Human color vision is based on three light-sensitive pigments [3,4]. It is trichromatic and represented in three dimensions, with the color stimulus specified by the power contained at each wavelength. Normal trichromacy arises because the retina contains three classes of cone photo-pigment cells: L-, M-, and S-cones. A range of light wavelengths stimulates each of these receptor types to various degrees. For example, yellowish-green light stimulates L- and M-cones equally strongly, but S-cones only weakly. Red light stimulates L-cones more than M-cones, and S-cones hardly at all. The brain combines the information from each type of cone cell and responds to different wavelengths of light as shown in Table 1. Color processing is carried out in two stages. First, the stimuli from the cones are recombined to form two color-opponent channels and a luminance channel. Second, adaptive signal regulation keeps the signals within the operating range and stabilizes object appearance under illumination changes. When any of the sensitive pigments is damaged or loses its functionality [1], the affected person can only perceive part of the visible spectrum seen by those with normal vision capability [5] (see Figure 1).

(**a**) One of the images in Ishihara plates (left), and the images enhanced by the proposed re-coloring algorithm for protanomaly, deuteranomaly and tritanomaly, respectively (the rest).

(**b**) The images in (a) generated by color vision deficiency simulation. The first image is the deuteranomaly simulation of the original Ishihara plate. The remaining images are the simulation results of protanomaly, deuteranomaly and tritanomaly on the re-colored images, respectively.

**Figure 1.** (**a**) An original image from Ishihara plates and the enhanced images using our re-coloring algorithms for protanomaly, deuteranomaly and tritanomaly. (**b**) The images generated from a color vision deficiency simulation tool [6]. The results show that our image enhancement technique is able to improve check pattern recognition under various types of color vision deficiency.

**Table 1.** Cone cells in the human eye and their response to light wavelength.

| Cone type | Wavelength sensitivity | Associated hues |
| --- | --- | --- |
| L-cone | long wavelengths | reddish |
| M-cone | medium wavelengths | greenish |
| S-cone | short wavelengths | bluish |
The molecular genetics of human color vision has been studied in the literature. Nathans et al. described the isolation and sequencing of genomic and complementary DNA clones encoding the apoproteins of the red, green and blue pigments [4]. With newly refined methods, the number and ratio of these genes have been re-examined in men with normal color vision. A recent report reveals that many males carry more pigment genes on the X chromosome than previously thought, and many have more than one long-wave pigment gene [7]. The loss of the characteristic sensitivities of the red and green receptors, introduced into the transformed sensitivity curves, also indicates the appropriate degrees of luminosity deficit for deuteranopes and protanopes [8].

Color vision deficiency has two main causes: genetic factors and impairment of the nerves or brain. A protanope lacks the L-cone photo-pigment and is unable to discriminate reddish and greenish hues, since the red–green opponent mechanism cannot be constructed. A deuteranope does not have sufficient M-cone photo-pigment, so reddish and greenish hues are likewise indistinguishable. People with tritanopia lack the S-cone photo-pigment and therefore cannot discriminate yellowish and bluish hues [9]. The literature reports that more than 8% of the world population suffers from color vision deficiency (see Table 2). For color vision correction, gene therapy that supplies the missing genes is sufficient to restore full color vision without further rewiring of the brain; it has been tested on a monkey colorblind since birth [10]. Nevertheless, non-invasive alternatives are also available by means of computer vision techniques.

**Table 2.** Approximate percentage occurrences of various types of color vision deficiency [11].


In [12], Huang et al. propose a fast re-coloring technique to improve accessibility for people with impaired color vision. They design a method that derives an optimal mapping to maintain the contrast between each pair of representative colors [13]. In a subsequent work, an image re-coloring algorithm for dichromats based on the concept of key color priority is presented [14]. Chen et al. present a color blindness plate (CBP), a satisfactory way to test color vision in the computer vision community [15]; the approach is used to demonstrate normal color vision as well as red–green color vision deficiency. Rasche et al. propose a method that preserves image details while reducing the gamut dimension, seeking a color-to-gray mapping that maintains contrast and luminance consistency [16]. They also describe a method which allows re-colored images to deliver their content with increased information to color-deficient viewers [17]. In [18], Lau et al. present a cluster-based approach that optimizes the transformation for individual images; the idea is to preserve the information of the source space as much as possible while keeping the mapping as natural as possible. Lee et al. develop a technique based on fuzzy logic and digital image correction to improve visual quality for individuals with color vision disturbance [19]. Similarly, Poret et al. design a filter based on the Ishihara color test for color blindness correction [20].

Most color transformation algorithms aim to preserve the color information of the original image while keeping the re-colored image as natural as possible. This differs from many other image processing and computer vision tasks: for color vision deficiency correction, keeping the enhanced images natural-looking is an important issue. The goal is not only to keep the image details intact, but also to keep the colors as smooth as they would be without the re-coloring process. Under these constraints, the color distribution is re-arranged in color space so that colorblind people can discriminate different colors [21,22]. Moreover, it is generally agreed that color perception is subjective and is not exactly the same for different people. In this work, the proposed method is evaluated on color vision deficiency simulation tools as well as with human tests. We use the RMS (root mean square) error to quantify the change introduced by re-coloring, and HDR-VDP (visual difference predictor) [23] to compare the visibility and quality as perceived subjectively. Our algorithms not only preserve the naturalness and details of the images, but also run almost in real time.
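The RMS measure used here can be computed directly from the image arrays; the following minimal sketch shows one straightforward formulation (the HDR-VDP comparison requires the dedicated predictor tool [23] and is not reproduced):

```python
import numpy as np

def rms_change(original, recolored):
    """Root-mean-square difference between an original image and its
    re-colored version, averaged over all pixels and channels."""
    a = np.asarray(original, dtype=np.float64)
    b = np.asarray(recolored, dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Example: a uniform shift of 0.1 in every channel yields an RMS of ~0.1.
img = np.zeros((4, 4, 3))
shifted = img + 0.1
print(rms_change(img, shifted))
```

A small RMS value indicates that re-coloring changed the image only mildly, which is consistent with the naturalness goal stated above.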

**2. Approach**

In this paper, a technique called *color warping* (CW) is proposed for effective image re-coloring. It uses the orientation of the eigenvectors of the color vision deficiency simulation results to warp the color distribution. In general, acquired images are represented in the RGB color space for display, which is, however, not suitable for color vision-related processing. For tasks related to human color perception, the images are first transformed to the *λ*, *Y-B*, *R-G* color space based on the CIECAM02 model [24]. This consists of a transformation from RGB to LMS [25] using

$$
\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.7328 & 0.4296 & -0.1624 \\ -0.7036 & 1.6975 & 0.0061 \\ 0.0030 & 0.0136 & 0.9834 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{1}
$$

followed by a second transformation from LMS to *λ*, *Y-B*, *R-G* with

$$
\begin{bmatrix}
\lambda \\
Y - B \\
R - G
\end{bmatrix} = \begin{bmatrix}
0.6 & 0.4 & 0.0 \\
0.24 & 0.105 & -0.7 \\
1.2 & -1.6 & 0.4
\end{bmatrix} \begin{bmatrix}
L \\
M \\
S
\end{bmatrix} . \tag{2}
$$
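As a quick sanity check, the two linear maps can be applied in sequence. In this sketch the M weight in the *Y-B* row is taken as 0.105, the Ingling–Tsou opponent value, on the assumption that a printed "1.05" would be a dropped decimal digit:

```python
import numpy as np

# RGB -> LMS (the CAT02 matrix of Equation (1)).
M_RGB2LMS = np.array([
    [ 0.7328, 0.4296, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])

# LMS -> (lambda, Y-B, R-G); Y-B row uses the Ingling-Tsou
# weights (0.24, 0.105, -0.7) -- see the assumption above.
M_LMS2OPP = np.array([
    [0.60,  0.400,  0.0],
    [0.24,  0.105, -0.7],
    [1.20, -1.600,  0.4],
])

def rgb_to_opponent(rgb):
    """Apply the RGB->LMS and LMS->opponent maps to (..., 3) values."""
    lms = np.asarray(rgb) @ M_RGB2LMS.T
    return lms @ M_LMS2OPP.T

# Each row of the CAT02 matrix sums to 1, so white (1, 1, 1) maps to
# equal cone excitations; its red-green opponent signal then vanishes.
lam, yb, rg = rgb_to_opponent([1.0, 1.0, 1.0])
print(lam, yb, rg)
```

The vanishing *R-G* response for an achromatic input is a useful consistency check on the opponent weights.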

Since the above transformations are linear, it is easy to verify that the relationship between the RGB and *λ*, *Y-B*, *R-G* color spaces is given by

$$
\begin{bmatrix}
\lambda \\
Y - B \\
R - G
\end{bmatrix} = \begin{bmatrix}
0.3479 & 0.5981 & -0.3657 \\
-0.0075 & -0.1130 & -1.1858 \\
1.1851 & -1.5708 & 0.3838
\end{bmatrix} \begin{bmatrix}
R \\
G \\
B
\end{bmatrix} \tag{3}
$$

and

$$
\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.2256 & -0.2217 & 0.4826 \\ 0.9018 & -0.3645 & -0.2670 \\ -0.0936 & -0.8072 & 0.0224 \end{bmatrix} \begin{bmatrix} \lambda \\ Y - B \\ R - G \end{bmatrix} . \tag{4}
$$
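Equations (3) and (4) should be mutual inverses, which can be verified numerically. Inverting the matrix of Equation (4) reproduces the *λ* and *R-G* rows of Equation (3) to four decimal places, and gives the *Y-B* row as approximately [−0.0075, −0.1130, −1.1858], a useful cross-check:

```python
import numpy as np

# Matrix of Equation (4): (lambda, Y-B, R-G) -> RGB.
M_OPP2RGB = np.array([
    [ 1.2256, -0.2217,  0.4826],
    [ 0.9018, -0.3645, -0.2670],
    [-0.0936, -0.8072,  0.0224],
])

# Its inverse is the RGB -> (lambda, Y-B, R-G) matrix of Equation (3).
M_RGB2OPP = np.linalg.inv(M_OPP2RGB)
print(np.round(M_RGB2OPP, 4))
```

Small residuals (on the order of 10^-4) between the two directions are expected, since the published coefficients are rounded to four decimal places.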

A flowchart of the proposed method is illustrated in Figure 2. The "Eigen-Pro" stage performs the eigenvector processing. The *color warping* stage is the key idea of this work, and the color constraints are used to reduce the distortion introduced by the color space transformation.

**Figure 2.** The flowchart of the proposed technique. In the pipeline, the images are first transformed to the *λ*, *Y-B*, *R-G* color space for the re-color processing, followed by a transformation back to the original RGB color space.
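The pipeline of Figure 2 can be sketched structurally as follows. Note that `warp_colors` is a hypothetical placeholder (here the identity), since the actual eigenvector-based warping is described later; only the surrounding color space round trip is taken from the equations above:

```python
import numpy as np

# Matrix of Equation (4): (lambda, Y-B, R-G) -> RGB.
M_OPP2RGB = np.array([
    [ 1.2256, -0.2217,  0.4826],
    [ 0.9018, -0.3645, -0.2670],
    [-0.0936, -0.8072,  0.0224],
])
M_RGB2OPP = np.linalg.inv(M_OPP2RGB)  # forward map of Equation (3)

def warp_colors(opp):
    """Placeholder for the "Eigen-Pro" and color warping stages of
    Figure 2; the actual transform warps the color distribution along
    the eigenvectors of the CVD simulation result."""
    return opp  # identity placeholder

def recolor(image):
    """Figure 2 pipeline: RGB -> opponent space -> warp -> RGB,
    with the result clamped to the displayable range."""
    opp = image @ M_RGB2OPP.T
    rgb = warp_colors(opp) @ M_OPP2RGB.T
    return np.clip(rgb, 0.0, 1.0)

# With the identity placeholder, the pipeline reproduces its input.
img = np.random.default_rng(0).random((8, 8, 3))
out = recolor(img)
```

Because both color space transforms are plain matrix multiplications, the whole pipeline remains cheap enough for the real-time operation claimed in the introduction.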

*2.1. Color Transform*

The physical property of light used for color perception is its spectral power distribution [26]. In principle, there are many distinct spectral colors, so the set of all physical colors may be thought of as a high-dimensional vector space. An alternative to the commonly adopted tristimulus coordinates for describing the spectral property of light is to use the responses of the L-, M-, and S-cone cells as coordinates of a 3-space. To form a model for the human perceptual color space, we can consider all the resulting combinations as a subset of this 3-space. The cone responses cover a region away from the origin according to the intensities of the L, M and S components. A digital image acquisition device consists of several elements [27,28], and the characteristics of the light together with the material of the observed object determine the physical properties of its color [27,29,30]. For color transformation, Huang et al. [31] present a method that warps images in the CIELab color space using a rotation matrix. Dana et al. [32] and Swain et al. [33] propose an antagonist space which does not take the non-linear response of the human eye into consideration. Instead, we transform the color space to (WS, RG, BY) based on an electro-physiological study [34].
