Article

Measurement and Estimation of Spectral Sensitivity Functions for Mobile Phone Cameras

1 Department of Computer Science, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
2 Faculty of Business and Informatics, Nagano University, 658-1, Shimogo, Ueda, Nagano 386-1298, Japan
3 Department of Engineering Informatics, Osaka Electro-Communication University, Neyagawa, Osaka 572-8530, Japan
4 Kobe Institute of Computing, Graduate School of Information Technology, Chuo-ku, Hyogo, Kobe 650-0001, Japan
* Author to whom correspondence should be addressed.
Sensors 2021, 21(15), 4985; https://doi.org/10.3390/s21154985
Submission received: 2 June 2021 / Revised: 19 July 2021 / Accepted: 20 July 2021 / Published: 22 July 2021
(This article belongs to the Special Issue Recent Advances in Automated Measuring Systems)

Abstract

Mobile phone cameras are often significantly more useful than professional digital single-lens reflex (DSLR) cameras. Knowledge of the camera spectral sensitivity function is important in many fields that make use of images. In this study, methods for measuring and estimating spectral sensitivity functions for mobile phone cameras are developed. In the direct measurement method, the spectral sensitivity at each wavelength is measured using monochromatic light. Although accurate, this method is time-consuming and expensive. The indirect estimation method is based on color samples, in which the spectral sensitivities are estimated from the input data of color samples and the corresponding output RGB values from the camera. We first present an imaging system for direct measurements. A variety of mobile phone cameras are measured using the system to create a database of spectral sensitivity functions. The features of the measured spectral sensitivity functions are then studied using principal component analysis (PCA), and the statistical features of the spectral functions are extracted. We next describe a normal method to estimate the spectral sensitivity functions using color samples and point out some drawbacks of the method. A method to solve the estimation problem using the spectral features of the sensitivity functions in addition to the color samples is then proposed. The estimation is stable even when only a small number of spectral features are selected. Finally, the results of the experiments to confirm the feasibility of the proposed method are presented. We establish that our method is excellent in terms of both the data volume of color samples required and the estimation accuracy of the spectral sensitivity functions.

1. Introduction

In recent years, mobile phones have become widespread and part of our daily lives (e.g., see [1]). The combination of mobility, telecommunication, and photography enabled by integrating cameras into mobile devices has transformed our lifestyles. In this respect, mobile phone cameras are far more useful than professional digital single-lens reflex (DSLR) cameras. More recently, beyond imaging, mobile sensing technologies using mobile phone cameras have emerged and are rapidly finding applications in many fields, such as smartphone spectroscopy, medical diagnosis, food quality inspection, and environmental monitoring [2,3,4]. The sensing systems often consist of a mobile phone camera and an externally attached device.
Knowledge of the camera spectral sensitivity function is important in many fields that deal with images, such as imaging science and technology, computer vision, medical imaging, and applications involving cultural heritage or artwork. The spectral sensitivity represents the image sensor output per unit incident light energy at each wavelength within the spectral range in which the camera system operates. The function plays the role of mapping the spectral information in a scene to the RGB response values of the camera [5,6]. However, as camera manufacturers typically do not publish this information, users need to either measure or estimate their camera’s sensitivity [7,8]. Thus far, the measurement and estimation of spectral sensitivity functions have been mostly limited to DSLR cameras, for which the digital outputs are available in the form of raw data. For instance, the spectral sensitivity database measured by the Rochester Institute of Technology (RIT) for 28 cameras is given in [9], whereas the Nokia N900 is the only mobile phone (smartphone) for which the sensitivity is given.
Methods for determining spectral sensitivities can be broadly classified into direct measurement methods and indirect estimation methods. In the direct methods, the spectral sensitivity is measured at each wavelength point in the visible range [7,10]; these methods require a monochromator that emits stable monochromatic light. Most indirect estimation methods are based on algorithms that use color samples [11,12]. Typical color samples include reflective color targets, such as color checkers, that are photographed by a camera under known illumination. Fluorescence, LED, and LCD display-based color targets can be used as specialized color samples [13,14,15]. The spectral sensitivity functions are estimated from pairs of input and output data comprising the color samples and the camera RGB values, respectively. A method combining the direct and indirect approaches by means of principal component analysis (PCA) was proposed in [16] and was described as being more accurate than the conventional indirect method (e.g., see [17]). As estimating the spectral sensitivity of mobile phone cameras had not previously been attempted, we applied a modified version of this method to the spectral sensitivity estimation problem in this study. It should be noted that the spectral sensitivity function creates a linear relationship between the camera inputs and outputs when raw image data are used.
However, in most cases, the digital output of a mobile phone camera is not raw image data, but rather rendered image data such as JPEG images. This type of data is quite different from raw data as it has undergone many post-processing steps such as white balance (WB), color interpolation, color correction, gamma correction, and compression. Furthermore, the input and output data are not linearly related [17]. Recently, software that can store images captured using a mobile phone camera as raw data has become publicly available. In [18], a compressive sensing approach was proposed for estimating the spectral sensitivity functions of a mobile phone (smartphone) camera. This method is an indirect method based on a limited number of color samples, as the directly measured RGB spectral response functions of the smartphones were not available.
In this study, methods for measuring and estimating the spectral sensitivity functions of mobile phone cameras are developed. Although applying direct measurement methods to mobile cameras gives accurate and reliable results, implementing them over the entire wavelength range of interest, in the same manner as for professional DSLR cameras, is expensive and time-consuming. Indirect estimation methods using color samples must solve high-dimensional linear systems built from the spectral responses to the samples, and these systems are seriously rank deficient even when the number of color samples is large. Here, we aim to develop an effective estimation method that can achieve sufficient accuracy with a small number of color samples by extracting the features of the spectral sensitivity functions from the dataset of directly measured sensitivities. We use the features of the spectral function shapes, but not the measurements themselves.
In the following, we first describe a direct measurement method for spectral sensitivity functions. An efficient imaging system to generate monochromatic light and measure the camera response is presented. Using this system, we measure a variety of mobile phone cameras available on the market and create a database of spectral sensitivity functions. The directly measured spectral sensitivity functions are used as the reference data to estimate the spectral sensitivity functions.
We then analyze the features of the measured spectral sensitivity functions. The respective spectral sensitivity functions are fitted to color-matching functions for comparison with the spectral sensitivity of the human visual system. PCA is applied to determine the dimensionality of the dataset and to extract the statistical features of the spectral functions.
Subsequently, we present a normal method for estimating spectral sensitivity functions using color samples and point out that, despite being a least squares method, it suffers from reliability and accuracy drawbacks. To address this, we propose an effective estimation method that uses the spectral sensitivity features extracted through PCA in addition to the color samples.
Finally, the results of experiments to confirm the feasibility of the proposed method for estimating the spectral sensitivity functions of a mobile phone camera are presented. We establish that our method has excellent performance in terms of both the data volume of the color samples used and the estimation accuracy of the spectral sensitivities.

2. Measurement of Spectral Sensitivity Functions

2.1. Measurement Setup

Methodologies for the spectral characterization of RGB cameras have been discussed in some references [19,20]. The standard methods for calibrating consumer cameras in detail, including characteristics such as linearity, dark current, and spectral response, are described in [19]. In this study, we adopted a similar approach to capture images. The camera images were captured in Adobe's digital negative (DNG) format, which is a lossless raw image format [21]. We manually set the camera ISO (International Organization for Standardization) sensitivity to 100 and the white balance (WB) mode to incandescent. The exposure time was set to the maximum value at which the dynamic range of the camera outputs did not saturate. The dark response was measured for all the selected cameras by covering the camera with a black sheet in a dark room, and this dark response was removed from the camera output. The resulting signal component represents a linear response to the input radiation. The color filter array was a Bayer pattern, in which the RGB filters are arranged in a checkerboard pattern with two green pixels (G1 and G2) for every red or blue one [22]. We averaged the values of the two green pixels. The pixel values for each RGB triplet were averaged over approximately 300 × 300 pixels. The bit depths of the cameras used were 8 to 12 bits.
(a)
Linearity
We first evaluated the linearity of the raw camera data. A set of gray chips from the X-rite Color Checker were used as the color target. The surface–spectral reflectance of these samples was measured using a spectral colorimeter (CM-2600d, Konica Minolta, Tokyo, Japan). The mobile phone used was an Apple iPhone 8. Figure 1a shows the relationship between the average reflectance of the gray chips and the camera RGB output. We also calculated the luminance values for the color samples under International Commission on Illumination (CIE) Standard Illuminant D65 using the spectral luminous efficiency curve y ¯ ( λ ) . Figure 1b shows the relationship between the luminance values and the camera RGB outputs.
(b)
Spectral response
Figure 2 shows our experimental setup for measuring the spectral responses of mobile phone cameras using monochromatic light and a spectrometer [23]. Figure 2a shows the conversion of the continuous spectrum from a xenon lamp into monochromatic light using the grating in a monochromator (SPG-100, Shimadzu, Kyoto, Japan). The monochromatic light was projected onto a diffuser, and the transmitted light image was observed using both a mobile phone camera and a spectroradiometer (CS-2000, Konica Minolta, Tokyo, Japan). The effective spectral resolution (full width at half maximum, FWHM) of the monochromator was approximately 4 nm.
Figure 2b shows the production of monochromatic light by a programmable light source (OL490, Optronic Laboratories, Orlando, FL, USA) and a liquid light guide. The lighting system was a spectrally controllable light source using a digital micromirror device (DMD) [24]. This lighting system was used in this study to generate emissions with a narrow width at a single wavelength in the visible range (400–700 nm). The emitted light was projected onto a white reference standard (Spectralon), and the reflected light image was observed by the camera and spectroradiometer. The FWHM was approximately 5–10 nm. We used mainly the system shown in Figure 2b for the measurement. When the monochromator system was used, the average values of the measured spectral sensitivity functions for both systems in Figure 2a,b were taken.
Assuming linear camera response, the three-channel output of a mobile phone camera can be described as
\[
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
= \int_{380}^{720} e(\lambda)
\begin{bmatrix} r(\lambda) \\ g(\lambda) \\ b(\lambda) \end{bmatrix} d\lambda, \tag{1}
\]
where R, G, and B represent the camera responses after removing the dark responses, e(λ) represents the illuminant spectrum, and r(λ), g(λ), and b(λ) the spectral sensitivity functions. We denote the spectral powers of the n illuminants used in the measurement as
\[
\int_{380}^{720} e_i(\lambda)\, d\lambda = E_i, \qquad (i = 1, 2, \ldots, n). \tag{2}
\]
When each illuminant spectrum is unimodal, such as monochromatic light, and the FWHM is sufficiently narrow compared to the sensitivity functions, the spectral sensitivity at each wavelength λ_i can be calculated from the illuminant power and the camera outputs as
\[
\begin{bmatrix} r(\lambda_i) \\ g(\lambda_i) \\ b(\lambda_i) \end{bmatrix}
= \begin{bmatrix} R_i/E_i \\ G_i/E_i \\ B_i/E_i \end{bmatrix}, \qquad (i = 1, 2, \ldots, n), \tag{3}
\]
where r(λ_i), g(λ_i), and b(λ_i) (i = 1, 2, ..., n) are the discrete representations of the spectral sensitivity functions. The visible wavelength range (400–700 nm) was scanned at equal wavelength intervals of 10 nm, and each spectral sensitivity function was expressed as a 31-dimensional column vector with n = 31.
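The per-wavelength computation in Equation (3) can be sketched in a few lines of NumPy. The function name and array layout below are our own illustration, not part of the measurement software used in the paper.

```python
import numpy as np

# Wavelength sampling used in the measurements: 400-700 nm at 10 nm steps (n = 31).
WAVELENGTHS = np.arange(400, 701, 10)

def sensitivities_from_monochromatic(rgb, power):
    """Recover r(lambda_i), g(lambda_i), b(lambda_i) from monochromatic scans.

    rgb   : (n, 3) array of dark-corrected camera responses R_i, G_i, B_i
    power : (n,) array of measured illuminant powers E_i
    Each sensitivity value is the camera output divided by the illuminant
    power at that wavelength, as in Equation (3).
    """
    rgb = np.asarray(rgb, dtype=float)
    power = np.asarray(power, dtype=float)
    return rgb / power[:, None]
```

Each column of the result can then be normalized to give the relative spectral sensitivity curve of one channel.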

2.2. Spectral Sensitivity Database

We measured the spectral sensitivities of 20 mobile phone cameras and constructed a database of the spectral sensitivity functions. The mobile phones used in this study are listed in Table 1. Typically, the mobile phone incorporates an image sensor produced by a different manufacturer. The third column in Table 1 lists the names of the image sensors. Figure 3 shows the relative spectral sensitivity functions of the 20 mobile phone cameras in our database. Most mobile phones are based on either iOS or Android. Six of the mobile phones in Table 1 are iOS phones and the remaining 14 are Android phones. Figure 4 shows the spectral sensitivity functions grouped into the two categories of (a) iOS phone cameras and (b) Android phone cameras. There is no significant difference in the spectral curves of the sensitivity functions between the two categories.
It is interesting to compare this dataset with a dataset of DSLR cameras. Compared with the spectral sensitivity database in [9], the shapes of the spectral distributions of our mobile phone cameras do not show large variations, unlike those of the DSLRs.
The numerical data of the spectral sensitivity functions in Table 1 have been published as Excel data and txt data on http://ohlab.kic.ac.jp/ (accessed on 22 July 2021).

3. Feature Analysis of Spectral Sensitivity Functions

3.1. Fitting to Color-Matching Functions

The Luther condition states that the camera spectral sensitivity functions are linear transformations of the CIE-1931 2-degree color-matching functions [25]. As the color-matching functions are a numerical representation of the color vision response of the standard observers, the degree of fit with the Luther condition determines the colorimetric measurement accuracy of the mobile phone cameras.
Let [x̄, ȳ, z̄] be the 31 × 3 discrete matrix representation of the CIE-1931 2-degree color-matching functions x̄(λ), ȳ(λ), z̄(λ) and [r, g, b] be the 31 × 3 matrix representing the RGB spectral sensitivity functions of a mobile phone camera. We express the linear relationship between these matrices as
\[
[\bar{x},\ \bar{y},\ \bar{z}] = [r,\ g,\ b]\, T, \tag{4}
\]
where T is a 3 × 3 transformation matrix. To validate whether the measured sensitivity functions satisfy the Luther condition, we estimate the matrix T using the least squares solution for Equation (4). The estimate is given as
\[
\hat{T} = [C^t C]^{-1} C^t A, \tag{5}
\]
where A = [x̄, ȳ, z̄], C = [r, g, b], and the symbol t denotes matrix transposition.
The root-mean-square errors (RMSEs) \(\sqrt{\mathrm{trace}\{[A - C\hat{T}]^t [A - C\hat{T}]\} / (31 \times 3)}\) of the estimates were calculated, where trace[X] denotes the trace of a matrix X. Figure 5 shows the fitting results to the color-matching functions based on the spectral sensitivity functions of the iPhone 8. The RMSE was 0.254. The average RMSE over all the measured sensitivity functions in our database was 0.226. We also calculated the CIE-LAB color difference (Delta-E 1976) under Standard Illuminant D65 for many Munsell color chips. The average color difference was 5.61. Thus, the Luther condition was not completely satisfied.
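As a minimal NumPy sketch of the Luther-condition fit, the following fragment solves the least squares problem of Equations (4) and (5) and evaluates the residual RMSE. The function name is ours, and the inputs are assumed to be 31 × 3 matrices.

```python
import numpy as np

def luther_fit(C, A):
    """Fit A ~ C T in the least squares sense and report the residual RMSE.

    C : (31, 3) matrix [r, g, b] of measured RGB spectral sensitivities
    A : (31, 3) matrix of CIE-1931 color-matching functions [x_bar, y_bar, z_bar]
    Returns the 3x3 transformation T and the RMSE of the fit.
    """
    T, *_ = np.linalg.lstsq(C, A, rcond=None)   # least squares solution
    resid = A - C @ T
    rmse = np.sqrt(np.trace(resid.T @ resid) / resid.size)
    return T, rmse
```

A camera that satisfied the Luther condition exactly would give an RMSE of zero under this fit.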

3.2. PCA Analysis

PCA was applied to the measured spectral function database to determine the dimensionality for approximating the spectral data and statistically extracting the spectral shape features. All the relative spectral curves of the measured spectral sensitivity functions from the N (=20) mobile phone cameras are represented for each channel in a 31 ×  N matrix as
\[
Y_R = [r_1, r_2, \ldots, r_N], \quad Y_G = [g_1, g_2, \ldots, g_N], \quad Y_B = [b_1, b_2, \ldots, b_N]. \tag{6}
\]
We summarize these matrices as
\[
Y_k = [y_1, y_2, \ldots, y_N], \qquad (k = R, G, B). \tag{7}
\]
Singular-value decomposition (SVD) provides an orthogonal decomposition of rectangular matrices [26]. We used SVD for the PCA. The SVD of each matrix Y k is written as
\[
Y_k = U \Sigma V^t, \tag{8}
\]
or equivalently,
\[
Y_k = \sigma_1 u_1 v_1^t + \sigma_2 u_2 v_2^t + \cdots + \sigma_N u_N v_N^t, \tag{9}
\]
where U = [u_1, u_2, …, u_N] and V = [v_1, v_2, …, v_N] are the 31 × N left singular matrix and the N × N right singular matrix, respectively, and Σ is the N × N diagonal matrix whose elements are the singular values σ_1, σ_2, …, σ_N (σ_i ≥ σ_{i+1} > 0). The 31-dimensional singular vectors u_1, u_2, …, u_N are orthogonal to each other. Therefore, each measured spectral sensitivity function can be uniquely expressed as a linear combination of N orthogonal vectors
\[
y_j = c_{1j} u_1 + c_{2j} u_2 + \cdots + c_{Nj} u_N, \qquad (j = 1, 2, \ldots, N), \tag{10}
\]
where c_{ij} = σ_i v_{ij}. Consider an approximate representation of the sensitivity functions in terms of component vectors chosen from u_1, u_2, …, u_N. When the first K principal components are chosen, the performance index representing the approximation accuracy is given by the percent variance:
\[
P(K) = \sum_{i=1}^{K} \sigma_i^2 \Big/ \sum_{i=1}^{N} \sigma_i^2. \tag{11}
\]
Figure 6 shows the first three principal components u_1, u_2, and u_3 for the (a) red, (b) green, and (c) blue channels of the spectral sensitivity functions, where the bold, broken, and dotted curves represent the first, second, and third principal components, respectively. The percent variances are P(1) = 0.9940, P(2) = 0.9974, P(3) = 0.9990 for red; P(1) = 0.9954, P(2) = 0.9984, P(3) = 0.9995 for green; and P(1) = 0.9926, P(2) = 0.9962, P(3) = 0.9999 for blue. The first principal component u_1 is the average spectral curve of the spectral sensitivity function dataset, which plays the most important role in the spectral representation. The results of the PCA suggest that the measured spectral sensitivity functions can be approximated using the first three principal components with sufficient accuracy. The approximation is obtained using the principal component expansion in Equation (10). Figure 7 shows the approximated spectral curves for the (a) red, (b) green, and (c) blue channels of the spectral sensitivity functions of the iPhone 8, where the colored bold curves, the black bold curves, and the broken curves represent the measured spectral sensitivities, the approximation using the first component only, and the approximation using the first two components, respectively.
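The SVD-based PCA of Equations (8)–(11) can be reproduced with a short NumPy sketch. The function name is ours, and Y is assumed to be a 31 × N matrix holding the sensitivity curves of one channel column-wise.

```python
import numpy as np

def pca_features(Y):
    """SVD-based PCA of a 31 x N matrix of spectral sensitivity curves.

    Returns the principal components (left singular vectors), the singular
    values, and the cumulative percent variance P(K) of Equation (11).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    pvar = np.cumsum(s**2) / np.sum(s**2)
    return U, s, pvar
```

Note that, as in the formulation above, the curves are not mean-centered before the SVD, which is why the first principal component is close to the average spectral curve of the dataset.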

4. Estimation of Spectral Sensitivity Functions

4.1. Normal Method Using Color Samples

The direct measurement method for the spectral sensitivity functions of mobile phone cameras is time-consuming and expensive, and requires a laboratory setting. Therefore, despite their inferior accuracy, indirect estimation methods are often used. The indirect methods are normally based on color samples. The spectral sensitivity functions are estimated from a pair comprising the input data of the color samples and the corresponding output data of the captured RGB values.
Suppose that M different color samples with surface spectral reflectance S i ( λ ) (i = 1, 2, …, M) are observed under illumination e ( λ ) by a mobile phone camera. The camera inputs are the spectral data S i ( λ ) e ( λ ) reflected from the color samples, which are summarized as an n ×  M matrix
\[
C = [\,s_1 .\!* e,\ \ s_2 .\!* e,\ \ \ldots,\ \ s_M .\!* e\,], \tag{12}
\]
where s_i (i = 1, 2, ..., M) and e are n-dimensional column vectors representing the spectral reflectances and the illuminant, respectively, and the symbol .* represents elementwise multiplication. The camera outputs for the M color samples are represented as the 1 × M matrices
\[
z_R = [R_1, R_2, \ldots, R_M], \quad z_G = [G_1, G_2, \ldots, G_M], \quad z_B = [B_1, B_2, \ldots, B_M]. \tag{13}
\]
The observed outputs can then be written in the discrete form
\[
z_R = r^t C, \qquad z_G = g^t C, \qquad z_B = b^t C. \tag{14}
\]
The least squares estimates for Equation (14) are given by
\[
\hat{r} = [C C^t]^{-1} C z_R^t, \quad \hat{g} = [C C^t]^{-1} C z_G^t, \quad \hat{b} = [C C^t]^{-1} C z_B^t. \tag{15}
\]
However, this normal method is often ineffective for accurately estimating the spectral sensitivity functions owing to the low dimensionality of surface spectral reflectances. As the surface–spectral reflectance of natural or human-made objects is typically described using six to eight basis functions [27,28], the dimension of the reflectance is much lower than the dimension n (= 31) of the spectral sensitivity functions. Therefore, even when many samples (M > n) are used, the matrix C may be rank deficient and the matrix inversion is often unreliable.
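This rank deficiency is easy to reproduce numerically. The sketch below builds a sample matrix C from six reflectance basis functions and applies the normal estimate of Equation (15); a pseudoinverse is our substitution for the plain inverse, which fails when C C^t is singular.

```python
import numpy as np

def normal_estimate(C, z):
    """Normal-method estimate of one sensitivity channel (Equation (15)).

    C : (n, M) matrix of color-sample spectra, z : (M,) camera outputs.
    A pseudoinverse replaces the plain inverse because C C^t is singular
    when the reflectances span only a few basis functions.
    """
    return np.linalg.pinv(C @ C.T) @ C @ z

# Reflectances spanned by 6 basis functions give rank(C) = 6 << n = 31,
# so the 31-dimensional sensitivity cannot be recovered uniquely.
rng = np.random.default_rng(0)
C = rng.random((31, 6)) @ rng.random((6, 100))
```

The estimate reproduces the observed outputs, but only the component of the true sensitivity lying in the six-dimensional column space of C is recovered.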

4.2. Proposed Method Based on Color Samples and Spectral Features

We consider solving the estimation problem by using the spectral features of the sensitivity functions obtained through PCA in addition to the color samples. We focus on a linear model representation of the spectral sensitivity functions. The PCA results in the previous section suggest that the spectral sensitivity functions of mobile phone cameras can be approximated by linear combinations of a small number of principal components. Selecting the first L components for the estimates, the linear model is represented as
\[
r = \sum_{i=1}^{L} x_{Ri}\, u_{Ri}, \qquad g = \sum_{i=1}^{L} x_{Gi}\, u_{Gi}, \qquad b = \sum_{i=1}^{L} x_{Bi}\, u_{Bi}, \tag{16}
\]
where u R i , u G i , and u B i are the principal component vectors for each of the red, green, and blue channels, respectively, as shown in Figure 6, and the coefficients x R i , x G i , and x B i are unknown scalar weights to be estimated. Equation (16) provides a strong constraint in estimating the spectral shapes of the sensitivity functions.
Writing the L principal components of the spectral sensitivity features and the corresponding weights as the n ×  L matrices
\[
U_R = [u_{R1}, \ldots, u_{RL}], \quad U_G = [u_{G1}, \ldots, u_{GL}], \quad U_B = [u_{B1}, \ldots, u_{BL}], \tag{17}
\]
and L-dimensional column vectors
\[
x_R = [x_{R1}, \ldots, x_{RL}]^t, \quad x_G = [x_{G1}, \ldots, x_{GL}]^t, \quad x_B = [x_{B1}, \ldots, x_{BL}]^t, \tag{18}
\]
we have a compact form for Equation (16):
\[
r = U_R x_R, \qquad g = U_G x_G, \qquad b = U_B x_B. \tag{19}
\]
The spectral features of r , g , and b are included in the arrays of U R , U G , and U B .
The next step is to determine the weighting coefficients. Substituting Equation (19) into Equation (14), we obtain the following observation equations for the weights
\[
z_R = x_R^t U_R^t C, \qquad z_G = x_G^t U_G^t C, \qquad z_B = x_B^t U_B^t C. \tag{20}
\]
Therefore, the weights x R , x G , and x B can be estimated using the observed dataset of M color chips z R , z G , and z B . The least-squares estimates are given as
\[
\hat{x}_R = [C_R C_R^t]^{-1} C_R z_R^t, \quad \hat{x}_G = [C_G C_G^t]^{-1} C_G z_G^t, \quad \hat{x}_B = [C_B C_B^t]^{-1} C_B z_B^t, \tag{21}
\]
where C_R = U_R^t C, C_G = U_G^t C, and C_B = U_B^t C. We note that the matrix sizes of C_R, C_G, and C_B are L × M, and C_R C_R^t, C_G C_G^t, and C_B C_B^t are full-rank L × L matrices. Therefore, the matrix inversion is stable. The estimates of the spectral sensitivity functions r̂, ĝ, and b̂ are finally obtained by substituting x̂_R, x̂_G, and x̂_B into Equation (19):
\[
\hat{r} = U_R [C_R C_R^t]^{-1} C_R z_R^t, \quad \hat{g} = U_G [C_G C_G^t]^{-1} C_G z_G^t, \quad \hat{b} = U_B [C_B C_B^t]^{-1} C_B z_B^t. \tag{22}
\]
It should be noted that these estimates are the least squares estimates to minimize the residual error in the observations. The original residual error is described by a cost function L F as
\[
L_F = \| z_R - r^t C \|^2 + \| z_G - g^t C \|^2 + \| z_B - b^t C \|^2, \tag{23}
\]
where ‖x‖ denotes the L2 norm of a vector x. As the spectral sensitivity functions are represented using the linear model in Equation (16), the cost function can be rewritten as follows:
\[
L_F = \| z_R - x_R^t U_R^t C \|^2 + \| z_G - x_G^t U_G^t C \|^2 + \| z_B - x_B^t U_B^t C \|^2. \tag{24}
\]
When L_F is directly minimized with respect to each of x_R, x_G, and x_B, we obtain the same solution as Equation (22). Therefore, the proposed method solves the optimization problem of minimizing the residual error.
The parameter L denotes the dimensionality of the linear model representation of the spectral sensitivity functions. The selection of L affects the estimation accuracy. The most appropriate value of L is determined experimentally based on the estimation accuracy of the spectral sensitivity functions.
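The estimator of Equation (22) can be sketched compactly in NumPy. The function name is ours, and U_k is assumed to hold the principal components of one channel column-wise.

```python
import numpy as np

def estimate_with_features(U_k, C, z, L=1):
    """Proposed estimate for one channel (Equation (22)).

    U_k : (n, K) principal components of the sensitivity dataset (K >= L)
    C   : (n, M) color-sample spectra (reflectance elementwise-times illuminant)
    z   : (M,) observed camera outputs
    The sensitivity is constrained to the span of the first L components,
    so only a well-conditioned L x L system has to be solved.
    """
    U = U_k[:, :L]
    CL = U.T @ C                             # L x M projected sample matrix
    x = np.linalg.solve(CL @ CL.T, CL @ z)   # least squares weights
    return U @ x
```

With L = 1, the estimate reduces to scaling the average spectral curve of the database by a single scalar weight.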

5. Experimental Results

5.1. Experimental Setup

We used 1523 Munsell color chips from the Munsell Book of Color [29]. The chips are reflective color targets arranged in the Munsell color system. Figure 8 shows the imaging setup for the color samples using a mobile phone camera. We measured the surface–spectral reflectance of each color sample using the spectral colorimeter. Figure 9 shows the complete set of all the spectral reflectance measurements. We also used 24 color samples from the X-Rite ColorChecker Passport Photo [30]. Figure 10 shows the color checkers used for the validation of sensitivity measurement and reflectance estimation, where panel (a) shows the imaging targets consisting of 24 color checkers and the white reference standard (Spectralon), and panel (b) the spectral reflectance values measured by the spectral colorimeter. The illumination light source was an incandescent lamp with the spectral power distribution shown in Figure 11. A Munsell white paper (N9.5) was used to correct the illumination unevenness on the color samples and limb darkening.

5.2. Validation of the Measured Spectral Sensitivities

To evaluate the accuracy of the measured spectral sensitivities, the color differences were computed between the colors imaged by the mobile phone cameras and the simulated colors based on the measured spectral sensitivities. We randomly selected two iOS and two Android phone cameras, making a total of four phone cameras, which we studied using the X-Rite color checkers in Figure 10.
We simulated the camera response for each color checker using the measured spectral sensitivities (r_mea, g_mea, b_mea) of each mobile phone camera. The camera output RGB values (z_mea,R, z_mea,G, z_mea,B) can be predicted in a discrete form as
\[
z_{\mathrm{mea},R} = r_{\mathrm{mea}}^t (s .\!* e), \qquad z_{\mathrm{mea},G} = g_{\mathrm{mea}}^t (s .\!* e), \qquad z_{\mathrm{mea},B} = b_{\mathrm{mea}}^t (s .\!* e), \tag{25}
\]
where s and e represent the spectral reflectance of the color checker in Figure 10b and the illuminant spectral power distribution in Figure 11, respectively.
On the other hand, the RGB values ( z img ,   R ,   z img ,   G ,   z img ,   B ) of the color checkers were captured directly by each camera. The RGB color difference was then averaged over all color checkers as
\[
\Delta E_{\mathrm{RGB}} = E\!\left[\sqrt{(z_{\mathrm{img},R} - z_{\mathrm{mea},R})^2 + (z_{\mathrm{img},G} - z_{\mathrm{mea},G})^2 + (z_{\mathrm{img},B} - z_{\mathrm{mea},B})^2}\,\right]. \tag{26}
\]
Table 2 shows the average color differences between the imaged colors of the 24 color checkers captured by each camera and the simulated colors based on the measured spectral sensitivities, where the RGB values are normalized to be R 2 + G 2 + B 2 = 1 for the white reference standard under the incandescent lamp used.
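The color-difference measure of Equation (26) amounts to the mean Euclidean distance between imaged and simulated RGB triplets; a short NumPy sketch (function name ours):

```python
import numpy as np

def delta_e_rgb(z_img, z_mea):
    """Average RGB color difference over a set of color checkers:
    the Euclidean distance per checker, averaged over all checkers.

    z_img, z_mea : (M, 3) arrays of normalized RGB triplets.
    """
    diff = np.asarray(z_img, dtype=float) - np.asarray(z_mea, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```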

5.3. Estimation Results by the Normal Method

The estimation results obtained using the normal method for the iPhone 8 are shown in Figure 12, where the bold curves in red, green, and blue represent the estimated spectral sensitivities of the red, green, and blue channels, respectively, and the broken curves represent the measured spectral sensitivities. A non-negative least squares estimation was applied to solve Equation (14). Despite the use of many color samples, the estimation results based on the direct use of color samples were quite unstable and inaccurate.
To investigate the dimensionality of the spectral reflectance dataset, SVD was applied to the 31 × 1523 matrix S = [ s 1 ,   s 2 ,   ,   s 1523 ] consisting of the measured spectral reflectance values from the Munsell color chips. Figure 13 depicts the percent variance P ( K ) defined in Equation (11) as a function of the number of principal components K. It can be seen that the percent variance reached 1.0 around K = 6. Therefore, the dimension of the reflectance dataset was approximately 6 and it was not possible to recover the spectral functions with 31 dimensions.

5.4. Estimation Results by the Proposed Method

The feasibility of the proposed method was evaluated in two steps.
(1) In the first experiment, we focused solely on the iPhone 8 and investigated the performance in detail. The spectral sensitivity functions of the iPhone 8 were removed from the original database, and SVD was applied to the remaining dataset to obtain the principal components u R i , u G i , and u B i . Next, the spectral sensitivities of the iPhone 8 were estimated via the algorithm in Section 4.2 using the camera RGB values for the color samples and the principal components. Finally, to determine the most appropriate L, the root mean squared errors of the estimated spectral sensitivities were calculated as
\[
\mathrm{rmse} = \sqrt{\left( \| \hat{r} - r \|^2 + \| \hat{g} - g \|^2 + \| \hat{b} - b \|^2 \right) / (31 \times 3)}. \tag{27}
\]
The estimation errors at different values of L were rmse = 0.04929, 0.07239, and 0.08963 for L = 1, 2, and 3, respectively. The minimum error was recorded when the estimation was performed using only one component. Figure 14 shows the estimated spectral sensitivity functions of the iPhone 8 at L = 1.
Furthermore, we examined the possibility of reducing the number of color samples. Suppose that m color samples are used for sensitivity estimation. We randomly selected m samples from all the 1523 samples of the Munsell color chips and then computed the spectral sensitivity estimate and accuracy based on the selected sample data. This trial was repeated 1000 times to achieve significant statistical reliability. The average errors were rmse = 0.04958, 0.04933, 0.04931, 0.04929, and 0.04929 for m = 10, 50, 100, 500, and 1000. These results show that a sufficient estimation accuracy can be obtained even when only ten color samples are used. Figure 15 shows the estimated sensitivities of the iPhone 8 at L = 1 and m = 10. Thus, the proposed method has the advantage that the spectral sensitivity functions can be estimated using the average spectral curves of the database and a small number of color samples.
In the second experiment, we investigated whether the performance of the proposed method depends on the mobile phone camera used and on the color samples. As the measured spectral sensitivity curves were similar, we selected two iOS phone cameras and two Android phone cameras, and evaluated their spectral sensitivities using the 24 color samples from the X-Rite color checkers. For a fair validation, the principal components were computed from the measured spectral sensitivities of the remaining 19 mobile phone cameras, excluding the target phone camera. Table 3 lists the RMSEs of the estimated spectral sensitivities of the four mobile phone cameras for different values of L. As can be seen from the table, the minimum errors occurred at L = 1, i.e., when only the first component was used, for all four mobile phone cameras. Figure 16 depicts the estimated sensitivity functions of the four cameras at L = 1. Thus, it is confirmed that the spectral sensitivity of a mobile phone camera can be estimated based solely on the first principal component of the dataset and a small number of color samples.

5.5. Reflectance Estimation Validation

The surface–spectral reflectance is a physical feature inherent to the surface of a target object. Therefore, estimating the spectral reflectance is crucial for object recognition and identification in a variety of fields, including image science and technology, and computer vision. To evaluate the feasibility of mobile phones for spectral reflectance estimation, we validated the measured and estimated spectral sensitivity functions of the iPhone 8.
The Wiener filter is a linear estimation technique widely used for spectral reflectance estimation [31,32,33,34,35,36,37]. The relationship between the spectral reflectance of an object surface and the camera outputs is modeled with the camera output RGB vector z = [ z R ,   z G ,   z B ] t , the spectral sensitivity vectors r, g, and b, the illuminant vector e, the reflectance vector s to be estimated, and the noise vector n in observation as
z = A s + n,
where A = [r ∘ e, g ∘ e, b ∘ e]^t and ∘ denotes the element-wise product. When the reflectance s and noise n are uncorrelated, the Wiener estimate with the minimal mean square error is expressed as follows:
ŝ = P A^t (A P A^t + Σ)^{-1} z,
where P is the covariance matrix of the reflectance data and Σ is the covariance matrix of the noise, which can usually be assumed to be diagonal, Σ = diag(σ_R², σ_G², σ_B²). We determined P using the database of surface–spectral reflectance values in [38] and determined Σ empirically (see [38]).
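The Wiener estimate is a one-line linear-algebra computation. This sketch (variable names and shapes are ours) uses a linear solve rather than an explicit matrix inverse for numerical stability:

```python
import numpy as np

def wiener_estimate(z, A, P, Sigma):
    """Wiener estimate s_hat = P A^t (A P A^t + Sigma)^{-1} z.

    z: (3,) camera RGB vector; A: (3, 31) system matrix built as
    A = [r*e, g*e, b*e] (element-wise products of the sensitivity
    and illuminant vectors); P: (31, 31) reflectance covariance;
    Sigma: (3, 3) diagonal noise covariance."""
    M = A @ P @ A.T + Sigma
    # Solve M x = z instead of forming M^{-1} explicitly.
    return P @ A.T @ np.linalg.solve(M, z)
```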
The accuracy of the estimated reflectance values was evaluated using the average RMSE and the average CIE-LAB color difference over the 24 color checkers. Table 4 lists the performance values of the iPhone 8 in four cases: Measurements 1 and 2 are based on the spectral sensitivities measured directly with the monochromator in Figure 2a and with the programmable light source in Figure 2b, respectively, while Estimations 1 and 2 are based on the spectral sensitivities estimated from all the color samples in Figure 14 and from only 10 color samples in Figure 15, respectively. The LAB color difference was calculated using the illuminant shown in Figure 11. There is no noticeable difference among the performances of the four spectral sensitivity functions used in the validation. The validation results suggest that both the measured and the estimated spectral sensitivities in this study are useful for spectral reflectance estimation.

6. Discussion

It is important to clarify the difference between the estimation accuracy and the approximation accuracy. This distinction can be drawn when the spectral sensitivities are known. In the proposed method, the spectral sensitivities of the mobile phone cameras are estimated as a linear combination of the principal components of the dataset, and they can also be approximated as a linear combination of the same principal components. When using the first K components, the error between the original spectral sensitivity matrix Y and the approximated matrix Ŷ_K can be expressed as follows:
‖Y − Ŷ_K‖² = σ_{K+1}² + σ_{K+2}² + ⋯ + σ_N²,
where σ_1, σ_2, …, σ_N are the singular values (see Equation (9)). Therefore, the approximation error decreases monotonically as the number of components K increases. However, the estimation accuracy does not necessarily behave in the same way. The larger the component number, the smaller the singular value and the lower the contribution rate. When such a low-contribution component, which effectively acts as noise, is added to the estimation process, the estimation accuracy deteriorates severely. Figure 17 shows the variations in the estimation and approximation errors as a function of the number of principal components. The error values are averaged over the four mobile phone cameras in Table 3. The minimum average estimation error is recorded when only one principal component is used, and the second smallest error when three principal components are used. As the contribution rate of the first principal component is very large (P(1) > 0.99) while the contribution of the third is far smaller, estimation using only the first principal component can be considered stable and reliable.
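The monotone decrease of the approximation error is the Eckart–Young property of the SVD. A quick numerical check, with synthetic data standing in for the measured sensitivity matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((31, 20))   # e.g., 31 wavelengths x 20 cameras

U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
K = 3
Y_K = (U[:, :K] * sv[:K]) @ Vt[:K]  # rank-K approximation

# Squared Frobenius error equals the sum of the discarded
# squared singular values.
err = np.linalg.norm(Y - Y_K, "fro") ** 2
assert np.isclose(err, np.sum(sv[K:] ** 2))
```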
Based on the experimental results, the spectral sensitivity functions of the mobile phone cameras can be estimated from only the first principal component of the dataset of measured spectral sensitivities and a small number of color samples. In the case of DSLR cameras, two principal components were required, and the extent to which the color samples could be simplified was not clear [16]. The present results suggest that the spectral sensitivity functions of mobile phone cameras can be estimated more easily than those of DSLR cameras, using only 10 to 24 color samples. From the point of view of computational simplicity, we note that no inverse matrix operation is required because the matrices C_i C_i^t (i = R, G, B) become scalar values. As a result, the spectral sensitivity functions can be modeled using the first principal components as follows:
r̂ = c_R u_R,  ĝ = c_G u_G,  b̂ = c_B u_B.
As the first principal component is the same as the average spectrum of the spectral dataset, the PCA of the dataset of the measured spectral sensitivities is not required. The spectral sensitivity functions are estimated by simply weighting the average spectral sensitivity functions. The numerical data of the average spectral sensitivities u R , u G , and u B are available at the same site (http://ohlab.kic.ac.jp/) (accessed on 22 July 2021) as our database. The three weighting coefficients c R , c G , and c B are calculated from the camera RGB data for color samples and the estimates of the spectral sensitivity functions are obtained using Equation (31).
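Because C_i C_i^t reduces to a scalar at L = 1, each weighting coefficient is a simple ratio of inner products. A minimal sketch, assuming q holds the camera responses predicted from the average sensitivity u_i for the color samples and rho the actual camera outputs for that channel (these names are ours):

```python
import numpy as np

def first_component_weight(rho, q):
    """Least-squares scalar c minimizing ||rho - c * q||^2.
    No matrix inversion is needed: c = (q . rho) / (q . q)."""
    return float(q @ rho) / float(q @ q)
```

The channel estimate is then the weight times the average sensitivity, e.g. r̂ = c_R u_R.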

7. Conclusions

We have developed methods for measuring and estimating the spectral sensitivity functions of mobile phone cameras. In the direct measurement method, the spectral sensitivity at each wavelength was measured using monochromatic light. Although accurate, this method is time-consuming and expensive. The indirect estimation method was based on color samples, in which the spectral sensitivities were estimated from pairs of input and output data: the color samples and the corresponding camera output RGB values.
We first presented an imaging system for direct measurements and performed measurements on a variety of mobile phone cameras to create a database of spectral sensitivity functions. Subsequently, the features of the measured spectral sensitivity functions in our database were analyzed using PCA. We determined the dimensionality of the spectral sensitivity dataset and extracted the statistical features of the spectral functions. We then described a normal method for estimating the spectral sensitivity functions using color samples. However, this method was not effective at estimating the spectral sensitivity functions. Therefore, we proposed an effective method to solve the estimation problem using the spectral features of the sensitivity functions in addition to the color samples. The estimation was accurate even when only a small number of spectral features were selected.
The feasibility of the proposed method was confirmed through experiments. The characteristics and advantages of the proposed method over the previously published methods for mobile phone cameras are summarized as follows:
1. We measured the spectral sensitivities of mobile phone cameras, created a database, and clarified its characteristics for the first time.
2. The spectral sensitivities can be estimated using the average spectral sensitivity and a small number of color samples.
3. The computation is stable and straightforward because only three unknown parameters need to be determined, and the estimation accuracy is very close to that of the direct measurements.
A limitation of our approach is that the manufacturers’ data were not available to validate the proposed method directly. However, combining direct measurements with PCA provides a means to optimally approximate the spectral sensitivity functions, which helps overcome this limitation even in the absence of the manufacturers’ data.
The number of people using DSLR cameras, other than professional photographers, is decreasing worldwide, and mobile phones are becoming mainstream. The advantages of mobile phone cameras include portability, low cost, convenience, and a wide range of applications. Unlike DSLR cameras, the camera and computer come together as a set in a mobile phone, which is advantageous for implementing applications that require spectral sensitivity functions. We believe that this paper adds value to many fields, including imaging science and technology, as techniques to estimate the spectral sensitivities of mobile phone cameras are still developing. Furthermore, given the intense growth of the mobile phone market, our study offers journal readers an effective demonstration of different approaches.

Author Contributions

Conceptualization, S.T.; Investigation, S.T., S.N. and R.O.; Analysis, S.T.; Writing—original draft, S.T.; Literature Review, S.T., S.N. and R.O.; Measurement and Experiment, S.T., S.N. and R.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a Grant-in-Aid for Scientific Research (C) (20K11877).

Acknowledgments

The authors would like to thank Eriko Tominaga for help with the DNG capturing, the students at Osaka Electro-Communication University for help with the spectral sensitivity measurements, and Roseline Yong at Akita University for help with the English proofreading.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Statista. Number of Smartphone Subscriptions Worldwide from 2016 to 2026. Available online: https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide (accessed on 22 July 2021).
2. Rateni, G.; Dario, P.; Cavallo, F. Smartphone-based food diagnostic technologies: A review. Sensors 2017, 17, 1453.
3. McGonigle, A.J.S.; Wilkes, T.C.; Pering, T.D.; Willmott, J.R.; Cook, J.M.; Mims, F.M.; Parisi, A.V. Smartphone spectrometers. Sensors 2018, 18, 223.
4. Burggraaff, O.; Perduijn, A.B.; van Hek, R.F.; Schmidt, N.; Keller, C.U.; Snik, F. A universal smartphone add-on for portable spectroscopy and polarimetry: iSPEX 2. In Micro- and Nanotechnology Sensors, Systems, and Applications XII; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11389, p. 113892K.
5. Vora, P.L.; Farrell, J.E.; Tietz, J.D.; Brainard, D.H. Image capture: Simulation of sensor responses from hyperspectral images. IEEE Trans. Image Process. 2001, 10, 307–316.
6. Farrell, J.E.; Catrysse, P.B.; Wandell, B.A. Digital camera simulation. Appl. Opt. 2012, 51, A80–A90.
7. Berra, E.; Gibson-Poole, S.; MacArthur, A.; Gaulton, R.; Hamilton, A. Estimation of the spectral sensitivity functions of un-modified and modified commercial off-the-shelf digital cameras to enable their use as a multispectral imaging system for UAVs. In Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics, Toronto, ON, Canada, 30 August–2 September 2015; pp. 207–214.
8. Darrodi, M.M.; Finlayson, G.D.; Goodman, T.; Mackiewicz, M. A reference data set for camera spectral sensitivity estimation. J. Opt. Soc. Am. A 2014, 32, 381–391.
9. Camera Spectral Sensitivity Database. Available online: http://www.gujinwei.org/research/camspec/camspec_database.txt (accessed on 21 July 2021).
10. Nakamura, J. Image Sensors and Signal Processing for Digital Still Cameras; CRC Press: Boca Raton, FL, USA, 2006.
11. Hubel, P.M.; Sherman, D.; Farrell, J.E. A comparison of methods of sensor spectral sensitivity estimation. In Color and Imaging Conference; Society for Imaging Science and Technology: Scottsdale, AZ, USA, 15–18 November 1994; Volume 1994, pp. 45–48.
12. Hardeberg, J.Y.; Bretel, H.; Schmitt, F.J.M. Spectral characterization of electronic cameras. In Proceedings of Electronic Imaging: Processing, Printing, and Publishing in Color, Zurich, Switzerland, 18–20 May 1998; Volume 3499, pp. 100–109.
13. DiCarlo, J.M.; Montgomery, G.E.; Trovinger, S.W. Emissive chart for imager calibration. In Color and Imaging Conference; Society for Imaging Science and Technology: Scottsdale, AZ, USA, 9–12 November 2004; pp. 295–301.
14. Han, S.; Matsushita, Y.; Sato, I.; Okabe, T.; Sato, Y. Camera spectral sensitivity estimation from a single image under unknown illumination by using fluorescence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 18–20 June 2012; pp. 805–812.
15. Zhu, J.; Xie, X.; Liao, N.; Zhang, Z.; Wu, W.; Lv, L. Spectral sensitivity estimation of trichromatic camera based on orthogonal test and window filtering. Opt. Express 2020, 28, 28085–28100.
16. Jiang, J.; Liu, D.; Gu, J.; Susstrunk, S. What is the space of spectral sensitivity functions for digital color cameras? In Proceedings of the IEEE Workshop on Applications of Computer Vision, Tampa, FL, USA, 15–17 January 2013; pp. 168–179.
17. Finlayson, G.; Darrodi, M.M.; Mackiewicz, M. Rank-based camera spectral sensitivity estimation. J. Opt. Soc. Am. A 2016, 33, 589–599.
18. Ji, Y.; Kwak, Y.; Park, S.M.; Kim, Y.L. Compressive recovery of smartphone RGB spectral sensitivity functions. Opt. Express 2021, 29, 11947–11961.
19. Bongiorno, D.L.; Bryson, M.; Dansereau, D.G.; Williams, S.B. Spectral characterization of COTS RGB cameras using a linear variable edge filter. In Proceedings of SPIE 8660, Digital Photography IX; Sampat, N., Battiato, S., Eds.; International Society for Optics and Photonics: Burlingame, CA, USA, 3–7 February 2013; p. 86600N.
20. Burggraaff, O.; Schmidt, N.; Zamorano, J.; Pauly, K.; Pascual, S.; Tapia, C.; Spyrakos, E.; Snik, F. Standardized spectral and radiometric calibration of consumer cameras. Opt. Express 2019, 27, 19075–19101.
21. Adobe Systems Incorporated. Digital Negative (DNG) Specification, Version 1.4.0.0; Adobe Systems Incorporated: San Jose, CA, USA, 2012.
22. Bazhyna, A.; Gotchev, A.; Egiazarian, K. Near-lossless compression algorithm for Bayer pattern color filter arrays. In Digital Photography; International Society for Optics and Photonics: Burlingame, CA, USA, 2005; Volume 5678, pp. 198–209.
23. Tominaga, S.; Nishi, S.; Ohtera, R. Estimating spectral sensitivities of a smartphone camera. In Proceedings of the IS&T International Symposium on Electronic Imaging, Online, 11–21 January 2021; Volume XXVI, p. COLOR-223.
24. Tominaga, S.; Horiuchi, T. Spectral imaging by synchronizing capture and illumination. J. Opt. Soc. Am. A 2012, 29, 1764–1775.
25. Ohta, N.; Robertson, A.R. Measurement and Calculation of Colorimetric Values. In Colorimetry: Fundamentals and Applications; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2005; Chapter 5.
26. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013.
27. Parkkinen, J.; Hallikainen, J.; Jaaskelainen, T. Characteristic spectra of Munsell colors. J. Opt. Soc. Am. A 1989, 6, 318–322.
28. Tominaga, S. Multichannel vision system for estimating surface and illuminant functions. J. Opt. Soc. Am. A 1996, 13, 2163–2173.
29. Munsell Products. Available online: https://www.modeinfo.com/en/Munsell-Products/ (accessed on 21 July 2021).
30. ColorChecker Passport Photo 2. Available online: https://xritephoto.com/ph_product_overview.aspx?id=2572&catid=158 (accessed on 21 July 2021).
31. Haneishi, H.; Hasegawa, T.; Hosoi, A.; Yokoyama, Y.; Tsumura, N.; Miyake, Y. System design for accurately estimating the spectral reflectance of art paintings. Appl. Opt. 2000, 39, 6621–6632.
32. Shimano, N. Recovery of spectral reflectances of objects being imaged without prior knowledge. IEEE Trans. Image Process. 2006, 15, 1848–1856.
33. Shimano, N.; Terai, K.; Hironaga, M. Recovery of spectral reflectances of objects being imaged by multispectral cameras. J. Opt. Soc. Am. A 2007, 24, 3211–3219.
34. Stigell, P.; Miyata, K.; Hauta-Kasari, M. Wiener estimation method in estimating of spectral reflectance from RGB image. Pattern Recognit. Image Anal. 2007, 17, 233–242.
35. Murakami, Y.; Fukura, K.; Yamaguchi, M.; Ohyama, N. Color reproduction from low-SNR multispectral images using spatio-spectral Wiener estimation. Opt. Express 2008, 16, 4106–4120.
36. Urban, P.; Rosen, M.R.; Berns, R.S. A spatially adaptive Wiener filter for reflectance estimation. In Proceedings of the 16th Color Imaging Conference, Portland, OR, USA, 10–14 November 2008; pp. 279–284.
37. Nahavandi, A.M. Noise segmentation for improving performance of Wiener filter method in spectral reflectance estimation. Color Res. Appl. 2018, 43, 341–348.
38. Tominaga, S.; Matsuura, A.; Horiuchi, T. Spectral analysis of omnidirectional illumination in a natural scene. J. Imaging Sci. Technol. 2010, 54, 040502-1–040502-9.
Figure 1. Linearity test of the raw camera data using color samples. (a) Relationship between the average reflectance of gray chips and the camera RGB outputs. (b) Relationship between the luminance values and the camera RGB outputs.
Figure 2. Experimental setups for measuring the spectral responses of mobile phone cameras using monochromatic light and a spectrometer. (a) Monochromatic light from monochromator grating. (b) Monochromatic light from a programmable light source.
Figure 3. Relative spectral sensitivity functions of the 20 mobile phone cameras in our database.
Figure 4. Spectral sensitivity functions classified into the two categories of (a) iOS phone cameras, and (b) Android phone cameras.
Figure 5. Fitting results to the color-matching functions based on the spectral sensitivity functions of iPhone 8.
Figure 6. First three principal components u 1 , u 2 , and u 3 for the (a) red, (b) green, and (c) blue channels of the spectral sensitivity functions. The bold, broken, and dotted curves represent the first, second, and third principal components, respectively.
Figure 7. Approximated spectral curves for the (a) red, (b) green, and (c) blue channels of the iPhone 8 spectral sensitivity functions. The colored bold curves, black bold curves, and broken curves represent the measured spectral sensitivities, approximations using the first component only, and approximations using the first two components, respectively.
Figure 8. Setup for color sample imaging by a mobile phone camera.
Figure 9. Complete set of the surface–spectral reflectances measured from all the Munsell color chips.
Figure 10. Color checkers used for reflectance estimation validation. (a) Imaging targets consisting of 24 color checkers and the white reference standard (Spectralon). (b) Spectral reflectances of the 24 color checkers measured by the spectral colorimeter.
Figure 11. Illuminant spectral power distribution of the incandescent lamp used.
Figure 12. Estimation results from the normal method for the iPhone 8, where the bold curves in red, green, and blue represent the estimated spectral sensitivities for the red, green, and blue channels, respectively, and the broken curves represent the measured spectral sensitivities used as the reference data.
Figure 13. Percent variance P(K) as a function of the number of principal components K.
Figure 14. Estimated spectral sensitivity functions of iPhone 8 at L = 1 using all color samples, where the bold curves represent the estimated spectral sensitivities, and the broken curves represent the measured spectral sensitivities.
Figure 15. Estimated sensitivities of iPhone 8 at L = 1 and m = 10, where the bold curves represent the estimated spectral sensitivities, and the broken curves represent the measured spectral sensitivities.
Figure 16. Estimated spectral sensitivities of (a) iPhone 6s, (b) iPhone 8, (c) P10 lite, and (d) Galaxy S7 edge at L = 1 using the 24 color checkers, where the bold and broken curves represent the estimated spectral sensitivities and the measured spectral sensitivities, respectively.
Figure 17. Variations in the estimation error and the approximation error as a function of the number of principal components. The error values are averaged over the four mobile phone cameras.
Table 1. Mobile phones measured in this study.
Manufacturer   Model               Image Sensor
Apple          iPhone 6s           Sony IMX315
Apple          iPhone SE           Sony IMX315
Apple          iPhone 8            Sony IMX315
Apple          iPhone X            Sony IMX315
Apple          iPhone 11           Sony IMX503
Apple          iPhone 12 Pro MAX   Sony IMX603
HUAWEI         P10 lite            Sony IMX214
HUAWEI         nova lite 2         Unknown
Samsung        Galaxy S7 edge      Samsung ISOCELL S5K2L1
Samsung        Galaxy S9           Samsung ISOCELL S5K2L3
Samsung        Galaxy Note10+      Samsung ISOCELL S5K2L4
Samsung        Galaxy S20          Samsung ISOCELL S5KGW2
SHARP          AQUOS sense3 lite   Unknown
SHARP          AQUOS R5G           Infineon Technologies IRS2381C
Xiaomi         Mi Mix 2s           Samsung ISOCELL S5K3M3
Xiaomi         Redmi Note 9S       Samsung ISOCELL S5KGM2
Sony           Xperia 1 II         Sony IMX557
Sony           Xperia 5 II         Sony IMX557
Fujitsu        arrows NX9          Unknown
Google         Pixel 4             Sony IMX363
Table 2. Average color differences between the imaged colors of the 24 color checkers captured by each camera and the simulated colors based on the measured spectral sensitivities.
Color Difference   iPhone 6s   iPhone 8   P10 lite   Galaxy S7 edge
ΔE_RGB             0.01530     0.02046    0.01772    0.01689
Table 3. RMSE of the estimated spectral sensitivities for four mobile phone cameras at different values of L using the 24 color checkers.
RMSE    iPhone 6s   iPhone 8   P10 lite   Galaxy S7 edge
L = 1   0.05577     0.06597    0.03841    0.03779
L = 2   0.07167     0.10714    0.05741    0.03946
L = 3   0.06378     0.07310    0.06816    0.04277
Table 4. Performance values in the four cases. Measurements 1 and 2 use the spectral sensitivities measured directly with the monochromator in Figure 2a and with the programmable light source in Figure 2b, respectively. Estimations 1 and 2 use the spectral sensitivities estimated from all the color samples in Figure 14 and from only ten color samples in Figure 15, respectively.
                               Measurement 1   Measurement 2   Estimation 1   Estimation 2
Average RMSE                   0.05241         0.05145         0.05201        0.05279
Average LAB color difference   7.055           6.082           6.973          6.749
