Article

Lighting Deviation Correction for Integrating-Sphere Multispectral Imaging Systems

Zhe Zou, Hui-Liang Shen, Shijian Li, Yunfang Zhu and John H. Xin

1 College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
2 College of Computer Science, Zhejiang University, Hangzhou 310027, China
3 College of Computer Science and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
4 Institute of Textiles and Clothing, The Hong Kong Polytechnic University, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(16), 3501; https://doi.org/10.3390/s19163501
Submission received: 20 May 2019 / Revised: 30 July 2019 / Accepted: 8 August 2019 / Published: 10 August 2019
(This article belongs to the Section Physical Sensors)

Abstract

In an integrating sphere multispectral imaging system, measurement inconsistency can arise when acquiring the spectral reflectances of samples, because the lighting condition can be changed by the measured samples themselves owing to the multiple light reflections inside the integrating sphere. In addition, owing to the non-uniform light transmission of the lens and narrow-band filters, the measured reflectance is spatially dependent. To deal with these problems, we propose a correction method that consists of two stages. The first stage employs a white board to correct the non-uniformity and a small white patch to correct the lighting deviation, both under the assumption of ideal Lambertian reflection. The second stage uses a polynomial regression model to further remove the lighting inconsistency when measuring non-Lambertian samples. The method is evaluated on image data acquired with a real multispectral imaging system. Experimental results illustrate that our method reduces the measurement inconsistency considerably, which in turn improves the spectral and colorimetric accuracy of color measurement and is crucial to practical applications.

1. Introduction

Integrating spheres are widely used in radiometric and photometric measurements, among which reflectance measurement is a typical application [1]. By collecting and integrating the reflected radiant flux, the aim of the sphere is to provide a stable and uniform illumination condition. The hemispherical reflectance or reflectance factor of a sample can then be measured in certain geometries.
Absolute or relative reflectance measurements can be conducted by using integrating spheres. In absolute measurements [2,3], the reflectance of a sample is measured by illuminating the sphere wall and the sample in turn. The relative measurements [4,5] introduce a reference standard of known reflectance; the reflectance of a sample is computed based on the ratio of detected signals corresponding to the sample and standard. There are two types of relative measurement methods. One is the comparison method, in which the standard and sample are placed in the sphere simultaneously. The other measures the standard and sample in the sphere successively and is known as the substitution method.
The spectrophotometer is a typical measurement system that uses an integrating sphere. It measures the average spectral reflectance over the sample port, whose diameter is relatively large; accordingly, its spatial resolution is very coarse. A multispectral imaging system, when appropriately calibrated, can measure the spectral reflectance of a sample at spatially-fine resolution [6,7,8,9]. Currently, some multispectral imaging systems [10,11,12,13,14] use the integrating sphere to obtain stable and uniform illumination. A monochrome camera, together with narrow-band filters or a liquid crystal tunable filter (LCTF), works as the detector. The detector port is usually on the opposite side of the sample port.
In a single-beam integrating sphere, substituting the standard with a sample alters the throughput of the sphere, especially when the sample is of much lower reflectance than the standard. This usually leads to a lower detected signal and hence an underestimated measurement. The measurement error caused by this lighting deviation is known as the single-beam substitution error. Typical solutions to this problem include increasing the sphere size, decreasing the sample port area, using multiple standards with different reflectance levels, and introducing an additional reference beam. Methods for eliminating the substitution error have been investigated in [15,16].
In an integrating sphere multispectral imaging system, the sample port has a relatively large area and holds the standard and sample in turn, which introduces non-negligible lighting deviation. However, the solutions just mentioned target systems that measure the average spectral reflectance over the sample port and cannot be applied directly to the imaging system. Previous calibration methods [17,18,19,20] for the imaging system mainly dealt with the spatial non-uniformity, usually involving dark current images and images of the reference standard. As these methods cannot eliminate the lighting deviation, additional lighting correction methods are needed. The work in [10] presented a Markov model to deal with this problem for the integrating sphere imaging system; the model is based on the assumption of constant lamp irradiance and Lambertian sample reflection. Considering the illuminant fluctuation and non-Lambertian samples, in this paper we propose a two-stage lighting deviation correction for integrating sphere multispectral imaging systems.
The novelty of the proposed lighting correction is mainly two-fold. First, a small reference white patch is employed to compensate for the lighting deviation under the assumption of Lambertian reflection. Second, a polynomial regression model is used to further reduce the measurement inconsistency of non-Lambertian samples in practical measurement. The improvement of measurement consistency is validated by extensive experiments.

2. The Problem of Lighting Deviation Correction

Figure 1 illustrates a typical integrating sphere multispectral imaging system, which uses the d:0° measurement geometry. The system consists of an integrating sphere, a lamp, a monochrome camera, a lens, and a filter wheel installed with narrow-band filters. The incident light is diffused as it enters the sphere through the entrance port and is spatially integrated inside the sphere. The multispectral image of a planar sample is acquired by rotating the filters into the optical path and acquiring band images through the detector port sequentially.
Lighting correction is needed for accurate and stable color measurement in an integrating sphere multispectral imaging system. The correction procedure mainly deals with two problems, i.e., spatial non-uniformity and lighting deviation. First, even if the integrating sphere provides uniform illumination, the reflected fluxes reaching the image sensor from a uniform sample at different positions can still differ, due to the non-uniform light transmission of the lens and filters. This problem can be solved by acquiring an image of a white board that has the same size as the sample.
Second, lighting deviation can occur when measuring different samples. Due to the multiple light reflections inside the sphere, the sample itself can change the lighting condition; this is referred to as the background effect hereafter. This means that, for a certain small color patch, its measurements will be different if this patch is placed on different background samples. In addition to the imaged sample, the fluctuation of the lamp irradiance also leads to the lighting deviation.
To deal with the mentioned problems, we propose a two-stage lighting calibration method to correct the measurements of individual bands. In the first stage, we assume the sample surface is of ideal Lambertian diffuse reflection. We then characterize the spatial non-uniformity by acquiring the image of a white board, and correct lighting deviation by employing a small white patch for reference. In the second stage, we use a polynomial regression model to further reduce the background effect for practical color measurement.
The lighting deviation correction is performed on the camera response (i.e., the intensity of a pixel in the captured image). The spectral reflectance is reconstructed from the corrected camera responses using Wiener estimation [21]. To be more specific, in our 16-band multispectral imaging system, the 31-dimensional spectral reflectance for each pixel is mathematically reconstructed from the 16-dimensional camera response using a 31 × 16 reconstruction matrix. The reconstruction matrix is computed by acquiring the corrected camera responses of 144 color targets of known spectral reflectances and performing Wiener estimation. Note that the color targets and the measured sample are made of similar materials. Therefore, the consistency of the spectral reflectances can be improved by conducting the lighting deviation correction on the camera response.
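As an illustration of this reconstruction step, the sketch below implements plain (non-adaptive) Wiener estimation; the variable names, array shapes, and regularization level are our assumptions, and the actual system follows the adaptive scheme of [21]:

```python
import numpy as np

# Sketch of (non-adaptive) Wiener estimation for the reconstruction step.
# Assumed shapes (ours): R_train is 31 x 144 (spectral reflectances of the
# 144 color targets) and U_train is 16 x 144 (their corrected camera
# responses); noise_var is a hypothetical regularization level.
def wiener_matrix(R_train: np.ndarray, U_train: np.ndarray,
                  noise_var: float = 1e-4) -> np.ndarray:
    K_ru = R_train @ U_train.T            # cross-correlation, 31 x 16
    K_uu = U_train @ U_train.T            # auto-correlation, 16 x 16
    return K_ru @ np.linalg.inv(K_uu + noise_var * np.eye(U_train.shape[0]))

# Per-pixel use: r_hat = W @ u maps a corrected 16-band response u to a
# 31-channel reflectance estimate.
```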

3. Correction Stage I: White-Patch Normalization

Lambertian reflection is a common assumption in eliminating substitution errors for an integrating sphere [15,16]. In Stage I, we also employed this assumption in modeling the lighting deviation caused by the imaged samples. We used a white board, which is hereafter referred to as the white standard or simply the standard, to correct the spatial non-uniformity. We further employed a small white patch to model the change of lighting intensity caused by the imaged sample. Figure 2 shows the layout of the reference white patch and a sample. The size of the white patch was much smaller than that of the sample.
As illustrated in Figure 2, the lighting intensity inside the integrating sphere was recorded by a small reference white patch. The patch had reflection properties similar to those of the sphere wall and was placed in the focal plane beside the sample holder. In the following, we introduce the theoretical analysis of non-uniformity correction and white-patch normalization for lighting calibration. In this stage, the white standard, white patch, and sample were all assumed to exhibit Lambertian reflection.

3.1. Using the White Standard to Correct Spatial Non-Uniformity

We denote the radiant flux entering the sphere in the wavelength range $\lambda \pm \Delta\lambda$ by $\Phi_0(\lambda)$, where $\Delta\lambda$ is a small interval. Due to the diffuser and baffles, the flux can be assumed to strike the sphere wall uniformly. For a Lambertian sphere wall with spectral reflectance $\rho_w(\lambda)$, the flux in its first reflection is
$$\Phi_{1,w}(\lambda) = \rho_w(\lambda)\,\Phi_0(\lambda). \tag{1}$$
Among the flux reflected by the sphere wall, the fractions striking the sphere wall and incident on the sample holder are denoted by $\alpha_{w,w} \in (0,1)$ and $\alpha_{w,\mathrm{sp}} \in (0,1)$, respectively. Because of the entrance and detector ports, $\alpha_{w,w} + \alpha_{w,\mathrm{sp}} < 1$.
For calibration, the white standard was placed on the sample holder to acquire the spatial distribution of the lighting. The standard had the same size as the sample and was much larger than the white patch illustrated in Figure 2. The reflectance of the standard, $\bar{\rho}_r(\lambda)$, is also referred to as the average reflectance. The flux in the first reflection of the standard under the integrating sphere lighting is
$$\Phi_{1,r}(\lambda) = \alpha_{w,\mathrm{sp}}\,\bar{\rho}_r(\lambda)\,\Phi_{1,w}(\lambda). \tag{2}$$
A fraction $\alpha_{\mathrm{sp},w} \in (0,1)$ of the flux reflected by the standard strikes the sphere wall. The flux in the second reflection of the sphere wall is then
$$\Phi_{2,w}(\lambda) = \rho_w(\lambda)\left(\alpha_{w,w}\,\Phi_{1,w}(\lambda) + \alpha_{\mathrm{sp},w}\,\Phi_{1,r}(\lambda)\right) = k_r(\lambda)\,\rho_w^2(\lambda)\,\Phi_0(\lambda), \tag{3}$$
where $k_r(\lambda) := \alpha_{w,w} + \alpha_{w,\mathrm{sp}}\,\alpha_{\mathrm{sp},w}\,\bar{\rho}_r(\lambda)$.
The flux in the second reflection of the standard is
$$\Phi_{2,r}(\lambda) = \alpha_{w,\mathrm{sp}}\,\bar{\rho}_r(\lambda)\,\Phi_{2,w}(\lambda). \tag{4}$$
Then, the flux in the sphere wall's third reflection is
$$\Phi_{3,w}(\lambda) = \rho_w(\lambda)\left(\alpha_{w,w}\,\Phi_{2,w}(\lambda) + \alpha_{\mathrm{sp},w}\,\Phi_{2,r}(\lambda)\right) = k_r^2(\lambda)\,\rho_w^3(\lambda)\,\Phi_0(\lambda). \tag{5}$$
The flux in the $k$th reflection of the sphere wall, $\Phi_{k,w}(\lambda)$, and that of the standard, $\Phi_{k,r}(\lambda)$, can be computed in a similar manner.
We first consider the imaging model of the white standard. For a pixel position $\mathbf{x} = (x, y)^T$ on the standard, we denote its spectral reflectance by $\rho_r(\mathbf{x}, \lambda)$. A fraction $\beta(\mathbf{x})$ of the flux reflected by the standard at $\mathbf{x}$ reaches the image sensor. Note that $\beta(\mathbf{x})$ is spatially varying, due to the spatial non-uniformity mentioned in Section 2. The intensity of pixel $\mathbf{x}$ in the captured image, i.e., the camera response, is then
$$u_r(\mathbf{x}, \lambda) = e(\lambda)\,\beta(\mathbf{x})\,\rho_r(\mathbf{x}, \lambda)\cdot\frac{\alpha_{w,\mathrm{sp}}}{N}\sum_{k=1}^{\infty}\Phi_{k,w}(\lambda) = \rho_r(\mathbf{x}, \lambda)\,\beta(\mathbf{x})\,\gamma(\lambda)\,\Phi_0(\lambda)\sum_{k=1}^{\infty}\left(k_r(\lambda)\,\rho_w(\lambda)\right)^{k-1}, \tag{6}$$
where $e(\lambda)$ is the factor relating the detected flux to the camera response, $N$ is the number of pixels corresponding to the standard, and $\gamma(\lambda) := \alpha_{w,\mathrm{sp}}\,\rho_w(\lambda)\,e(\lambda)/N$ is determined only by equipment specifications. Note that we subtract the dark current from the camera response in practical measurements. Since $0 < \rho_w(\lambda) < 1$ and $0 < k_r(\lambda) < \alpha_{w,w} + \alpha_{w,\mathrm{sp}} < 1$, we have $0 < k_r(\lambda)\,\rho_w(\lambda) < 1$, so the geometric series converges and Equation (6) becomes
$$u_r(\mathbf{x}, \lambda) = \frac{\rho_r(\mathbf{x}, \lambda)\,\beta(\mathbf{x})\,\gamma(\lambda)\,\Phi_0(\lambda)}{1 - k_r(\lambda)\,\rho_w(\lambda)}. \tag{7}$$
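As a quick numerical check of the geometric-series step from Equation (6) to Equation (7), the following sketch sums the per-reflection terms for hypothetical sphere parameters (our made-up values, chosen so that $\alpha_{w,w} + \alpha_{w,\mathrm{sp}} < 1$) and compares the truncated sum with the closed form:

```python
import numpy as np

# Hypothetical sphere parameters (ours, not measured values), chosen so that
# alpha_ww + alpha_wsp < 1 as required by the port geometry.
rho_w, a_ww, a_wsp, a_spw, rho_r_bar = 0.97, 0.90, 0.05, 0.80, 0.95
k_r = a_ww + a_wsp * a_spw * rho_r_bar          # definition below Equation (3)
ratio = k_r * rho_w                              # common ratio, < 1
truncated = sum(ratio ** (k - 1) for k in range(1, 500))
closed_form = 1.0 / (1.0 - ratio)                # the factor in Equation (7)
assert np.isclose(truncated, closed_form)        # series converges to Eq. (7)
```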
Then, we consider the imaging model of a Lambertian sample whose average reflectance is $\bar{\rho}_s(\lambda)$. Assume that the radiant flux entering the sphere becomes $\Phi_0(\lambda) + \Delta\Phi_0(\lambda)$ due to possible fluctuation of the lamp irradiance. The camera response at $\mathbf{x}$ becomes
$$u_s(\mathbf{x}, \lambda) = \frac{\rho_s(\mathbf{x}, \lambda)\,\beta(\mathbf{x})\,\gamma(\lambda)\left(\Phi_0(\lambda) + \Delta\Phi_0(\lambda)\right)}{1 - k_s(\lambda)\,\rho_w(\lambda)}, \tag{8}$$
where $\rho_s(\mathbf{x}, \lambda)$ is the sample reflectance at $\mathbf{x}$, and
$$k_s(\lambda) = \alpha_{w,w} + \alpha_{w,\mathrm{sp}}\,\alpha_{\mathrm{sp},w}\,\bar{\rho}_s(\lambda). \tag{9}$$
The background variation, i.e., the difference in average reflectance $\bar{\rho}_s(\lambda)$ across samples, changes $k_s(\lambda)$. In addition, the illuminant fluctuation introduces a time-varying $\Delta\Phi_0(\lambda)$. In both cases, the lighting deviation influences the camera response $u_s(\mathbf{x}, \lambda)$ of the sample, according to Equation (8).
The camera response of the sample can be normalized with respect to the response of the white standard at $\mathbf{x}$, yielding
$$v_s(\mathbf{x}, \lambda) = \frac{u_s(\mathbf{x}, \lambda)}{u_r(\mathbf{x}, \lambda)} = \frac{\rho_s(\mathbf{x}, \lambda)}{\rho_r(\mathbf{x}, \lambda)}\cdot q_s(\lambda), \tag{10}$$
where
$$q_s(\lambda) = \frac{1 - k_r(\lambda)\,\rho_w(\lambda)}{1 - k_s(\lambda)\,\rho_w(\lambda)}\cdot\frac{\Phi_0(\lambda) + \Delta\Phi_0(\lambda)}{\Phi_0(\lambda)}. \tag{11}$$
Note that $q_s(\lambda)$ varies with the lighting deviation just mentioned, owing to the changes of $k_s(\lambda)$ and $\Delta\Phi_0(\lambda)$ at the time of measurement.
Thanks to the uniformity of the white standard, we actually have $\rho_r(\mathbf{x}, \lambda) = \bar{\rho}_r(\lambda)$. By using the white standard, we eliminate the spatially-varying term $\beta(\mathbf{x})$ from the camera response $u_s(\mathbf{x}, \lambda)$ and thus correct the spatial non-uniformity in Equation (10). However, the normalized camera response $v_s(\mathbf{x}, \lambda)$ still suffers from the lighting deviation through the term $q_s(\lambda)$.

3.2. Using the White Patch to Correct Lighting Deviation

We correct the lighting deviation using a reference white patch, as illustrated in Figure 2. Thanks to the relatively large field of view of the imaging system, the camera responses of the sample and the reference white patch can be measured simultaneously. Let $\rho_p(\mathbf{x}_p, \lambda)$ be the spectral reflectance of the white patch at pixel $\mathbf{x}_p$. Its camera response when acquiring the image of the standard is
$$u_r(\mathbf{x}_p, \lambda) = \frac{\rho_p(\mathbf{x}_p, \lambda)\,\beta(\mathbf{x}_p)\,\gamma(\lambda)\,\Phi_0(\lambda)}{1 - k_r(\lambda)\,\rho_w(\lambda)}, \tag{12}$$
and its camera response when acquiring the image of the sample is
$$u_s(\mathbf{x}_p, \lambda) = \frac{\rho_p(\mathbf{x}_p, \lambda)\,\beta(\mathbf{x}_p)\,\gamma(\lambda)\left(\Phi_0(\lambda) + \Delta\Phi_0(\lambda)\right)}{1 - k_s(\lambda)\,\rho_w(\lambda)}. \tag{13}$$
Our aim is to characterize the lighting deviation when imaging various samples; thus, we normalize $u_s(\mathbf{x}_p, \lambda)$ with respect to $u_r(\mathbf{x}_p, \lambda)$, yielding
$$v_s(\mathbf{x}_p, \lambda) = u_s(\mathbf{x}_p, \lambda) / u_r(\mathbf{x}_p, \lambda) = q_s(\lambda), \tag{14}$$
which equals the lighting-deviation term $q_s(\lambda)$ in Equation (11). We introduce the white-patch ratio defined as
$$p_s(\lambda) := 1 / v_s(\mathbf{x}_p, \lambda). \tag{15}$$
Then, the normalized camera response of the sample in Equation (10) can be further normalized, yielding
$$\breve{v}_s(\mathbf{x}, \lambda) := p_s(\lambda)\,v_s(\mathbf{x}, \lambda) = \rho_s(\mathbf{x}, \lambda) / \rho_r(\mathbf{x}, \lambda). \tag{16}$$
Compared with Equation (10), the term $q_s(\lambda)$ is completely eliminated in Equation (16). We note that the white-patch ratio actually characterizes the lighting change in the integrating sphere and can be regarded as another form of introducing a reference beam. Hence, the lighting deviation can be corrected by employing a white patch for reference.
Based on the above theoretical analysis, the procedure of lighting correction using the white standard and white patch is as follows. We first acquire the image of the white standard, obtaining the camera response $u_r(\mathbf{x}, \lambda)$ of the standard at position $\mathbf{x}$ and the camera response $u_r(\mathbf{x}_p, \lambda)$ of the reference white patch at position $\mathbf{x}_p$. We then acquire the image of a sample, obtaining the camera responses of the sample and white patch, denoted $u_s(\mathbf{x}, \lambda)$ and $u_s(\mathbf{x}_p, \lambda)$, respectively. With the white-patch ratio $p_s(\lambda) = u_r(\mathbf{x}_p, \lambda) / u_s(\mathbf{x}_p, \lambda)$, the corrected camera response is
$$\breve{v}_s(\mathbf{x}, \lambda) = p_s(\lambda)\,\frac{u_s(\mathbf{x}, \lambda)}{u_r(\mathbf{x}, \lambda)}. \tag{17}$$
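For concreteness, the Stage I correction of a single band image can be sketched as follows; this is a minimal illustration with our own variable names, assuming dark-current-subtracted images and a pixel mask locating the reference white patch:

```python
import numpy as np

# Minimal sketch of the Stage I correction of Equation (17) for one band.
# Assumptions (ours): u_s and u_r are dark-current-subtracted band images of
# the sample and the white standard, and patch_mask is a boolean array
# selecting the pixels of the small reference white patch.
def stage1_correct(u_s: np.ndarray, u_r: np.ndarray,
                   patch_mask: np.ndarray) -> np.ndarray:
    # White-patch ratio p_s (Equation (15)) from the mean patch responses.
    p_s = u_r[patch_mask].mean() / u_s[patch_mask].mean()
    # Dividing by the standard removes the spatially-varying term beta(x)
    # (Equation (10)); multiplying by p_s cancels the lighting-deviation
    # term q_s (Equation (16)).
    return p_s * u_s / u_r
```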

4. Correction Stage II: Polynomial Regression Modeling

We note that the lighting deviation cannot be fully corrected by using a white patch, owing to the non-Lambertian nature of real surface reflection. In [22,23], deviations in the reflectance measurements of non-Lambertian samples were observed. The integrating sphere multispectral imaging system can measure reflectance at the spatial resolution of camera pixels, but consequently suffers more from the variation of reflection characteristics within a sample. The influence of material and texture structure on reflectance measurement was also investigated in previous works [24,25].
Owing to the bidirectional reflectance distribution function (BRDF) [26] of a non-Lambertian surface, the distribution of the reflected light varies with the incident angle. Given the measurement geometry, the camera response therefore exhibits an angular dependence. As a result, among the flux reflected by the sample at position $\mathbf{x}$, the fraction reaching the image sensor changes from reflection to reflection, rather than being the constant $\beta(\mathbf{x})$ assumed in Stage I. Imperfections of the sphere, such as a lack of symmetry, also contribute to the deviation. Besides, the inter-reflections within a textured surface may not be negligible, and the stray-light effect [27,28,29] also plays a role when a dark area is surrounded by bright regions or vice versa. Consequently, the white patch employed in Stage I cannot totally remove the background effect. When samples are made of different materials, the problem becomes even more complex, and an additional cross-media calibration is generally needed [24]. In the current work, we used textile samples made of similar materials for investigation.
The lighting deviation correction aims to produce similar camera responses for a given color regardless of the lighting change. Since the influence of the aforementioned factors is rather complicated, it is difficult to build a complete physical model that corrects the lighting deviation. We instead resort to polynomial regression [30,31], a machine learning approach, to further improve the consistency of camera responses. Our objective is that, when a given small patch is placed on different background samples, the corrected camera responses of the patch should be in good agreement with each other. The polynomial models are applied to individual multispectral bands.
In the polynomial model, the normalized camera response $v_s(\mathbf{x}, \lambda)$ should be an input variable, since it is the quantity to be corrected. The white-patch ratio $p_s(\lambda)$ should also be involved, since it indicates the lighting change in the integrating sphere. For a given small patch placed on different background samples, a brighter background sample leads to a larger $v_s(\mathbf{x}, \lambda)$ and hence a smaller $p_s(\lambda)$; similarly, a smaller $v_s(\mathbf{x}, \lambda)$ corresponds to a larger $p_s(\lambda)$. The polynomial model is thus able to produce close camera responses when the background sample changes. We found that a simple second-order polynomial model suffices for background-effect removal:
$$\tilde{v}_s(\mathbf{x}, \lambda) = c_1(\lambda)\,p_s(\lambda)\,v_s(\mathbf{x}, \lambda) + c_2(\lambda)\,v_s(\mathbf{x}, \lambda) + c_3(\lambda)\,p_s(\lambda) + c_4(\lambda), \tag{18}$$
where $\tilde{v}_s(\mathbf{x}, \lambda)$ is the corrected camera response and $c_k(\lambda)$, $1 \le k \le 4$, are the coefficients to be solved. The polynomial model could be of higher order or include additional terms, but our investigation indicated that this simple form suffices for correcting the lighting deviation and performs well on test data. We note, however, that since the polynomial model is phenomenological, it may not generalize to samples whose reflection characteristics differ from those of the training samples.
We collected training data using the procedure illustrated in Figure 3a. We used a set of small gray patches (0.5 cm × 0.5 cm) as color targets and a set of color samples (10 cm × 8 cm) as backgrounds. The patches were placed on the background samples at different positions. In total, $n_p = 9$ small gray patches and $n_b = 10$ background samples were used in the training procedure. The images of the small patches and background samples are shown in Figure 3b,c, respectively.
For the $i$th patch and $j$th background, we obtain the normalized camera response $v_s^{i,j}(\mathbf{x}, \lambda)$ of the patch and the white-patch ratio $p_s^{i,j}(\lambda)$, as described in Section 3. Note that the normalized camera response was averaged over a 35 × 35 pixel region. To improve measurement consistency, the corrected responses of a patch placed on different backgrounds should be identical in the training data. Given a reference background with index $j_\mathrm{ref}$, the Stage I corrected response of the $i$th patch is $\breve{v}_s^{i,j_\mathrm{ref}}(\mathbf{x}, \lambda) = v_s^{i,j_\mathrm{ref}}(\mathbf{x}, \lambda)\,p_s^{i,j_\mathrm{ref}}(\lambda)$. For each $j$, we set the target corrected response $\tilde{v}_s^{i,j}(\mathbf{x}, \lambda)$ to this response on the $j_\mathrm{ref}$th background. In this way, a set of $n_p n_b$ training pairs was collected.
We concatenate the model coefficients into a vector $\mathbf{c} = (c_1(\lambda), c_2(\lambda), c_3(\lambda), c_4(\lambda))^T$ and the training data into vectors as follows:
$$\mathbf{v} = \left(v^{1,1} \cdots v^{1,n_b}, \ldots, v^{n_p,1} \cdots v^{n_p,n_b}\right)^T,\quad \mathbf{p} = \left(p^{1,1} \cdots p^{1,n_b}, \ldots, p^{n_p,1} \cdots p^{n_p,n_b}\right)^T,\quad \tilde{\mathbf{v}} = \left(\tilde{v}^{1,1} \cdots \tilde{v}^{1,n_b}, \ldots, \tilde{v}^{n_p,1} \cdots \tilde{v}^{n_p,n_b}\right)^T. \tag{19}$$
Note that the variables $\mathbf{x}$ and $\lambda$ are omitted in Equation (19) to simplify notation. We then construct a matrix $\mathbf{M}$ as
$$\mathbf{M} = \left(\mathbf{p} \odot \mathbf{v},\; \mathbf{v},\; \mathbf{p},\; \mathbf{1}\right), \tag{20}$$
where $\odot$ denotes the element-wise product of two vectors and $\mathbf{1} \in \mathbb{R}^{n_p n_b}$ is a vector of ones. According to Equation (18), the problem of estimating the model coefficients can be formulated as
$$\mathbf{c}^* = \arg\min_{\mathbf{c}} \left\|\tilde{\mathbf{v}} - \mathbf{M}\mathbf{c}\right\|^2, \tag{21}$$
which can be solved in the least-squares sense as
$$\mathbf{c}^* = \left(\mathbf{M}^T\mathbf{M}\right)^{-1}\mathbf{M}^T\tilde{\mathbf{v}}. \tag{22}$$
After obtaining $\mathbf{c}^* = \left(c_1^*(\lambda), c_2^*(\lambda), c_3^*(\lambda), c_4^*(\lambda)\right)^T$, we can correct the camera response using the regression model in the measurement process. By acquiring the images of the white standard and the sample, we obtain the normalized camera response $v_s(\mathbf{x}, \lambda)$ of the sample at position $\mathbf{x}$ and the white-patch ratio $p_s(\lambda)$. The corrected camera response is then computed according to Equation (18), which gives
$$\tilde{v}_s(\mathbf{x}, \lambda) = \left(c_1^*(\lambda)\,p_s(\lambda) + c_2^*(\lambda)\right) v_s(\mathbf{x}, \lambda) + c_3^*(\lambda)\,p_s(\lambda) + c_4^*(\lambda). \tag{23}$$
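A minimal per-band sketch of the Stage II fitting and correction is given below; the variable names are ours, and np.linalg.lstsq is used in place of the explicit normal equations in Equation (22) purely for numerical robustness:

```python
import numpy as np

# Minimal per-band sketch of the Stage II regression (Equations (18)-(23)).
# Assumptions (ours): v and p are length n_p*n_b arrays of normalized
# responses and white-patch ratios from the training layout, and v_tilde
# holds the target responses (each patch's Stage I response on the
# reference background j_ref).
def fit_stage2(v: np.ndarray, p: np.ndarray, v_tilde: np.ndarray) -> np.ndarray:
    M = np.column_stack([p * v, v, p, np.ones_like(v)])  # Equation (20)
    # Least-squares solution of Equation (21); lstsq is numerically more
    # robust than explicitly forming the normal equations of Equation (22).
    c, *_ = np.linalg.lstsq(M, v_tilde, rcond=None)
    return c

def apply_stage2(c: np.ndarray, v_s: np.ndarray, p_s: float) -> np.ndarray:
    # Corrected camera response, Equation (23).
    return (c[0] * p_s + c[1]) * v_s + c[2] * p_s + c[3]
```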

5. Experimental Results

We built an integrating sphere multispectral imaging system whose structure was identical to that in Figure 1. The imaging system used a Hamamatsu 150 W super-quiet xenon lamp, a Zeiss 50 mm lens, a QImaging QI695 scientific monochrome camera, and a customized integrating sphere. The integrating sphere had a diameter of 50 cm. The size of the sample holder was 10 cm × 8 cm. The diameters of the entrance port and detector port were 6 cm and 5 cm, respectively. The port fraction of our integrating sphere was 1.63%, which was sufficient for good uniformity according to the sphere design rule [32]. The filter wheel was installed with 16 narrow-band filters. The full width at half maximum (FWHM) value of each filter was 10 nm, and the central wavelengths were at 400, 420, …, and 700 nm. By rotating the filter wheel, we recorded camera responses at different bands sequentially. The multispectral image of a sample was generated from normalized camera responses and corresponding white patch ratios. The 31-channel spectral reflectance of each pixel was reconstructed from the 16-band multispectral image using Wiener estimation [21].
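As a sanity check on the stated geometry (our own arithmetic, treating all three ports as flat openings in the sphere surface $\pi d^2$), the port fraction follows from the port areas:
$$A_{\mathrm{sphere}} = \pi d^2 = \pi \times 50^2 \approx 7854~\mathrm{cm}^2,$$
$$\frac{A_{\mathrm{ports}}}{A_{\mathrm{sphere}}} \approx \frac{\pi \times 3^2 + \pi \times 2.5^2 + 10 \times 8}{7854} \approx \frac{28.3 + 19.6 + 80.0}{7854} \approx 1.63\%,$$
which agrees with the reported port fraction.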
Experiments were carried out to validate the proposed method on both training and test data acquired in our system. The improvement of measurement consistency was evaluated using the standard deviation (i.e., the square root of variance) of camera responses, as well as the spectral and colorimetric errors of spectral reflectances. Comparison with the state-of-the-art method was also performed on the test data.

5.1. Consistency Improvement on Camera Response

We first evaluated the improvement of response consistency using the training data shown in Figure 3. For a patch placed on various background samples, the consistency was computed as the standard deviation of its camera responses. The original camera responses were the normalized ones computed using Equation (10); the camera responses in Stage I were computed using Equation (17); and those in Stage II were computed using Equation (23). The standard deviations of the training patches at each band are listed in Table 1. The standard deviations of the original data were relatively large in all bands: averaged over all bands, the standard deviation was 0.0157 before lighting deviation correction. It was reduced to 0.0067 in Stage I and further to 0.0023 in Stage II, a final improvement in camera response consistency of 85.3%. For illustration, Table 2 lists the model coefficients obtained from the training data.
We evaluated the model accuracy using the test data in Figure 4. As shown, the small patches were of different colors, and the background samples contained various color patterns. We note that these small patches and background samples were not used in polynomial regression modeling. As illustrated in Table 3, the original standard deviation averaged over all bands was 0.0100. It dropped to 0.0051 in Stage I and further to 0.0028 in Stage II. These results validate the generalization capability of the proposed lighting correction method.
Figure 5 presents the camera responses of several patches placed on different background samples, from the test data at Band No. 3 (440 nm). Each curve corresponds to one patch, and the fluctuation of a curve indicates inconsistency. The fluctuations are quite evident in Figure 5a and become smaller in Figure 5b. Figure 5c shows a large reduction in fluctuation, indicating removal of the background effect.

5.2. Consistency Improvement on Spectral Reflectance

By reconstructing the 31-channel spectral reflectance from corresponding camera responses at different bands, we further quantified the improvement in spectral color measurement with our method.
For a patch placed on different background samples, the consistency of color measurement can be evaluated using both spectral and colorimetric errors. The spectral error was computed as the root-mean-square (rms) difference between two spectral reflectances. The colorimetric errors were computed using the CIEDE2000 color difference formula [33] under the CIE standard illuminants D65, A, and F2, respectively.
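For reference, a minimal sketch of the spectral rms metric (our variable names) is given below; the CIEDE2000 values additionally require converting each reflectance to CIELAB under the chosen illuminant, e.g., via a colorimetry library, which we omit here:

```python
import numpy as np

# Spectral rms error between two 31-channel reflectances (400-700 nm at
# 10 nm sampling); a sketch with our variable names.
def spectral_rms(r1, r2) -> float:
    r1, r2 = np.asarray(r1, dtype=float), np.asarray(r2, dtype=float)
    return float(np.sqrt(np.mean((r1 - r2) ** 2)))
```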
Table 4 lists the average spectral and colorimetric errors of the training data. The spectral rms error dropped from 0.1169 to 0.0556 in Stage I and finally to 0.0149 in Stage II. The average color difference under D65 was 5.1649 units before lighting deviation correction; it was reduced to 3.0092 and 0.3679 units after the Stage I and Stage II corrections, respectively.
Table 5 shows the improvement of color measurement consistency on the test data. Both the spectral and colorimetric errors were considerably reduced in Stage I and Stage II. This was expected, as the consistency of camera responses was improved by our lighting correction method.
For illustration, Figure 6 shows the spectral reflectance curves of a brown patch on different background samples. The curve of the spectral reflectance measured by a spectrophotometer is also presented for comparison. Note that due to the emission of fluorescent whitening agents in background samples, some of the curves in Figure 6a,b exhibited a conspicuous peak near 440 nm. The original reflectance curves were quite different due to the lighting deviation. The curves became closer to each other in Stage I and were in good agreement in Stage II.

5.3. Comparison with Existing Method

The Markov model [10] is the previous method most relevant to ours. It models the integrating sphere multispectral imaging of a possibly non-uniform Lambertian sample and measures the reflectance of the sample at each pixel. We compared it with our method by computing measurement consistency on both camera response and spectral reflectance. The average standard deviations of camera responses, as well as the average spectral and colorimetric errors of the reconstructed reflectances, are presented in Table 6. Our method produced much lower errors than the Markov model under all metrics.

6. Conclusions

We have proposed a lighting correction method to eliminate the measurement inconsistency caused by spatial non-uniformity and lighting deviation in integrating sphere multispectral imaging systems. We first modeled the imaging process of Lambertian surfaces in the integrating sphere, which explains how these two factors affect measurements. Based on this theoretical analysis, we eliminated the spatial non-uniformity by acquiring the image of a white board and compensated for the lighting deviation by employing a reference white patch. We further introduced a polynomial regression model to correct the measurement inconsistency of real, non-Lambertian samples. Experimental results validated that our method improves measurement consistency considerably on both camera response and spectral reflectance, and that it performs better than the state-of-the-art method.

Author Contributions

Conceptualization, Z.Z., H.-L.S. and J.H.X.; methodology, Z.Z. and H.-L.S.; resources, H.-L.S. and J.H.X.; investigation, Z.Z.; formal analysis, Z.Z. and S.L.; data curation, Z.Z. and Y.Z.; validation, S.L. and Y.Z.; writing (original draft), Z.Z.; writing (review and editing), H.-L.S.; funding acquisition, H.-L.S. and Y.Z.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) under Grant Number 61371160 and the Zhejiang Provincial Natural Science Foundation of China under Grant Number LY18F010007.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Clarke, F.; Compton, J.A. Correction methods for integrating sphere measurement of hemispherical reflectance. Color Res. Appl. 1986, 11, 253–262.
2. Zerlaut, G.; Anderson, T. Multiple-integrating sphere spectrophotometer for measuring absolute spectral reflectance and transmittance. Appl. Opt. 1981, 20, 3797–3804.
3. Hanssen, L. Integrating-sphere system and method for absolute measurement of transmittance, reflectance, and absorptance of specular samples. Appl. Opt. 2001, 40, 3196–3204.
4. Gindele, K.; Köhl, M.; Mast, M. Spectral reflectance measurements using an integrating sphere in the infrared. Appl. Opt. 1985, 24, 1757–1760.
5. Ball, C.P.; Levick, A.P.; Woolliams, E.R.; Green, P.D.; Dury, M.R.; Winkler, R.; Deadman, A.J.; Fox, N.P.; King, M.D. Effect of polytetrafluoroethylene (PTFE) phase transition at 19 °C on the use of Spectralon as a reference standard for reflectance. Appl. Opt. 2013, 52, 4806–4812.
6. Hardeberg, J.Y.; Schmitt, F.J.; Brettel, H. Multispectral color image capture using a liquid crystal tunable filter. Opt. Eng. 2002, 41, 2532–2549.
7. Shimano, N.; Terai, K.; Hironaga, M. Recovery of spectral reflectances of objects being imaged by multispectral cameras. J. Opt. Soc. Am. A 2007, 24, 3211–3219.
8. Park, C.; Kang, M. Color restoration of RGBN multispectral filter array sensor images based on spectral decomposition. Sensors 2016, 16, 719.
9. ElMasry, G.; Mandour, N.; Al-Rejaie, S.; Belin, E.; Rousseau, D. Recent applications of multispectral imaging in seed phenotyping and quality monitoring—An overview. Sensors 2019, 19, 1090.
10. Hidović-Rowe, D.; Rowe, J.E.; Lualdi, M. Markov models of integrating spheres for hyperspectral imaging. Appl. Opt. 2006, 45, 5248–5257.
11. Ljungqvist, M.G.; Frosch, S.; Nielsen, M.E.; Ersbøll, B.K. Multispectral image analysis for robust prediction of astaxanthin coating. Appl. Spectrosc. 2013, 67, 738–746.
12. Mahmoud, K.; Park, S.; Park, S.N.; Lee, D.H. An imaging spectrophotometer for measuring the two-dimensional distribution of spectral reflectance. Metrologia 2014, 51, S293.
13. Tsakanikas, P.; Pavlidis, D.; Panagou, E.; Nychas, G.J. Exploiting multispectral imaging for non-invasive contamination assessment and mapping of meat samples. Talanta 2016, 161, 606–614.
14. Claridge, E.; Hidovic-Rowe, D. Model based inversion for deriving maps of histological parameters characteristic of cancer from ex-vivo multispectral images of the colon. IEEE Trans. Med. Imaging 2014, 33, 822–835.
15. Vidovič, L.; Majaron, B. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere. J. Biomed. Opt. 2014, 19, 027006.
16. Sloan, W.W. Correction of single-beam sample absorption error in a hemispherical 45°/0° spectrophotometer measurement cavity. Color Res. Appl. 2014, 39, 436–441.
17. Rey-Barroso, L.; Burgos-Fernández, F.; Delpueyo, X.; Ares, M.; Royo, S.; Malvehy, J.; Puig, S.; Vilaseca, M. Visible and extended near-infrared multispectral imaging for skin cancer diagnosis. Sensors 2018, 18, 1441.
18. De Lasarte, M.; Pujol, J.; Arjona, M.; Vilaseca, M. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera. Appl. Opt. 2007, 46, 167–174.
19. Katrašnik, J.; Pernuš, F.; Likar, B. A method for characterizing illumination systems for hyperspectral imaging. Opt. Express 2013, 21, 4841–4853.
20. Nouri, D.; Lucas, Y.; Treuillet, S. Calibration and test of a hyperspectral imaging prototype for intra-operative surgical assistance. In Proceedings of Medical Imaging 2013: Digital Pathology; International Society for Optics and Photonics, 2013; Volume 8676, p. 86760P.
21. Shen, H.L.; Cai, P.Q.; Shao, S.J.; Xin, J.H. Reflectance reconstruction for multispectral imaging by adaptive Wiener estimation. Opt. Express 2007, 15, 15545–15554.
22. Hisdal, B.J. Reflectance of nonperfect surfaces in the integrating sphere. J. Opt. Soc. Am. 1965, 55, 1255–1260.
23. Roos, A.; Ribbing, C.G.; Bergkvist, M. Anomalies in integrating sphere measurements on structured samples. Appl. Opt. 1988, 27, 3828–3832.
24. Shen, H.L.; Weng, C.W.; Wan, H.J.; Xin, J.H. Correcting cross-media instrument metamerism for reflectance estimation in multispectral imaging. J. Opt. Soc. Am. A 2011, 28, 511–516.
25. Luo, L.; Tsang, K.M.; Shen, H.L.; Shao, S.J.; Xin, J.H. An investigation of how the texture surface of a fabric influences its instrumental color. Color Res. Appl. 2015, 40, 472–482.
26. Nicodemus, F.E. Directional reflectance and emissivity of an opaque surface. Appl. Opt. 1965, 4, 767–775.
27. Liang, H. Advances in multispectral and hyperspectral imaging for archaeology and art conservation. Appl. Phys. A 2012, 106, 309–323.
28. Martinez, K.; Cupitt, J.; Saunders, D.; Pillay, R. Ten years of art imaging research. Proc. IEEE 2002, 90, 28–41.
29. Yamaguchi, M.; Teraji, T.; Ohsawa, K.; Uchiyama, T.; Motomura, H.; Murakami, Y.; Ohyama, N. Color image reproduction based on multispectral and multiprimary imaging: Experimental evaluation. In Proceedings of the SPIE Conference on Color Imaging: Device-Independent Color, Color Hardcopy, and Applications VII, San Jose, CA, USA, 28 December 2001; Volume 4663, pp. 15–26.
30. Berns, R.S.; Petersen, K.H. Empirical modeling of systematic spectrophotometric errors. Color Res. Appl. 1988, 13, 243–256.
31. Hong, G.; Luo, M.R.; Rhodes, P.A. A study of digital camera colorimetric characterization based on polynomial modeling. Color Res. Appl. 2001, 26, 76–84.
32. Goebel, D.G. Generalized integrating-sphere theory. Appl. Opt. 1967, 6, 125–128.
33. Luo, M.R.; Cui, G.; Rigg, B. The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Res. Appl. 2001, 26, 340–350.
Figure 1. The schematic drawing of a typical integrating sphere multispectral imaging system.
Figure 2. The layout of the small reference white patch and an imaged sample. The white patch is marked by a blue line, and the sample is marked by a green box.
Figure 3. Training data collection. (a) Procedure of the measurement. A number of small patches are placed on various samples at different positions, respectively. The camera response of the small patch and the white-patch ratio are acquired in the imaging process. (b) Small gray patches of size 0.5 cm × 0.5 cm. The camera response of a small patch is averaged in a 35 × 35 pixel region. (c) Uniform background samples of size 10 cm × 8 cm.
Figure 4. Test data collection. (a) Small color patches. (b) Background samples with various patterns.
Figure 5. Camera responses of color patches with respect to backgrounds at the band of 440 nm. Curves in different colors correspond to different patches. (a) Original, (b) Stage I, and (c) Stage II.
Figure 6. Spectral reflectance curves of a brown patch placed on different background samples. Note that the thick red curve corresponds to the spectral reflectance measured by a spectrophotometer. (a) Original reflectance curves. (b) Reflectance curves corrected in Stage I. (c) Reflectance curves corrected in Stage II.
Table 1. Standard deviations computed from all color patches and background samples in the training data. The standard deviations averaged over all bands are also listed.

Band No.    1       2       3       4       5       6       7       8
Original    0.0155  0.0172  0.0194  0.0159  0.0156  0.0128  0.0150  0.0129
Stage I     0.0097  0.0102  0.0118  0.0082  0.0075  0.0049  0.0067  0.0049
Stage II    0.0024  0.0023  0.0022  0.0022  0.0021  0.0021  0.0021  0.0021

Band No.    9       10      11      12      13      14      15      16      Average
Original    0.0127  0.0146  0.0152  0.0137  0.0153  0.0155  0.0185  0.0212  0.0157
Stage I     0.0047  0.0061  0.0061  0.0047  0.0057  0.0048  0.0054  0.0059  0.0067
Stage II    0.0021  0.0020  0.0021  0.0021  0.0022  0.0024  0.0029  0.0032  0.0023
Table 2. The coefficients of the polynomial model obtained from the training data.

Band No.    1        2        3        4        5        6        7        8
c1          0.8966   0.8787   0.9083   0.9168   0.9240   0.9438   0.9155   0.9293
c2          0.1158   0.1337   0.1000   0.0901   0.0824   0.0613   0.0934   0.0792
c3          0.2062   0.2034   0.2169   0.1507   0.1354   0.0837   0.1243   0.0849
c4          −0.2229  −0.2193  −0.2329  −0.1617  −0.1468  −0.0913  −0.1374  −0.0943

Band No.    9        10       11       12       13       14       15       16
c1          0.9445   0.9123   0.9155   0.9249   0.9118   0.9309   0.9276   0.9452
c2          0.0608   0.0979   0.0931   0.0837   0.0972   0.0751   0.0778   0.0562
c3          0.0782   0.1105   0.1059   0.0793   0.1000   0.0803   0.0951   0.1040
c4          −0.0868  −0.1228  −0.1172  −0.0878  −0.1097  −0.0868  −0.1009  −0.1085
Table 3. Standard deviations of response consistency obtained from the test data when applying the polynomial regression model computed from the training data.

Band No.    1       2       3       4       5       6       7       8
Original    0.0060  0.0133  0.0172  0.0113  0.0101  0.0081  0.0092  0.0081
Stage I     0.0046  0.0082  0.0106  0.0063  0.0055  0.0039  0.0049  0.0039
Stage II    0.0024  0.0029  0.0036  0.0027  0.0025  0.0023  0.0024  0.0024

Band No.    9       10      11      12      13      14      15      16      Average
Original    0.0082  0.0095  0.0099  0.0096  0.0106  0.0100  0.0098  0.0096  0.0100
Stage I     0.0039  0.0046  0.0045  0.0040  0.0043  0.0039  0.0041  0.0046  0.0051
Stage II    0.0025  0.0025  0.0028  0.0029  0.0029  0.0031  0.0035  0.0042  0.0028
Table 4. Spectral reflectance consistency of the training data. Average spectral rms errors and color difference errors are listed.

            Spectral rms  ΔE00 (D65)  ΔE00 (A)  ΔE00 (F2)
Original    0.1169        5.1649      4.6991    5.3026
Stage I     0.0556        3.0092      2.6734    3.1455
Stage II    0.0149        0.3679      0.3573    0.3634
Table 5. Spectral reflectance consistency of the test data. Average spectral rms errors and color difference errors are listed.

            Spectral rms  ΔE00 (D65)  ΔE00 (A)  ΔE00 (F2)
Original    0.1240        4.1274      3.9949    4.1644
Stage I     0.0617        1.7949      1.6909    1.8808
Stage II    0.0201        0.5821      0.5712    0.6020
Table 6. Comparison with the Markov model [10] on measurement consistency. Average standard deviations of camera responses, spectral rms errors, and color difference errors of the test data are listed.

              Response std. dev.  Spectral rms  ΔE00 (D65)  ΔE00 (A)  ΔE00 (F2)
Original      0.0100              0.1240        4.1274      3.9949    4.1644
Markov model  0.0044              0.0625        1.7368      1.6907    1.8580
Ours          0.0028              0.0201        0.5821      0.5712    0.6020
