Article

An Acquisition Method for Visible and Near Infrared Images from Single CMYG Color Filter Array-Based Sensor

Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(19), 5578; https://doi.org/10.3390/s20195578
Submission received: 9 September 2020 / Revised: 23 September 2020 / Accepted: 26 September 2020 / Published: 29 September 2020
(This article belongs to the Section Optical Sensors)

Abstract

Near-infrared (NIR) images are very useful in many image processing applications, including banknote recognition, vein detection, and surveillance, to name a few. To acquire the NIR image together with visible range signals, an imaging device should be able to simultaneously capture NIR and visible range images. An implementation of such a system having separate sensors for NIR and visible light has practical shortcomings due to its size and hardware cost. To overcome this, a single sensor-based acquisition method is investigated in this paper. The proposed imaging system is equipped with a conventional color filter array of cyan, magenta, yellow, and green, and achieves signal separation by applying a proposed separation matrix which is derived by mathematical modeling of the signal acquisition structure. The elements of the separation matrix are calculated through color space conversion and experimental data. Subsequently, an additional denoising process is implemented to enhance the quality of the separated images. Experimental results show that the proposed method successfully separates the acquired mixed image of visible and near-infrared signals into individual red, green, and blue (RGB) and NIR images. The separation performance of the proposed method is compared to that of related work in terms of the average peak-signal-to-noise-ratio (PSNR) and color distance. The proposed method attains average PSNR values of 37.04 and 33.29 dB for the separated RGB and NIR images, respectively, which are 6.72 and 2.55 dB higher than those of the work used for comparison.

1. Introduction

The widespread use of cameras in daily life has spurred numerous applications. These applications can become even more powerful if they can leverage more useful imaging information at lower cost. Active investigations have been underway to extract unconventional information from the images taken by inexpensive cameras for usage beyond the simple viewing of a photograph. For example, capturing images at non-visible wavelengths by inexpensive cameras could prove incredibly useful. One such effort of immediate interest is the acquisition of near-infrared (NIR) signals in the 780–1400 nm wavelength range [1].
Conventional consumer-level cameras only acquire images in the visible wavelength region, typically by using a color filter array (CFA) placed in front of the image sensor. To avoid saturation from the accompanying infrared (IR) signal, an IR cut filter is also placed in front of the image sensor [2]. This configuration acquires an image that covers the wavelength range 380–780 nm [1] that the human eye normally sees. However, signals in the infrared range can provide valuable additional information, as indicated by numerous investigations that have been conducted on how to best extract more useful visual information from the infrared signal [3].
NIR has different spectral characteristics than visible (VIS) wavelengths [2]. Many industrial applications have applied NIR imaging to enhance the quality of images in the visible range. In remote sensing, NIR images are quite effective in identifying forests since vegetation has high reflectivity in the NIR [4]. High NIR penetration into water vapor also explains why NIR images give clear images even in hazy weather conditions, and this characteristic has been successfully exploited to dehaze outdoor scenes [5].
In manufacturing applications, NIR can detect defective products or bruises on fruit [6,7,8]. In medical imaging, NIR helps to detect veins, which is quite useful for intravenous injections in the arm [9,10]. In surveillance imaging, NIR light sources have been used for night vision without disturbing the subjects being imaged. NIR images can also be utilized to reduce noise in visible range images captured in low light conditions [11,12]. Even this small sampling of applications clearly demonstrates that NIR imaging is extremely important. These applications can be further enhanced by combining VIS and NIR images. When both VIS and NIR images of a scene are needed, capturing them simultaneously with a single system simplifies matters considerably by avoiding the need for image registration.
The conventional 2-CCD RGB-IR camera [13] has been used to simultaneously capture visible red, green, and blue (RGB) and IR images. Its drawbacks include limited portability and high cost, as the system consists of two sensors and a beam splitter. The two sensors are placed 90 degrees apart to capture the scene from the same position. This dual sensor-based system incurs double the hardware production cost and is bigger than a single sensor-based system. To overcome these limitations, research has been devoted to obtaining both visible and IR images with a single sensor. For instance, the transverse field detector (TFD)-based VIS and NIR image acquisition method operates under the principle of wavelength-dependent transmittance in silicon [14]. However, the TFD-based imaging system is still in its experimental phase. Another notable approach is the novel design of a CFA. For instance, Koyama et al. [15] proposed a photonic crystal color filter (PCCF)-based approach which adds a defect layer in a TiO2 and SiO2 stacked photonic crystal. The thickness of the defect layer controls the band-gap length between the visible and infrared wavelengths, and the detector can be designed with an array of four different defect layer thicknesses to obtain R+IR, G+IR, B+IR, and IR bands. Lu et al. [16] proposed a CFA design method which searches by simulation for an optimal CFA pattern with varying numbers of color filters. Because of the high production cost of PCCFs and Lu's CFA, it is difficult to utilize them in low-cost consumer devices. A more pragmatic camera module has been developed using an RGB-IR CFA [17,18,19,20], which replaces one green band-pass filter pixel in the CFA pattern with an IR long-pass filter, but its production cost is still higher than that of visible range color filters. Sadeghipoor et al. took a different approach that uses a single RGB camera, employing a compressive sensing framework, and developed a signal separation method [21] to extract the VIS and NIR images from a mixed image signal. The hardware system can be easily implemented by simply removing the IR cut filter from a conventional camera. However, their separation method suffers from limited image quality and heavy computational complexity due to the ill-posed mathematical structure and the consequent iterative reconstruction process for compressively sensed data.
Our prior work [22] took a similar approach but with the complementary CFA of cyan, magenta, yellow, and green (CMYG) instead of the RGB primary colors. CMYG CFAs have the benefit of better sensitivity than RGB CFAs because of the wider spectrum range of each color filter [23,24]. However, the mathematical model in the prior work considered only the noise-free case. Nontrivial noise was observed in the experimentally separated images [22].
In this paper, we improve on our prior work by re-formulating the underlying mathematical model. In the proposed mathematical model, the color space conversion is performed more rigorously by converting from the CMYG color space to the device-independent standard CIE1931 XYZ color space [25]. Moreover, by adding an additive noise term to the proposed mathematical model, the noise characteristic is analyzed, and subsequently, color-guided filtering [26] is applied to minimize the noise after the proposed separation process.
Figure 1 depicts the structure of the proposed method. The proposed imaging system is made by removing the IR cut filter from a conventional camera. The image is taken under lighting conditions that contain both visible and NIR wavelengths. The system captures a monochrome mosaic-pattern image in which each pixel intensity corresponds to one color filter of the CMYG CFA pattern. To generate a color image from the monochrome image, a demosaicing process, such as bilinear interpolation [27], is applied. After demosaicing, a four-channel mixed input image is generated, which contains mixed signals from the visible and NIR spectra. In the separation process, the mixed input image is separated into VIS and NIR images. In the denoising process, the noise in the separated XYZ and NIR images is minimized by the color-guided filtering method [26], which utilizes the four-channel input image as the guide image. The separated visible image in XYZ color space [25] can then be converted to the color space of choice, such as standard RGB (sRGB) [28]. The proposed separation, denoising, and color conversion processes generate both visible and NIR images using a single conventional imaging sensor. We note that the proposed method allows simultaneous acquisition of both images and does not require an additional registration process to match different temporal and spatial viewpoints.
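For orientation, the following short Python/NumPy sketch outlines this pipeline as a sequence of array operations. The helper callables (demosaic, guide_filter, xyz_to_srgb) and their names are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def acquire_vis_nir(mosaic, S, demosaic, guide_filter, xyz_to_srgb):
    """Hypothetical end-to-end sketch of the pipeline in Figure 1.

    mosaic : 2-D mosaiced CMYG capture (IR cut filter removed)
    S      : 4 x 4 separation matrix of Equation (10)
    The three callables stand in for demosaicing, color-guided
    denoising [26], and XYZ-to-sRGB conversion, respectively.
    """
    cmyg = demosaic(mosaic)                       # H x W x 4 mixed VIS+NIR image
    pixels = cmyg.reshape(-1, 4).T                # 4 x N pixel matrix
    separated = (S @ pixels).T.reshape(cmyg.shape[:2] + (4,))  # X, Y, Z, NIR
    separated = guide_filter(separated, cmyg)     # denoise with the CMYG guide
    rgb = xyz_to_srgb(separated[..., :3])         # convert to display color space
    nir = separated[..., 3]
    return rgb, nir
```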
The remainder of this paper is organized as follows. Section 2 addresses the mathematical model and the proposed separation method. Experiment results are given in Section 3, and the conclusion is provided in Section 4.

2. Proposed Method

2.1. Mathematical Model

The image capturing mechanism used in this paper is exactly the same as in conventional camera systems. The light rays reflected from a point on the object's surface go through the lens. After the light rays pass through the lens, an IR cut filter reflects the NIR wavelengths away while passing only visible wavelengths of light. The CFA installed in front of the image sensor separates the incoming wavelengths into the appropriate color channels; the number of channels and colors used in the CFA depends on the system characteristics. The light rays passing through the optical lens and filters are integrated by the image sensor to form a corresponding pixel value. The mathematical model for each pixel value of the conventional camera system can be represented as
$$d_F^{conv.}(k) = \int_{\lambda_{VIS}} T_F(\lambda,k)\,E(\lambda,k)\,R(\lambda,k)\,d\lambda + e_F(k) \qquad (1)$$
where $F$ is a specific color channel, $k$ indicates the spatial location of a pixel, $d_F^{conv.}(k)$ is the intensity at the $k$th pixel position sensed by a sensor of color channel $F$, $T_F(\lambda,k)$ is the spectral distribution of transmittance of the color filter array for $F$, $E(\lambda,k)$ is the spectral distribution of the light source, $R(\lambda,k)$ is the spectral distribution of object reflectance in the scene, $e_F(k)$ is the temporal dark noise [29] characteristic of the sensor, and $\lambda_{VIS}$ refers to the range of visible wavelengths.
A conventional camera system captures the visible range image only with the help of the IR cut filter, and concurrent capture of the NIR image is possible by removing the IR cut filter. When the IR cut filter is removed, (1) can be changed to
$$d_F(k) = \int_{\lambda_{VIS}} T_F(\lambda,k)\,E(\lambda,k)\,R(\lambda,k)\,d\lambda + \int_{\lambda_{NIR}} T_F(\lambda,k)\,E(\lambda,k)\,R(\lambda,k)\,d\lambda + e_F(k) \qquad (2)$$
where $d_F(k)$ represents the intensity at the $k$th pixel position sensed by a sensor of color channel $F$, $\lambda_{NIR}$ refers to the range of NIR wavelengths, and the intensity includes both visible and NIR wavelengths. In this paper, the four color channels $F \in \{C, M, Y, G\}$ are employed with a CMYG CFA. The intensity $d_F(k)$ sensed by a sensor of a color channel $F$ according to the light integration model of (2) can be written as
$$d_F(k) = d_F^{VIS}(k) + d_F^{NIR}(k) + e_F(k) \qquad (3)$$
where $d_F^{VIS}(k)$ and $d_F^{NIR}(k)$ are the pixel intensities due to visible and NIR wavelengths, respectively. In (3), $d_F^{NIR}(k)$ is the NIR intensity sensed by a sensor of each color channel; that is, $d_C^{NIR}(k)$, $d_M^{NIR}(k)$, $d_Y^{NIR}(k)$, and $d_G^{NIR}(k)$. The NIR intensity sensed in each color channel is modeled as $d_F^{NIR}(k) = \omega_F\, d^{NIR}(k)$, where $\omega_F$ (that is, $\omega_C$, $\omega_M$, $\omega_Y$, and $\omega_G$) is a weighting factor of the NIR transmittance on each channel of the CFA, and $d^{NIR}(k)$ is the NIR intensity unaffected by the CFA. Using this representation, (3) can be reformulated as
$$d_F(k) = d_F^{VIS}(k) + \omega_F\, d^{NIR}(k) + e_F(k) \qquad (4)$$
More detailed information about the linear relation in (4) will be discussed later along with experimental verification. Equation (4) can be represented in the following matrix form:
$$
\begin{bmatrix}
d_C^{VIS}(k) + \omega_C\, d^{NIR}(k) \\
d_M^{VIS}(k) + \omega_M\, d^{NIR}(k) \\
d_Y^{VIS}(k) + \omega_Y\, d^{NIR}(k) \\
d_G^{VIS}(k) + \omega_G\, d^{NIR}(k)
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & \omega_C \\
0 & 1 & 0 & 0 & \omega_M \\
0 & 0 & 1 & 0 & \omega_Y \\
0 & 0 & 0 & 1 & \omega_G
\end{bmatrix}
\begin{bmatrix}
d_C^{VIS}(k) \\ d_M^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_G^{VIS}(k) \\ d^{NIR}(k)
\end{bmatrix}
\qquad (5)
$$
The matrix form in (5) represents a linear relationship between the visible and NIR mixed input and the separated output. The separated four-channel CMYG visible output values depend on the device characteristics; therefore, the CMYG color space is not appropriate for obtaining consistent colors on different devices. To produce device-independent color, in this paper the CMYG color space is converted into the standard CIE1931 XYZ color space [25] with
$$
\begin{bmatrix}
d_C^{VIS}(k) \\ d_M^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_G^{VIS}(k)
\end{bmatrix}
=
\begin{bmatrix}
\alpha_{11} & \alpha_{12} & \alpha_{13} \\
\alpha_{21} & \alpha_{22} & \alpha_{23} \\
\alpha_{31} & \alpha_{32} & \alpha_{33} \\
\alpha_{41} & \alpha_{42} & \alpha_{43}
\end{bmatrix}
\begin{bmatrix}
d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k)
\end{bmatrix}
\qquad (6)
$$
where the $\alpha_{ij}$'s are the color conversion coefficients, and $d_X^{VIS}(k)$, $d_Y^{VIS}(k)$, and $d_Z^{VIS}(k)$ are the intensities of the visible image in XYZ color space. Supplementing (6) with the NIR component gives
$$
\begin{bmatrix}
d_C^{VIS}(k) \\ d_M^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_G^{VIS}(k) \\ d^{NIR}(k)
\end{bmatrix}
=
\begin{bmatrix}
\alpha_{11} & \alpha_{12} & \alpha_{13} & 0 \\
\alpha_{21} & \alpha_{22} & \alpha_{23} & 0 \\
\alpha_{31} & \alpha_{32} & \alpha_{33} & 0 \\
\alpha_{41} & \alpha_{42} & \alpha_{43} & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k)
\end{bmatrix}
\qquad (7)
$$
By substituting (7) into (5), the following equation can be found:
$$
\begin{bmatrix}
d_C^{VIS}(k) + \omega_C\, d^{NIR}(k) \\
d_M^{VIS}(k) + \omega_M\, d^{NIR}(k) \\
d_Y^{VIS}(k) + \omega_Y\, d^{NIR}(k) \\
d_G^{VIS}(k) + \omega_G\, d^{NIR}(k)
\end{bmatrix}
= C
\begin{bmatrix}
d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k)
\end{bmatrix},
\quad \text{where } C =
\begin{bmatrix}
\alpha_{11} & \alpha_{12} & \alpha_{13} & \omega_C \\
\alpha_{21} & \alpha_{22} & \alpha_{23} & \omega_M \\
\alpha_{31} & \alpha_{32} & \alpha_{33} & \omega_Y \\
\alpha_{41} & \alpha_{42} & \alpha_{43} & \omega_G
\end{bmatrix}
\qquad (8)
$$
In (8), C is called the combination matrix. By applying (8), (3) can be rewritten as
$$
\begin{bmatrix}
d_C(k) \\ d_M(k) \\ d_Y(k) \\ d_G(k)
\end{bmatrix}
= C
\begin{bmatrix}
d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k)
\end{bmatrix}
+
\begin{bmatrix}
e_C(k) \\ e_M(k) \\ e_Y(k) \\ e_G(k)
\end{bmatrix}
\qquad (9)
$$
In this paper, the separation matrix, S, is defined as
$$S = C^{-1} \qquad (10)$$
Finally, the separated XYZ and NIR pixel intensities at kth position can be found with
$$
S
\begin{bmatrix}
d_C(k) \\ d_M(k) \\ d_Y(k) \\ d_G(k)
\end{bmatrix}
=
\begin{bmatrix}
d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k)
\end{bmatrix}
+ S
\begin{bmatrix}
e_C(k) \\ e_M(k) \\ e_Y(k) \\ e_G(k)
\end{bmatrix}
=
\begin{bmatrix}
\hat{d}_X^{VIS}(k) \\ \hat{d}_Y^{VIS}(k) \\ \hat{d}_Z^{VIS}(k) \\ \hat{d}^{NIR}(k)
\end{bmatrix}
\qquad (11)
$$
where $\hat{d}_X^{VIS}(k)$, $\hat{d}_Y^{VIS}(k)$, $\hat{d}_Z^{VIS}(k)$, and $\hat{d}^{NIR}(k)$ are the separated visible (in XYZ color space) and NIR pixels containing temporal dark noise. The noise analysis and corresponding denoising process will be discussed later.
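To make the separation step concrete, the following Python/NumPy sketch builds the combination matrix C of Equation (8), inverts it to obtain S as in Equation (10), and applies Equation (11) to every pixel. The numeric values at the bottom are placeholders for illustration only, not the calibrated coefficients of this paper.

```python
import numpy as np

def build_separation_matrix(alpha, omega):
    """Build S = inv(C) from Equations (8) and (10).

    alpha : (4, 3) CMYG -> XYZ conversion coefficients of Equation (6)
    omega : (4,) NIR weights (omega_C, omega_M, omega_Y, omega_G)
    """
    C = np.hstack([alpha, omega.reshape(4, 1)])   # 4 x 4 combination matrix
    return np.linalg.inv(C)                       # separation matrix S

def separate(cmyg, S):
    """Apply Equation (11) to an H x W x 4 mixed CMYG image.

    Returns an H x W x 4 array whose channels are X, Y, Z, NIR
    (still containing the transformed dark noise S*e).
    """
    h, w, _ = cmyg.shape
    out = (S @ cmyg.reshape(-1, 4).T).T
    return out.reshape(h, w, 4)

# Example with placeholder (not calibrated) coefficients:
alpha = np.array([[0.2, 0.7, 0.9],
                  [0.9, 0.3, 0.6],
                  [0.8, 0.9, 0.1],
                  [0.3, 0.8, 0.2]])
omega = np.array([0.9, 1.1, 1.0, 1.0])
S = build_separation_matrix(alpha, omega)
```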

2.2. Color Conversion to XYZ Color Space

In this paper, the CMYG CFA is used to capture the color in the visible wavelengths, so the input image has four color channels. Display systems, on the other hand, control color with the three channels of an RGB color space, so the input image in CMYG color space should be converted into an RGB color space. The conventional way to do this is to map the input intensity values of the sensor to a target standard color space. In this paper, we convert the CMYG input to the standard CIE1931 XYZ [25] color space.
From (6), the relation between CMYG and XYZ is linear. The color conversion can therefore be done by linear regression [28] without a bias term, which makes it easy to adapt to the proposed model. In training the color conversion coefficients by linear regression, the selection of the color temperature of the light source and of the reference target color chart are important. The color temperature of the light source affects the white balance of the image: if the conversion is trained under a 6500 K fluorescent light, the white balance is matched only under the same light source. For the reference target colors, the standard Macbeth color chart [30] is widely used to train the color space conversion coefficients in (6).
After finding the color conversion coefficients from CMYG to XYZ space, the converted XYZ image can be further converted to an RGB color space for display on a device. In this paper, we apply the conversion to sRGB [31], which is the predominant color space for displaying photos on the internet [32].
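As a sketch of this calibration (assuming ordinary least squares without a bias term on paired chart measurements, which is one way to realize the linear regression described above), the CMYG-to-XYZ coefficients of Equation (6) can be estimated as follows. The standard linear XYZ-to-sRGB matrix (before gamma encoding) is included for the final display conversion.

```python
import numpy as np

def fit_cmyg_to_xyz(cmyg_patches, xyz_patches):
    """Estimate the alpha coefficients of Equation (6) by linear regression
    without a bias term.

    cmyg_patches : (P, 4) mean CMYG responses of the color chart patches
    xyz_patches  : (P, 3) reference XYZ values of the same patches
    Returns the (4, 3) matrix alpha so that cmyg ~= alpha @ xyz.
    """
    # Solve xyz_patches @ alpha.T ~= cmyg_patches in the least-squares sense.
    alpha_T, *_ = np.linalg.lstsq(xyz_patches, cmyg_patches, rcond=None)
    return alpha_T.T

# Standard linear XYZ -> sRGB matrix (D65) used for display after separation.
M_XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])
```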

2.3. Calculation of the NIR Weighting Coefficient

Section 2.2 explained how to derive the color space conversion coefficients in (8). This section describes how to find the values of the NIR weighting coefficients ωC, ωM, ωY, and ωG in (8). The proposed method estimates the weighting coefficients by finding the relative ratios between them. To find a relative ratio, the weighting coefficient of a specific channel is treated as a reference value; in this paper, we set the NIR intensity in the green channel of the input image as the reference. To calculate the coefficients, (4) is slightly rewritten as
$$\omega_F\, d^{NIR}(k) = \frac{\omega_F}{\omega_G}\, \omega_G\, d^{NIR}(k) = \omega_{FG}\, \omega_G\, d^{NIR}(k) \qquad (12)$$
where $\omega_{FG}$ is the ratio of $\omega_F$ to $\omega_G$, that is, $\omega_F / \omega_G$ with $F \in \{C, M, Y, G\}$, so that $\omega_{GG} = 1$. Substituting (12) into (8), the combination matrix C becomes
$$
C =
\begin{bmatrix}
\alpha_{11} & \alpha_{12} & \alpha_{13} & \omega_{CG}\,\omega_G \\
\alpha_{21} & \alpha_{22} & \alpha_{23} & \omega_{MG}\,\omega_G \\
\alpha_{31} & \alpha_{32} & \alpha_{33} & \omega_{YG}\,\omega_G \\
\alpha_{41} & \alpha_{42} & \alpha_{43} & \omega_G
\end{bmatrix}
\qquad (13)
$$
To find the values of ωCG, ωMG, and ωYG, an experiment was performed. To capture an image which contains only NIR information, the camera captures a scene illuminated by an NIR light source only. The image was taken with an 850 nm NIR light source, so the image contains only NIR light reflected from a white surface; no visible light source is present. Under this condition, combining (4) with (13) gives
$$d_F(k) = \omega_{FG}\, \omega_G\, d^{NIR}(k) + e_F(k) \qquad (14)$$
From (14), it can be seen that only reflected NIR light is projected onto the sensor after passing through the CMYG CFA. Therefore, the value of the weighting factor depends only on the sensor characteristics and the transmittance of the CMYG CFA.
Figure 2 depicts scatter plots of NIR intensity between pairs of color channels. It clearly shows a linear relationship between the color channels, and the slopes of the linear regressions fitted to the scatter plots give the values of ωCG, ωMG, and ωYG. Their values are computed by setting ωG to 1, which means that the amount of NIR light passing through the green color filter is treated as the relative reference for the output NIR image.
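A minimal sketch of this estimation is given below, assuming a least-squares line through the origin is fitted to the NIR-only capture; the exact regression used to produce Figure 2 is not specified beyond "linear regression", so this is one plausible realization rather than the authors' code.

```python
import numpy as np

def estimate_nir_weights(nir_only_cmyg):
    """Estimate omega_CG, omega_MG, omega_YG of Equation (13) from a capture
    taken under an 850 nm NIR source only (no visible light).

    nir_only_cmyg : (N, 4) pixel intensities in C, M, Y, G order.
    Returns weights relative to the green channel (omega_GG = 1).
    """
    g = nir_only_cmyg[:, 3]
    weights = []
    for ch in range(3):  # C, M, Y against G
        # Least-squares slope of a line through the origin: d_F = w_FG * d_G
        w = np.dot(g, nir_only_cmyg[:, ch]) / np.dot(g, g)
        weights.append(w)
    return np.array(weights + [1.0])  # [omega_CG, omega_MG, omega_YG, 1]
```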

2.4. Noise Analysis

After the separation by multiplying the separation matrix $S$ with the input image pixels, noise is noticeable in the separated visible and NIR images; this noise is defined as the separation noise in this paper. Equation (11) contains the noise model after the separation process. The noise in each separated image channel is represented as a weighted sum of the input temporal dark noise, with the weights given by the elements of the separation matrix. This implies that the separation noise depends on the input temporal dark noise and the separation matrix. Experimental data can elucidate the relationship between the noise in the input and in the separated images. Figure 3 shows a histogram of a grey reference target taken with the proposed imaging system under fluorescent and 850 nm NIR light. The variance of the captured grey reference target image can be treated as that of the temporal dark noise. In this paper, we assume the noise is Gaussian, and its distribution is estimated from the shape of the histogram in Figure 3. To check the relationship between the input noise and the separation noise, the following weighted sum of Gaussian random variables [33] is calculated. In general, if
$$e_F \sim N(\mu_F, \sigma_F^2), \quad F \in \{C, M, Y, G\} \qquad (15)$$
then,
$$
e_{\{X,Y,Z,NIR\}} = \sum_{F \in \{C,M,Y,G\}} s_F\, e_F
\;\sim\; N\!\left( \sum_{F \in \{C,M,Y,G\}} s_F\, \mu_F,\;
\sum_{F \in \{C,M,Y,G\}} (s_F \sigma_F)^2
+ 2 \sum_{F_i, F_j \in \{C,M,Y,G\},\, i \neq j} s_{F_i} s_{F_j}\, \mathrm{Cov}(e_{F_i}, e_{F_j}) \right)
\qquad (16)
$$
where $\mu_F$ and $\sigma_F^2$ are the mean and variance of the temporal dark noise distribution of the input image, and $N$ indicates the Gaussian probability density function. $s_F$ represents the corresponding element in the separation matrix, and $e_F$ is the temporal dark noise on the corresponding channel. The "estimation" curve in Figure 3 shows the estimated noise distribution of the separated image obtained by applying (16) to the input noise distribution. The estimated result exactly matches the histogram of the separated image. From this result, it can be concluded that the noise in the separated images is fully derived from the temporal dark noise of the input image.
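Equation (16) can be evaluated compactly in matrix form, since the covariance of a linearly transformed random vector is $S\,\mathrm{Cov}\,S^{T}$. The following Python/NumPy sketch propagates measured dark-noise statistics through the separation matrix; the variable names are illustrative.

```python
import numpy as np

def propagate_noise(S, mu, cov):
    """Propagate the input temporal dark noise through the separation matrix,
    following Equation (16).

    S   : (4, 4) separation matrix
    mu  : (4,) mean of the CMYG dark noise
    cov : (4, 4) covariance of the CMYG dark noise
    Returns the mean and variance of the noise in each separated channel
    (X, Y, Z, NIR).
    """
    mu_out = S @ mu              # weighted sum of the channel means
    cov_out = S @ cov @ S.T      # diagonal contains (s_F*sigma_F)^2 plus the
                                 # 2*s_i*s_j*Cov(e_i, e_j) cross terms of (16)
    return mu_out, np.diag(cov_out)
```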

2.5. Denoising Method

In Figure 3, it is observed that the variance of the noise is larger in the separated images than in the input image. To reduce the noise in the separated images, an effective denoising technique should be applied. In general, denoising techniques can be applied to the separated images directly; BM3D filtering [34] is a good example that could be considered in this case. In Section 2.4, we found that the noise distribution in the separated image is formed by a weighted sum of the noise in each input image channel. This implies that if denoising is applied to the mixed input image, the noise in the separated images will also be reduced. However, it is hard to estimate the amount of noise in the input image. In a different approach, the input image can be used as a guidance image to reduce the noise of the separated images. If the output image has a linear relationship with the guidance image, the noise can be minimized by exploiting the low noise of the guidance image. A denoising technique called color-guided filtering [26] exploits this linear relationship between the noisy image and the guide image. Since there is a linear relationship between the input and the separated results, the proposed method is well matched to the guided filtering approach. The color-guided filtering method [26] is applied in this paper to minimize the separation noise by using the input CMYG image as the guide image. According to [26], a color-guided filter with a CMYG guide can be applied as
$$\tilde{d}_X^{VIS}(k) = \mathbf{a}_X^{T}(k)
\begin{bmatrix}
d_C(k) \\ d_M(k) \\ d_Y(k) \\ d_G(k)
\end{bmatrix}
+ b_X(k) \qquad (17)$$
where $\tilde{d}_X^{VIS}(k)$ is the $k$th pixel of the filtered separated X-channel image. $\mathbf{a}_X^{T}(k)$ and $b_X(k)$ can be calculated as
$$
\mathbf{a}_X(k) = \left( \Sigma_k + \varepsilon U \right)^{-1}
\left( \frac{1}{|w|} \sum_{i \in w_k}
\begin{bmatrix}
d_C(i) \\ d_M(i) \\ d_Y(i) \\ d_G(i)
\end{bmatrix}
\hat{d}_X^{VIS}(i) - \mu_k\, \bar{\hat{d}}_X^{VIS}(k) \right)
\qquad (18)
$$
$$b_X(k) = \bar{\hat{d}}_X^{VIS}(k) - \mathbf{a}_X^{T}(k)\, \mu_k \qquad (19)$$
where $\mathbf{a}_X(k)$ is a $4 \times 1$ coefficient vector, $\Sigma_k$ is the $4 \times 4$ covariance matrix of the input image in a local window $w_k$ centered at pixel $k$, $U$ is a $4 \times 4$ identity matrix, $\varepsilon$ is a regularization parameter, $|w|$ is the number of pixels in the window, $\mu_k$ is the mean of the input image in the window, and $\bar{\hat{d}}_X^{VIS}(k)$ is the mean of $\hat{d}_X^{VIS}(k)$ in the window. For the Y and Z channels, the same method as the color-guided filtering of the X channel is applied.
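A minimal Python/NumPy sketch of Equations (17)-(19) with the four-channel CMYG input as the guide is given below. The box-window radius, the regularization value, and the assumption of intensities normalized to [0, 1] are placeholders; this is an illustrative sketch of multi-channel guided filtering, not the authors' implementation of [26].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cmyg_guided_filter(p, guide, radius=4, eps=1e-3):
    """Filter one separated channel p (H x W) using the mixed CMYG image
    guide (H x W x 4), following Equations (17)-(19). Float inputs assumed."""
    def box(x):
        return uniform_filter(x, size=2 * radius + 1)  # window mean

    H, W, C = guide.shape
    mean_I = np.stack([box(guide[..., c]) for c in range(C)], axis=-1)  # mu_k
    mean_p = box(p)                                                     # mean of p in w_k
    mean_Ip = np.stack([box(guide[..., c] * p) for c in range(C)], axis=-1)
    cov_Ip = mean_Ip - mean_I * mean_p[..., None]   # bracketed term of Eq. (18)

    # Per-pixel 4 x 4 covariance of the guide (Sigma_k)
    Sigma = np.zeros((H, W, C, C))
    for i in range(C):
        for j in range(C):
            Sigma[..., i, j] = (box(guide[..., i] * guide[..., j])
                                - mean_I[..., i] * mean_I[..., j])

    a = np.linalg.solve(Sigma + eps * np.eye(C), cov_Ip[..., None])[..., 0]  # Eq. (18)
    b = mean_p - np.einsum('hwc,hwc->hw', a, mean_I)                         # Eq. (19)

    # Average the per-window coefficients before forming the output, as in [26]
    mean_a = np.stack([box(a[..., c]) for c in range(C)], axis=-1)
    mean_b = box(b)
    return np.einsum('hwc,hwc->hw', mean_a, guide) + mean_b                  # Eq. (17)
```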
Figure 4 shows the denoising results for the separated RGB and NIR images. Wiener [27] (Chapter 5) and BM3D [34] filters are applied to the separated images, and the color-guided filter is applied by (17). Before the filtering process, the noise is clearly visible in the separated images. The Wiener filter reduces the noise but also blurs the edges. With BM3D, the edges look clearer than with the Wiener filter, but the texture on the leaf in the separated RGB image looks blurrier than in the image without filtering. The image resulting from the guided filter looks better than those from the Wiener and BM3D filters, keeping the clarity of the edges and the texture on the leaf.

3. Experimental Results

3.1. Experimental Condition

To evaluate the proposed method experimentally, we used a CMYG CFA-based camera which is available on the consumer market (Figure 5b). The camera was modified by removing its IR cut filter. Figure 5a shows the IR cut filter and the CMYG CFA-based detector inside the camera. Test images were taken under three different light sources: fluorescent and 850 nm NIR light, halogen light, and sunlight. Image processing software was implemented according to the proposed method. The color-guided filter-based [26] denoising method is applied to both the separated XYZ and NIR images. The separated XYZ image is converted into the sRGB [31] space so that the resulting images are displayed in the sRGB color space. After all the processing, including CMYG to VIS/NIR separation, denoising, and color conversion, the output images are referred to as the separated RGB and the separated NIR images.

3.2. The Separation of Band Spectrum

This experiment shows how well the visible and NIR images are separated from the mixed input image. Band pass filters from 400 to 1000 nm are aligned in a row, and the camera captures the light from a halogen light source passing through the filters. Figure 6 shows the experimental environment and the separation results. From the results, the separated RGB image includes wavelengths from 400 to 800 nm, and the separated NIR image includes data from 750 to 1000 nm. Due to the reduced spectral sensitivity of silicon-based sensors at the band edges [35], the results at the edge bands (400 and 1000 nm) look darker than the other bands. The results show that the separated RGB and NIR images overlap from 750 to 800 nm, but the other bands are clearly separated. From this result, 750 nm is determined to be the starting wavelength of NIR, which is consistent with several other conventional NIR imaging systems.

3.3. Separation under an NIR Spot Light

To check the separation result under a combination of visible and NIR light sources, images were captured under both fluorescent light and an 850 nm NIR flashlight. Figure 7 depicts the experimental results. The fluorescent light illuminates the entire scene, but the NIR light shines only on part of the scene so that the correctness of the separation in the NIR region can be checked. The NIR light spot moves from left to right across the first row of Figure 7. From the results, it is observed that the separated RGB images are similar because the fluorescent illumination is identical in all images. On the other hand, the separated NIR images show different results, implying that the proposed method successfully separates the visible and NIR images even when the relative brightness of the visible and NIR light sources changes.

3.4. Separation Results on Applications

To evaluate the characteristics of the separated visible and NIR images, images were taken under several light sources. From the separation results, we can check how the separated images represent the characteristics of each spectral band of the subjects. Figure 8 depicts the separation results for three different scenes. The separated NIR images are indicative of the special characteristics of NIR light.
Counterfeit money detection: The first row in Figure 8 shows the separation results on the image of a Korean banknote. The separated NIR image contains only texture information from certain parts of the bill. This NIR characteristic has been used to detect counterfeit banknotes [36].
Different NIR reflectance of subjects: The second row in Figure 8 was taken outside under sunlight. The scene consists of three parts: the sky at the top, a building and trees in the middle, and a lake at the bottom. In the separated NIR image, the sky and the lake look relatively darker than the trees. According to Rayleigh scattering [37], longer wavelengths are scattered less; since NIR wavelengths are longer than visible ones, the sky looks dark in the NIR image. At the bottom, the lake looks dark in the separated NIR image because water absorbs NIR light more strongly than visible light [38,39]. On the other hand, the trees look brighter because the spectral reflectance of leaves is higher in the NIR than in the visible range [40]. This characteristic has been applied in remote sensing for vegetation searches over a region [4].
Medical application: As an example of a medical application, NIR images can be utilized to detect veins. The separated RGB image in the third row of Figure 8 shows a picture of an arm in which it is hard to see the veins. On the other hand, the separated NIR image clearly indicates the shape of the veins in the arm. NIR light reflection from veins beneath the skin depends on the light intensity and the thickness of the tissue [41]. This characteristic has been used for non-invasive medical diagnosis [9,10,42].

3.5. Objective Quality Comparison

In this paper, the objective quality of the proposed separation method is compared with a representative VIS-NIR separation method based on compressive sensing (RGB-CS) [21]. The imaging system used in this paper is CMYG CFA-based, whereas the work used for comparison [21] was applied to an RGB CFA-based imaging system. To minimize the differences in the experimental conditions caused by this hardware difference, a simulation was set up to compare the objective performance of the two approaches. Figure 9 shows the simulation structure for the objective quality measurement.
To minimize differences between the proposed method and the method used for comparison, mixed input images are generated from a pre-captured RGB and NIR image dataset [43]. The dataset was captured by a conventional digital camera with and without its IR cut filter: the original RGB images were captured with the IR cut-off filter in place, and the original NIR images were captured with an IR long-pass filter (and the IR cut-off filter removed). The input images were generated in two different ways. In the case of the proposed method, the color space of the input RGB image is converted to the four-channel CMYG color space. The converted CMYG image is then added to the NIR image to obtain the mixed input image. The weighting coefficients are pre-calculated, and their values are applied in the separation process. The mixed input image is converted to a mosaiced input image, which is used as the input of the separation process. In the case of the method used for comparison, the RGB and NIR inputs are added together using the mathematical representation of compressive sensing (RGB-CS) [21], and the pre-calculated weighting coefficients are applied in the same manner as in the proposed method. In this paper, we implemented the separation process of RGB-CS following the description in the paper [21]. The performance of the separated RGB and NIR images is measured against the input RGB and NIR images in terms of the peak-signal-to-noise-ratio (PSNR) [44] and color distance.
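The input-generation path for the proposed method in Figure 9 can be sketched as follows (Python/NumPy). The rgb_to_cmyg and mosaic callables are assumed helpers standing in for the color conversion and CFA sampling steps described above.

```python
import numpy as np

def make_mixed_input(rgb, nir, rgb_to_cmyg, omega, mosaic):
    """Sketch of generating the simulated mixed input of Figure 9.

    rgb         : (H, W, 3) original RGB image from the dataset [43]
    nir         : (H, W) original NIR image
    rgb_to_cmyg : callable converting an RGB image to four CMYG channels
    omega       : (4,) pre-calculated NIR weighting coefficients
    mosaic      : callable sampling the CMYG CFA pattern from a 4-channel image
    """
    cmyg = rgb_to_cmyg(rgb)                                   # visible part
    mixed = cmyg + omega.reshape(1, 1, 4) * nir[..., None]    # add weighted NIR
    return mosaic(mixed)                 # mosaiced input fed to the separation
```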
PSNR comparison: Table 1 shows the simulation condition. In this paper, 54 sample RGB and NIR images are used for the simulation. Figure 10 depicts the PSNR comparison graphs between the two methods. From the simulation results, the PSNR of the proposed method is higher than that of RGB-CS. The average PSNR of the separated RGB and NIR images of the proposed method is 37.04 dB and 33.29 dB, respectively. On the other hand, the average PSNR of the separated RGB and NIR images of RGB-CS is 30.32 dB and 30.74 dB, respectively. According to the average PSNR comparison, the objective quality of the proposed method is 6.72 and 2.55 dB higher, respectively, than RGB-CS.
Color distance comparison: To compare color differences between two images, color distances in the sRGB, XYZ, and CIELab [46] color spaces are calculated. In this paper, the color distance is calculated as the following averages of Euclidean distances:
$$
\begin{aligned}
CD_{RGB} &= \frac{1}{N}\sum_{i=1}^{N} \sqrt{(R_i^{org.}-R_i^{sep.})^2 + (G_i^{org.}-G_i^{sep.})^2 + (B_i^{org.}-B_i^{sep.})^2} \\
CD_{XYZ} &= \frac{1}{N}\sum_{i=1}^{N} \sqrt{(X_i^{org.}-X_i^{sep.})^2 + (Y_i^{org.}-Y_i^{sep.})^2 + (Z_i^{org.}-Z_i^{sep.})^2} \\
CD_{CIELab} &= \frac{1}{N}\sum_{i=1}^{N} \sqrt{(L_i^{org.}-L_i^{sep.})^2 + (a_i^{org.}-a_i^{sep.})^2 + (b_i^{org.}-b_i^{sep.})^2} \\
CD_{RGB}^{histogram} &= \frac{1}{M}\sum_{i=1}^{M} \sqrt{(R_{hist.i}^{org.}-R_{hist.i}^{sep.})^2 + (G_{hist.i}^{org.}-G_{hist.i}^{sep.})^2 + (B_{hist.i}^{org.}-B_{hist.i}^{sep.})^2} \\
CD_{XYZ}^{histogram} &= \frac{1}{M}\sum_{i=1}^{M} \sqrt{(X_{hist.i}^{org.}-X_{hist.i}^{sep.})^2 + (Y_{hist.i}^{org.}-Y_{hist.i}^{sep.})^2 + (Z_{hist.i}^{org.}-Z_{hist.i}^{sep.})^2} \\
CD_{CIELab}^{histogram} &= \frac{1}{M}\sum_{i=1}^{M} \sqrt{(L_{hist.i}^{org.}-L_{hist.i}^{sep.})^2 + (a_{hist.i}^{org.}-a_{hist.i}^{sep.})^2 + (b_{hist.i}^{org.}-b_{hist.i}^{sep.})^2}
\end{aligned}
\qquad (20)
$$
where $CD_{RGB}$, $CD_{XYZ}$, and $CD_{CIELab}$ are the color distances in the sRGB, XYZ, and CIELab color spaces, respectively, and $CD_{RGB}^{histogram}$, $CD_{XYZ}^{histogram}$, and $CD_{CIELab}^{histogram}$ are the color distances between histograms in each color space. $N$ is the number of pixels in the image and $M$ is the number of bins in the histogram (256 in this paper). For the NIR image, the intensity distance is also calculated as
$$
ID_{NIR} = \frac{1}{N}\sum_{i=1}^{N} \left( NIR_i^{org.} - NIR_i^{sep.} \right), \qquad
ID_{NIR}^{histogram} = \frac{1}{M}\sum_{i=1}^{M} \left( NIR_{hist.i}^{org.} - NIR_{hist.i}^{sep.} \right)
\qquad (21)
$$
where $ID_{NIR}$ and $ID_{NIR}^{histogram}$ are the intensity distances of the NIR images and of their histograms, respectively.
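For reference, the per-pixel and per-histogram distances above can be computed as in the following Python/NumPy sketch; the bin count M = 256 and an 8-bit value range are assumptions taken from the text.

```python
import numpy as np

def color_distance(img_org, img_sep):
    """Average per-pixel Euclidean color distance of Equation (20),
    applicable in sRGB, XYZ, or CIELab space (H x W x 3 inputs)."""
    diff = img_org.reshape(-1, 3) - img_sep.reshape(-1, 3)
    return np.mean(np.linalg.norm(diff, axis=1))

def histogram_distance(img_org, img_sep, bins=256, value_range=(0, 255)):
    """Per-bin Euclidean distance between the channel histograms (M = 256)."""
    h_org = np.stack([np.histogram(img_org[..., c], bins, value_range)[0]
                      for c in range(img_org.shape[-1])], axis=1)
    h_sep = np.stack([np.histogram(img_sep[..., c], bins, value_range)[0]
                      for c in range(img_sep.shape[-1])], axis=1)
    return np.mean(np.linalg.norm(h_org - h_sep, axis=1))
```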
Figure 11 depicts the results of the color distance comparison between RGB-CS and the proposed method. The results show that the proposed method has lower color distance values than the RGB-CS method, meaning that the separated images of the proposed method are closer in color to the originals than those of RGB-CS.
Figure 12 depicts the subjective quality comparison between RGB-CS and the proposed method. In Figure 12, both methods produce reasonable separated RGB and NIR images, but in the RGB case, more differences are found between RGB-CS and the ground truth images. The RGB results of RGB-CS look sharper than those of the proposed method, but careful comparison with the ground truth reveals that the high frequencies of RGB-CS are over-emphasized (see, for example, the cloud in the images in the first row). This means that the proposed method generates separated results that are more faithful to the ground truth.

4. Conclusions

In this paper, visible and NIR image separation from a single image is proposed. The image is captured with a CMYG CFA-based camera that is modified by removing the IR cut filter in front of the detector. The proposed separation is performed in a simple way by multiplying a separation matrix with the input pixels. After the separation, the separated XYZ image can be converted into a chosen color space for display; in this work, the sRGB color space is used. To reduce the noise after the separation process, we analyzed the noise characteristics of the separated images, and a color-guided filter [26] was applied for denoising using the CMYG input as the guide image. The experimental results obtained by testing with several band pass filters and light sources show that the visible and NIR images are successfully separated. The proposed method is also demonstrated for three applications: counterfeit money detection, vegetation imaging, and vein detection. To measure the objective quality in terms of PSNR, the separation performance of the proposed method and RGB-CS [21] is simulated. The simulation results show that the proposed method achieved 6.72 and 2.55 dB higher PSNR than RGB-CS for the separated RGB and NIR images, respectively.
Discussion and future work: In this paper, we proposed separation of visible and NIR channels from a four-channel CMYG CFA. However, the dynamic range and bit depth of the sensor impose a limitation if the mixture of color bands contains more channels than can be separated, and a novel approach to overcome this limitation should be investigated as future work. Nevertheless, the proposed separation method can be improved with better color filter array mixtures by increasing the number of equations. If the spectral bands need to be increased for a certain application, the proposed method might be extended to capture more color bands with a single sensor, which also needs to be investigated as future work.

Author Contributions

Conceptualization, Y.P. and B.J.; methodology, Y.P. and B.J.; software, Y.P.; validation, Y.P.; formal analysis, Y.P.; investigation, Y.P. and B.J.; resources, Y.P.; data curation, Y.P.; writing—original draft preparation, Y.P.; writing—review and editing, B.J.; visualization, Y.P.; supervision, B.J.; project administration, B.J.; funding acquisition, B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Korea government (MSIT), grant number 2018-0-00348.

Acknowledgments

This work was supported in part by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.2018-0-00348, Development of Intelligent Video Surveillance Technology to Solve the Problem of Deteriorating Arrest Rate by Improving CCTV Constraints).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. ISO 20473:2007(E) Optics and Photonics–Spectral Bands; ISO: Geneva, Switzerland, 2007.
2. Süsstrunk, S.; Fredembach, C. Enhancing the Visible with the Invisible: Exploiting Near-Infrared to Advance Computational Photography and Computer Vision. SID Symp. Dig. Tech. Pap. 2010, 41, 90–93.
3. Varjo, S.; Hannuksela, J.; Alenius, S. Comparison of near infrared and visible image fusion methods. In Proceedings of the International Workshop on Applications, Systems and Services for Camera Phone Sensing (MobiPhoto2011), Penghu, Taiwan, 12 June 2011; University of Oulu: Penghu, Taiwan, 2011; pp. 7–11.
4. Gao, B.C. NDWI-A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
5. Schaul, L.; Fredembach, C.; Süsstrunk, S. Color image dehazing using the near-infrared. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1629–1632.
6. Nicolai, B.M.; Beullens, K.; Bobelyn, E.; Peirs, A.; Saeys, W.; Theron, K.I.; Lammertyn, J. Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: A review. Postharvest Biol. Technol. 2007, 46, 99–118.
7. Ariana, D.P.; Lu, R.; Guyer, D.E. Near-infrared hyperspectral reflectance imaging for detection of bruises on pickling cucumbers. Comput. Electron. Agric. 2006, 53, 60–70.
8. Mehl, P.M.; Chen, Y.R.; Kim, M.S.; Chan, D.E. Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations. J. Food Eng. 2004, 61, 67–81.
9. Crisan, S.; Tarnovan, I.G.; Crisan, T.E. A Low Cost Vein Detection System Using Near Infrared Radiation. In Proceedings of the 2007 IEEE Sensors Applications Symposium, San Diego, CA, USA, 6–8 February 2007; pp. 1–6.
10. Miyake, R.K.; Zeman, R.D.; Duarte, F.H.; Kikuchi, R.; Ramacciotti, E.; Lovhoiden, G.; Vrancken, C. Vein imaging: A new method of near infrared imaging, where a processed image is projected onto the skin for the enhancement of vein treatment. Dermatol. Surg. 2006, 32, 1031–1038.
11. Matsui, S.; Okabe, T.; Shimano, M.; Sato, Y. Image enhancement of low-light scenes with near-infrared flash images. Inf. Media Technol. 2011, 6, 202–210.
12. Zhuo, S.; Zhang, X.; Miao, X.; Sim, T. Enhancing Low Light Images using Near Infrared Flash Images. In Proceedings of the 2010 IEEE 17th International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 2537–2540.
13. JAI's 2-CCD Camera. Available online: https://www.jai.com/products/ad-080-cl (accessed on 28 September 2020).
14. Langfelder, G.; Malzbender, T.; Longoni, A.F.; Zaraga, F. A device and an algorithm for the separation of visible and near infrared signals in a monolithic silicon sensor. In Proceedings of SPIE 7882, Visual Information Processing and Communication II, San Francisco Airport, CA, USA, 25–26 January 2011; International Society for Optics and Photonics: Washington, DC, USA, 2011; p. 788207.
15. Koyama, S.; Inaba, Y.; Kasano, M.; Murata, T. A day and night vision MOS imager with robust photonic-crystal-based RGB-and-IR. IEEE Trans. Electron Devices 2008, 55, 754–759.
16. Lu, Y.M.; Fredembach, C.; Vetterli, M.; Süsstrunk, S. Designing color filter arrays for the joint capture of visible and near-infrared images. In Proceedings of the 2009 IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 3797–3800.
17. OmniVision's RGB-IR CMOS Sensor. Available online: https://www.ovt.com/sensors/OV2736 (accessed on 28 September 2020).
18. Monno, Y.; Teranaka, H.; Yoshizaki, K.; Tanaka, M.; Okutomi, M. Single-Sensor RGB-NIR Imaging: High-Quality System Design and Prototype Implementation. IEEE Sens. J. 2019, 19, 497–507.
19. Hu, X.; Heide, F.; Dai, Q.; Wetzstein, G. Convolutional Sparse Coding for RGB+NIR Imaging. IEEE Trans. Image Process. 2018, 27, 1611–1625.
20. Park, C.; Kang, M.G. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition. Sensors 2016, 16, 719.
21. Sadeghipoor, Z.; Lu, Y.M.; Süsstrunk, S. A novel compressive sensing approach to simultaneously acquire color and near-infrared images on a single sensor. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 1646–1650.
22. Park, Y.; Shim, H.J.; Dinh, K.Q.; Jeon, B. Visible and Near-Infrared Image Separation from CMYG Color Filter Array based Sensor. In Proceedings of the 58th International Symposium ELMAR, Zadar, Croatia, 12–14 September 2016; pp. 209–212.
23. Baer, R.L.; Holland, W.D.; Holm, J.; Vora, P. A Comparison of Primary and Complementary Color Filters for CCD-based Digital Photography. In Sensors, Cameras, and Applications for Digital Photography; International Society for Optics and Photonics: Washington, DC, USA, 1999; pp. 16–25.
24. Axis Communications. CCD and CMOS Sensor Technology; Technical White Paper; Axis Communications: Waterloo, ON, Canada, 2011.
25. Smith, T.; Guild, J. The C.I.E. colorimetric standards and their use. Trans. Opt. Soc. 1931, 33, 73–134.
26. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
27. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson: Upper Saddle River, NJ, USA, 2010; pp. 57–125.
28. Pointer, M.R.; Attridge, G.G.; Jacobson, R.E. Practical camera characterization for colour measurement. Imaging Sci. J. 2001, 49, 63–80.
29. Nakamura, J. Image Sensors and Signal Processing for Digital Still Cameras; CRC Press: Boca Raton, FL, USA, 2006.
30. ColorChecker. Product No. 50105 (Standard) or No. 50111 (Mini); Manufactured by the Munsell Color Services Laboratory of GretagMacbeth; X-Rite: Grand Rapids, MI, USA, 1976.
31. International Electrotechnical Commission. Multimedia Systems and Equipment–Colour Measurement and Management–Part 2-1: Colour Management–Default RGB Colour Space–sRGB; IEC 61966-2-1; International Electrotechnical Commission: Geneva, Switzerland, 1999.
32. Stokes, M.; Anderson, M.; Chandrasekar, S.; Motta, R. A Standard Default Color Space for the Internet-sRGB. W3C Document, 1996. Available online: https://www.w3.org/Graphics/Color/sRGB.html (accessed on 29 September 2020).
33. Johnson, R.A.; Wichern, D.W. Matrix algebra and random vectors. In Applied Multivariate Statistical Analysis, 6th ed.; Pearson: Upper Saddle River, NJ, USA, 2007; pp. 49–110.
34. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. BM3D Image Denoising with Shape-Adaptive Principal Component Analysis. In SPARS'09-Signal Processing with Adaptive Sparse Structured Representations; Inria Rennes-Bretagne Atlantique: Saint-Malo, France, 2009.
35. Darmont, A. Spectral Response of Silicon Image Sensor; Aphesa White Paper; Aphesa: Harzé, Belgium, 2009.
36. Bruna, A.; Farinella, G.M.; Guarnera, G.C.; Battiato, S. Forgery Detection and Value Identification of Euro Banknotes. Sensors 2013, 13, 2515–2529.
37. Young, A.T. Rayleigh scattering. Phys. Today 1982, 35, 42–48.
38. Pope, R.M.; Fry, E.S. Absorption spectrum (380–700 nm) of pure water. II. Integrating cavity measurements. Appl. Opt. 1997, 36, 8710–8723.
39. Kou, L.; Labrie, D.; Chýlek, P. Refractive indices of water and ice in the 0.65- to 2.5-µm spectral range. Appl. Opt. 1993, 32, 3531–3540.
40. Peñuelas, J.; Filella, I. Visible and near-infrared reflectance techniques for diagnosing plant physiological status. Trends Plant Sci. 1998, 3, 151–156.
41. Kim, D.; Kim, Y.; Yoon, S.; Lee, D. Preliminary Study for Designing a Novel Vein-Visualizing Device. Sensors 2017, 17, 304.
42. Arsalan, M.; Naqvi, R.A.; Kim, D.S.; Nguyen, P.H.; Owais, M.; Park, K.R. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors. Sensors 2018, 18, 1501.
43. Brown, M.; Süsstrunk, S. Multispectral SIFT for Scene Category Recognition. In Proceedings of the Computer Vision and Pattern Recognition (CVPR 2011), Providence, RI, USA, 20–25 June 2011; IEEE Computer Society: Washington, DC, USA, 2011; pp. 177–184.
44. Sayood, K. Mathematical preliminaries for lossy coding. In Introduction to Data Compression, 3rd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2006; pp. 195–226.
45. Mohimani, H.; Babaie-Zadeh, M.; Jutten, C. A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm. IEEE Trans. Signal Process. 2009, 57, 289–301.
46. ISO 11664-4:2008(E)/CIE S 014-4/E: Joint ISO/CIE Standard, Colorimetry–Part 4: CIE 1976 L*a*b* Colour Space; ISO: Geneva, Switzerland, 2007.
Figure 1. The proposed visible and near-infrared imaging system structure.
Figure 2. The near-infrared (NIR) intensity scatter plot between two different color channels: (a) Green-Cyan, (b) Green-Magenta, and (c) Green-Yellow.
Figure 3. The noise distribution of the input and the separated images (in images of a grey surface object taken under fluorescent and 850 nm light).
Figure 4. Subjective quality comparison of the denoising methods. The processing time is also shown.
Figure 5. Imaging system used in the experiment. (a) The IR cut filter and imaging detector in the camera, (b) the camera used in the experiment.
Figure 6. Experimental results of the separation of a band spectrum (bandwidth of the bandpass filters: 20 nm).
Figure 7. The separation experimental result under fluorescent and 850 nm NIR spot light illuminating four different locations from left to right.
Figure 8. Examples of potential applications of the proposed method (Note: "SPECIMEN" text on the banknote is overlapped on the images).
Figure 9. Block diagram of the simulation method for the measurement of objective quality.
Figure 10. Peak-signal-to-noise-ratio (PSNR) comparison of RGB-CS [21] and the proposed method: (a) the separated RGB image, (b) the separated NIR image.
Figure 11. Color distance comparison of RGB-CS [21] and the proposed method: (a) sRGB space, (b) histogram in sRGB space, (c) XYZ space, (d) histogram in XYZ space, (e) CIELab space, (f) histogram in CIELab space, (g) intensity distance of the NIR image, (h) histogram distance of the NIR intensity.
Figure 12. The subjective quality comparison between RGB-CS [21] and the proposed method (Note: the result images were encoded in portable network graphics (PNG) format).
Table 1. The Simulation Conditions.

Method             | RGB-CS [21]                   | The Proposed Method
Sample image info. | 54 images (508 × 768–1024 × 762) [43]
Type of CFA 1      | RGB                           | CMYG
Separation matrix  | Pre-defined
Sparsifying matrix | Discrete cosine transform     | N/A
Demosaicing        | Bilinear (GRBG pattern)       | Bilinear (GMYC pattern)
Separation method  | SL0 sparse decomposition [45] | Matrix multiplication

1 RGB means that the input image is generated by considering the RGB CFA-based system and CMYG by considering the CMYG CFA-based system.
