
Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition

LE2I UMR CNRS 6306, Université de Bourgogne, Dijon 21000, France
* Author to whom correspondence should be addressed.
Sensors 2016, 16(7), 993; https://doi.org/10.3390/s16070993
Submission received: 2 May 2016 / Revised: 2 June 2016 / Accepted: 16 June 2016 / Published: 28 June 2016
(This article belongs to the Special Issue Infrared and THz Sensing and Imaging)

Abstract

Multispectral acquisition improves machine vision since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was, until recently, no system available for a large span of the spectrum, i.e., joint visible and Near Infra-Red acquisition. This article presents a camera prototype that captures seven visible bands and one near-infrared band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. Data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields.


1. Introduction

While MultiSpectral Imaging (MSI) is now a solution commonly considered for several problems and a wide range of applications, e.g., medical imaging, security, automotive, earth observation, food and cultural heritage [1,2,3,4], there is still a need to develop a compact and affordable solution to generalize its use.
MSI is defined as the means to obtain a multispectral image I. Such a multispectral image is formed of k layers $I_k$ that correspond to the k spectral sensitivities defined for a specific MSI system. Color imaging is a specific subset of MSI, where k = 3.
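In array terms, such an image is simply a stack of k spatially registered planes. A minimal sketch follows; the band names anticipate the prototype described below, and the resolution is purely illustrative.

```python
import numpy as np

# A multispectral image I as a (height, width, k) stack: one plane per band.
# Band names follow the SFA prototype of this paper; k = 3 would be RGB.
BANDS = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "IR"]
k = len(BANDS)
I = np.zeros((256, 320, k), dtype=np.float32)

# Access a single spectral plane by band name.
nir_plane = I[:, :, BANDS.index("IR")]
```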
Although there has been extensive research on how to acquire multispectral images [5], the complexity and need for tunability of such MSI systems limit their use to specialized areas and often require experts to handle them.
In digital color imaging, the concept of Color Filter Array (CFA) has been exploited since the 1970s and the invention of the Bayer pattern [6]. Indeed, at the expense of spatial resolution, one can increase the spectral resolution of an imaging device. The optical and spectral characteristics of such a setup have been extensively studied during the last decades [7,8,9,10,11,12,13]. Also, the demosaicing of such sensors has been investigated [14], and in general, the CFA camera processing pipeline is well understood [15].
Recent studies considered the problem of extending and generalizing the CFA concept to Spectral Filter Arrays (SFA), where an arbitrary spatio-spectral sampling of the captured image may be performed by the sensor, even beyond the visible range. A comprehensive review of single-shot MSI technologies may be found in the work of Hagen and Kudenov [16], and an exhaustive review of SFA sensors in the work of Lapray et al. [17].
Much effort has been put into the realization of spectral filters for SFA cameras, and the Fabry-Perot interferometric system, realized by nano-etching, seems to be the dominant process for the realization of matrices of pixels of small dimension [18,19]. However, despite the simulations made at LETI [18], no filters based on Fabry-Perot interferometers covering both the visible and Near Infra-Red (NIR) ranges have been realized in practice. An attempt by Lapray et al. [17] combined nano-etching on one side of the substrate to handle the visible part of the spectrum and thin layer deposition on the other side to handle the NIR part. Besides, great improvements have been achieved on spectral filters based on nano-tubes, and a few realizations have been demonstrated [20,21]. However, none of these has been stabilized and positioned on a sensor chip.
Notable work has also been performed on the realization of SFA cameras in the visible [22,23,24] and in the NIR [25], without combining visible and NIR filters on the same sensor chip.
SFA differs from CFA in constraints and use, and some effort has been put into the pre-processing of such data, such as demosaicing [26,27,28,29,30,31], and deblurring and denoising of spectral data [32]. In particular, much effort has been put into strategies to recover colors or improve color images from the joint acquisition of color and NIR information [29,32,33,34,35,36,37,38]. In addition, camera sensitivities may be optimized. Indeed, one could consider selecting the best filters for pre-processing purposes, such as energy balance to fit specific materials or lighting conditions [39] and demosaicing [40]. Filters may also be optimized for a given application, from very general reflectance estimation [41] to specific analyses, e.g., medical imaging applications [42]. Although good results can be achieved in theory and simulation, the practical realization of filters and the resulting camera sensitivities usually do not meet expectations well. For instance, a Gaussian model was used in [40] to optimize the sensitivities for demosaicing, but the selected curves differ noticeably from the practical sensor shown in [22]. So far, this is a material limitation to filter optimization, which should be overcome by future developments of transmittance filter technologies.
While Lapray et al. [17] focused on demonstrating the filter technology and on how to assemble the camera, in this paper we demonstrate and spectrally characterize the whole SFA-based MSI camera. This prototype is designed to acquire multispectral data in a single shot, based on a Complementary Metal Oxide Semi-conductor (CMOS) sensor and SFA filters. Both articles should be considered as complementary and not overlapping. Lapray et al. [17] focused on the state of the art of snapshot multispectral imaging and on the realization and spectral analysis of the filters; they also considered how to combine them within a camera. This paper investigates the realized camera, focusing on the final product prior to any specific application. We begin with the description of the hardware system and of its assembly in Section 2. Then, we apply denoising and pre-processing to the raw data. These raw data are used to measure the spectral characteristics of the SFA sensor in Section 3. In Section 4, we demonstrate the acquisition of real scenes and show images based on demosaicing and color calibration. All these data are provided as a database for further studies, comparison and validation of simulations within the imaging and sensor communities.

2. Description of the System

SFA imaging technology is the central subject of this work. SFA is essentially a spatio-spectral sampling mechanism in which the spectral bands may vary greatly in both number and shape. The choice of filters can be rather specific to the application. We consider here a general-purpose system that spans the visible and NIR parts of the electromagnetic spectrum to serve as a proof of concept and for further research. We propose a generic SFA arrangement with 8 channels. This section describes the system in terms of hardware design.

2.1. Camera Architecture

The system is developed to acquire multispectral images and video sequences. The global pipeline is shown in Figure 1. The first element is a lens, which focuses the incoming light onto the sensor plane. The light then reaches a single monochrome image sensor (CMOS sensor Sapphire EV76C661 from E2V [43]) covered by a custom SFA layer, aligned with the sensor pixels and glued onto it.
The image sensor offers 10-bit digital readout at 60 frames per second (fps) with global shutter acquisition. The size of each pixel is 5.3 × 5.3 μm2, for a spatial resolution of 1280 × 1024 pixels. A Field Programmable Gate Array (FPGA) receives the digitized uncompressed data from the sensor, organizes it as a video stream and transmits it to a computer via the Ethernet UDP protocol, or directly to a monitor through HDMI. This kind of architecture is suitable for an embedded camera, which could host intelligent on-board processing as in Lapray et al. [44], for future developments and applications. Finally, software running on a PC is used to receive, demosaic and save the data coming from the camera for analysis purposes.
Conventional sensors achieve color imaging with a CFA situated between the photodetectors and the microlens array. In our design, we do not remove the lens array before mounting the SFA, so the filters are mounted over the original lens array attached to the sensor (see Figure 2). This process differs from the technique used in industry but has been used in a few works [20]; however, these did not fix the filters onto the sensor.
The selected sensor provides a relatively good sensitivity in the NIR spectrum (quantum efficiency >50% at 860 nm), while keeping good performance in the visible spectrum (about 80%). Due to the generally low transmission factors of the Fabry-Perot filters, a good sensor sensitivity is important to keep the exposure time low, and thus to keep the maximum frame rate available for video purposes. The relative quantum efficiency of the bare sensor is shown in Figure 3b.
The customized matrix of filters is built by SILIOS Technologies [45]. SILIOS Technologies developed the COLOR SHADES® technology, allowing the manufacture of transmittance multispectral filters. COLOR SHADES® is based on the combination of thin film deposition and micro-/nano-etching processes onto a fused silica substrate. Standard micro-photolithography steps are used to define the cell geometry of the multispectral filter. COLOR SHADES® originally provides band-pass filters in the visible range from 400 nm to 700 nm. Through our collaboration, SILIOS developed the filters in the NIR range, combining their technology with a classical thin layer interference technology to realize the assembled filters. The filter transmittances have been extensively studied by Lapray et al. [17]. The SFA contains eight filters, referred to as {P1, P2, P3, P4, P5, P6, P7, IR}. Due to the constraints and difficulties of realizing the filters in practice, we did not aim at optimizing their distribution for any specific application, but concentrated on having a balanced sensor with equidistant peaks, at least in the visible part, which we controlled better.

2.2. Spatial Arrangement

The mosaic arrangement directly impacts the image resolution through the demosaicing process. In a mosaic pattern, each pixel captures only one value relative to one spectral sensitivity at a time. Other spectral band values can be estimated using the neighboring pixels of a given band. Increasing the number of spectral channels increases the sparsity of each channel's occurrence and makes demosaicing more difficult than in the CFA case.
Miao et al. [46,47] proposed a generic mosaicing and demosaicing algorithm that is the most comprehensive definition in the literature. Their method takes into account the probability of appearance of the channels, the spectral consistency and the uniformity of the distribution. The implementation is based on binary tree decomposition, given a number of spectral bands and the probability of appearance of each band. They also propose a demosaicing technique, where interpolation is performed in order, starting with the spectral band that has the highest probability of appearance.
Following their work, we define a periodic spatial distribution that promotes spectral information recovery, i.e., each channel has the same probability of occurrence, 1/8. The chosen filter arrangement is shown in Figure 4a; a sketch of such a periodic tiling is given below. A microscope image of this arrangement after manufacturing the filter is shown in Figure 4b.
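As a sketch of such a periodic arrangement, the snippet below tiles a moxel over the sensor grid and extracts the sparse per-band samples. The 4 × 2 layout used here is a hypothetical stand-in: the actual arrangement is the one shown in Figure 4a.

```python
import numpy as np

# Hypothetical 4 x 2 moxel giving each of the 8 bands a 1/8 occurrence
# probability; the real arrangement is the one of Figure 4a.
MOXEL = np.array([[0, 1, 2, 3],
                  [4, 5, 6, 7]])

def sfa_mask(height, width, moxel=MOXEL):
    """Tile the moxel periodically to get the band index of every pixel."""
    reps = (height // moxel.shape[0] + 1, width // moxel.shape[1] + 1)
    return np.tile(moxel, reps)[:height, :width]

def split_bands(raw, mask, k=8):
    """Return k sparse planes; unsampled positions are NaN, to be demosaiced."""
    planes = np.full((k,) + raw.shape, np.nan, dtype=np.float32)
    for b in range(k):
        planes[b][mask == b] = raw[mask == b]
    return planes
```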
The manufacturing process required 16 (4 × 4) adjacent photosensitive elements per filter, so each color square has an area of 21.2 × 21.2 μm2 (4 × 5.3 μm on each side), within some uncertainty. At the corners of the filter layer, marks permit the identification of pixel positions. The filters are then positioned over the sensor by active alignment, where an image recorded from the sensor is magnified so that we could see in real time when the filters were aligned. The filters are glued on top of the sensor in the same process. The transmittance of the glue is shown in Figure 3a; its effect was not studied in the work of Lapray et al. [17]. The transmittance of the glue is consistent within ±2% between 400 and 1100 nm and exceeds 95% for the thickness used. It is interesting to notice that the glue acts as a UV-cut filter and limits the sensor noise that could come from UV radiation.
Once this is done, the sensor is combined with our camera system and can be used.

3. Spectral Characterization

This section considers the characterization of the spectral sensitivities of the sensor. To this aim, we propose a pre-processing of the measured data before investigating the spatio-spectral properties of the sensor.

3.1. Pre-Processing

The pre-processing includes a dark correction to account for dark noise and a downsampling of the image to account for cross-talk, leakage and inaccuracy in filter realization. This is done at the expense of the spatial resolution.

3.1.1. Dark Master

At a specific integration time, we create a dark master, $I_{Dark}$, based on a set of $N = 10$ dark images $I_{d_n}$, with $n$ an integer such that $n \in [1, 10]$. For each pixel, we select the median value over the set, as in Equation (1).
$$I_{Dark}(i,j) = \operatorname{median}_{n}\, I_{d_n}(i,j) \qquad (1)$$
The resulting image $I_{Dark}$ is subtracted from all images taken with this integration time. In the following, all images and measurements have been corrected accordingly. When the subtraction gives negative values, we clip them to 0. This dark image correction is standard and is described in several works [48,49].
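A minimal sketch of this correction, assuming a stack of N dark frames captured at the same integration time as the images to correct:

```python
import numpy as np

def dark_master(dark_frames):
    """Per-pixel median over a stack of N dark frames (Equation (1))."""
    return np.median(np.asarray(dark_frames, dtype=np.float32), axis=0)

def dark_correct(image, master):
    """Subtract the dark master and clip negative values to 0."""
    return np.clip(image.astype(np.float32) - master, 0.0, None)
```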

3.1.2. Downsampling

Figure 4b shows inaccuracies in filter realization, and also that the NIR filter overlaps onto adjacent cells. In addition, the filter layer is positioned at some distance from the micro-lenses, which creates cross-talk on neighboring pixels. Without any pre-processing, we observed that the bands of the visible domain transmit a part of the intensity range in the NIR. The shape of the P1–P7 response curves seemed to be consistent with the infrared channel itself. In addition, the light that hit bands P1, P2, P3 and P4 appeared to pass through the wavelength range 780–1100 nm, and to a greater magnitude than for bands P5 and P6. We also observed that P7 was barely affected by this phenomenon, due to its position in the mosaic. This behavior is explained by the fact that the bands passing infrared light are located physically closer to the pixels of the infrared band. This effect highlights the technical difficulty of obtaining good filters and alignment, physically uncorrelated and without overlap between materials. In order to denoise these data, we decided to sacrifice the contiguous pixels, at the expense of the spatial resolution of our camera. As shown in Figure 5, we take the four center pixels of each channel and average them to build a new downsampled image; a sketch of this operation follows. The spatial resolution of the sensor then becomes 320 × 256 pixels. This pre-processing provided a noticeable improvement and confirmed our hypothesis on leakage, cross-talk and spatial filter pollution. The reader can refer to Figure A1 and Figure A2 in the Appendix to see how the processing improved the quality of the sensor.
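A sketch of this downsampling, assuming 4 × 4 pixel filter cells and keeping the 2 × 2 central pixels of each cell (the four center pixels of Figure 5):

```python
import numpy as np

def downsample_sfa(raw, cell=4):
    """Average the four center pixels of each cell x cell filter block.

    A 1280 x 1024 raw frame becomes a 320 x 256 image: one value per filter.
    """
    h, w = raw.shape
    # View the frame as a grid of cell x cell blocks.
    blocks = raw.reshape(h // cell, cell, w // cell, cell)
    c = cell // 2
    # Keep only the 2 x 2 central pixels of each block and average them.
    center = blocks[:, c - 1:c + 1, :, c - 1:c + 1]
    return center.mean(axis=(1, 3))
```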

3.2. Spectral Sensitivities

3.2.1. Spectral Characterization

We measure the relative spectral response of the camera system in a controlled white-room environment. The measurement system is composed of a light source based on the halogen quartz lamp of the OL 740-20D/UV Source Attachment and of an OL 750-M-D Double Monochromator that includes spherical mirrors to concentrate and collimate the light. Both are from the Gooch & Housego Company (http://goochandhousego.com/).
We sweep the wavelength of the incoming light in steps of 10 nm from 380 nm to 1100 nm. We capture a picture for each wavelength with an integration time of 0.503 ms, which avoids saturation of any channel while maximizing the incoming signal to limit noise. We repeat the procedure for two sets of captures in order to minimize measurement errors. These two sets are averaged after the pre-processing is applied.
For this integration time, the calibration of the camera consists of the following actions:
  • Create a Dark Master image for the given exposure time as described in Section 3.1.1.
  • Downsample and pre-process the images as described in Section 3.1.1 and Section 3.1.2.
  • Capture 2 sets of images of monochromatic light.
  • Average the 2 image sets.
  • Select a square of 84 × 84 pixels at the center of each image, where a small angular inaccuracy would be negligible and where the monochromatic light is assumed to be uniform according to the specifications of our devices, with a large safety margin.
  • Sort out pixels by filter type and apply light source monochromator calibration to the data.
  • Average the curves from the 84 × 84 pixels.
  • Normalize the curves by the highest value. By doing so, we preserve the ratio of efficiencies between channels, assuming a linear sensor.
Finally, using this calibration technique, we obtain the curves of the effective camera response with filters, shown in Figure 6; a condensed sketch of the procedure is given below.
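The per-band curve extraction can be condensed as in the following sketch, which assumes dark-corrected, downsampled monochromator frames stacked by wavelength; the `source_power` calibration factors of the monochromator are a hypothetical input here.

```python
import numpy as np

def band_sensitivities(frames, mask, source_power, k=8, win=84):
    """Relative sensitivities per band from a monochromator sweep.

    frames:       (n_wavelengths, H, W) dark-corrected, downsampled images
    mask:         (H, W) band index of each downsampled pixel
    source_power: (n_wavelengths,) calibration of the light source
    """
    n, H, W = frames.shape
    # Central win x win window, where the light is assumed uniform.
    r0, c0 = (H - win) // 2, (W - win) // 2
    crop = frames[:, r0:r0 + win, c0:c0 + win]
    m = mask[r0:r0 + win, c0:c0 + win]
    curves = np.empty((k, n))
    for b in range(k):
        # Average the pixels of band b, corrected by the source emission.
        curves[b] = crop[:, m == b].mean(axis=1) / source_power
    # Normalize by the single global maximum so that the inter-channel
    # efficiency ratios are preserved (linear sensor assumed).
    return curves / curves.max()
```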

3.2.2. Analysis

We study the spectral interference between the camera sensitivities of the spectral bands, as well as the spatial uniformity over the sensor.
We compute the interference values, Θ, between each pair of spectral profiles in Table 1. The mutual interferences are quantified following previous work [50], extended from filters to sensitivities (we modified the integral boundaries to include the multi-modalities of the sensitivities in the evaluation). The interference coefficient is computed as the ratio of the overlapping area of two sensitivities to the area of one of them, as in Equation (2):
$$\Theta = \frac{\int_{380}^{\lambda_c} S_i(\lambda)\, d\lambda + \int_{\lambda_c}^{1100} S_j(\lambda)\, d\lambda}{\int_{380}^{1100} S_j(\lambda)\, d\lambda} \qquad (2)$$
where 380 and 1100 nm are the limits of the wavelength interval of interest, and $\lambda_c$ is the wavelength at the intersection of $S_i$ and $S_j$, evaluated manually from the curves in Figure 6.
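Numerically, with sensitivities sampled every 10 nm on [380, 1100] nm, Equation (2) reduces to a sketch like the following; the crossing wavelength is an input read off Figure 6.

```python
import numpy as np

def interference(S_i, S_j, wavelengths, lambda_c):
    """Interference coefficient Theta of Equation (2).

    S_i, S_j: sensitivities sampled at `wavelengths` (e.g., 380:10:1100 nm)
    lambda_c: crossing wavelength of the two curves, read off Figure 6
    """
    left = wavelengths <= lambda_c   # [380, lambda_c]
    right = wavelengths >= lambda_c  # [lambda_c, 1100]
    overlap = (np.trapz(S_i[left], wavelengths[left]) +
               np.trapz(S_j[right], wavelengths[right]))
    return overlap / np.trapz(S_j, wavelengths)
```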
We observe that, in the visible, band P7 has a harmonic transmittance peak that also passes light near 400 nm; this is typical Fabry-Perot behavior. It increases the correlation between P1 and P7. The rest of the visible bands show the expected correlation. Despite the pre-processing, there is still noticeable cross-talk and leakage in the NIR on bands P1–P4. The NIR channel shows a noticeable sensitivity in the visible, which is a critical limitation for computer vision applications that would benefit from a good separability between visible and NIR. This is inherently a problem of poor control in filter realization. This leakage and visible-NIR pollution may be handled and corrected with post-processing similar to what is done in works such as Sadeghipoor et al. [29].
We also investigate qualitatively the spatial uniformity of the filters over our square of 84 × 84 pixels. Results are shown in Figure 7, where we plot the sensitivities of all pixels within the center window, grouped by spectral band. We observe a reasonably good consistency for a single prototype.
A quantitative analysis is performed by computing, every 10 nm, the variance of the pixel curves around the average curve, and then averaging these variances; see the sketch below. Results are shown in Table 2. Filters closer to the NIR filter seem to show more variance, which may be explained by some leakage, as discussed in the pre-processing step. We also observe that the bands showing a larger variance have peak sensitivities that are shifted in Figure 7. This is not easy to explain definitively, since it may be related to the 10 nm measurement sampling or to instability in the filter realization technology.
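The uniformity measure of Table 2 then reduces to a few lines, assuming the per-band curves are stacked as rows:

```python
import numpy as np

def average_variance(curves):
    """Mean over wavelengths of the pixel-to-pixel variance (Table 2).

    curves: (n_pixels, n_wavelengths) sensitivities of one band, sampled
            every 10 nm over the central 84 x 84 window.
    """
    return np.var(curves, axis=0).mean()
```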
In addition, the quantitative impact of the variance in sensitivities on multispectral image quality remains to be analyzed. In general, both drawbacks, spatial uniformity and quality of filter realization, will be overcome to some extent with the development of the technology and the industrialization of the process.

4. Multispectral Imaging

In this section we demonstrate the capability of the sensor to capture multispectral images.

4.1. Energy Balance

Energy balance is important for single-sensor spectral imaging [39] in order to minimize noise and balance it between channels. Indeed, one channel may become saturated while another does not receive enough incoming light, which would critically impair the application.
In Table 3, we show the relative acquisition values for a perfect diffuser illuminated by different illuminants. For each tested illuminant shown in Figure 8, the results are normalized by the maximum value of the visible bands. We call $\rho_p$ the response of the camera according to a simple model of image formation assuming a perfect diffuser reflectance, as defined in Equation (3):
$$\rho_p = \int_{\lambda_{min}}^{\lambda_{max}} I(\lambda)\, S_p(\lambda)\, d\lambda \qquad (3)$$
where $I(\lambda)$ is the spectral emission of the tested illuminant, $S_p(\lambda)$ is the camera response shown in Figure 6 and p is the index of the spectral band.
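A sketch of Equation (3) under the same 10 nm sampling, with the normalization by the maximum visible band used in Table 3:

```python
import numpy as np

def relative_responses(illuminant, S, wavelengths, n_visible=7):
    """Relative responses rho_p of Equation (3) for a perfect diffuser.

    illuminant: (n,) spectral emission I(lambda) of the tested illuminant
    S:          (8, n) camera sensitivities of P1..P7 and IR (Figure 6)
    Returns the responses normalized by the maximum of the visible bands,
    as in Table 3.
    """
    rho = np.trapz(illuminant * S, wavelengths, axis=1)
    return rho / rho[:n_visible].max()
```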
The Commission Internationale de l'Eclairage (CIE) standard illuminants cannot be used here, since we also consider the NIR part of the spectrum, which is not yet described by these standards. Thus, we selected and computed alternative illuminations: a measurement of solar emission at ground level, a measurement of a D65 simulator, the computed theoretical black body emission (illuminant A) and a measurement of its practical tungsten realization. In addition, we used illuminant E as a reference.
We note that the energetic distribution is reasonably well balanced in the visible range under natural exposures, since the variance between the spectral bands is acceptable, by comparison with typical RGB cameras, for all tested illuminants. All these results can indeed be compared to the typical curves of the RGB Sinarback camera [51], where the camera response variance is considered good enough for the energy balance of an RGB device. When it comes to the joint acquisition of visible and NIR, we note the quite large difference between values of ρ. Such acquisition may benefit from an HDR-like acquisition process, but this would require multiple captures with different integration times. Nevertheless, we verified the feasibility of the set-up, at the expense of noise.

4.2. Images Acquired

Images were captured to illustrate the practical results of our system. They were taken under the D65 simulator, whose relative spectral emission is shown in Figure 8. This illuminant proved not to be very spatially uniform, and we implemented no flat-field correction, so spatial non-uniformity in the pictures may be due to both lens and illumination effects. Figure 9 and Figure 10 show mosaiced raw images of two scenes captured with the camera. By zooming into the pixel pattern, one can clearly distinguish the moxel arrangement defined in Figure 4a.
Images were demosaiced with the binary tree algorithm of Miao et al. [46,47] as a benchmark. Figure 11 and Figure 12 show the 8 bands reconstructed by the demosaicing process for scenes 1 and 2.
It may be easier to visualize the data as color images. For color visualization, we performed a colorimetric calibration of the sensor based on the measured reflectances of the Gretag MacBeth color checker between 380 and 1100 nm. We used the CIE 1931 2° observer and adequate CIE computations to compute the tristimulus values at every pixel, and then computed sRGB data. The model is a straightforward linear model that computes XYZ values from the 8 sensor values. The illuminant data used are those of the D65 simulator. The color images of the scenes are shown in Figure 13 and Figure 14.
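A sketch of such a linear colorimetric calibration is given below, assuming the patch reflectances, the D65-simulator emission and the CIE observer have already been reduced to camera responses and XYZ values per patch; the 3 × 8 matrix is fit by least squares, and the XYZ-to-sRGB step follows the standard transform.

```python
import numpy as np

def fit_color_matrix(camera_vals, xyz_vals):
    """Least-squares 3 x 8 matrix M mapping 8 sensor values to XYZ.

    camera_vals: (n_patches, 8) camera responses to the color checker
    xyz_vals:    (n_patches, 3) XYZ of the patches under the D65 simulator
    """
    M, *_ = np.linalg.lstsq(camera_vals, xyz_vals, rcond=None)
    return M.T  # xyz = M @ camera_vector

def apply_color_matrix(msi, M):
    """Per-pixel XYZ from an (H, W, 8) demosaiced multispectral image."""
    return np.einsum('ij,hwj->hwi', M, msi)

def xyz_to_srgb(xyz):
    """Standard XYZ (D65) to sRGB transform with gamma encoding."""
    M_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])
    rgb = np.einsum('ij,hwj->hwi', M_srgb, xyz).clip(0.0, 1.0)
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * rgb ** (1 / 2.4) - 0.055)
```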

5. Conclusions

In this work, we presented and characterized a multispectral camera based on SFA. Our contribution lies in the practical study of an SFA design and its real implementation.
This system shows several advantages compared to existing multispectral capture solutions: it provides exact registration of the images; it can be low cost compared to current MSI systems, with compactness and robustness advantages; and it opens opportunities for on-the-fly analysis and video processing. This kind of design could be suitable for many types of CMOS/CCD sensor applications, regardless of the resolution, the frame rate or the implemented pixel technology.
However, the technique of mounting the filters directly on the lens array can increase the amount of optical cross-talk. Indeed, the filter array is spaced relatively far from the lens array (in part due to the glue): a photon first interacts with the filter matrix before interacting with the lens array. Future directions of work include building a setup that allows mounting the filters directly on the sensor.
We demonstrated the feasibility of the system and developed it up to color image representation. We provide our experimental data and a couple of acquired scenes to the community. The provided data and images may be used to benchmark methods and algorithms related to multispectral image acquisition, surface object property estimation (reflectance reconstruction) and demosaicing, as well as denoising and image restoration. The data may also be used for pseudo-real simulation with a ground-truth base.
Designing optimal camera peak sensitivities for specific applications becomes possible now that we have demonstrated the practical realization of a working system. This may lead to interesting developments of the technology in the future and to new methodologies to tackle open problems. In addition, further work on deblurring and other optical corrections may be considered, as well as significant contributions in optimizing the filters for specific applications or for the energy balance of the sensor.

Supplementary Materials

The following are available online at www.mdpi.com/1424-8220/16/7/993/s1, Excel S1: SensorSensitivities; Excel S2: IlluminantData; Figure S1: Scene1-D65-raw_8_bits-16-082-Preprocessed; Figure S2: Scene2-D65-raw_8_bits-8-049-Preprocessed; Figure S3: Demosaiced-binaryTree-Preprocessed-Scene1-D65-raw_8_bits-16-082; Figure S4: Demosaiced-binaryTree-Preprocessed-Scene1-D65-raw_8_bits-8-049; Figure S5: Scene1-D65-color_8_bits-16-082-Preprocessed; Figure S6: Scene2-D65-color_8_bits-8-049-Preprocessed.

Acknowledgments

The authors thank the Open Food System project for funding. Open Food System is a research project supported by Vitagora, Cap Digital, Imaginove, Aquimer, Microtechnique and Agrimip, funded by the French State and the Franche-Comté Region as part of The Investments for the Future Programme managed by BPIfrance, www.openfoodsystem.fr. The authors also thank the EU and the H2020 EXIST project.

Author Contributions

Main contributions are distributed as follows; the percentages are given as a rough indication only. Design and conception: J.-B.T. (45%), P.G. (45%); Methodology: J.-B.T. (60%), P.-J.L. (20%), P.G. (10%), C.C. (10%); Measurements: J.-B.T. (40%), P.-J.L. (40%), C.C. (20%); State of the art: J.-B.T. (70%), P.-J.L. (10%), P.G. (20%); Analysis: J.-B.T. (60%), P.-J.L. (10%), P.G. (20%), C.C. (10%).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This appendix gives deeper insight into the effect of the proposed pre-processing and serves as a qualitative evaluation and justification of this process. Raw images are pre-processed in our camera framework to produce a better image by taking into account a few observations, which are discussed in the following.
Figure A1 shows the sensitivities of each pixel of the sensor, sorted by the type of filter that covers them. These are data without any processing. From these curves, we make three observations.
  • We observe a large offset common to all channels at shorter wavelengths.
  • We observe a large band-pass behavior in the NIR for the bands that have peaks in the visible (Figure A1a–g). The behavior of some pixels is quite different in this part.
  • We observe that the channel that has its peak in the NIR (Figure A1h) has a huge variance compared to the visible spectral bands.
Thus, we provide the following analysis:
  • The large offset common to all channels is higher at short wavelengths. This is related and proportional to the weakness, in this range of wavelengths, of the light source used in the experiment, shown in Figure A2. We argue that this is a multiplicative contribution of the dark noise once the energy of the light source is accounted for. It can be corrected by applying a dark noise correction to all acquired images.
  • Regarding the specific behavior in the NIR sensitivity of the visible bands, we argue that it is directly related to the proximity of an infrared pixel. It is induced by an inaccuracy in NIR filter realization, as can be seen in Figure 4b: the NIR filter overlaps onto adjacent cells. In addition, the filter layer is positioned at some distance from the micro-lenses, which creates cross-talk on neighboring pixels. Without any pre-processing, we observed that the bands of the visible domain transmit a part of the intensity range in the NIR. The shape of the P1–P7 response curves (Figure A1a–g) seemed to be rather consistent with the NIR channel itself. In addition, the light that hits bands P1, P2, P3 and P4 appeared to pass through the wavelength range 780–1100 nm, and to a greater magnitude than for bands P5 and P6. We also observed that P7 was barely affected by this phenomenon, due to its position in the mosaic. This behavior is explained by the fact that the bands located physically closer to the pixels of the NIR band are affected by it. This effect can be partially corrected by selecting the pixels at the center of each filter and discarding the others.
  • The large variance in the NIR channel is due to the process of thin layer deposition, which differs from the micro-/nano-etching of the visible bands. Different behaviors can be observed, sliding from a high sensitivity in the visible to a lower one. This resembles the transmittance variation of filters obtained by gradient thin layer deposition; indeed, we could explain this behavior by a graduated thickness of the NIR filter. This effect can be partially corrected by selecting the pixels at the center of the NIR filters, where the thin layer is assumed flat and uniform, and discarding the others.
The pre-processing includes a dark correction to account for dark noise, and a downsampling to account for cross-talk, leakage and inaccuracy in filter realization, at the expense of the spatial resolution. The benefits can be clearly observed in Figure 7. This result confirms our observations.
Figure A1. Spectral variation of camera sensitivities over sensor pixels, without applying any pre-processing on the raw images. In red, the average curve per band. (a) Sensitivity 1 over pixels; (b) Sensitivity 2 over pixels; (c) Sensitivity 3 over pixels; (d) Sensitivity 4 over pixels; (e) Sensitivity 5 over pixels; (f) Sensitivity 6 over pixels; (g) Sensitivity 7 over pixels; (h) Sensitivity 8 over pixels.
Figure A2. Relative energy emission by wavelength of the light source used in the measurements. The light source has been measured by using the OL DH-300EC silicon detector.

References

  1. Kong, L.; Sprigle, S.; Yi, D.; Wang, F.; Wang, C.; Liu, F. Developing handheld real time multispectral imager to clinically detect erythema in darkly pigmented skin. Proc. SPIE 2010, 7557. [Google Scholar] [CrossRef]
  2. Shrestha, S.; Deleuran, L.C.; Olesen, M.H.; Gislum, R. Use of Multispectral Imaging in Varietal Identification of Tomato. Sensors 2015, 15, 4496–4512. [Google Scholar] [CrossRef] [PubMed]
  3. Smith, M.O.; Ustin, S.L.; Adams, J.B.; Gillespie, A.R. Vegetation in deserts: I. A regional measure of abundance from multispectral images. Remote Sens. Environ. 1990, 31, 1–26. [Google Scholar] [CrossRef]
  4. Park, B.; Lawrence, K.C.; Windham, W.R.; Smith, D.P. Multispectral imaging system for fecal and ingesta detection on poultry carcasses. J. Food Process Eng. 2004, 27, 311–327. [Google Scholar] [CrossRef]
  5. Vagni, F. Survey of Hyperspectral and Multispectral Imaging Technologies (Etude sur les Technologies D’imagerie Hyperspectrale et Multispectrale). Available online: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA473675 (accessed on 1 April 2016).
  6. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065, 20 July 1976. [Google Scholar]
  7. Vrhel, M.J.; Trussell, H.J. Filter considerations in color correction. IEEE Trans. Image Process. 1994, 3, 147–161. [Google Scholar] [CrossRef] [PubMed]
  8. Vrhel, M.J.; Trussell, H.J. Optimal color filters in the presence of noise. IEEE Trans. Image Process. 1995, 4, 814–823. [Google Scholar] [CrossRef] [PubMed]
  9. Barnard, K.; Funt, B. Camera characterization for color research. Color Res. Appl. 2002, 27, 152–163. [Google Scholar] [CrossRef]
  10. Quan, S. Evaluation and Optimal Design of Spectral Sensitivities for Digital Color Imaging. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, April 2002. [Google Scholar]
  11. Sadeghipoor Kermani, Z.; Lu, Y.; Süsstrunk, S. Optimum Spectral Sensitivity Functions for Single Sensor Color Imaging. Proc. SPIE 2012. [Google Scholar] [CrossRef]
  12. Wang, X.; Pedersen, M.; Thomas, J.B. The influence of chromatic aberration on demosaicking. In Proceedings of the 2014 5th European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 December 2014; pp. 1–6.
  13. Wang, X.; Green, P.J.; Thomas, J.; Hardeberg, J.Y.; Gouton, P. Evaluation of the Colorimetric Performance of Single-Sensor Image Acquisition Systems Employing Colour and Multispectral Filter Array. In Proceedings of the 5th International Workshop, CCIW 2015, Saint Etienne, France, 24–26 March 2015.
  14. Li, X.; Gunturk, B.; Zhang, L. Image demosaicing: A systematic survey. Proc. SPIE 2008. [Google Scholar] [CrossRef]
  15. Ramanath, R.; Snyder, W.E.; Yoo, Y.; Drew, M.S. Color image processing pipeline. IEEE Signal Process. Mag. 2005, 22, 34–43. [Google Scholar] [CrossRef]
  16. Hagen, N.; Kudenov, M.W. Review of snapshot spectral imaging technologies. Opt. Eng. 2013, 52, 090901. [Google Scholar] [CrossRef]
  17. Lapray, P.J.; Wang, X.; Thomas, J.B.; Gouton, P. Multispectral Filter Arrays: Recent Advances and Practical Implementation. Sensors 2014, 14, 21626–21659. [Google Scholar] [CrossRef] [PubMed]
  18. Frey, L.; Masarotto, L.; Armand, M.; Charles, M.L.; Lartigue, O. Multispectral interference filter arrays with compensation of angular dependence or extended spectral range. Opt. Expr. 2015, 23, 11799–11812. [Google Scholar] [CrossRef] [PubMed]
  19. Vial, B. Study of Open Electromagnetic Resonators by Modal Approach. Application to Infrared Multispectral Filtering. Ph.D. Thesis, Ecole Centrale Marseille, Marseille, France, December 2013. [Google Scholar]
  20. Park, H.; Crozier, K.B. Multispectral imaging with vertical silicon nanowires. Sci. Rep. 2013, 3, 2460. [Google Scholar] [CrossRef] [PubMed]
  21. Najiminaini, M.; Vasefi, F.; Kaminska, B.; Carson, J.J.L. Nanohole-array-based device for 2D snapshot multispectral imaging. Sci. Rep. 2013, 3, 2589. [Google Scholar] [CrossRef] [PubMed]
  22. Monno, Y.; Kikuchi, S.; Tanaka, M.; Okutomi, M. A Practical One-Shot Multispectral Imaging System Using a Single Image Sensor. IEEE Trans. Image Process. 2015, 24, 3048–3059. [Google Scholar] [CrossRef] [PubMed]
  23. Murakami, Y.; Yamaguchi, M.; Ohyama, N. Hybrid-resolution multispectral imaging using color filter array. Opt. Expr. 2012, 20, 7173–7183. [Google Scholar] [CrossRef] [PubMed]
  24. Martínez, M.A.; Valero, E.M.; Hernández-Andrés, J.; Romero, J.; Langfelder, G. Combining transverse field detectors and color filter arrays to improve multispectral imaging systems. Appl. Opt. 2014, 53, C14–C24. [Google Scholar] [CrossRef] [PubMed]
  25. Geelen, B.; Tack, N.; Lambrechts, A. A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic. Proc. SPIE 2014. [Google Scholar] [CrossRef]
  26. Wang, X.; Thomas, J.B.; Hardeberg, J.Y.; Gouton, P. Median filtering in multispectral filter array demosaicking. Proc. SPIE 2013. [Google Scholar] [CrossRef]
  27. Wang, X.; Thomas, J.B.; Hardeberg, J.Y.; Gouton, P. Discrete wavelet transform based multispectral filter array demosaicking. In Proceedings of the Colour and Visual Computing Symposium (CVCS), Gjovik, Norway, 5–6 September 2013; pp. 1–6.
  28. Wang, C.; Wang, X.; Hardeberg, J.Y. A Linear Interpolation Algorithm for Spectral Filter Array Demosaicking. In Proceedings of the 6th International Conference on Image and Signal Processing, Cherbourg, France, 30 June–2 July 2014; Volume 8509, pp. 151–160.
  29. Sadeghipoor, Z.; Lu, Y.M.; Süsstrunk, S. Correlation-based joint acquisition and demosaicing of visible and near-infrared images. In Proceedings of the 18th IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, 11–14 September 2011; pp. 3165–3168.
  30. Shrestha, R.; Hardeberg, J.Y.; Khan, R. Spatial arrangement of color filter array for multispectral image acquisition. Proc. SPIE 2011. [Google Scholar] [CrossRef]
  31. Mihoubi, S.; Losson, O.; Mathon, B.; Macaire, L. Multispectral demosaicing using intensity-based spectral correlation. In Proceedings of the 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA), Orleans, France, 10–13 November 2015; pp. 461–466.
  32. Sadeghipoor Kermani, Z. Joint Acquisition of Color and Near-Infrared Images on a Single Sensor. Ph.D. Thesis, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, March 2015. [Google Scholar]
  33. Monno, Y.; Tanaka, M.; Okutomi, M. N-to-SRGB Mapping for Single-Sensor Multispectral Imaging. In Proceedings of the 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), Santiago, Chile, 7–13 December 2015; pp. 66–73.
  34. Lu, Y.M.; Fredembach, C.; Vetterli, M.; Süsstrunk, S. Designing color filter arrays for the joint capture of visible and near-infrared images. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3797–3800.
  35. Chen, Z.; Wang, X.; Liang, R. RGB-NIR multispectral camera. Opt. Expr. 2014, 22, 4985–4994. [Google Scholar] [CrossRef] [PubMed]
  36. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Simultaneous capturing of RGB and additional band images using hybrid color filter array. Proc. SPIE 2014. [Google Scholar] [CrossRef]
  37. Tang, H.; Zhang, X.; Zhuo, S.; Chen, F.; Kutulakos, K.N.; Shen, L. High Resolution Photography with an RGB-Infrared Camera. In Proceedings of the 2015 IEEE International Conference on Computational Photography (ICCP), Houston, TX, USA, 24–26 April 2015; pp. 1–10.
  38. Martinello, M.; Wajs, A.; Quan, S.; Lee, H.; Lim, C.; Woo, T.; Lee, W.; Kim, S.S.; Lee, D. Dual Aperture Photography: Image and Depth from a Mobile Camera. In Proceedings of the 2015 IEEE International Conference on Computational Photography (ICCP), Houston, TX, USA, 24–26 April 2015; pp. 1–10.
  39. Péguillet, H.; Thomas, J.B.; Gouton, P.; Ruichek, Y. Energy balance in single exposure multispectral sensors. In Proceedings of the Colour and Visual Computing Symposium (CVCS), Gjovik, Norway, 5–6 September 2013; pp. 1–6.
  40. Monno, Y.; Kitao, T.; Tanaka, M.; Okutomi, M. Optimal spectral sensitivity functions for a single-camera one-shot multispectral imaging system. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 2137–2140.
  41. Ng, D.Y.; Allebach, J.P. A subspace matching color filter design methodology for a multispectral imaging system. IEEE Trans. Image Process. 2006, 15, 2631–2643. [Google Scholar] [PubMed]
  42. Styles, I.B. Selection of optimal filters for multispectral imaging. Appl. Opt. 2008, 47, 5585–5591. [Google Scholar] [CrossRef] [PubMed]
  43. E2v Technologies. EV76C661 BW and Colour CMOS Sensor. 2009. Available online: www.e2v.com (accessed on 1 April 2016).
  44. Lapray, P.J.; Heyrman, B.; Ginhac, D. HDR-ARtiSt: An adaptive real-time smart camera for high dynamic range imaging. J. Real-Time Image Process. 2014. [Google Scholar] [CrossRef]
  45. Silios Technologies. MICRO-OPTICS Supplier. Available online: http://www.silios.com/ (accessed on 1 April 2016).
  46. Miao, L.; Qi, H.; Ramanath, R.; Snyder, W.E. Binary Tree-based Generic Demosaicking Algorithm for Multispectral Filter Arrays. IEEE Trans. Image Process. 2006, 15, 3550–3558. [Google Scholar] [CrossRef] [PubMed]
  47. Miao, L.; Qi, H. The design and evaluation of a generic method for generating mosaicked multispectral filter arrays. IEEE Trans. Image Process. 2006, 15, 2780–2791. [Google Scholar] [CrossRef] [PubMed]
  48. López-Álvarez, M.; Hernández-Andrés, J.; Romero, J.; Campos, J.; Pons, A. Calibrating the Elements of a Multispectral Imaging System. J. Imaging Sci. Technol. 2009, 53, 31102:1–31102:10. [Google Scholar]
  49. Mansouri, A.; Marzani, F.S.; Gouton, P. Development of a protocol for CCD calibration: Application to a multispectral imaging system. Int. J. Robot. Autom. 2005, 20, 94–100. [Google Scholar] [CrossRef]
  50. Zhang, J.; Lin, S.; Zhang, C.; Chen, Y.; Kong, L.; Chen, F. An evaluation method of a micro-arrayed multispectral filter mosaic. Proc. SPIE 2013. [Google Scholar] [CrossRef]
  51. Day, D.C. Spectral Sensitivities of the Sinarback 54 Camera. Available online: http://art-si.org/PDFs/Acquisition/TechnicalReportSinar_ss.pdf (accessed on 1 April 2016).
Figure 1. Overall architecture of the ad hoc imaging system. It is composed of four hardware blocks with dedicated features, from capture to processing. The video output can be visualized both before and after pre-processing, via the FPGA board and the PC application, respectively.
Figure 2. The mosaic arrangement is a moxel assembly over a monochrome sensor. The sensor matrix is composed of photodiodes and a microlens array. Each channel filter covers 4 × 4 sensor pixels.
Figure 3. Glue transmittance (a) and sensor efficiency of the CMOS sensor EV76C661 (b).
Figure 4. Visualization of the moxel spatial arrangement. (a) Moxel arrangement for the P1–P7 and NIR channels. It shows uniformly distributed samples as an instance of the binary tree algorithm of Miao et al. [46,47]; (b) Microscope image of the MSFA filters after manufacture and before mounting them on the sensor.
Figure 5. Pre-processing (downsampling) applied before using the pixel values of images. We select the four center pixels to filter the spatial non-uniformity related to each spectral band.
Figure 6. The actual relative spectral sensitivities of the SFA MSI system after pre-processing, as described in Section 3.1. This is the measured relative efficiency of each channel; no simulation is added. Curve data are provided as Supplementary Materials in file Excel S1.
Figure 7. Spectral variation of sensor sensitivities along the spatial dimension. In red, the average curve. Relative to the pixel response, some of the bands have a more or less good rejection in the IR wavelengths. This seems to be directly correlated with the adjacency of the filter to the IR pixels in the moxel arrangement. (a) Sensitivity 1 over pixels; (b) Sensitivity 2 over pixels; (c) Sensitivity 3 over pixels; (d) Sensitivity 4 over pixels; (e) Sensitivity 5 over pixels; (f) Sensitivity 6 over pixels; (g) Sensitivity 7 over pixels; (h) Sensitivity 8 over pixels.
Figure 8. Multiple illuminants used for the energy balance study given in Table 3. The tested wavelength range is 380–1000 nm. The D65 simulator spectral emission is used for the acquisition of the image database. Illuminant data are provided as Supplementary Materials in file Excel S2.
Figure 9. Raw image of scene 1 after applying the pre-processing described in Section 3.1. The integration time was 16.082 ms. The raw image is provided as Supplementary Materials in Figure S1.
Figure 10. Raw image of scene 2 after applying the pre-processing described in Section 3.1. The integration time was 8.049 ms. The raw image is provided as Supplementary Materials in Figure S2.
Figure 11. Image of scene 1 demosaiced by the Miao binary tree algorithm. Channels 1 to 8 are reconstructed to provide a full resolution multispectral image. These data are encapsulated in a single tiff file in the Supplementary Materials (Figure S3). (a) Channel P1; (b) Channel P2; (c) Channel P3; (d) Channel P4; (e) Channel P5; (f) Channel P6; (g) Channel P7; (h) Channel IR.
Figure 12. Image of scene 2 demosaiced by the Miao binary tree algorithm. Channels 1 to 8 are reconstructed to provide a full resolution multispectral image. These data are encapsulated in a single tiff file in the Supplementary Materials (Figure S4). (a) Channel P1; (b) Channel P2; (c) Channel P3; (d) Channel P4; (e) Channel P5; (f) Channel P6; (g) Channel P7; (h) Channel IR.
Figure 13. sRGB image of scene 1 computed according to a colorimetric calibration of the sensor. Color image is provided as Supplementary Materials in Figure S5.
Figure 14. sRGB image of scene 2 computed according to a colorimetric calibration of the sensor. Color image is provided as Supplementary Materials in Figure S6.
Table 1. Mutual spectral interference Θ coefficients of the spectral sensor efficiencies.

| i ∖ j | P1 | P2 | P3 | P4 | P5 | P6 | P7 | IR |
|---|---|---|---|---|---|---|---|---|
| P1 | 1.0000 | 0.7131 | 0.6331 | 0.5745 | 0.8218 | 0.8472 | 1.1280 | 0.1352 |
| P2 | 0.7162 | 1.0000 | 0.6534 | 0.5363 | 0.7506 | 0.7691 | 1.0292 | 0.1252 |
| P3 | 0.6161 | 0.6332 | 1.0000 | 0.6688 | 0.8517 | 0.8522 | 1.1084 | 0.1554 |
| P4 | 0.5853 | 0.5440 | 0.7001 | 1.0000 | 0.8865 | 0.8245 | 1.0676 | 0.1632 |
| P5 | 0.5737 | 0.5218 | 0.6110 | 0.6075 | 1.0000 | 0.4891 | 0.6001 | 0.1798 |
| P6 | 0.5808 | 0.5250 | 0.6003 | 0.5548 | 0.4802 | 1.0000 | 0.6869 | 0.2018 |
| P7 | 0.6261 | 0.5688 | 0.6321 | 0.5817 | 0.4771 | 0.5562 | 1.0000 | 0.2090 |
| IR | 1.0290 | 0.9667 | 1.0495 | 0.9707 | 0.9999 | 1.2021 | 1.7978 | 1.0000 |
Table 2. For each spectral band, we show the average of the variances, computed at each point of these curves, around the average curve.

| Bands | P1 | P2 | P3 | P4 | P5 | P6 | P7 | IR |
|---|---|---|---|---|---|---|---|---|
| Average variances | 0.0943 | 0.3585 | 0.4681 | 0.0666 | 0.0215 | 0.0089 | 0.0105 | 0.0198 |
Table 3. Relative values of the sensor response by filter ($\rho_p$), for a given input illuminant $I(\lambda)$ and a perfect diffuser. Illuminant E is extended to the NIR, and the D65 simulator used in image acquisition has been measured up to 1000 nm. All the illuminant emissions are shown in Figure 8.

| Illuminant | E | Tungsten | D65 Simu. | A (Extended) | Solar |
|---|---|---|---|---|---|
| R Sinarback | 0.47 | 0.70 | 0.40 | 0.66 | 0.45 |
| G Sinarback | 1 | 1 | 1 | 1 | 1 |
| B Sinarback | 0.82 | 0.46 | 0.85 | 0.50 | 0.79 |
| P1 | 0.97 | 0.62 | 0.93 | 0.66 | 0.87 |
| P2 | 0.98 | 0.69 | 0.99 | 0.73 | 0.99 |
| P3 | 0.88 | 0.79 | 0.81 | 0.82 | 0.87 |
| P4 | 1 | 0.95 | 1 | 0.98 | 1 |
| P5 | 0.87 | 0.88 | 0.79 | 0.90 | 0.88 |
| P6 | 0.85 | 1 | 0.63 | 1 | 0.82 |
| P7 | 0.73 | 0.85 | 0.55 | 0.84 | 0.64 |
| IR (380–780 nm) | 2.06 | 2.77 | 1.53 | 2.71 | 1.83 |
| IR (380–1000 nm) | 4.21 | 5.84 | 2.87 | 5.55 | 3.34 |
