High Sensitivity Snapshot Spectrometer Based on Deep Network Unmixing

School of Electronic Engineering and Optoelectronic Technology, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2020, 20(24), 7038; https://doi.org/10.3390/s20247038
Submission received: 20 October 2020 / Revised: 27 November 2020 / Accepted: 3 December 2020 / Published: 9 December 2020
(This article belongs to the Section Optical Sensors)

Abstract

Spectral detection provides rich spectral–temporal information and has wide applications. In our previous work, we proposed a dual-path sub-Hadamard-s snapshot Hadamard transform spectrometer (Sub-s HTS). To reduce the complexity of the system and improve its performance, we present a convolutional neural network-based method that recovers the light intensity distribution from the overlapped dispersive spectra instead of adding an extra light path to capture it directly. In this paper, we construct a network-based single-path snapshot Hadamard transform spectrometer (net-based HTS). First, we designed a light intensity recovery neural network (LIRNet), consisting of an unmixing module (UM) and an enhancement module (EM), to recover the light intensity from the dispersive image. Then, we used the reconstructed light intensity in place of the directly measured intensity to recover spectra with a high signal-to-noise ratio. Compared with the Sub-s HTS, the net-based HTS has a more compact structure and higher sensitivity. Extensive simulations and experiments demonstrate that, owing to its higher light throughput, the proposed net-based HTS reconstructs spectra with a better signal-to-noise ratio than the Sub-s HTS.

1. Introduction

Spectral detection is widely used in industry, remote sensing, the military and many other fields. However, the traditional slit spectrometer faces a trade-off between fast acquisition and a high signal-to-noise ratio (SNR): improving the response speed of the system inevitably shortens the acquisition time, which lowers the system SNR. To improve the SNR of detected spectra, researchers have introduced compressed sensing (CS) [1,2,3], computational slits [4,5] and deconvolution [6].
Compressed sensing is a representative approach to snapshot acquisition. It was first proposed by Donoho, Candes and Tao in 2006 [1,2]. In the same year, Brady et al. of Duke University proposed a new spectral imaging technique called the coded aperture snapshot spectral imager (CASSI) and developed several CASSI-based snapshot imaging spectrometers in the following years [7,8]. On the reconstruction side, greedy algorithms [4,5,6], L1-norm convex optimization [9,10,11], L1-norm non-convex optimization [12], Bayesian methods [13,14,15] and other reconstruction algorithms have been developed. However, each of these reconstruction algorithms has drawbacks. Greedy algorithms have low computational complexity, but their reconstruction quality is not ideal. L1-norm minimization has good reconstruction performance but high computational complexity. The Bayesian method lacks a strict theoretical guarantee. New reconstruction algorithms therefore need to be developed.
Based on the idea of compressed sensing, researchers proposed a static multimode multichannel spectrometer (MMS), which takes advantage of the noise-reduction performance of Hadamard coding and uses the non-negative least-squares method to reconstruct the spectrum directly [16,17,18]. This method achieves snapshot measurement while maintaining the advantage of high light throughput. However, compressed sensing poses an ill-posed inverse problem, and the measurement results are not stable enough, which limits its application.
Compared with multiplexing methods, these numerical approaches pose so-called ill-posed inverse problems and cannot provide a certain and robust SNR improvement. In our previous work [19], we proposed a dual-path sub-Hadamard-s snapshot Hadamard transform spectrometer (Sub-s HTS) based on the Hadamard transform spectrometer (HTS) [20,21]. In the Sub-s HTS, all incident light is encoded simultaneously by the coding matrix, and an extra imaging path is designed to measure the intensity distribution of the scene. The spectral measurement problem is thereby transformed into a positive definite problem, so a stable and reliable SNR enhancement can be obtained while maintaining snapshot acquisition.
However, our previous work [19] has several problems. First, the simultaneous acquisition of the light intensity and the dispersive image requires a dual optical path, which halves the overall light throughput and decreases the SNR. Second, the dual-camera images require pixel-level registration, which is difficult in practice and degrades the reconstruction results. Finally, the dual optical path increases the overall system size and reduces reliability. Therefore, how to complete the reconstruction with a single camera while preserving the quality of the reconstructed spectrum becomes the problem that must be solved.
In recent years, convolutional neural networks have shown strong fitting ability in many fields. To improve the light throughput of the system and reduce its complexity, we designed a convolutional neural network that recovers the light intensity distribution from the overlapped dispersive spectra, realizing single-path, high-sensitivity snapshot spectral measurement.
The main contributions of this paper are as follows:
(1)
A light intensity recovery network (LIRNet) is proposed to solve the spectral image unmixing problem and to acquire the light intensity distribution directly from an overlapped dispersive spectral image;
(2)
The feasibility of using reconstructed light intensity data for sub-Hadamard matrix spectral detection is proven;
(3)
Both simulated and experimental results demonstrate that the net-based scheme achieves reconstruction results similar to or even better than those of the dual-path scheme while improving the compactness of the system.

2. Design of Snapshot Spectrum Detection Framework

In our previous work [19], we proposed a dual-path snapshot Hadamard transform spectrometer. The implementation and scheme of the snapshot spectrometer are shown in Figure 1a,b, respectively.
The snapshot HTS contains two imaging paths: non-dispersive and dispersive imaging paths. The non-dispersive imaging path is employed to capture the light intensity at the coding aperture. The dispersive imaging path is employed to capture the overlapped dispersive spectra. We take the 7 × 7 Hadamard matrix as an example to explain the spectral dispersion overlapping process in the snapshot spectrometer and show it in Figure 2.
In Figure 2, $S_1$ is the 1st row of the Hadamard-S matrix, $I_1$ is the light intensity distribution of the 1st row of the scene, $f_1$ is the spectrum of the 1st row, $f_{11}$ is the spectrum of the 1st pixel in the 1st row, $m$ represents its range, and the zeros in the spectrum represent the shift invariance, which makes the processing easier to understand. The measurement of the $i$th row of the snapshot HTS is as follows:
$$g_i = (S_i \odot I_i)\, f_i + n_{snap_i} \tag{1}$$
where $g_i$ is the overlapped dispersive spectrum of the $i$th row, $S_i$ is the $i$th row of the Hadamard-S matrix, $I_i$ is the $i$th row of the light intensity distribution, $f_i$ is the $i$th row of the spectrum to be measured and $n_{snap_i}$ is the measurement noise of the $i$th row. If each row of the scene has the same spectrum, or we want to measure the average spectrum of each column, i.e., $f_1 = f_2 = \cdots = f_n$ or $f_{11} = f_{21} = \cdots = f_{n1} = \frac{1}{n}\sum_{i=1}^{n} f_{i1}$, the whole measurement of the snapshot HTS can be simplified as:
$$g = (S \odot I)\, f + n_{snap} = S_{snap}\, f + n_{snap} = (S - S_h)\, f + n_{snap} \tag{2}$$
where $S_{snap}$ is the normalized modulation intensity distribution (the sub-Hadamard-s matrix, or Sub-s matrix) and $S_h$ is the difference between the Hadamard-S matrix and the normalized modulation intensity distribution. Thus, once the light intensity distribution (and hence $S_{snap}$) is obtained, the spectra to be measured can be recovered through an inverse process; otherwise, the spectra cannot be recovered.
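To make the simplified measurement model concrete, the following minimal NumPy sketch simulates Equation (2) and the inverse recovery once the intensity distribution (and hence $S_{snap}$) is known. The 7 × 7 S-matrix construction, the random scene intensity, the noise level and all variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Sketch of the simplified snapshot measurement of Equation (2),
#   g = (S o I) f + n_snap = S_snap f + n_snap,
# and of the inverse recovery once S_snap is known. Sizes, the random
# scene and the noise level are illustrative only.
rng = np.random.default_rng(0)
n = 7

# 7 x 7 Hadamard-S matrix from a Sylvester Hadamard matrix of order 8
# (drop the first row/column, map +1 -> 0 and -1 -> 1).
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)
S = (1 - H8[1:, 1:]) / 2

I = rng.uniform(0.5, 1.0, size=(n, n))         # normalized scene intensity at the aperture
f = rng.uniform(0.0, 1.0, size=n)              # average column spectrum to be measured

S_snap = S * I                                 # element-wise modulation (Sub-s matrix)
g = S_snap @ f + rng.normal(0.0, 1e-3, n)      # overlapped measurement with additive noise

# With S_snap known, the problem is well posed and a least-squares inverse
# recovers the spectrum; without it the spectrum cannot be recovered.
f_hat = np.linalg.lstsq(S_snap, g, rcond=None)[0]
print(np.max(np.abs(f_hat - f)))               # small residual
```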
As shown in Figure 2, the extra non-dispersive imaging path is added to measure the light intensity distribution. This extra imaging path increases the complexity of the system and weakens the intensity of the measured overlapped spectra. If the light intensity can instead be recovered from the measured overlapped spectra (the red arrow in Figure 1b), the extra imaging path can be removed and the performance of the system improved. We therefore designed a light intensity recovery network (LIRNet) to recover the light intensity from the dispersive image. The overall structure of the network is shown in Figure 3.
In LIRNet, we first designed an unmixing module (UM) that approximates the inverse of the convolution process in Equation (3) below, reducing the mixture of input pixels and outputting coarse unmixed spectral images. We then used an enhancement module (EM) to reconstruct the light intensity with sharper edges and better details.

3. Network Setup

As shown in Figure 3, in the raw spectral image the intensity dispersion of each point is overlapped by the grating, so the captured spectral image can be expressed as the superposition of the spectral images of all bands. This process can be expressed by a convolution operator. Each pixel in the spectrum can be represented as:
$$f = \mathrm{conv}(S, k) \tag{3}$$
where $f$ is a single pixel in the spectral image, $k$ is a one-dimensional convolution kernel along the spectral direction, whose size equals the number of spectral bands, and $S$ is the high-dimensional spectral image.
From the perspective of the spectral dimension, the light intensity of a point is the sum of its dispersive spectral intensities:
$$f_s = \sum_{i=1}^{n} S_i \tag{4}$$
where $n$ is the number of spectral bands. This optical process is equivalent to the overlapping of the spectral images.
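For intuition, the following NumPy sketch simulates the overlap process of Equations (3) and (4) from a multispectral cube: each band is shifted along the dispersion axis and summed to form the overlapped image, while the plain band sum gives the light-intensity label. The cube size, the one-pixel-per-band shift and the random contents are assumptions made for illustration.

```python
import numpy as np

# Illustrative simulation of the band-overlap process: shift each band along the
# dispersion axis and sum (Equation (3)); the intensity label is the band sum
# (Equation (4)). Sizes and contents are made up for the example.
rng = np.random.default_rng(1)
H, W, n_bands = 128, 128, 31
cube = rng.uniform(0.0, 1.0, size=(H, W, n_bands))   # stand-in multispectral scene

# Overlapped dispersive image of size H x (W + n), as used for training in Section 3.
g = np.zeros((H, W + n_bands))
for b in range(n_bands):
    g[:, b:b + W] += cube[:, :, b]                   # one-pixel shift per band (assumed)

intensity = cube.sum(axis=2)                         # light-intensity label, Equation (4)
print(g.shape, intensity.shape)                      # (128, 159) (128, 128)
```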
The UM contains five convolution layers and a deconvolution layer, which extract the primary features along the spectral dimension. The EM is an encoder–decoder network that takes the output feature map of the UM and further enhances it to supplement visual information such as image contrast, brightness and texture features.
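As a rough illustration, the PyTorch sketch below approximates such a UM: one convolution along the dispersion axis collapses the widened image back to the scene width, a deconvolution lifts it to n spectral feature maps, and 3 × 3 conv + ReLU layers squeeze the features down to a coarse intensity map. The exact kernel sizes, paddings, channel counts and the class name `UnmixingModule` are assumptions loosely based on Table 1, not the authors' code; only the Non-bt-1D building block of the EM is sketched later, after the discussion of Table 1.

```python
import torch
import torch.nn as nn

class UnmixingModule(nn.Module):
    """Hedged sketch of the unmixing module (UM). Kernel sizes, paddings and
    channel counts are assumptions inferred from Table 1, not the original code."""

    def __init__(self, n_bands: int = 31):
        super().__init__()
        half, quarter, eighth = n_bands // 2, n_bands // 4, n_bands // 8
        # Collapse the 128 x (128 + n) dispersion width back to 128 columns.
        self.unmix = nn.Conv2d(1, 1, kernel_size=(1, n_bands + 1))
        # Lift the unmixed map to n spectral feature channels.
        self.expand = nn.ConvTranspose2d(1, n_bands, kernel_size=1)
        # 3 x 3 conv + ReLU layers that squeeze the features to one intensity map.
        self.refine = nn.Sequential(
            nn.Conv2d(n_bands, half, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(half, quarter, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(quarter, eighth, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(eighth, 1, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):              # x: (B, 1, 128, 128 + n_bands)
        x = self.unmix(x)              # -> (B, 1, 128, 128)
        x = self.expand(x)             # -> (B, n_bands, 128, 128)
        return self.refine(x)          # -> (B, 1, 128, 128) coarse intensity

um = UnmixingModule(n_bands=31)
print(um(torch.randn(1, 1, 128, 128 + 31)).shape)   # torch.Size([1, 1, 128, 128])
```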
While training the network, we used two types of loss function: an intensity loss (IL) and a spectrum loss (SL). The IL adopts the mean square error (MSE) with an online hard example mining (OHEM) [22] strategy, and the SL adopts the SNR of the reconstructed spectrum.
The Euclidean distance is expressed as $D_i = \|x_i - \hat{x}_i\|^2$, and the total number of samples is $N$; then:
$$\mathrm{Loss}(x_i, \hat{x}_i) = \frac{1}{N}\sum_{i=1}^{N} D_i \tag{5}$$
where $x_i$ and $\hat{x}_i$ are the real and reconstructed samples, respectively. In this formulation, samples with higher errors (high-frequency areas) and lower errors (low-frequency areas) are averaged together for network training, which makes optimization difficult in the training stage. To make the optimizer focus on the harder samples, Wu et al. proposed a hard-sample mining method for pixel-level classification tasks [22]. We adopted this idea and modified it for our intensity approximation task: we select the pixel samples with higher loss values as hard samples for training and ignore those with lower loss values, so that the network does not repeatedly learn samples in low-frequency areas and its performance in high-frequency areas improves. In this case, the loss function can be expressed as:
$$\mathrm{Loss} = \frac{1}{\sum_{i=1}^{N} \mathbb{1}\{D_i \geq t\}} \sum_{i=1}^{N} \mathbb{1}\{D_i \geq t\}\, D_i \tag{6}$$
where $t$ is the threshold, defined as the minimum loss value within the half of the samples that have larger loss values.
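A minimal PyTorch sketch of this OHEM-style intensity loss is given below: it computes the per-pixel squared error and averages only the hardest half of the pixels. The function name, the `keep_ratio` parameter and the top-k selection are illustrative choices, not the authors' exact implementation.

```python
import torch

def ohem_mse_loss(pred: torch.Tensor, target: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Sketch of the OHEM-based intensity loss: keep only the pixels with the
    largest squared errors (the hard samples) and average over them."""
    per_pixel = (pred - target) ** 2                # D_i for every pixel
    flat = per_pixel.flatten()
    k = max(1, int(keep_ratio * flat.numel()))      # number of hard samples to keep
    hard, _ = torch.topk(flat, k)                   # largest errors = hard samples
    return hard.mean()

# usage sketch with random stand-in tensors
pred = torch.rand(4, 1, 128, 128, requires_grad=True)
target = torch.rand(4, 1, 128, 128)
ohem_mse_loss(pred, target).backward()
```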
To make the intensity approximation better match the real physical process, we used the spectrum loss to further fine-tune the network:
$$\mathrm{Loss}_s = R \cdot \mathrm{SNR}\big(\mathrm{recon}(\hat{x}),\, f\big) \tag{7}$$
where $R$ is the reversal factor, $\mathrm{recon}(\hat{x})$ is the spectrum reconstructed from the overlapped measurement using the recovered intensity $\hat{x}$, and $f$ is the real spectrum.
We used 1650 groups of multispectral images of size 127 × 127 for training, 350 groups for validation and 200 groups not involved in training for testing. The images were padded to 128 × 128 for convenient training. Finally, a 128 × (128 + n) overlapped spectral image was constructed, where n is the number of spectral bands. The light intensity images obtained by superimposing all bands were used as the training labels.
In the UM, a convolution layer with a 1 × n kernel is used for unmixing, and convolution layers with 3 × 3 kernels and rectified linear unit (ReLU) [23] activations are used to extract the primary features. In the enhancement module, an encoder–decoder network extracts the low-level and high-level semantic information of the unmixed feature map to enhance the unmixed image. The structure of the enhancement module follows ERFNet [24], as shown in Table 1. The down-sampler layer is a pooling-based down-sampling step. Non-bt-1D denotes the factorized convolution that decomposes a 3 × 3 convolution into a pair of 1D convolutions, forming the non-bottleneck 1D structure shown in Figure 4. With a 33% reduction in parameters, it achieves the same learning ability and accuracy as the traditional non-bottleneck structure, improving network efficiency and contributing to real-time spectral analysis.
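For reference, a simplified PyTorch version of the non-bottleneck 1D block is sketched below, following the ERFNet design [24]: each 3 × 3 convolution is factorized into a 3 × 1 and a 1 × 3 convolution, with a residual connection around the block. Batch normalization is omitted and the dilation and dropout defaults are illustrative, so this is an approximation rather than the exact block used in the paper.

```python
import torch
import torch.nn as nn

class NonBottleneck1D(nn.Module):
    """Simplified non-bottleneck-1D block in the spirit of ERFNet [24]:
    factorized 3x1/1x3 convolutions with a residual connection."""

    def __init__(self, channels: int, dilation: int = 1, dropout: float = 0.0):
        super().__init__()
        d = dilation
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (3, 1), padding=(d, 0), dilation=(d, 1)), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, d), dilation=(1, d)),
            nn.Dropout2d(dropout),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # residual keeps the resolution unchanged

block = NonBottleneck1D(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```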

4. SNR Analysis

To analyze the denoising performance of the net-based HTS, we denote the normalized intensity distribution of the sub-Hadamard matrix by $S_{snap}$ and let $S_1$ be the difference between the network-reconstructed light intensity distribution and $S_{snap}$ ($S_1 \ll S_{snap}$). The reconstructed spectra can be written as follows:
$$\hat{f} = f + (S_{snap} - S_1)^{-1}\, n_s \tag{8}$$
Based on Equation (8), the SNR of the reconstructed spectrum can be expressed as:
$$\mathrm{SNR}_{\hat{f}} = 10\log\left(\frac{f^{T}f \,/\, \left[n_s^{T}\,(S_{snap}^{T}-S_{1}^{T})^{-1}(S_{snap}-S_{1})^{-1}\,n_s\right]}{f^{T}f \,/\, (n_s^{T}n_s)}\right) \tag{9}$$
According to our previous work, $\mathrm{SNR}_{\hat{f}}$ can be simplified as:
$$\mathrm{SNR}_{\hat{f}} \approx 10\log\left(\frac{(n_s^{\prime})^{T}\left[(1-k)^{2}\,S_{snap}^{T}S_{snap}+\frac{1-k}{k}\,k^{2}\,S_{snap}^{T}S_{snap}\right]n_s^{\prime}}{(n_s^{\prime})^{T}\,n_s^{\prime}}\right) = 10\log\left(\frac{(n_s^{\prime})^{T}\,S_{snap}^{T}S_{snap}\,n_s^{\prime}}{(n_s^{\prime})^{T}\,n_s^{\prime}}\right)+10\log(1-k) \tag{10}$$
where $n_s^{\prime} = (S_{snap}-S_{1})^{-1}\, n_s$ and $k$ denotes the perturbation of the reconstructed light intensity relative to the actual light intensity.
The results show that, compared with a sub-Hadamard matrix built from an exactly measured intensity distribution, the reconstructed SNR of the network-based scheme decreases by about $10\log\left(\frac{1}{1-k}\right)$ dB. Compared with a traditional slit spectrometer, it still has an obvious advantage. In practice, the intensity of the overlapped spectra in the single-path snapshot spectrometer is twice the intensity in the original dual-path snapshot spectrometer. Thus, the actual spectrum can be expressed as:
$$\hat{f} = 2f + (S_{snap} - S_1)^{-1}\, n_s \tag{11}$$
After a similar deduction, the final SNR can be expressed as:
$$\mathrm{SNR}_{\hat{f}} \approx 10\log\left(\frac{(n_s^{\prime})^{T}\,S_{snap}^{T}S_{snap}\,n_s^{\prime}}{(n_s^{\prime})^{T}\,n_s^{\prime}}\right)+10\log(1-k)+10\log(2) \tag{12}$$
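Reading the last two terms of Equation (12) together (base-10 logarithms assumed, as is usual for dB values), the trade-off between the throughput gain and the intensity-reconstruction penalty reduces to a simple condition; this worked step is added here for clarity and relies on the reconstruction of Equations (10)–(12) above:

$$10\log_{10}(2) \approx 3.01\ \mathrm{dB}, \qquad 10\log_{10}(1-k) + 10\log_{10}(2) > 0 \iff k < 0.5 .$$

That is, the single-path net-based HTS outperforms the dual-path Sub-s HTS whenever the intensity-reconstruction penalty $10\log\left(\frac{1}{1-k}\right)$ stays below about 3 dB, i.e., whenever $k < 0.5$.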
Therefore, the proposed method has certain advantages over the dual-path Sub-s HTS.

5. Experiments and Simulations

To compare the reconstruction performance of different coding matrices, the full-1 matrix and the Hadamard-S matrix are considered. In addition, a four-frame CASSI [25] using two-step iterative shrinkage/thresholding optimization (TwIST) is included for comparison. We present reconstructed light intensity results for several scenes outside the training dataset [26,27] and calculate their peak signal-to-noise ratio (PSNR). The reconstructed results are shown in Figure 5. The resolution of the reconstructed images is 127 × 127. The PSNR and reconstruction time for the different scenes are compared in Table 2.
It can be seen that the performance of the proposed method is stable under different conditions, and with a single frame its reconstruction quality is better than that of the four-frame CS method. The testing images were not involved in the training data, yet the reconstructed results still meet expectations, which shows that the network generalizes well. We can also see that the PSNR of the reconstruction with full-1 coding is better than that with Hadamard-S coding, since full-1 coding uses the full information and brings higher throughput. However, the full-1 matrix is not invertible and cannot guarantee a stable SNR boost.
The reconstruction was run on a desktop computer purchased from JD.com with an Intel Core i7-8700K CPU at 4.3 GHz (Intel, Santa Clara, CA, USA), 16 GB of memory (Corsair Memory, Fremont, CA, USA) and an NVIDIA GeForce GTX 1060 6 GB GPU (NVIDIA, Santa Clara, CA, USA). The reconstruction time is 0.075 s, which is about 20 times faster than the traditional methods without additional hardware acceleration. With a higher-performance GPU such as an RTX TITAN, the reconstruction speed would improve significantly, which can basically meet real-time requirements.
For the training strategy, we used the Adam algorithm for gradient optimization with an initial learning rate of 0.001; the whole training process lasted about 150 epochs until convergence, with a batch size of 4. The convolution kernel size along the spectral dimension was set according to the dispersion length of the input spectral image.
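The following PyTorch snippet sketches this training configuration (Adam, learning rate 0.001, batch size 4, roughly 150 epochs). A tiny convolutional model stands in for LIRNet and random tensors stand in for the multispectral training data; the plain MSE criterion is likewise a simplification of the IL/SL losses described in Section 3.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

n_bands = 31                                               # assumed number of spectral bands
model = torch.nn.Sequential(                               # stand-in for LIRNet
    torch.nn.Conv2d(1, 8, kernel_size=(1, n_bands + 1)),   # kernel set by the dispersion length
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, initial learning rate 0.001
criterion = torch.nn.MSELoss()                             # simplification of the IL/SL losses

inputs = torch.rand(16, 1, 128, 128 + n_bands)             # stand-in overlapped dispersive images
labels = torch.rand(16, 1, 128, 128)                       # stand-in light-intensity labels
loader = DataLoader(TensorDataset(inputs, labels), batch_size=4, shuffle=True)

for epoch in range(150):                                   # about 150 epochs to convergence
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```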
To verify the effectiveness of the reconstructed light intensity, we implemented the system as shown in Figure 6. Spectra with band-pass filters and the full spectrum were trained and tested on the actual optical system to examine the influence of the degree of overlap on the reconstruction. In the implementation, the incident light is encoded by a digital micromirror device (DMD). The DMD (ED01N) has a resolution of 1024 × 768 and a pixel size of 13.68 μm × 13.68 μm. The camera (Basler acA1920-155 µm) has a resolution of 1920 × 1200 and a pixel size of 5.86 μm × 5.86 μm, and the dispersion grating is a THORLABS GT25-03 (300 grooves/mm, 17.5°).
We also tested the light intensity reconstruction performance with an increasing number of spectral bands; the results are shown in Figure 7. A band-pass filter with a 100 nm passband (400–500 nm) was used to truncate the spectrum, and the corresponding results are shown in the first row of Figure 7. The second row of Figure 7 shows the result without any filter. The PSNR of the reconstructed light intensity is 25.76 dB with the filter and 20.26 dB without it.
As shown in Figure 7, the proposed method can still reconstruct the light intensity well. However, as the number of spectral bands increases, the overlap becomes more severe and the reconstruction quality decreases.
In practice, the Sub-s HTS loses half of the throughput because of its extra imaging path, which negatively affects the SNR of the reconstructed spectrum. According to the theoretical derivation, the SNR decline is $10\log(2)$ dB. We therefore compared the slit-based spectrometer, the Sub-s HTS [19] and the proposed net-based HTS through simulation; the results are shown in Figure 8.
The results show that the proposed net-based HTS achieves a higher SNR while removing one optical path and maintaining snapshot acquisition. The SNR of the dual-path Sub-s HTS is lower than that of the proposed net-based HTS, which agrees with the theoretical analysis.
The quality of the reconstructed light intensity affects the SNR of the reconstructed spectrum, and it improves as the network training iterates. We therefore compared the SNR of the reconstructed spectrum over network iterations with that of the Sub-s HTS; the results are shown in Figure 9. As the iterations increase, the SNR of the reconstructed spectra increases. As the reconstruction quality improves, the light-throughput advantage gradually manifests and the SNR becomes better than that of the Sub-s HTS, which agrees with the previous theoretical derivation.
To evaluate the performance of the proposed net-based HTS, the Sub-s HTS and the slit-based spectrometer, we measured the SNR of each method with a low-level light source. In the experiment, an absorptive neutral density filter covering an LED simulated the low-level light source. We used the slit-based spectrometer with the LED at high brightness and a long exposure time to obtain the reference spectrum. To compare all methods quantitatively, we set the same exposure time for every method. The captured overlapped spectral image is shown in Figure 10, and the reconstructed spectral data are shown in Figure 11.
The camera (Basler acA1920-155 µm) provides a maximum analog gain of 36 dB, which increases the brightness of the output images. However, increasing the analog gain introduces considerable electronic noise and temporal dark noise, which strongly affects the final image quality.
As shown in Figure 11, thanks to its light-throughput advantage, the net-based HTS achieves a better SNR than the Sub-s HTS under low-level light and strong noise interference. This shows that our system has stronger anti-noise capability. When the detector capability is limited, our system can greatly improve the SNR of the detected spectrum.

6. Conclusions

In this paper, we propose a light intensity recovery network (LIRNet) to obtain the original intensity image from overlapped spectra and construct a network-based single-path snapshot Hadamard transform spectrometer (net-based HTS). Its advantages over our previous work (Sub-s HTS) lie in three aspects. First, the proposed LIRNet recovers the light intensity distribution from the overlapped dispersive spectra about 20 times faster than the traditional methods. Second, the proposed net-based HTS eliminates the drawbacks of the Sub-s HTS, such as the need for pixel-level registration and the loss of light throughput. Third, thanks to the simple structure of the single-path optical system and the light-throughput advantage, the net-based HTS obtains better performance than the Sub-s HTS, which further improves its practicability. Additionally, our system can greatly improve the SNR of the detected spectrum when the detector performance is limited.
At the same time, the proposed idea of spectral unmixing can be further developed in the field of computational spectral imaging. We believe that the proposed framework can promote the development and application of spectral imaging with reduced hardware and software complexity. In future work, we will introduce it into hyperspectral imaging to obtain clearer hyperspectral images, which will be an interesting avenue of research.

Author Contributions

Methodology, Validation, Writing—original draft, Investigation, Data curation, H.X.; Methodology, Writing-review, Investigation, Data curation, Z.Z.; Conceptualization, Writing—review & editing, Project administration, Funding acquisition, J.H.; Supervision, Funding acquisition, L.B.; Writing—review & editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61727802, 61901220, 2019K216).

Acknowledgments

We thank Xu Wang, Haocun Qi, Jiang Yue, Xiaoyu Chen and Enlai Guo for technical support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef] [Green Version]
  2. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  3. Brady, D.J.; Gehm, M.E. Compressive imaging spectrometers using coded apertures. Vis. Inf. Process. 2006, 6246, 62460A-1–62460A-9. [Google Scholar]
  4. Huang, H.; Makur, A. Backtracking-based matching pursuit method for sparse signal reconstruction. IEEE Signal Process. Lett. 2011, 18, 391–394. [Google Scholar] [CrossRef]
  5. Donoho, D.; Tsaig, Y.; Drori, I.; Starck, J.L. Sparse Solution of Underdetermined Linear Equations by Stagewise Orthogonal Matching Pursuit; Technical Report; Department of Statistics, Stanford University: Stanford, CA, USA, 2006. [Google Scholar]
  6. Needell, D.; Tropp, J. Cosamp: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321. [Google Scholar] [CrossRef] [Green Version]
  7. Wagadarikar, A.; John, R.; Willett, R.; Brady, D.J. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 2008, 47, B44–B51. [Google Scholar] [CrossRef] [Green Version]
  8. Cao, X.; Yue, T.; Lin, X.; Lin, S.; Yuan, X.; Dai, Q.H.; Carin, L.; Brady, D.J. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. IEEE Signal Process. Mag. 2016, 33, 95–108. [Google Scholar] [CrossRef]
  9. Gill, P.; Wang, A.; Molnar, A. The in-crowd algorithm for fast basis pursuit denoising. IEEE Trans. Signal Process. 2011, 59, 4595–4605. [Google Scholar] [CrossRef]
  10. Daubechies, I.; Defrise, M.; De-Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef] [Green Version]
  11. Bioucas-Dias, J.M.; Figueiredo, M.A.T. A new TwIST: Two-step iterative shrinking/thresholding algorithms for image restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Chartrand, R.; Yin, W. Iteratively reweighted algorithms for compressive sensing. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3869–3872. [Google Scholar]
  13. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244. [Google Scholar]
  14. Tipping, M.E.; Faul, A.C. Fast marginal likelihood maximization for sparse Bayesian models. In Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 1 January 2003; pp. 3–6. [Google Scholar]
  15. Schniter, P.; Potter, L.C.; Ziniel, J. Fast Bayesian matching pursuit. In Proceedings of the Workshop on Information Theory and Applications, La Jolla, CA, USA, 1 January 2008; pp. 326–333. [Google Scholar]
  16. Wagadarikar, A.; Gehm, M.E.; Brady, D.J. Performance comparison of aperture codes for multimodal, multiplex spectroscopy. Appl. Opt. 2007, 46, 4932–4942. [Google Scholar] [CrossRef] [PubMed]
  17. Gehm, M.E.; McCain, S.T.; Pitsianis, N.P.; Brady, D.J.; Potuluri, P.; Sullivan, M.E. Static two-dimensional aperture coding for multimodal, multiplex spectroscopy. Appl. Opt. 2006, 45, 2965. [Google Scholar] [CrossRef] [PubMed]
  18. Fernandez, C.; Guenther, B.D.; Gehm, M.E.; Brady, D.J.; Sullivan, M.E. Longwave infrared (LWIR) coded aperture dispersive spectrometer. Opt. Express 2007, 15, 5742–5753. [Google Scholar] [CrossRef] [Green Version]
  19. Zhao, Z.; Bai, L.; Han, J.; Yue, J. High-SNR snapshot multiplex spectrometer with sub-Hadamard-S matrix coding. Opt. Commun. 2019, 453, 124322–124328. [Google Scholar] [CrossRef] [Green Version]
  20. Yue, J.; Han, J.; Zhang, Y.; Bai, L. Denoising analysis of Hadamard transform spectrometry. Opt. Lett. 2014, 39, 3744–3747. [Google Scholar] [CrossRef]
  21. Yue, J.; Han, J.; Li, L.; Bai, L. Denoising analysis of spatial pixel multiplex coded spectrometer with Hadamard H-matrix. Opt. Commun. 2018, 407, 355–360. [Google Scholar] [CrossRef]
  22. Wu, Z.; Shen, C.; van den Hengel, A. High-performance semantic segmentation using very deep fully convolutional networks. arXiv 2016, arXiv:1604.04339. [Google Scholar]
  23. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical evaluation of rectified activations in convolutional network. arXiv 2015, arXiv:1505.00853. [Google Scholar]
  24. Romera, E.; Alvarez, J.M.; Bergasa, L.M.; Arroyo, R. ERFNet: Efficient residual factorized ConvNet for real-time semantic segmentation. IEEE Trans. Intell. Transp. Syst. 2018, 19, 263–272. [Google Scholar] [CrossRef]
  25. Kittle, D.; Choi, K.; Wagadarikar, A.; Brady, D.J. Multiframe image estimation for coded aperture snapshot spectral imagers. Appl. Opt. 2010, 49, 6824–6833. [Google Scholar] [CrossRef] [PubMed]
  26. Indian Pines. Available online: http://lesun.weebly.com/hyperspectral-data-set.html (accessed on 5 December 2020).
  27. Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 23 June 2011; pp. 193–200. [Google Scholar]
Figure 1. The flowchart of dual-path snapshot Hadamard transform spectrometer (HTS). (a) Actual system diagram of HTS. (b) Principle system diagram of HTS.
Figure 2. The coding processing of snapshot Hadamard transform spectrometer.
Figure 3. Structural sketch of the network model.
Figure 4. Non-bottleneck 1D structure.
Figure 5. Reconstructed results of different methods (the resolution of the reconstructed images is 127 × 127). (a) Ground truth. (b) Reconstructed results of the coded aperture snapshot spectral imager (CASSI), 4 frames. (c) Reconstructed results of the full-1 matrix (our method). (d) Reconstructed results of the Hadamard-S matrix (our method).
Figure 6. Implementation of single-path snapshot Hadamard transform spectrometer.
Figure 7. Reconstructed results with different spectral bands. (a) Captured light intensity. (b) Measured overlapped dispersive spectra. (c) Reconstructed light intensity.
Figure 8. Comparison of different methods.
Figure 9. The SNR of the reconstructed spectrum versus the number of network iterations.
Figure 10. The results of actual experiments with 30 dB (analog gain of camera). (a) Overlapped dispersive spectra. (b) Reconstructed light intensity.
Figure 11. Experimental results of the proposed method, the snapshot HTS and the slit-based spectrometer with a 100 µs camera exposure time and 30 dB analog gain.
Table 1. Network structure.
Block | Layer | Type | Out-F | Out-Res
SIMULATION | 1 | conv (n × 1) | 1 | 128 × 128
SIMULATION | 2 | deconv (1 × 1 × n) | n | 128 × 128
SIMULATION | 3 | conv + ReLU | n/2 | 128 × 128
SIMULATION | 4 | conv + ReLU | n/4 | 128 × 128
SIMULATION | 5 | conv + ReLU | n/8 | 128 × 128
SIMULATION | 6 | conv + ReLU | 1 | 128 × 128
ENCODER | 7 | Down-sampler | 16 | 64 × 64
ENCODER | 8 | Down-sampler | 64 | 32 × 32
ENCODER | 9–13 | 5 × Non-bt-1D | 64 | 32 × 32
ENCODER | 14 | Down-sampler | 128 | 16 × 16
ENCODER | 15–22 | 8 × Non-bt-1D | 128 | 16 × 16
DECODER | 23 | deconv | 64 | 32 × 32
DECODER | 24–25 | 2 × Non-bt-1D | 64 | 32 × 32
DECODER | 26 | deconv | 16 | 64 × 64
DECODER | 27–28 | 2 × Non-bt-1D | 16 | 64 × 64
DECODER | 29 | deconv | C | 128 × 128
Table 2. Comparison of peak signal-to-noise ratio (PSNR) and reconstruction time of different scenes.
Image | CASSI 4 Frame | Full-1 (Ours) | Hadamard-S (Ours)
a | 16.09 dB | 22.37 dB | 21.39 dB
b | 12.39 dB | 21.48 dB | 21.13 dB
c | 20.51 dB | 20.82 dB | 20.60 dB
d | 19.44 dB | 23.03 dB | 22.90 dB
Reconstruction time | 1.599 s | 0.077 s | 0.075 s
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
