
Optimal PSF Estimation for Simple Optical System Using a Wide-Band Sensor Based on PSF Measurement

1 State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(10), 3552; https://doi.org/10.3390/s18103552
Submission received: 30 August 2018 / Revised: 26 September 2018 / Accepted: 26 September 2018 / Published: 19 October 2018
(This article belongs to the Section Physical Sensors)

Abstract

Simple-optical-system imaging simplifies the optics by removing aberrations through image deconvolution. The point spread function (PSF) used in deconvolution is an important factor affecting image quality, but optimal PSFs are difficult to obtain: blind PSF estimation relies heavily on the information in the image, and measured PSFs are often misused because real sensors are wide-band. We present an optimal PSF estimation method based on PSF measurements. Narrow-band PSF measurements at a single depth are used to calibrate the optical system, which enables the simulation of the real lens. We then simulate PSFs over the wavelength pass range of each color channel across the entire field, and compute the optimal PSFs from these simulated PSFs. The results indicate that the optimal PSFs significantly reduce the artifacts caused by PSF misuse and enhance image quality.

1. Introduction

Optical aberration is an important cause of image-quality degradation. Designers of modern optical imaging systems must combine lenses of different glass materials to correct and balance aberrations [1], which makes optical systems cumbersome and expensive. Fortunately, spatially varying deconvolution can reduce many aberrations [2]. Using deconvolution in image post-processing relaxes the constraints on optical design, so the optical system can be simplified [3]. Deconvolution is the key part of this strategy, and the point spread function (PSF) is an important factor affecting the deconvolution result [4,5]. If the PSF is inaccurate, the restored image is likely to contain severe artifacts.
The PSF measurement can be affected by the demosaicing algorithm for color filter arrays (CFAs) and by sensor noise. PSFs are usually small; their size can be as little as 3 × 3 pixels if the field of view (FOV) is well focused. The Bayer filter array [6] is the CFA used in most single-chip digital image sensors. As shown in Figure 1, the Bayer pattern samples 25% of the red, 25% of the blue and 50% of the green pixels, which means that 75% of the red, 75% of the blue and 50% of the green information must be estimated by the demosaicing algorithm [7,8]. This introduces significant error into the PSF measurement.
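To make these sampling fractions concrete, the following minimal sketch (our own illustration, assuming an RGGB tile; the actual channel order varies by sensor) builds the Bayer masks and verifies how much of each channel must be interpolated:

import numpy as np

def bayer_masks(h, w):
    """Boolean sampling masks for an RGGB Bayer tile (illustrative)."""
    r = np.zeros((h, w), bool)
    b = np.zeros((h, w), bool)
    r[0::2, 0::2] = True          # red at even rows, even columns
    b[1::2, 1::2] = True          # blue at odd rows, odd columns
    g = ~(r | b)                  # the two green sites of each 2x2 cell
    return r, g, b

r, g, b = bayer_masks(4, 4)
print(r.mean(), g.mean(), b.mean())  # 0.25 0.5 0.25: 75%/50%/75% must be demosaiced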
Furthermore, the color filters of real image sensors are wide-band, and PSFs vary with wavelength [9]. Because objects have various spectra, the PSFs of the image received by the sensor are not fixed, so measuring PSFs directly can lead to their misuse. The spectral response of a color image sensor is shown in Figure 2 [10]; the blue channel can even receive 532-nm light. Suppose the spectrum of an object is mostly 532-nm light, so that the blur in the blue channel is caused by the 532-nm PSF. If the 450-nm PSF is then used as the blue-channel PSF in deconvolution, this misuse leads to a poor restored result, as shown in Figure 3.
Blind deconvolution estimates PSFs from blurred images directly. In principle, blind estimation avoids the misuse of PSFs caused by wide-band sensors. However, it relies heavily on the information in the blurred image: the PSF will be wrongly estimated if some field of the optical system does not contain adequate information. Moreover, blind deconvolution, which estimates the blur and the image simultaneously, is a strongly ill-posed problem and can therefore be unreliable.
We present an optimal PSF estimation method based on PSF measurements. Narrow-band PSF measurements and image-matching technology are used to calibrate the optical system, which makes it possible to simulate a real lens. Then, we simulate the PSFs in the wavelength pass range of each color channel all over the FOV. The optimal PSFs are computed according to these simulated PSFs, which can reduce the blur caused by the PSFs of every wavelength in the channel without introducing severe artifacts. A flowchart of the method is illustrated in Figure 4.
By utilizing PSF measurements, the proposed method can accurately obtain the PSFs of arbitrary wavelengths across the FOV, and can reduce the interference of noise and the demosaicing algorithm. The use of optimal PSFs avoids the artifacts caused by the misuse of PSFs on wide-band sensors, which makes the image restoration robust.
The paper is organized as follows. The related works are presented in Section 2. In Section 3, the optical system calibration based on PSF measurements is described. Section 4 demonstrates the estimation of optimal PSFs. We show the results of optical system calibration and the restored images from real-world images captured by our simple optical system in Section 5. Finally, the conclusions are presented in Section 6.

2. Related Work

Many recent works estimate and remove blur due to camera shake or motion [11,12,13,14,15]. Most use blind estimation methods based on the blurred images themselves. Optical blur, however, has received less attention.
Based on the properties of optical blur, some works introduced priors or assumptions into blind methods to estimate the optical blur kernel [16,17,18,19]. However, optical blur is spatially varying, so some fields of the blurred image may not contain sufficient information to estimate the PSFs.
Some works created calibration boards. After obtaining blurred images and clear images of the calibration boards, PSFs were computed using deconvolution methods [3,20,21,22]. Further, Kee et al. [22] developed two models of optical blur to fit the PSFs, so that the models can predict spatially varying PSFs.
The most closely related work is [23]. Instead of estimating PSFs from blurred images, Shih et al. used measured PSFs to calibrate the lens prescription and computed fitted PSFs by simulation. However, real sensors are wide-band, and the misuse of PSFs was not considered in that work.
Kernel fusion [24] resembles the optimal PSF computation in our work but is actually different. Kernel fusion combines several blindly estimated kernels from prior methods into a single better kernel. In our work, the real kernel is a linear combination of multiple PSFs with unknown proportions, and we compute an optimal PSF that handles the blur caused by all of these PSFs stably.

3. PSF Measurement and Optical System Calibration

Real optical systems differ from designed optical systems owing to manufacturing tolerances and fabrication errors, so the optical system needs to be calibrated before PSF simulation. Fortunately, for a simple optical system, the PSF errors caused by tolerances are very small after sensor-plane compensation. Using the CODE V optical design software, we repeatedly applied random tolerance perturbations within a reasonable range to the designed optical system, including radius, thickness, element decentration, element tilt and refractive index tolerances. After sensor-plane compensation, the changes in the PSFs are much smaller than the errors introduced by assuming a constant PSF within each patch in spatially varying deconvolution. Thus, we replaced full optical system calibration with sensor-plane calibration.
PSF measurements and image-matching technology are used in sensor-plane calibration. We compute the match between the measured PSFs and the PSFs simulated at different sensor-plane positions; the best-matching position is taken as the sensor-plane position after compensation.
The PSF measurement setup is shown in Figure 5a, and consists of a self-designed two-lens optical system with a Sony IMX185LQJ-C image sensor, an optical pinhole, light-emitting diode (LED) lamps, and three narrow-band filters whose pass ranges are around 650 nm, 532 nm and 450 nm. The LED lamp, the optical pinhole, and the narrow-band filter are installed in a black box.
The measurements only need to be performed for a single object plane; once the sensor-plane position is calibrated, we can simulate the PSFs of any object plane. We first adjust the position of the sensor to bring the image into focus and then fix it, after which the measured PSF can be captured by the camera. We measure the PSFs of different FOVs by moving the black box perpendicular to the optical axis, controlling the exposure time to avoid saturating the light intensity.
The above experiment measures the different-field PSFs of 650 nm, 532 nm and 450 nm at a fixed but unknown sensor-plane position. We then simulate the different-field PSFs of 650 nm, 532 nm and 450 nm at different sensor-plane positions using CODE V and compute their match with the measured PSFs.
Before sensor-plane matching, we need to find the simulated PSF that lies in the same field as each measured PSF. As shown in Figure 6, the PSF array is a set of spatially varying simulated 650-nm PSFs at one sensor-plane position, and the single PSF outside the array is a measured PSF of one field. Optical systems are rotationally symmetric about the optical axis. In the simulation, the intersection of the optical axis and the sensor is the image center; in the real optical system, however, the intersection deviates from the image center. By the rotational symmetry of the optical system, if the distance from the real intersection to the measured PSF is close to the distance from the image center to a simulated PSF, their fields match.
We first calibrate the real intersection using the least-squares method. A PSF that does not lie on the optical axis of the lens has reflection symmetry with respect to the meridional plane, and the symmetry axis of the PSF passes through the intersection of the optical axis and the sensor. We find these symmetry axes of the measured PSFs; allowing for measurement error, the point with the least sum of distances to these symmetry axes is the best estimate of the intersection.
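A minimal sketch of this calibration step, assuming each measured symmetry axis is represented by a point on the axis and a direction vector; for a closed-form solution we minimize the sum of squared (rather than absolute) distances:

import numpy as np

def intersection_from_axes(points, directions):
    """Least-squares estimate of the optical-axis/sensor intersection:
    the 2-D point minimizing the sum of squared distances to the PSF
    symmetry axes, each axis given by (point, direction)."""
    A = np.zeros((2, 2))
    rhs = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the axis normal
        A += P
        rhs += P @ np.asarray(p, float)
    return np.linalg.solve(A, rhs)       # needs >= 2 non-parallel axes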
The distances from the simulated PSFs to the image center form a set S. We find the element of S closest to the distance from the calibrated intersection to the measured PSF. If their difference is less than Δd, which represents the acceptable field error, the simulated PSF and the measured PSF are considered to be in the same field. If their difference is more than Δd, there is no simulated PSF in the same field as the measured PSF, and the measured PSF is discarded. After field matching, we rotate the simulated PSFs in the different sensor planes so that they have the same tilt angle as the measured PSF.
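The field-matching rule can be sketched as follows (a hypothetical helper of our own; delta_d is the acceptable field error Δd from the text):

import numpy as np

def match_field(sim_psfs, sim_dists, measured_dist, delta_d):
    """Return the simulated PSF whose distance to the image center is
    closest to the measured PSF's distance from the calibrated
    intersection, or None if the gap exceeds delta_d (PSF discarded)."""
    sim_dists = np.asarray(sim_dists, float)
    i = int(np.argmin(np.abs(sim_dists - measured_dist)))
    if abs(sim_dists[i] - measured_dist) > delta_d:
        return None
    return sim_psfs[i]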
The measured PSFs are affected by the demosaicing algorithm and sensor noise, but their sizes and shapes are hardly affected, so a template-matching approach can be used to find the real sensor plane. We take the maximum of the normalized cross-correlation of the simulated PSF and the measured PSF as the matching degree [25]; the sensor plane whose simulated PSFs achieve the highest matching degree with the measured PSFs is the calibration result. The normalized cross-correlation is expressed as follows:
\gamma(x, y) = \frac{\sum_{s}\sum_{t} [w(s,t) - \bar{w}]\,[f(x+s, y+t) - \bar{f}_{xy}]}{\left\{ \sum_{s}\sum_{t} [w(s,t) - \bar{w}]^{2} \sum_{s}\sum_{t} [f(x+s, y+t) - \bar{f}_{xy}]^{2} \right\}^{1/2}}, \quad (1)
where w is the measured PSF and f is the simulated PSF; \bar{w} is the average value of w, and \bar{f}_{xy} is the average value of f in the region of f overlapping w.
There is a potential ambiguity: more than one sensor plane may have a high matching degree. However, for the correct sensor plane, the matching degrees are high in all fields, so we average the matching degrees over all fields to resolve the ambiguity.
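A sketch of the sensor-plane search, using skimage.feature.match_template, which implements the normalized cross-correlation of Equation (1); the data layout (one list of field-matched simulated PSFs per candidate plane) is our own assumption:

import numpy as np
from skimage.feature import match_template

def matching_degree(measured_psf, simulated_psf):
    """Peak normalized cross-correlation (Equation (1)) between a measured
    PSF and a simulated PSF; assumes the simulated PSF is at least as
    large as the measured one."""
    return float(match_template(simulated_psf, measured_psf,
                                pad_input=True).max())

def calibrate_sensor_plane(plane_offsets, sim_psfs_per_plane, measured_psfs):
    """Pick the sensor-plane offset whose simulated PSFs best match the
    measured PSFs on average over all fields, which resolves the
    ambiguity of a single field scoring high on several planes."""
    scores = [np.mean([matching_degree(m, s)
                       for m, s in zip(measured_psfs, sims)])
              for sims in sim_psfs_per_plane]
    return plane_offsets[int(np.argmax(scores))]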

4. Optimal Point Spread Function Estimation

After calibration, the PSFs of every wavelength in all fields can be simulated accurately. However, real sensors are wide-band: each color channel allows light in a wide wavelength range to pass. In everyday camera use, complex illumination and the varied reflectance of objects make it difficult to obtain the spectrum received by the sensor, so the real PSF cannot simply be computed as a spectral weighting. Instead, we estimate an optimal PSF from the simulated PSFs in the wavelength pass range of each color channel, which reduces the blur caused by the PSF of each wavelength in the channel without introducing severe artifacts.
In an optical system, the latent sharp image i is blurred by the PSF k_λ. The observed image b can be expressed as follows:

b = i \otimes k_\lambda + n, \quad (2)

where \otimes is the convolution operator and n is the additive noise. The effect of noise can be suppressed significantly by the regularization employed in state-of-the-art deconvolution methods, so we do not consider noise in the following discussion.
In the frequency domain, Equation (2) can be written as follows:

B = I \times K_\lambda, \quad (3)

where the terms in capital letters are the Fourier transforms of the corresponding terms in Equation (2).
If we use a new PSF k_o in the deconvolution to restore the image, the new resultant image i′ in the frequency domain can be expressed as follows:

I' = I \times \frac{K_\lambda}{K_o}. \quad (4)

Therefore, the new resultant image can be seen as the latent sharp image blurred by F^{-1}(K_λ/K_o), where F^{-1} denotes the inverse Fourier transform.
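This residual blur can be examined numerically. The sketch below computes F^{-1}(K_λ/K_o); the small Wiener-style regularizer eps is our own addition to avoid division by near-zero frequencies:

import numpy as np

def equivalent_blur(k_lambda, k_o, eps=1e-6):
    """Residual blur kernel F^{-1}(K_lambda / K_o) that remains when a
    k_lambda-blurred image is deconvolved with k_o (noise ignored, per
    the discussion above); both kernels are centered and share a shape."""
    K_l = np.fft.fft2(np.fft.ifftshift(k_lambda))
    K_o = np.fft.fft2(np.fft.ifftshift(k_o))
    ratio = K_l * np.conj(K_o) / (np.abs(K_o) ** 2 + eps)  # ~ K_l / K_o
    return np.fft.fftshift(np.real(np.fft.ifft2(ratio)))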
In optical systems, the Fourier transform of the PSF is the optical transfer function (OTF), which can be decomposed into the modulation transfer function (MTF) and the phase transfer function (PTF):

\mathrm{OTF} = \mathrm{MTF} \times e^{i\,\mathrm{PTF}}. \quad (5)
Therefore, the equivalent OTF of the optical system after deconvolution is as follows:

\mathrm{OTF}_e = \frac{K_\lambda}{K_o} = \frac{\mathrm{MTF}_\lambda}{\mathrm{MTF}_o}\, e^{i(\mathrm{PTF}_\lambda - \mathrm{PTF}_o)}, \quad (6)

where MTF_λ and PTF_λ are the MTF and PTF of k_λ, and MTF_o and PTF_o are the MTF and PTF of k_o. The equivalent MTF and PTF of the optical system after deconvolution are as follows:

\mathrm{MTF}_e = \frac{\mathrm{MTF}_\lambda}{\mathrm{MTF}_o}, \qquad \mathrm{PTF}_e = \mathrm{PTF}_\lambda - \mathrm{PTF}_o. \quad (7)
The MTF describes contrast reduction. Ideally, the MTF is a constant 1, meaning the modulation of the image equals that of the object. In practice, some object information is lost in a real optical system, and the MTF is less than 1. If the MTF exceeds 1, the modulation increases and pixel intensities saturate easily; as shown in Figure 3b, this saturation is unacceptable. Thus, for an image blurred by a series of PSFs k_λ1, k_λ2, …, k_λn, MTF_e should be close to 1 but must not exceed 1. Taking the per-frequency maximum keeps MTF_e ≤ 1 for every wavelength while keeping it as close to 1 as possible, so MTF_o should be as follows:

\mathrm{MTF}_o(\nu) = \max\left(\{\mathrm{MTF}_{\lambda_i}(\nu)\}_{i=1}^{n}\right), \quad (8)

where ν is the spatial frequency and MTF_λi is the MTF of k_λi.
The PTF reflects the shape and size information of the PSF. An ideal PSF is spot-like or disk-like: its energy is concentrated, and its PTF is a constant 0. For a set of PSFs with various shapes and sizes, the PTFs are close in some spatial-frequency regions and differ widely in others. In the regions where they are close, we take the average of the PTFs as the PTF of the optimal PSF, which makes PTF_e(ν) close to 0 for the entire set. In the other regions, no single value of PTF_o can restore PTF_λ for all wavelengths in the pass range of the color channel; there we set PTF_o to 0 to avoid artifacts caused by incorrect recovery, although the blur caused by PTF_λ in those regions will remain.
The optimal PSF k_o can then be expressed as follows:

k_o = \mathcal{F}^{-1}\left(\mathrm{MTF}_o\, e^{i\,\mathrm{PTF}_o}\right). \quad (9)
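A compact sketch of Equations (8) and (9), assuming the simulated PSFs are centered, share a common support, and are stacked into one array; the phase-agreement threshold phase_tol is our own illustrative choice, and wrapped-phase effects near ±π are ignored for brevity:

import numpy as np

def optimal_psf(psfs, phase_tol=0.5):
    """Build the optimal PSF k_o: MTF_o is the per-frequency maximum of
    the simulated MTFs (Equation (8)); PTF_o averages the phases where
    all wavelengths agree to within phase_tol radians and is 0 elsewhere,
    then k_o is recovered via Equation (9)."""
    otfs = np.stack([np.fft.fft2(np.fft.ifftshift(k)) for k in psfs])
    mtfs, ptfs = np.abs(otfs), np.angle(otfs)
    mtf_o = mtfs.max(axis=0)                    # keeps MTF_e <= 1 everywhere
    agree = np.ptp(ptfs, axis=0) < phase_tol    # regions where the PTFs are close
    ptf_o = np.where(agree, ptfs.mean(axis=0), 0.0)
    k_o = np.fft.ifft2(mtf_o * np.exp(1j * ptf_o))
    return np.fft.fftshift(np.real(k_o))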

5. Results

In this section, we show the results of the optical system calibration, and we compare the restored results from real-world images captured by our simple optical system.
The sensor-plane matching curve is shown in Figure 7. The matching degree peaks at 0.858 where the abscissa is 1.38 mm; since the upper limit of the matching degree is 1, this means that the matched simulated PSFs are very similar to the measured PSFs. Figure 8 compares the measured PSFs with the matched simulated PSFs. The sizes and shapes of the matched simulated PSFs are close to those of the measured PSFs in all fields, and the simulated PSFs show more detail and are more accurate because they are not affected by noise or demosaicing.
We simulated PSFs at wavelengths from 450 nm to 650 nm in 10-nm steps using CODE V. According to the spectral sensitivity characteristics of the Sony IMX185LQJ-C image sensor, the wavelength pass ranges of the red, green and blue channels are 570–650 nm, 470–640 nm and 450–530 nm, respectively. After computing the optimal PSFs, the blurred image is divided into 7 × 13 overlapping rectangular patches and restored using the deconvolution method in [26].
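The patch layout can be sketched as follows (the overlap width is our own illustrative parameter; each patch would then be deconvolved with its local optimal PSF and the results blended back together):

import numpy as np

def split_into_patches(img, rows=7, cols=13, overlap=32):
    """Split an image into rows x cols overlapping rectangular patches,
    returning (top-left corner, patch) pairs for patch-wise deconvolution
    with spatially varying PSFs."""
    H, W = img.shape[:2]
    patches = []
    for r in range(rows):
        for c in range(cols):
            y0 = max(0, r * H // rows - overlap)
            y1 = min(H, (r + 1) * H // rows + overlap)
            x0 = max(0, c * W // cols - overlap)
            x1 = min(W, (c + 1) * W // cols + overlap)
            patches.append(((y0, x0), img[y0:y1, x0:x1]))
    return patches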
For comparison, we also restored the image patches using blind-estimated PSFs [13] and selected-wavelength PSFs, with the same deconvolution method and parameters as in our proposed method. The selected wavelengths are 620 nm, 530 nm and 470 nm, which have the highest spectral sensitivities in the red, green and blue channels of the Sony IMX185LQJ-C image sensor, respectively.
Figure 9, Figure 10 and Figure 11 show the restored results obtained from real-world images (1920 × 1080 pixels) captured by our simple optical system. The results obtained using selected-wavelength PSFs and blind-estimated PSFs contain noticeable artifacts; they can hardly handle images captured by our simple optical system with a wide-band sensor. Compared to these methods, our proposed method produces fewer artifacts and more stable image quality. Furthermore, our method is several dozen times faster than the blind method [13], because estimating the PSFs requires only simple calculations.
We did not compare our results with images restored using measured PSFs, because measuring PSFs accurately in all fields is difficult and time-consuming. As shown in Figure 8, the measured PSFs are significantly affected by noise and demosaicing, so the restored results would be worse than those obtained using selected-wavelength PSFs.

6. Conclusions

In this paper, we present an optimal PSF estimation method based on PSF measurements. By combining PSF measurements with PSF simulations, the proposed method obtains PSFs more accurately. By accounting for wide-band sensors, the method can restore images from simple optical systems stably and without severe artifacts. Experiments on real-world images captured by a self-designed two-lens optical system show the effectiveness of our method.
In the optical system calibration, the proposed method ignores the error caused by residual tolerances after sensor-plane compensation. The method is therefore suitable for lenses with high machining accuracy; for low-precision lenses, a more precise optical system calibration method is needed.

Author Contributions

Y.Z., W.H. and M.X. conceived and designed the experiments; Y.Z. and Y.P. performed the experiments, analyzed the data and wrote the paper.

Funding

This research received no external funding.

Acknowledgments

The project was supported by the State Key Laboratory of Applied Optics.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Meyer-Arendt, J.R. Introduction to Classical and Modern Optics; Prentice Hall: Upper Saddle River, NJ, USA, 1995.
  2. Schuler, C.J.; Hirsch, M.; Harmeling, S.; Schölkopf, B. Non-stationary correction of optical aberrations. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 659–666.
  3. Heide, F.; Rouf, M.; Hullin, M.B.; Labitzke, B.; Heidrich, W.; Kolb, A. High-quality computational imaging through simple lenses. ACM Trans. Graph. 2013, 32, 149.
  4. Ahi, K.; Shahbazmohamadi, S.; Asadizanjani, N. Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging. Opt. Lasers Eng. 2017, 104, 274–284.
  5. Ahi, K. A method and system for enhancing the resolution of terahertz imaging. Measurement 2018.
  6. Bayer, B.E. Color Imaging Array. U.S. Patent No. 3,971,065, 5 March 1975.
  7. Gunturk, B.K.; Glotzbach, J.; Altunbasak, Y.; Schafer, R.W.; Mersereau, R.M. Demosaicking: Color filter array interpolation. IEEE Signal Process. Mag. 2005, 22, 44–54.
  8. Ramanath, R.; Snyder, W.E.; Bilbro, G.L. Demosaicking methods for Bayer color arrays. J. Electron. Imaging 2002, 11, 306–315.
  9. Ahi, K. Mathematical modeling of THz point spread function and simulation of THz imaging system. IEEE Trans. THz Sci. Technol. 2017, 7, 747–754.
  10. Sahin, F.E. Novel Imaging Systems for Intraocular Retinal Prostheses and Wearable Visual Aids. Ph.D. Thesis, University of Southern California, Los Angeles, CA, USA, 2016.
  11. Levin, A.; Fergus, R.; Durand, F.; Freeman, W. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971.
  12. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794.
  13. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240.
  14. Cho, S.; Lee, S. Fast motion deblurring. In Proceedings of ACM SIGGRAPH Asia 2009, Yokohama, Japan, 16–19 December 2009; pp. 1–8.
  15. Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the European Conference on Computer Vision (ECCV), Crete, Greece, 5–11 September 2010; pp. 157–170.
  16. Schuler, C.J.; Hirsch, M.; Harmeling, S.; Schölkopf, B. Blind correction of optical aberrations. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; pp. 187–200.
  17. Yue, T.; Suo, J.; Wang, J.; Cao, X.; Dai, Q. Blind optical aberration correction by exploring geometric and visual priors. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1684–1692.
  18. Gong, X.; Lai, B.; Xiang, Z. A L0 sparse analysis prior for blind poissonian image deconvolution. Opt. Express 2014, 22, 3860–3865.
  19. Fang, H.; Yan, L.; Liu, H.; Chang, Y. Blind poissonian images deconvolution with framelet regularization. Opt. Lett. 2013, 38, 389.
  20. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  21. Brauers, J.; Seiler, C.; Aach, T. Direct PSF estimation using a random noise target. In Proceedings of SPIE 7537, Digital Photography VI, San Jose, CA, USA, 17–21 January 2010; p. 75370B.
  22. Kee, E.; Paris, S.; Chen, S.; Wang, J. Modeling and removing spatially-varying optical blur. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Pittsburgh, PA, USA, 8–12 April 2011; pp. 1–8.
  23. Shih, Y.; Guenter, B.; Joshi, N. Image enhancement using calibrated lens simulations. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; pp. 42–56.
  24. Mai, L.; Liu, F. Kernel fusion for better image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 371–380.
  25. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008.
  26. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. In Proceedings of Advances in Neural Information Processing Systems 22 (NIPS 2009), Vancouver, BC, Canada, 7–10 December 2009; pp. 1–9.
Figure 1. The Bayer arrangement of color filters.
Figure 2. Spectral response of the three different color filters of a commercially available color image sensor array, the Aptina MT9M034. The cutoff frequency of an IR-cut filter that is typically employed in cameras is shown with the dashed line.
Figure 3. (a) Image blurred by the 532-nm point spread function (PSF); (b) image restored from (a) using the 450-nm PSF, shown on the top right; (c) image restored from (a) using the 532-nm PSF, shown on the top right.
Figure 4. Flowchart of our proposed method.
Figure 5. (a) Experimental setup of PSF measurement; and (b) the self-designed lens and three narrow-band filters.
Figure 6. Diagram of PSF field matching.
Figure 7. Sensor-plane matching curve. The abscissa is the distance from the simulated sensor plane to the designed sensor plane, in mm. The ordinate is the matching degree of the simulated PSFs and the measured PSFs.
Figure 8. Comparison of the measured PSFs (top) and the matched simulated PSFs (bottom).
Figure 9. Results for the periodic table image. The top-left image is the captured blurred image. The three images to its right on the top row are the restored results using three sets of spatially varying PSFs. The bottom two rows are the details of the top-row images. The bottom-right corners of the detail images of the restored results show the local PSFs used.
Figure 10. Results for the dictionary image, arranged as in Figure 9.
Figure 11. Results for the Mickey image, arranged as in Figure 9.

