Article

PSF Estimation of Space-Variant Ultra-Wide Field of View Imaging Systems

Department of Radioelectronics, Faculty of Electrical Engineering, Czech Technical University in Prague, Technická 2, 166 27 Prague 6, Czech Republic
*
Author to whom correspondence should be addressed.
Appl. Sci. 2017, 7(2), 151; https://doi.org/10.3390/app7020151
Submission received: 3 October 2016 / Revised: 23 January 2017 / Accepted: 25 January 2017 / Published: 8 February 2017
(This article belongs to the Section Optics and Lasers)

Abstract

Ultra-wide-field of view (UWFOV) imaging systems are affected by various aberrations, most of which are highly angle-dependent. A description of UWFOV imaging systems, such as microscopy optics, security camera systems and other special space-variant imaging systems, is a difficult task that can be achieved by estimating the Point Spread Function (PSF) of the system. This paper proposes a novel method for modeling the space-variant PSF of an imaging system using the Zernike polynomials wavefront description. The PSF estimation algorithm is based on obtaining field-dependent expansion coefficients of the Zernike polynomials by fitting real image data of the analyzed imaging system, using an iterative approach with an initial estimate of the fitting parameters to ensure convergence robustness. The method is promising as an alternative to the standard approach based on Shack–Hartmann interferometry, since the estimate of the aberration coefficients is processed directly in the image plane. This approach is tested on simulated and laboratory-acquired image data that generally show good agreement. The resulting data are compared with the results of other modeling methods. The proposed PSF estimation method achieves an optical system model accuracy of around 5%.

1. Introduction

Modern photonic imaging systems, used in astronomy and in other areas of science, are required to perform with high imaging quality, a wide aperture (i.e., low F-numbers), and high spatial resolution. These systems are known as UWFOV systems. The wide-angle optical elements used in UWFOV systems introduce a significant shift variance [1], which complicates the modeling task. Many recent research works have focused on finding a model of UWFOV systems that is suitable for reducing the uncertainties in reconstructing image data from wide-field microscopic systems [2,3,4], security cameras [5,6,7], and all-sky cameras [8,9,10,11,12,13,14,15]. Any imaging system can be described by its spatial impulse response function, commonly referred to as the Point Spread Function (PSF). The PSF is used to represent the aberrations of UWFOV devices, and can be applied either to the entire image or within regions uniquely defined by the isoplanatic angle. However, obtaining a PSF model of the system can be complicated. It can be difficult to achieve the desired accuracy using conventional methods, not because of the measurement itself, but because it is difficult to describe the space variance (SV) of the PSF over the Field of View (FOV). From the measurement point of view, it is necessary to obtain the PSF for all discrete points of the entire FOV; the PSF of the system is then described by such a field of individual PSFs. Born [16] calls this type of PSF partially space-invariant, because the PSF is known only at discrete points in parts of the FOV. A space-variant PSF, by contrast, is described by a model at all points of the acquired image. Of course, this places high demands on the precision of the model. Issues associated with sampling, pixel size and the pixel sensitivity profile must be taken into account in the design.
It is very difficult to obtain the parameters of the model, especially in image areas with an aberrated impulse response and for objects with a small Full Width at Half Maximum (FWHM) in relation to the pixel size. Such objects, captured, e.g., in microscopy, by all-sky cameras, or by special optical elements, can hardly be processed using current techniques, which are typically oriented to multimedia content.
A widely-used approach for obtaining the PSF is to model wavefront aberrations using Zernike polynomials. These polynomials were introduced by Zernike in [17]. This work was later used by Noll [18], who introduced his approach to indexing, which is widely used in the design of optics. Zernike polynomials are usually applied to rotationally symmetric optical surfaces, and together with the field-dependent wave aberration of Hopkins [19] they form a basis for describing optics. Recently, Sasián [20] reformulated the work of Hopkins; this new formulation is more practical for optical design. A more recent contribution by Ye et al. [21] uses Bi-Zernike polynomials, which are an alternative to the method mentioned in Gray's article [22].
Zernike polynomials are used in modern interferometers for describing wave aberrations. However, researchers have begun to think about using these polynomials for an overall description of lens optics. When the first all-sky camera projects appeared, the question of how to model aberrations of fisheye lenses arose. Weddell [23] proposed the approach of a partially space-invariant model of the PSF based on Zernike polynomials. Another all-sky project, Pi of the Sky [1], which focuses on gamma-ray bursts, also attempted to build a PSF model. Their approach was quite different from Weddell's [23]: first, they calculate the dependency of the polynomial coefficients on the angle to the optical axis for several points, and then they interpolate these coefficients over the entire FOV. The WILLIAM project [24] (WIde-field aLL-sky Image Analyzing Monitoring system) faces the issue of aberrations at an extremely wide-field FOV (see also [25,26]). Gray [22] proposed a method for describing the spatial variance of Zernike polynomials. This approach is derived from the description provided by Hopkins [19], and laid the foundations for truly space-variant Zernike polynomials. Of course, there is also the problem of the space-variant PSF. In fact, we cannot be limited to rotationally symmetric imaging systems only. Hasan [27] and Thompson [28] proposed a more general description of the pupil function for elliptical or rotationally non-symmetrical Zernike polynomials. This description can be used for calculating the wave aberration in optical materials such as calomel for acousto-optical elements [29]. The first optical approach to wave aberration estimation was described by Páta et al. [30].
The search for the best description of UWFOV systems is not limited to astronomical imaging. Microscopy is another area where the UWFOV type of lens is used. Measurements of spherical aberration using a shearing interferometer were described in [31]. A promising new approach was formulated in [32]; it is based on aberration measurements of photolithographic lenses with the use of hybrid diffractive photomasks. The exit aperture wavefront deformation is modeled in [33] for wide field-of-view fluorescence image deconvolution with aberration estimation from Fourier ptychography.
UWFOV cameras are also used in surveillance systems. However, an image affected by aberrations can have a negative effect in criminal investigations [6]. Ito et al. [5] address this issue by proposing an approach that estimates a matrix of coefficients of the wavefront aberrations. Many authors have reported on investigations of space-variant imaging systems. An estimate of the PSF field dependency is critical for restoring the degraded image. Qi and Cheng proposed a linear space-variant model [34] and focused on the restoration algorithm for imaging systems. Heide et al. [35] proposed techniques for removing aberration artifacts using blind deconvolution for imaging systems with a simple lens.
In our work, we propose a modeling method for PSF estimation for space-variant systems. Since we would like to use this model for general optical systems, the method is based on modeling the PSF of the system without knowledge of the wavefront. Thus, the method can be an alternative to the Shack–Hartmann interferometer [4,36], or to other direct wavefront measurement methods, since we estimate the wavefront aberrations by fitting the PSF in the image plane. The following section begins with a description of wavefront aberrations.

2. Wavefront Aberration Functions

An ideal diffraction-limited imaging system transforms an input plane wave into a spherical wave whose inclination corresponds to the plane wave direction. As described by Hopkins [19], real imaging systems convert the input plane wave into a deformed wavefront. The difference between the ideal wavefront and the aberrated wavefront in the exit aperture can be expressed as
W(x, y) = W_{ab}(x, y) - W_{sp}(x, y),
where the W(x, y) are wavefronts, i.e., surfaces of points characterized by the same phase. Figure 1 illustrates the shape of the ideal spherical wavefront W_{sp}(x, y) and the aberrated wavefront W_{ab}(x, y). It is useful to introduce the coordinate system illustrated in Figure 2. The object plane is described by the ξ, η coordinates; the exit pupil uses x, y notation, and the image plane uses u, v notation. We then introduce the normalized coordinates \hat{x}, \hat{y} (see Figure 2b,c) at the exit pupil and the polar coordinates ρ, θ. The normalized image plane coordinates are \hat{u}, \hat{v} and H, φ, respectively.
The aberrated wavefront can be represented by a set of orthonormal basis functions known as Zernike polynomials. Zernike polynomials, which are described in [17] and in [37,38,39,40], are a set of functions orthogonal over the unit circle, usually written in polar coordinates as Z_n^m(ρ, θ), where ρ is the radius and θ is the angle with respect to the \hat{x}-axis in the exit aperture (see Figure 2b). They represent functions of optical distortions, each polynomial classifying one type of aberration. The set of Zernike polynomials is defined in [17], with other adaptations in [37,38,39,40], and can be written as
Z_n^m(\rho, \theta) = N_n^m R_n^{|m|}(\rho) \cos(m\theta), \quad m \ge 0,
Z_n^m(\rho, \theta) = -N_n^m R_n^{|m|}(\rho) \sin(m\theta), \quad m < 0,
with n describing the power of the radial polynomial and m describing the angular frequency.
N_n^m = \sqrt{\frac{2(n + 1)}{1 + \delta_{m0}}},
is the normalization factor with the Kronecker delta function δ m 0 = 1 for m = 0, and δ m 0 = 0 for m ≠ 0, and
R_n^{|m|}(\rho) = \sum_{s=0}^{(n - |m|)/2} \frac{(-1)^s (n - s)!}{s!\left[\frac{n + |m|}{2} - s\right]!\left[\frac{n - |m|}{2} - s\right]!}\,\rho^{n - 2s},
is the radial part of the Zernike polynomial.
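As a concrete illustration, the definitions above can be evaluated directly. The following Python sketch (the function name `zernike` is our own, not from the paper) computes Z_n^m from the radial sum and the normalization factor; it is a minimal illustration rather than production code.

```python
import math

def zernike(n, m, rho, theta):
    """Evaluate the Zernike polynomial Z_n^m at polar coordinates (rho, theta).

    Uses the radial sum R_n^|m| and the normalization factor
    N_n^m = sqrt(2(n+1) / (1 + delta_{m0})) defined in the text.
    """
    am = abs(m)
    if am > n or (n - am) % 2 != 0:
        return 0.0  # R_n^|m| vanishes unless n - |m| is even and non-negative
    # Radial part R_n^|m|(rho)
    radial = 0.0
    for s in range((n - am) // 2 + 1):
        num = (-1) ** s * math.factorial(n - s)
        den = (math.factorial(s)
               * math.factorial((n + am) // 2 - s)
               * math.factorial((n - am) // 2 - s))
        radial += num / den * rho ** (n - 2 * s)
    # Normalization factor N_n^m with the Kronecker delta for m = 0
    norm = math.sqrt(2.0 * (n + 1) / (1.0 + (1.0 if m == 0 else 0.0)))
    # Angular part: cosine modes for m >= 0, sine modes for m < 0
    if m >= 0:
        return norm * radial * math.cos(m * theta)
    return norm * radial * math.sin(am * theta)
```

For example, the defocus term Z_2^0 evaluates to sqrt(3)(2ρ² − 1), so `zernike(2, 0, 1.0, 0.0)` returns sqrt(3), consistent with R_n^|m|(1) = 1.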
Any wavefront phase distortion over a circular aperture of unit radius can be expanded as a sum of the Zernike polynomials as
W(\rho, \theta) = \sum_{n=0}^{k} \sum_{m=-n}^{n} A_n^m Z_n^m(\rho, \theta),
which can be rewritten using Equation (2) as
W(\rho, \theta) = \sum_{n=0}^{k} \sum_{m=0}^{n} A_n^m N_n^m R_n^{|m|}(\rho) \cos(m\theta) - \sum_{n=0}^{k} \sum_{m=-n}^{-1} A_n^m N_n^m R_n^{|m|}(\rho) \sin(m\theta),
where m takes the values -n, -n + 2, ..., n; k is the order of the polynomial expansion; and A_n^m = A_n^m(H, φ) is the coefficient of the Z_n^m mode in the expansion, i.e., it is equal to the root mean square (RMS) phase difference for that mode. The wavefront aberration function across the field of view of the optical system can then be described as
W(\rho, \theta, H, \varphi) = \sum_{n=0}^{k} \sum_{m=-n}^{n} A_n^m(H, \varphi)\, Z_n^m(\rho, \varphi - \theta),
where the wavefront is described with normalized polar coordinates. A better understanding can be provided by Table A1 in Appendix A and by Figure 3, which describes the indexing of Zernike polynomials and the kind of aberration associated with an index of Zernike polynomials.

3. Field Dependency of the Wavefront

A description of the wavefront aberration of a circular rotationally symmetric optical system can be adopted from [22], originally from [19]
W(\rho, \theta; H, \varphi) = \sum_{p} \sum_{n} \sum_{m} W_{klm}\, H^k \rho^l \cos^m(\varphi - \theta),
where k = 2p + m and l = 2n + m. Symbol W k l m is used for the expansion coefficients; the coordinates are defined in Figure 2c. The field dependency of A n m ( H , φ ) can be solved by comparing wavefront aberrations using Equations (7) and (8) as
\sum_{n=0}^{k} \sum_{m=-n}^{n} A_n^m(H, \varphi)\, Z_n^m(\rho, \varphi - \theta) = \sum_{p} \sum_{n} \sum_{m} W_{klm}\, H^k \rho^l \cos^m(\varphi - \theta).
Equation (10) can be obtained by expanding Equation (7), rewriting the terms \cos^m(\varphi - \theta) into terms containing a set of goniometric functions \cos(m\theta), \sin(m\theta), \rho^N, and comparing the coefficients in this form with the expanded Zernike polynomials (as in Table A1). The resulting coefficients A_n^m(H, \varphi) describing the space variation of the Zernike polynomials are presented in [22]. The coefficients W_{klm} are then used for describing the aberration of the space-variant optical system. The order of the aberration terms is defined by Hopkins in [19] as
order = (sum of the powers of H and \rho) - 1.
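To make the Hopkins expansion concrete, the following sketch evaluates W(ρ, θ; H, φ) from a small table of W_klm coefficients and computes the order of an aberration term. The coefficient values and names here are purely illustrative, not taken from the paper.

```python
import math

# Hypothetical coefficient table {(k, l, m): W_klm}. The indices obey
# k = 2p + m and l = 2n + m; e.g. W_040 is third-order spherical
# aberration, W_131 coma, W_222 astigmatism.
W_klm = {(0, 4, 0): 0.8, (1, 3, 1): 0.3, (2, 2, 2): 0.1}

def wavefront(rho, theta, H, phi, coeffs):
    """Evaluate W(rho, theta; H, phi) = sum_klm W_klm H^k rho^l cos^m(phi - theta)."""
    return sum(w * H ** k * rho ** l * math.cos(phi - theta) ** m
               for (k, l, m), w in coeffs.items())

def aberration_order(k, l):
    """Hopkins' order of a W_klm term: (sum of the powers of H and rho) - 1."""
    return k + l - 1
```

With only W_040 present, the wavefront reduces to W_040 ρ⁴ independently of the field coordinate H, and `aberration_order(0, 4)` gives 3, i.e., third-order (Seidel) spherical aberration.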

4. The Space-Variant Point Spread Function

Wide-field optics is usually affected by various aberrations, such as distortion, which make the PSF of the system space-variant, i.e., the PSF changes across the field of view. It is therefore necessary to use a more complicated model of the system's PSF. Let us begin with a linear space-invariant optical imaging system, whose PSF can be expressed as
h(u, v) = \left| FT\{ P(x, y) \} \right|^2 = \left| FT\left\{ p(x, y) \exp\left( i \frac{2\pi}{\lambda} W(x, y) \right) \right\} \right|^2,
where p ( x , y ) defines the shape, the size, and the transmission of the exit pupil, and W ( x , y ) is the phase deviation of the wavefront from a reference sphere. The generalized exit pupil function is described in [16] as
P(x, y) = p(x, y) \exp\left( i \frac{2\pi}{\lambda} W(x, y) \right).
An imaging system is space-invariant if the image of a point source object changes only in position, not in functional form [41]. However, wide-field optical systems give images where the point source object changes both in position and in functional form; wide-field systems are therefore space-variant optical systems. For a space-variant system, we cannot use a convolution to express the relation between object and image. When computing the space-variant PSF (SVPSF), we have to use the diffraction integral [6,42]. The SVPSF can then be expressed as
h(u, v, \xi, \eta) = \frac{1}{\lambda^2 z_1 z_2} \iint p(x, y) \exp\left( i \frac{2\pi}{\lambda} W(x, y) \right) \exp\left\{ -j \frac{2\pi}{\lambda z_2} \left[ (u - M\xi)x + (v - M\eta)y \right] \right\} \mathrm{d}x\, \mathrm{d}y,
with the magnification of the system defined as
M = \frac{z_2}{z_1},
where z1 is the distance from the object plane to the principal plane and z2 is the distance from the principal plane to the image plane.
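For the space-invariant case, the PSF-from-pupil relation above can be sketched in a few lines of Python using NumPy's FFT. The grid size and wavelength below are illustrative assumptions, and the function name is our own.

```python
import numpy as np

def psf_from_wavefront(W, pupil_mask, wavelength):
    """Space-invariant PSF h = |FT{p(x,y) exp(i 2*pi/lambda W(x,y))}|^2.

    W          : wavefront deviation over the pupil grid (same units as wavelength)
    pupil_mask : aperture transmission p(x, y), e.g. a unit disc
    """
    P = pupil_mask * np.exp(1j * 2.0 * np.pi / wavelength * W)  # generalized pupil
    h = np.abs(np.fft.fftshift(np.fft.fft2(P))) ** 2            # intensity PSF
    return h / h.sum()                                          # normalize total energy to 1

# Illustrative use: circular aperture, aberration-free wavefront
N = 128
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
pupil = (x ** 2 + y ** 2 <= 1.0).astype(float)
psf = psf_from_wavefront(np.zeros((N, N)), pupil, wavelength=0.55e-6)
```

With a zero wavefront this yields the diffraction-limited (Airy-like) pattern; feeding in a Zernike-based W(x, y) instead produces the aberrated PSF for that field position.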

5. Proposed Method

The method proposed in this paper is based on modeling the PSF of the system, and comparing it with real image data without requiring measurements of the wavefront aberrations. The fitting of the model takes place in the image plane.
There are three conditions for acquiring the set of calibration images. The first criterion is that the FWHM of the image of the light source should be sufficient for PSF estimation; the experimental results show that an FWHM greater than 5 px is enough for the algorithm. The FWHM of the diffraction image of the source should be smaller than 1 px; otherwise, the diffraction function has to be added to the model of the system. The image of the 200 µm pinhole has an FWHM of 0.6 µm, which is less than the pixel size of our camera. The influence of the source shape has to be taken into account. The second criterion is the Signal-to-Noise Ratio (SNR), which should be greater than 20 dB; Section 6 contains a comparison of results obtained with different SNR. The third criterion is that the number of test images depends on the space variance of the system, i.e., a heavily distorted optical system will require more test images to satisfy this criterion. We have to choose the distance of two different PSFs in such a way that
RMSE_{IN}(f_i, f_{i+1}) = \sqrt{\frac{1}{M \times N} \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \left( f_i - f_{i+1} \right)^2} < 5\%,
where f_i and f_{i+1} are the images of the point light source in the image plane, and M and N are the sizes of the f_i and f_{i+1} images. In our example, a grid of 24 PSF positions in one quadrant of the optical system is sufficient to estimate the model. The wavefront is modeled using Zernike polynomials and known optical parameters. In addition to the input image data, we need to know camera sensor parameters, such as the resolution and the size of the sensor, and optical parameters, such as the focal length (crop factor, if included), the F-number and the diameter of the exit pupil. The obtained model of the PSF of the optical system is based on the assessment of differential metrics.
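The neighbour-distance criterion above can be checked numerically. The following sketch (function names are ours) assumes PSF images normalized to a unit peak so that the RMSE reads directly as a fraction.

```python
import numpy as np

def rmse_in(f_i, f_next):
    """Normalized RMSE between neighbouring PSF images.

    Images are assumed normalized to peak 1, so the result reads as a
    fraction; the grid spacing is acceptable when it stays below 0.05.
    """
    f_i = np.asarray(f_i, dtype=float)
    f_next = np.asarray(f_next, dtype=float)
    return np.sqrt(np.mean((f_i - f_next) ** 2))

def grid_fine_enough(psf_images, tol=0.05):
    """Check the < 5% neighbour criterion over an ordered list of PSF images."""
    return all(rmse_in(a, b) < tol for a, b in zip(psf_images, psf_images[1:]))
```

In practice, one would acquire the calibration grid, evaluate `grid_fine_enough`, and densify the grid only where a neighbouring pair exceeds the 5% threshold.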
We can describe the modeling of real UWFOV systems as a procedure with three main parts: the optical part, the image sensor, and the influence of the sensor noise. The space-variant impulse response h(u, v, ξ, η) can include the influence of the image sensor (e.g., pixel shape and sensitivity profile, noise or quantization error). We assume that the sensor has square pixels with uniform sensitivity. The PSF of the imaging system can then be described as
h(u, v, \xi, \eta) = h_{opt}(u, v, \xi, \eta) * h_{sen}(u, v),
where h_{sen}(u, v) is the PSF of the sensor, h_{opt}(u, v, ξ, η) is the PSF of the optical part, and the symbol * denotes convolution. h_{opt}(u, v, ξ, η) can be calculated from the system parameters and wavefront deformations using the Fourier transform, as described in Equation (13). The wavefront deformation is modeled using Zernike polynomials for the target position in the image plane (see Equation (7)). Ultra-wide-field images typically have an angle-dependent PSF; high orders of Zernike polynomials are therefore used in the approximation. The wavefront approximation is used up to the 8th order plus the 9th-order spherical aberration of the expansion function. The field dependence of the coefficients is formulated in [22]. In our work, the set of field-dependent coefficients was likewise expanded up to the 8th order plus the 9th-order spherical aberration. The table of the W_{klm} coefficients used is attached in Appendix A as Table A1.
Let us assume an imaging system with an unknown aberration model. With this system, we then obtain a grid of K test images of a point light source covering the entire FOV as
\{ f_i(u, v) \}_{i=1}^{K}.
Then, let
\{ f_d(\hat{u}, \hat{v}) \}_{d=1}^{L},
be the d-th realization of the model at the corresponding position in the image as the original object. Sub-matrix f_i is the image of the point light source in the image plane, while matrix f_d is the sub-image model computed by our method. The size of the sub-arrays must be sufficient to cover the whole neighborhood of the point light object at the positions \{ (u_{0i}, v_{0i}) \}_{i=1}^{K}.
As mentioned above, the symbols \hat{u}, \hat{v} are used for image plane coordinates in the normalized optimization space, and u, v are image plane coordinates. Note that f_i and f_d can be located at any point over the entire field of view. The step between the positions of the point light source has to be chosen to cover the observable difference of the acquired point, and the positions of the point light source in the field of view play an important role in the convergence efficiency. In our example, we divided the field of view uniformly every 10 degrees, horizontally and vertically, and thereby obtained a matrix of PSFs sufficient for a description of the model. A finer division of the FOV turned out to be unnecessary: it does not improve the accuracy of the model, and the chosen division satisfies the condition in Equation (15).
The algorithm uses two evaluation methods. The first method is based on calculating the root mean square error (RMSE) over the difference matrix in Equation (21) and optimizing the W_{klm} parameters and the Δ\hat{u}, Δ\hat{v} positions to decrease the residuals. The second method is based on subtracting the model matrix from the original to obtain the maximum difference (MAXDIFF); the residuals of this method then indicate the deviation from the original matrix. The first method, with the RMSE calculation, provides a better shape description; however, it can end up in local extremes. This can be mitigated by using the MAXDIFF calculation method, which suppresses the local extremes because it focuses on minimizing the maximum difference between the f_i and f_d object matrices, although its output can be a more general shape of the PSF.
Let us now introduce operators R M S E ( f i , f d ) and M A X D I F F ( f i , f d ) , which are used as descriptors of the differences between the original PSF and the model of PSF ( f i and f d ).
RMSE(f_i, f_d) = \sqrt{\frac{1}{M \times N} \sum_{\tilde{u}=0}^{N-1} \sum_{\tilde{v}=0}^{M-1} \left( f_i(u, v) - f_d(\hat{u}, \hat{v}) \right)^2}.
MAXDIFF(f_i, f_d) = \max_{(u, v)} \left( \left| f_i(u, v) - f_d(\hat{u}, \hat{v}) \right| \right).
M and N are the sizes of the f_i and f_d sub-arrays. The W_{klm} coefficients are optimized using the Nelder–Mead algorithm, which is described in detail in [43]. Let R_{f_i, f_d} be the optimizing operator; then
R_{f_i, f_d}(\hat{u}, \hat{v}) = \min_{W_{klm}, \Delta\hat{u}, \Delta\hat{v}} \left[ RMSE(f_i, f_d) \vee MAXDIFF(f_i, f_d) \right],
where W k l m , Δ u ^ , Δ v ^ are variables for minimizing the cost function. For multiparameter optimization tasks, the challenge is to find the global minimum of the function. Considering this issue, we can find an appropriate set of starting parameters of the fit algorithm by smart selection of on-axis points (PSFs) on which we will obtain appropriate W k l m , Δ u ^ , Δ v ^ variables, and we will then increase the precision of our model by selecting off-axis points and improving W k l m , Δ u ^ , Δ v ^ variables. The process of point selection is illustrated in Figure 4, and the steps in the algorithm are as follows:
1. Select a point placed on the optical axis (this point is considered free of SV aberrations).
2. Optimize the W_{klm} coefficients by minimizing the RMSE or the MAXDIFF metric. We then obtain
R_{f_i, f_d}(0, 0) \rightarrow W_{klm}^{(1)}, \Delta\hat{u}, \Delta\hat{v},
where W_{klm}^{(1)} is the first realization of the fit, and \Delta\hat{u}, \Delta\hat{v} represent the displacement of the object point in the image plane.
3. Place the next calibration point on the \hat{u}-axis, next to the first point. All W_{klm} coefficients from the first point's fit are used as the starting conditions in the next step of the fit.
4. Fit all the points along the \hat{u}-axis with increasing distance H, using the previous result as the starting condition for each next point:
R_{f_i, f_d}(\hat{u}, 0) \rightarrow W_{klm}^{(d)}, \Delta\hat{u}, \Delta\hat{v}.
5. Continue along the \hat{v}-axis with increasing distance H; this procedure gives the first view of the model:
R_{f_i, f_d}(0, \hat{v}) \rightarrow W_{klm}^{(d)}, \Delta\hat{u}, \Delta\hat{v},
where W_{klm}^{(d)} is the d-th realization of the fit.
6. After fitting all the on-axis points, fit all the off-axis points (the example in this paper uses 24 points):
R_{f_i, f_d}(\hat{u}, \hat{v}) \rightarrow W_{klm}, \Delta\hat{u}, \Delta\hat{v}.
7. After fitting all the points, evaluate the output W_{klm} coefficients, which describe the field dependency of the model.
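The fitting steps above can be sketched as follows. We use SciPy's Nelder–Mead implementation, and for brevity a shifted Gaussian spot stands in for the full Zernike/W_klm forward model; all function names and parameter values here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import minimize  # Nelder-Mead simplex, as in [43]

def rmse(f_i, f_d):
    return np.sqrt(np.mean((f_i - f_d) ** 2))

def maxdiff(f_i, f_d):
    return np.max(np.abs(f_i - f_d))

def model_psf(params, shape):
    """Stand-in forward model: a shifted Gaussian spot with width sigma
    replaces the full wavefront-based PSF model to keep the sketch short."""
    du, dv, sigma = params
    v, u = np.indices(shape)
    c = (shape[0] - 1) / 2.0
    return np.exp(-(((u - c - du) ** 2 + (v - c - dv) ** 2) / (2 * sigma ** 2)))

def fit_point(f_i, x0, metric=rmse):
    """One realization of the fit: minimize the chosen metric over the model
    parameters, warm-started from the previous point's result (x0), as in
    steps 3-6 of the text."""
    cost = lambda p: metric(f_i, model_psf(p, f_i.shape))
    res = minimize(cost, x0, method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-12})
    return res.x

# Coarse-to-fine pass: the on-axis point is fitted first, and its result
# warm-starts the fits of the neighbouring calibration points.
true_psf = model_psf([1.2, -0.7, 2.0], (21, 21))
p_axis = fit_point(true_psf, x0=[0.0, 0.0, 3.0])
```

The warm-starting mirrors the procedure in the text: each point's converged parameters seed the next point along the axis, which keeps the simplex near the global minimum as the field coordinate H grows.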
We verified experimentally that the median applied to the set of estimated W_{klm} coefficients provides a better output model than other statistical methods. Thus, we find the median of every W_{klm} coefficient over all fit realizations (the number of realizations is L) of the used points. This step eliminates extreme values of the W_{klm} coefficients, which can occur at some positions of the PSF due to convergence issues caused by the sampling of the image, or due to overfitting effects caused by high-order polynomials. Extreme values indicate that the algorithm found a local minimum of the cost function rather than the global minimum; the values of the W_{klm} coefficients are then significantly different from the coefficients obtained at the previous position. These variations are given by the goodness of fit.
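A minimal sketch of this median aggregation, with purely illustrative coefficient values:

```python
import numpy as np

# Each row is one fit realization's estimate of the same W_klm coefficient
# set (L realizations x C coefficients); the numbers are illustrative.
estimates = np.array([
    [0.81, 0.29, 0.11],
    [0.79, 0.31, 0.10],
    [0.80, 0.30, 0.09],
    [0.82, 0.28, 0.12],
    [2.50, 0.30, 0.10],   # outlier realization stuck in a local minimum
])

# The per-coefficient median over realizations discards such outliers,
# which the mean would not.
W_final = np.median(estimates, axis=0)
```

A mean over the same rows would be pulled toward the outlier in the first column, which is exactly the failure mode the median avoids here.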
The output set of W k l m coefficients then consists of values verified over the field.
As illustrated in Figure 4, the described procedure is repeated for every order (as defined in Equation (10)) of the W_{klm} coefficients. The estimates of the higher order coefficients (6th and 8th) start from the lower order coefficients that have already been estimated. If we want to describe our optical system with coefficients up to the 8th order, we first need to obtain the coefficients of the 4th order. It is then necessary to repeat the procedure from Equation (22) to Equation (25), taking the 4th order W_{klm} coefficients as the starting condition, to estimate the 6th order coefficients. Repeating the procedure again with the 6th order coefficients as the starting condition, we can finally calculate the 35 coefficients of the 8th order. The result is a set of coefficients (the number of coefficients depends on the order that is used) related to the optical system, with the field dependency described in Section 3.

6. Results

This section is divided into two subsections. Our first goal was to verify the method for estimating the W_{klm} coefficients. We therefore simulated an artificial imaging system and tested the convergence of the introduced algorithm in finding the proper W_{klm} parameters. The second subsection involves modeling real imaging systems, with results for different orders of the Zernike polynomials. The input conditions mentioned in Section 5, such as SNR > 20 dB (peak to noise) for all considered PSFs, the number of calibration images, and the FWHM, have to be taken into account before acquiring the calibration images and applying the PSF estimation algorithm.

6.1. Numerical Stability Verification

This section deals with verifying the stability of the optimization convergence. The test pattern used for verifying the functionality of the algorithm was an image obtained by generating random W_{klm} coefficients (i.e., random optical aberrations) and placing PSFs in locations covering the entire field of view (see Figure 5), assuming a rotationally symmetric imaging system. To verify the algorithm, we test the PSFs in the locations marked with red circles.
Table 1 summarizes the parameters of the simulated system. Table 2, Figure 6 and Figure 7 show successful verification of the proposed algorithm with the values of the MAXDIFF and RMSE operators, a fitted curve illustrating the trend of the results, and standard deviation error bars. We can see that the difference between the original and the model is of the order of thousandths. The difference is given only by quantization noise and the sampling of the test pattern. The sampling of the original PSF proves to be a serious issue: low resolution causes convergence problems in the optimization of the W_{klm} parameters, and the resolution of the image affects the accurate positioning of the PSF. With finer sampling of the pattern, the maximum of the PSF can be found precisely by calculating the center of mass, and this precise position \hat{u}, \hat{v} can then be used for calculating the PSF of the system.
The graphs mentioned above provide information about the accuracy of the model against the positioning of the PSF in the image. However, it is also useful to mention the accuracy of the model when the original image includes noise. For this reason, we performed a test where one PSF, in the center of the image, was affected by additional Gaussian noise, with the SNR in the image under test ranging from 28 dB down to 9 dB (see Figure 8). We can see that the algorithm works quite well from 28 dB to 20 dB, with MAXDIFF less than 3%. However, when the SNR is below 20 dB, the error increases rapidly. This is due to the fact that our model focuses on optical aberrations, not on modeling the noise.
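The noise test can be reproduced in outline as follows. The sketch (our own helper, assuming the peak-signal definition of SNR used in this section) adds white Gaussian noise at a prescribed SNR in dB.

```python
import numpy as np

def add_noise_snr(image, snr_db, rng=None):
    """Add white Gaussian noise so that the peak signal-to-noise ratio,
    20*log10(peak / sigma_noise), equals snr_db (peak SNR, as in Section 6)."""
    rng = np.random.default_rng(rng)
    sigma = image.max() / 10.0 ** (snr_db / 20.0)
    return image + rng.normal(0.0, sigma, image.shape), sigma

# Example: degrade a synthetic Gaussian PSF to 20 dB peak SNR
v, u = np.indices((33, 33))
clean = np.exp(-(((u - 16) ** 2 + (v - 16) ** 2) / (2 * 3.0 ** 2)))
noisy, sigma = add_noise_snr(clean, snr_db=20, rng=0)
```

Sweeping `snr_db` from 28 down to 9 and re-running the fit on `noisy` reproduces the kind of degradation curve shown in Figure 8.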
Figure 9 illustrates a direct comparison between the original PSF, the obtained model and the difference between them in relation to the RMSE and MAXDIFF operators. This verification process was performed to check whether the algorithm is able to find the model W_{klm} parameters closest to the W_{klm} coefficients used in generating the pattern illustrated in Figure 5.

6.2. Experimental Laboratory Results

The second part of the results describes the performance of our method when dealing with real data. The experimental images were obtained with the setup illustrated in Figure 10. A small white LED was used as a light source, together with a pinhole 200 μm in diameter. The image data were acquired with a G2-8300 astronomical camera [44], with a 3358 × 2536 pixel CCD sensor, corresponding to a size of 18.1 × 13.7 mm, which gives a pixel size of 5.39 µm. The camera was equipped with a Sigma 10 mm EX DC HSM fisheye lens [45], which is a diagonal-type fisheye lens, and was mounted on a 2D rotational stage. This configuration, together with an observation distance of 5 m, gives a PSF FWHM of around 6 pixels. Different positions of the light source covering the entire FOV were achieved by rotating the camera about both axes. The advantage of this approach is that the same distance is kept between the light source and the camera, assuming that a nodal mount has been used.
Table 3 summarizes the parameters of the system, which are the same as for the simulated system in Section 6.1; however, it is necessary to take into account that the optical aberrations are different. The graphs in Figure 11, Figure 12 and Figure 13 and Table 4 show the results of the fit with different orders of the W_{klm} coefficients, a fitted curve illustrating the trend of the results, and standard deviation error bars. Figure 14 shows a comparison of the results with other PSF modeling methods. It can be seen that the best result is obtained with coefficients up to the 4th order. With a higher order of the coefficients, we obtain slightly worse results, even for the PSFs placed on the optical axis, and with increasing image distance the situation seems to deteriorate, as the difference increases. One reason for this is the optimization convergence, which is worse for the larger number of W_{klm} coefficients involved in the 6th and 8th orders. Another explanation is provided by Figure 15 and Table 5. It can be seen that the absolute difference is slightly worse for higher orders of W_{klm} coefficients, but the shape of the model, i.e., the precision of the obtained shape of the PSF, is better. The image of the differences in Figure 15 shows that the maximum of the difference is concentrated in a small number of pixels in the central part of the PSF; the model therefore seems well described in terms of the shape. We can conclude that the higher order (6th and 8th) W_{klm} coefficients provide better parameters of the model in terms of the shape description and better localization of aberrations, but sometimes at the cost of a slightly worse absolute difference. Table 4 shows another interesting phenomenon: the results obtained by 8th order fitting are slightly better than the results obtained by 6th order fitting.
We observed that the 6th order model overfits the aberrations; the model adds aberrations which are actually not present. However, the 8th order model compensates these overfitting effects. Finally, we have to admit that it is not always necessary to use high-order Zernike polynomials, and our results illustrate that a simple solution including only the basic aberrations (up to the 4th order) provides accuracy within a few percent.
When we compare our method for estimating the field-dependent PSF with the other approaches mentioned in the introduction, we obtain similar results. The results from all the implemented approaches are illustrated in Figure 14. An approach based on the work of Piotrowski [1], labeled the Interpolated model, provides MAXDIFF results from 7.8% to 9.9% over the FOV. Another approach, based on Weddell's method [23] and labeled the Sectorized model, divides the image into smaller invariant parts and provides similar results to the Interpolated model. However, the Sectorized model may fail when the PSF model in one sector does not precisely fit all the PSFs inside the sector. This can be seen in Figure 14, where the results for the Sectorized model (marked with a black +) at H = 0.83, 0.88, and 0.92 vary from 8% to 17%. The last method included here fits the PSFs with a Gaussian function [46], which was chosen because it can be used as a model of a diffraction-limited optical system. This approach provides results from 14% to 22%; however, we had not expected very good results from this method, and the Gaussian model was used only as a complement to the space-variant methods. We also implemented an approach that uses the Moffat function [46]. However, the results provided by fitting the Moffat function range from 20% up to 40%. We therefore concluded that the Moffat model is inappropriate, and we did not include these results in Figure 14.
Our method includes detailed information about the shape of the PSF of the imaging system. Information about the PSF shape can be important for PSF fitting photometry in astronomy [47], for deconvolution algorithms in microscopy [48,49], and for other applications. Our model of the UWFOV system was successfully used by Fliegel [50] for a comparison of deconvolution methods. In this context, a direct estimate of field-dependent Zernike polynomials brings a new approach to the description of space-variant imaging systems. The method was also used for modeling the WILLIAM all-sky camera [51], where it had to cope with the presence of the Bayer mask in the system.

7. Conclusions

The proposed method for estimating the PSF of an optical system is novel in its direct estimate of the optical aberrations of the system. In our work, we have built on findings reported in previous papers devoted to a complex description of optical systems and their aberrations. Describing a system of this type is a difficult task, and UWFOV systems further complicate the situation, since they are heavily aberrated. This paper has summarized a complex mathematical approach and has provided an algorithm for modeling the PSF of space-variant optical systems. The proposed algorithm has been verified with simulated data and applied to real image data, showing a model error of around 5%. A comparison of images with different SNR provides MAXDIFF results of around 3% for images with SNR greater than 20 dB. The algorithm has been compared with other space-variant modeling methods and has been shown to be competitive, since our results are better than or equal to those provided by the other modeling methods, whose accuracy is around 8.5%. Our results demonstrate that the approach described here is also suitable for UWFOV systems. The results compare models of imaging systems with 4th, 6th, and 8th orders of Zernike polynomials and show the benefits of using different orders. The contribution of this paper to the description of aberrations is a method for obtaining the aberration coefficients of an unknown optical system and for using them in a model of the space-variant PSF. The algorithm was used to model the WILLIAM system [51], and the model provided by the approach described here was used in a comparison [50] of deconvolution methods. Further research should address pixel profile sensitivity and sampling methods aimed at improving the way in which the model describes the sensor PSF.

Acknowledgments

This work has been supported by Grant No. GA14-25251S “Nonlinear imaging systems with the spatially variant point spread function” of the Grant Agency of the Czech Republic and by the Grant Agency of the Czech Technical University in Prague, Grant No. SGS16/165/OHK3/2T/13 “Algorithms for Advanced Modeling and Analysis of Optical Systems with Variable Impulse Response”.

Author Contributions

Petr Janout conceived and designed the algorithm and the experiments; Petr Janout and Petr Skala performed the experiments; Jan Bednář contributed to the field dependency analysis; and Petr Janout and Petr Páta wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The relation between indices and coefficients of the Zernike polynomials.
Z_n^m(ρ, θ) | A_n^m expansion coefficient | Function name
Z_0^0(ρ, θ) | A_0^0 = (1/2)[(5/9)W_{0,10,0} + (2/5)W_{080} + (1/2)W_{060} + (2/3)W_{040} + W_{020} + H^6(W_{620} + (1/2)W_{622}) + H^4(W_{420} + (1/2)W_{422} + (2/3)W_{440} + (1/3)W_{442} + (1/4)W_{444}) + H^2(W_{220} + (1/2)W_{222} + (2/3)W_{240} + (1/3)W_{242} + (1/2)W_{260} + (1/4)W_{262})] + W_{800}H^8 + W_{600}H^6 + W_{400}H^4 + W_{200}H^2 + W_{000} | Piston
Z_1^{+1}(ρ, θ) | A_1^{+1} = [(16/35)W_{171} + (1/2)W_{151} + (2/3)W_{131} + W_{111} + H^6 W_{171} + H^4(W_{511} + (2/3)W_{531} + (1/2)W_{533}) + H^2(W_{311} + (1/2)W_{351} − (9/40)W_{533} + (2/3)W_{331} + (1/2)W_{333} + (3/5)W_{353})] H cos(φ) | Tilt
Z_1^{−1}(ρ, θ) | A_1^{−1} = [(16/35)W_{171} + (1/2)W_{151} + (2/3)W_{131} + W_{111} + H^6 W_{171} + H^4(W_{511} + (2/3)W_{531} + (1/2)W_{533}) + H^2(W_{311} + (1/2)W_{351} − (9/40)W_{533} + (2/3)W_{331} + (1/2)W_{333} + (3/5)W_{353})] H sin(φ) |
Z_2^0(ρ, θ) | A_2^0 = (1/2)[(20/21)W_{0,10,0} + (4/5)W_{080} + (9/10)W_{060} + W_{040} + W_{020} + H^6(W_{620} + (1/2)W_{622}) + H^4(W_{420} + (1/2)W_{422} + W_{440} + (1/2)W_{442} + (3/8)W_{444}) + H^2(W_{220} + (1/2)W_{222} + W_{240} + (1/2)W_{242} + (9/10)W_{260} + (9/20)W_{262})] | Focus
Z_2^{+2}(ρ, θ) | A_2^{+2} = (1/2)[(3/5)W_{262} + (3/4)W_{242} + W_{222} + H^4 W_{622} + H^2(W_{422} + (3/4)W_{444} + (3/4)W_{442})] H^2 cos(2φ) | Astigmatism
Z_2^{−2}(ρ, θ) | A_2^{−2} = (1/2)[(3/5)W_{262} + (3/4)W_{242} + W_{222} + H^4 W_{622} + H^2(W_{422} + (3/4)W_{444} + (3/4)W_{442})] H^2 sin(2φ) |
Z_3^{+1}(ρ, θ) | A_3^{+1} = (1/3)[(6/5)W_{171} + (6/5)W_{151} + W_{131} + H^4(W_{531} + (3/4)W_{533}) + H^2(W_{531} + (3/4)W_{333} + (6/5)W_{351} + (9/10)W_{353})] H cos(φ) | Coma
Z_3^{−1}(ρ, θ) | A_3^{−1} = (1/3)[(6/5)W_{171} + (6/5)W_{151} + W_{131} + H^4(W_{531} + (3/4)W_{533}) + H^2(W_{531} + (3/4)W_{333} + (6/5)W_{351} + (9/10)W_{353})] H sin(φ) |
Z_4^0(ρ, θ) | A_4^0 = (1/6)[(25/14)W_{0,10,0} + (12/7)W_{080} + (3/2)W_{060} + W_{040} + H^4(W_{440} + (1/2)W_{442} + (3/8)W_{444}) + H^2(W_{240} + (1/2)W_{242} + (3/2)W_{260} + (3/4)W_{262})] | Spherical
Z_3^{+3}(ρ, θ) | A_3^{+3} = (1/4)(W_{333} + (1/20)W_{353} + H^2 W_{533}) H^3 cos(3φ) | Elliptical Coma
Z_3^{−3}(ρ, θ) | A_3^{−3} = (1/4)(W_{333} + (1/20)W_{353} + H^2 W_{533}) H^3 sin(3φ) |
Z_4^{+2}(ρ, θ) | A_4^{+2} = (1/4)[(1/2)W_{242} + (2/3)W_{262} + (1/2)H^2(W_{444} + W_{442})] H^2 cos(2φ) | Oblique Spherical
Z_4^{−2}(ρ, θ) | A_4^{−2} = (1/4)[(1/2)W_{242} + (2/3)W_{262} + (1/2)H^2(W_{444} + W_{442})] H^2 sin(2φ) |
Z_5^{+1}(ρ, θ) | A_5^{+1} = (1/10)[(12/7)W_{171} + W_{151} + H^2(W_{351} + (3/4)W_{353})] H cos(φ) | 5th Coma
Z_5^{−1}(ρ, θ) | A_5^{−1} = (1/10)[(12/7)W_{171} + W_{151} + H^2(W_{351} + (3/4)W_{353})] H sin(φ) |
Z_6^0(ρ, θ) | A_6^0 = (1/20)[(25/9)W_{0,10,0} + 2W_{080} + W_{060} + H^2(W_{260} + (1/2)W_{262})] | 5th Spherical
Z_5^{+3}(ρ, θ) | A_5^{+3} = (1/20)W_{353} H^3 cos(3φ) |
Z_5^{−3}(ρ, θ) | A_5^{−3} = (1/20)W_{353} H^3 sin(3φ) |
Z_4^{+4}(ρ, θ) | A_4^{+4} = (1/8)W_{444} H^4 cos(4φ) |
Z_4^{−4}(ρ, θ) | A_4^{−4} = (1/8)W_{444} H^4 sin(4φ) |
Z_7^{+1}(ρ, θ) | A_7^{+1} = (1/35)W_{171} H cos(φ) |
Z_7^{−1}(ρ, θ) | A_7^{−1} = (1/35)W_{171} H sin(φ) |
Z_6^{+2}(ρ, θ) | A_6^{+2} = (1/30)W_{262} H^2 cos(2φ) |
Z_6^{−2}(ρ, θ) | A_6^{−2} = (1/30)W_{262} H^2 sin(2φ) |
Z_8^0(ρ, θ) | A_8^0 = (1/28)W_{0,10,0} + (1/70)W_{080} | 7th Spherical
Z_10^0(ρ, θ) | A_10^0 = (1/252)W_{0,10,0} | 9th Spherical
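The lower rows of Table A1 reduce to single-term closed forms, which makes their field dependence easy to evaluate numerically. The sketch below (Python; the W_klm values are illustrative placeholders, not values from the paper) evaluates a few of these expansion coefficients as functions of the normalized image height H and azimuth φ:

```python
import numpy as np

# Illustrative placeholder values for the wavefront coefficients W_klm;
# in the paper these are the unknowns estimated by the fitting algorithm.
W = {"W444": 0.08, "W353": 0.05, "W0_10_0": 0.02}

def A_4_p4(H, phi):
    # Quadrifoil-like term of Table A1: (1/8) W444 H^4 cos(4 phi).
    return W["W444"] / 8 * H**4 * np.cos(4 * phi)

def A_5_p3(H, phi):
    # (1/20) W353 H^3 cos(3 phi).
    return W["W353"] / 20 * H**3 * np.cos(3 * phi)

def A_10_0(H, phi):
    # 9th-order spherical: (1/252) W(0,10,0) -- independent of the field.
    return W["W0_10_0"] / 252

# The H-dependent terms vanish on the optical axis and grow toward the
# edge of the FOV, while the spherical term stays constant over the field.
for H in (0.0, 0.33, 0.83):
    print(H, A_4_p4(H, 0.0), A_5_p3(H, 0.0), A_10_0(H, 0.0))
```

This field behavior is exactly why UWFOV systems need a space-variant PSF model: the expansion coefficients, and hence the PSF shape, change strongly with image height.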

References

  1. Piotrowski, L.W.; Batsch, T.; Czyrkowski, H.; Cwiok, M.; Dabrowski, R.; Kasprowicz, G.; Majcher, A.; Majczyna, A.; Malek, K.; Mankiewicz, L.; et al. PSF modelling for very wide-field CCD astronomy. Astron. Astrophys. 2013, 551, A119. [Google Scholar] [CrossRef]
  2. Zheng, G.; Ou, X.; Horstmeyer, R.; Yang, C. Characterization of spatially varying aberrations for wide field-of-view microscopy. Opt. Express 2013, 21, 15131–15143. [Google Scholar] [CrossRef] [PubMed]
  3. Zheng, G.; Horstmeyer, R.; Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 2013, 7, 739–745. [Google Scholar] [CrossRef] [PubMed]
  4. Lane, R.G.; Tallon, M. Wave-front reconstruction using a Shack–Hartmann sensor. Appl. Opt. 1992, 31, 6902–6908. [Google Scholar] [CrossRef] [PubMed]
  5. Ito, T.; Fujii, Y.; Ohta, N.; Saitoh, S.; Matsuura, T.; Yamamoto, T. Measurement of Space Variant PSF for Restoring Degraded Images by Security Cameras. In Proceedings of the 2006 SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 2542–2545.
  6. Řeřábek, M.; Páta, P. Modeling of the widefield space variant security systems. In Proceedings of the 42nd Annual IEEE International Carnahan Conference on Security Technology (ICCST 2008), Prague, Czech Republic, 13–16 October 2008; pp. 121–125.
  7. Ito, T.; Hoshino, H.; Fujii, Y.; Ohta, N. Reconstruction of face image from security camera based on a measurement of space variant PSF. In Proceedings of the 2009 ICCAS-SICE, Fukuoka, Japan, 18–21 August 2009; pp. 2301–2304.
  8. Pojmanski, G. The All Sky Automated Survey. Acta Astron. 1997, 47, 467–481. [Google Scholar]
  9. Burd, A.; Cwiok, M.; Czyrkowski, H.; Dabrowski, R.; Dominik, W.; Grajda, M.; Husejko, M.; Jegier, M.; Kalicki, A.; Kasprowicz, G.; et al. Pi of the Sky—All-sky, real-time search for fast optical transients. New Astron. 2005, 10, 409–416. [Google Scholar] [CrossRef]
  10. Pickering, T.E. The MMT All-Sky Camera. In Ground-Based and Airborne Telescopes; Stepp, L.M., Ed.; Proc. SPIE: Washington, DC, USA, 2006; Volume 6267, p. 62671A. [Google Scholar]
  11. Martin, B.; Petr, P. Colour transformations and K-means segmentation for automatic cloud detection. Meteorol. Z. 2015, 24, 503–509. [Google Scholar]
  12. Anisimova, E.; Janout, P.; Blažek, M.; Bednář, M.; Fliegel, K.; Páta, P.; Vítek, S.; Švihlík, J. Analysis of images obtained from space-variant astronomical imaging systems. In SPIE Proceedings Vol. 8856: Applications of Digital Image Processing XXXVI; SPIE: Washington, DC, USA, 2013; p. 8856. [Google Scholar]
  13. Řeřábek, M. Advanced Processing of Images Obtained from Wide-field Astronomical Optical Systems. Acta Polytech. 2011, 51, 90–96. [Google Scholar]
  14. Řeřábek, M. Space Variant PSF Deconvolution of Wide-Field Astronomical Images. Acta Polytech. 2008, 48, 79–84. [Google Scholar]
  15. Trigo-Rodriguez, J.M.; Madiedo, J.M.; Gural, P.S.; Castro-Tirado, A.J.; Llorca, J.; Fabregat, J.; Vítek, S.; Pujols, P. Determination of Meteoroid Orbits and Spatial Fluxes by Using High-Resolution All-Sky CCD Cameras. Earth Moon Planet 2008, 102, 231–240. [Google Scholar] [CrossRef]
  16. Born, M.; Wolf, E. Principles of Optics, 7th ed.; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  17. Zernike, F. von Beugungstheorie des schneidenver-fahrens und seiner verbesserten form, der phasenkontrastmethode. Physica 1934, 1, 689–704. [Google Scholar] [CrossRef]
  18. Noll, R.J. Zernike polynomials and atmospheric turbulence. J. Opt. Soc. Am. 1976, 66, 207–211. [Google Scholar] [CrossRef]
  19. Hopkins, H.H. Image formation by a general optical system 1: General theory. Appl. Opt. 1985, 24, 2491–2505. [Google Scholar] [CrossRef] [PubMed]
  20. Sasián, J. Theory of sixth-order wave aberrations. Appl. Opt. 2010, 49, 6502–6503. [Google Scholar] [CrossRef]
  21. Ye, J.; Gao, Z.; Wang, S.; Liu, X.; Yang, Z.; Zhang, C. Bi-Zernike polynomials for wavefront aberration function in rotationally symmetric optical systems. In Renewable Energy and the Environment, OSA Technical Digest (online); OSA: Washington, DC, USA, 2013; p. JM3A.6. [Google Scholar]
  22. Gray, R.W.; Dunn, C.; Thompson, K.P.; Rolland, J.P. An analytic expression for the field dependence of Zernike polynomials in rotationally symmetric optical systems. Opt. Express 2012, 20, 16436–16449. [Google Scholar] [CrossRef]
  23. Weddell, S.J.; Webb, R.Y. The restoration of extended astronomical images using the spatially-variant point spread function. In Proceedings of the 23rd International Conference on Image and Vision Computing New Zealand (IVCNZ), Christchurch, New Zealand, 26–28 November 2008; pp. 1–6.
  24. Janout, P.; Páta, P.; Bednář, J.; Anisimova, E.; Blažek, M.; Skala, P. Stellar objects identification using wide-field camera. In SPIE Proceedings Vol. 9450: Photonics, Devices, and Systems VI; Tománek, P., Senderáková, D., Páta, P., Eds.; SPIE: Washington, DC, USA, 2015; p. 94501I. [Google Scholar]
  25. Paczyński, B. Astronomy with Small Telescopes. Publ. Astron. Soc. Pac. 2006, 118, 1621–1625. [Google Scholar] [CrossRef]
  26. Vítek, S. Modeling of Astronomical Images. Balt. Astron. 2009, 18, 387–391. [Google Scholar]
  27. Hasan, S.Y.; Shaker, A.S. Study of Zernike Polynomials of an Elliptical Aperture Obscured with an Elliptical Obscuration. Available online: http://www.ncbi.nlm.nih.gov/pubmed/23262546 (accessed on 16 March 2016).
  28. Thompson, K.P.; Fuerschbach, K.; Rolland, J.P. An analytic expression for the field dependence of FRINGE Zernike polynomial coefficients in optical systems that are rotationally nonsymmetric. In SPIE Proceedings Vol. 7849: Optical Design and Testing IV; Wang, Y., Bentley, J., Du, C., Tatsuno, K., Urbach, H.P., Eds.; SPIE: Washington, DC, USA, 2010; p. 784906. [Google Scholar]
  29. Maksimenka, R.; Nuernberger, P.; Lee, K.F.; Bonvalet, A.; Milkiewicz, J.; Barta, C.; Klima, M.; Oksenhendler, T.; Tournois, P.; Kaplan, D.; et al. Direct mid-infrared femtosecond pulse shaping with a calomel acousto-optic programmable dispersive filter. Opt. Lett. 2010, 35, 3565–3567. [Google Scholar] [CrossRef] [PubMed]
  30. Pata, P.; Klima, M.; Bednar, J.; Janout, P.; Barta, C.; Hasal, R.; Maresi, L.; Grabarnik, S. OFT sectorization approach to analysis of optical scattering in mercurous chloride single crystals. Opt. Express 2015, 23, 21509–21526. [Google Scholar] [CrossRef] [PubMed]
  31. Yokozeki, S.; Ohnishi, K. Spherical Aberration Measurement with Shearing Interferometer Using Fourier Imaging and Moiré Method. Appl. Opt. 1975, 14, 623–627. [Google Scholar] [CrossRef] [PubMed]
  32. Sung, J.; Pitchumani, M.; Johnson, E.G. Aberration measurement of photolithographic lenses by use of hybrid diffractive photomasks. Appl. Opt. 2003, 42, 1987–1995. [Google Scholar] [CrossRef] [PubMed]
  33. Chung, J.; Kim, J.; Ou, X.; Horstmeyer, R.; Yang, C. Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography. Biomed. Opt. Express 2016, 7, 352–368. [Google Scholar] [CrossRef] [PubMed]
  34. Qi, Y.L.; Cheng, Z.Y. Linear Space-Variant Model and Restoration Algorithm of Imaging System. Appl. Mech. Mater. 2014, 608–609, 559–567. [Google Scholar] [CrossRef]
  35. Heide, F.; Rouf, M.; Hullin, M.B.; Labitzke, B.; Heidrich, W.; Kolb, A. High-quality Computational Imaging through Simple Lenses. ACM Trans. Graph. 2013, 32. [Google Scholar] [CrossRef]
  36. Shack, R.; Platt, B. Production and Use of a Lenticular Hartmann Screen. In Spring Meeting of the Optical Society of America; Chairman, D., Ed.; Optical Society of America: Washington, DC, USA, 1971. [Google Scholar]
  37. Navarro, R.; Arines, J.; Rivera, R. Direct and inverse discrete Zernike transform. Opt. Express 2009, 17, 24269–24281. [Google Scholar] [CrossRef] [PubMed]
  38. Sasián, J. Introduction to Aberrations in Optical Imaging Systems; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  39. Malacara, D. (Ed.) Optical Shop Testing, 3rd ed.; Wiley Series in Pure and Applied Optics; Wiley-Interscience: Hoboken, NJ, USA, 2007.
  40. Mahajan, V.N.; Díaz, J.A. Imaging characteristics of Zernike and annular polynomial aberrations. Appl. Opt. 2013, 52, 2062–2074. [Google Scholar] [CrossRef] [PubMed]
  41. Weddell, S.J.; Webb, R.Y. Reservoir Computing for Prediction of the Spatially-Variant Point Spread Function. IEEE J. Sel. Top. Signal Process. 2008, 2, 624–634. [Google Scholar] [CrossRef]
  42. Goodman, J.W. Introduction to Fourier Optics; Roberts and Company Publishers: Englewood, CO, USA, 2005. [Google Scholar]
  43. Nelder, J.A.; Mead, R. A Simplex Method for Function Minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  44. CCD kamera G2-8300. Available online: http://www.gxccd.com/art?id=374&lang=405 (accessed on 7 July 2016).
  45. 10 mm F2.8 EX DC HSM Fisheye. Available online: http://www.sigmaphoto.com/10mm-f2-8-ex-dc-hsm-fisheye (accessed on 7 July 2016).
  46. Trujillo, I.; Aguerri, A.; Cepa, J.; Gutierrez, C.M. The effects of seeing on Sersic profiles. II. The Moffat PSF. Mon. Not. R. Astron. Soc. 2001, 328, 977–985. [Google Scholar] [CrossRef]
  47. Vítek, S.; Blažek, M. Notes on DSLR Photometry. Astron. Soc. India Conf. Ser. 2012, 7, 231. [Google Scholar]
  48. Lukeš, T.; Křížek, P.; Švindrych, Z.; Benda, J.; Ovesný, M.; Fliegel, K.; Klíma, M.; Hagen, G.M. Three-dimensional super-resolution structured illumination microscopy with maximum a posteriori probability image estimation. Opt. Express 2014, 22, 29805–29817. [Google Scholar] [CrossRef] [PubMed]
  49. Lukeš, T.; Hagen, G.M.; Křížek, P.; Švindrych, Z.; Fliegel, K.; Klíma, M. Comparison of Image Reconstruction Methods for Structured Illumination Microscopy. In SPIE Proceedings Vol. 9129: Biophotonics—Photonic Solutions for Better Health Care IV; Popp, J., Tuchin, V.V., Matthews, D.L., Pavone, F.S., Garside, P., Eds.; SPIE: Washington, DC, USA, 2014; p. 91293J. [Google Scholar]
  50. Fliegel, K.; Janout, P.; Bednář, J.; Krasula, L.; Vítek, S.; Švihlík, J.; Páta, P. Performance Evaluation of Image Deconvolution Techniques in Space-Variant Astronomical Imaging Systems with Nonlinearities. In SPIE Proceedings Vol. 9599: Applications of Digital Image Processing XXXVIII; Tescher, A.G., Ed.; SPIE: Washington, DC, USA, 2015; p. 959927. [Google Scholar]
  51. Janout, P.; Páta, P.; Skala, P.; Fliegel, K.; Vítek, S.; Bednář, J. Application of Field Dependent Polynomial Model. In SPIE Proceedings Vol. 9971: Applications of Digital Image Processing XXXIX; Tescher, A.G., Ed.; SPIE: Washington, DC, USA, 2016; p. 99710F. [Google Scholar]
Figure 1. The difference between an ideal spherical wavefront and an aberrated wavefront.
Figure 2. Graphical representation of the adopted coordinate systems: (a) all planes; (b) the normalized exit pupil; (c) the notation of the normalized image plane.
Figure 3. The set of Zernike polynomials used here. Table A1 includes the field dependency of these polynomials.
Figure 4. Diagram of the proposed algorithm. Note that the W_klm coefficients from the 4th order estimate are used as a start condition for the 6th order estimate, and the 6th order W_klm coefficients are used as a start condition for the estimate of the 8th order W_klm coefficients.
Figure 5. Field of simulated PSFs with randomly generated W_klm coefficients. The simulated imaging system is considered rotationally symmetric. The points inside the red circle were used for verifying the algorithm.
Figure 6. Verification results for all points used in the verification; their positions are illustrated in Figure 5. MAXDIFF as a function of the normalized image distance.
Figure 7. Verification results for all points used in the verification; their positions are illustrated in Figure 5. RMSE as a function of the normalized image distance.
Figure 8. The dependence of MAXDIFF on SNR. When the SNR is lower than 20 dB, the error of the model increases rapidly.
Figure 9. The result of fitting. A comparison between the original PSF and the PSF model of the system. The graph on the right shows the intensity of the difference between the original PSF and the model; it indicates a very good result for the goodness of fit.
Figure 10. The experimental setup, consisting of a camera stage and a light source with a small aperture.
Figure 11. Experimental results for the MAXDIFF difference up to the 4th order. The graph contains the results from Table 4 and ten further points according to the normalized image distance H. The points not listed in Table 4 are selected similarly to those illustrated in Figure 5; their angle ϕ differs from zero, and they are placed at positions covering the entire FOV.
Figure 12. Experimental results for the MAXDIFF difference up to the 6th order. The graph contains the results from Table 4 and ten further points according to the normalized image distance H. The points not listed in Table 4 are selected similarly to those illustrated in Figure 5; their angle ϕ differs from zero, and they are placed at positions covering the entire FOV.
Figure 13. Experimental results for the MAXDIFF difference up to the 8th order. The graph contains the results from Table 4 and ten further points according to the normalized image distance H. The points not listed in Table 4 are selected similarly to those illustrated in Figure 5; their angle ϕ differs from zero, and they are placed at positions covering the entire FOV.
Figure 14. A comparison of different models of a single optical system. All models were fitted to the same set of PSFs.
Figure 15. Results of the fitting. The PSF model is shown for different orders of polynomials. The object was placed at 20° with respect to the optical axis (H = 0.33); in Table 4 this PSF is marked by the blue column. The first row shows the 4th order results, the second row the 6th order results, and the third row the 8th order results.
Table 1. The sensor size, the resolution and the optical parameters of the simulated artificial imaging system.
Resolution: 3358 × 2536 px
Sensor size: 18.1 × 13.7 mm
Pixel size: 5.39 µm
Lens focal length: 10 mm
FOV: 110°
Table 2. Selected on-axis point results of the verification of the algorithm, where the normalized distance H ranges from 0 to 0.83 and ϕ is equal to zero. The image distance H is normalized according to the sensor and is related to an FOV of 110°; all optical parameters are summarized in Table 1. Differences between the original PSF and the model are on the order of thousandths; thus, the proposed algorithm can find the exact W_klm coefficients used for generating the test pattern illustrated in Figure 5.
Metrics       | Normalized image distance H (-)
              | 0    | 0.17 | 0.33 | 0.50 | 0.67 | 0.83
RMSE (10^−5)  | 8.2  | 7.3  | 8.8  | 16   | 57   | 270
MAXDIFF (‰)   | 0.14 | 0.13 | 0.16 | 0.36 | 2.1  | 6.4
Table 3. The sensor size, the resolution and the optical parameters of the experimental imaging system used for acquiring the image dataset.
Resolution: 3358 × 2536 px
Sensor size: 18.1 × 13.7 mm
Pixel size: 5.39 µm
Lens focal length: 10 mm
FOV: 110°
Table 4. Selected on-axis point results for the MAXDIFF difference between the original and the estimated model, where H ranges from 0 to 0.83 and ϕ is equal to zero. The MAXDIFF differences of the polynomial orders are given as percentages. Highlighted columns contain results used later in the comparison.
Metrics   | Normalized image distance H (-)
          | 0   | 0.17 | 0.33 | 0.50 | 0.67 | 0.83
4th order | 4.6 | 5.1  | 5.3  | 7.8  | 7.5  | 9.4
6th order | 5.4 | 7.7  | 6.5  | 9.4  | 7.7  | 12.5
8th order | 5   | 6.4  | 6.1  | 7.7  | 7.6  | 10
Table 5. A direct comparison of results estimated by 4th, 6th and 8th order polynomials. This table is related to Figure 15. The results are calculated for one position of the light source at H = 0.33. The total flux difference was calculated for enumerating the overall intensity difference between the original PSF and the model.
Metrics                   | 4th Order | 6th Order | 8th Order
RMSE (-)                  | 0.032     | 0.039     | 0.036
MAXDIFF (%)               | 5.3       | 6.5       | 6.1
Total flux difference (‰) | 0.31      | 0.35      | 0.37
