Article

Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns

Jiawen Weng, Weishuai Zhou, Simin Ma, Pan Qi and Jingang Zhong

1. Department of Applied Physics, South China Agricultural University, Guangzhou 510642, China
2. Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China
3. Department of Electronics Engineering, Guangdong Communication Polytechnic, Guangzhou 510650, China
4. Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510650, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2021, 21(1), 209; https://doi.org/10.3390/s21010209
Submission received: 29 November 2020 / Revised: 26 December 2020 / Accepted: 27 December 2020 / Published: 31 December 2020
(This article belongs to the Section Optical Sensors)

Abstract: Existing lens correction methods deal with distortion correction through one or more specific image distortion models. However, distortion determination may fail when an unsuitable model is used, so model-based methods have inherent drawbacks. A model-free lens distortion correction based on the phase analysis of fringe-patterns is proposed in this paper. Firstly, the mathematical relationship between the distortion displacement and the modulated phase of a sinusoidal fringe-pattern is established in theory. Through phase demodulation analysis of the fringe-pattern, the distortion displacement map can be determined point by point for the whole distorted image, so image correction is achieved from the distortion displacement map in a model-free way. Furthermore, the distortion center, which is important for obtaining an optimal result, is measured automatically from the instantaneous frequency distribution according to the character of the distortion. Numerical simulations and experiments with a wide-angle lens are carried out to validate the method.

1. Introduction

Camera lenses suffer from optical aberration; thus, nonlinear distortion is introduced into the captured image, especially for lenses with a large field of view (FOV). Distortion correction is therefore a significant problem in the analysis of digital images. Accurate lens distortion correction is especially crucial for any computer vision task [1,2,3,4] that involves quantitative measurements, such as geometric position determination, dimensional measurement, and image recognition. Existing methods for distortion correction can be divided into two main categories: traditional vision measurement methods and learning-based methods.
Traditional vision measurement methods fall into three main types. The first relies on a known measuring pattern [5,6,7], such as straight lines, vanishing points, or a planar pattern, and estimates the parameters of the un-distortion function from the known pattern to achieve correction. It is simple and effective, but the distortion center in the nonlinear optimization can lead to instabilities [8]. The second is the multiple-view correction method [9,10,11], which uses the correspondences between points in different images to measure the lens distortion parameters. It achieves auto-correction without any special pattern but requires a set of images captured from different views. The third is the plumb-line method [12,13,14,15], which estimates the distortion parameters from distorted circular arcs. Accurate circular-arc detection is crucial for the robustness and flexibility of this kind of method, so human supervision and robust arc-detection algorithms have been developed. All the above-mentioned methods rely on specific distortion models, such as the commonly used even-order polynomial model proposed by Brown [16], the division model proposed by Fitzgibbon [17], and the fisheye lens model [18]. The whole image is corrected by analyzing several characteristic points or lines to find the distortion parameters, which generalizes poorly to other distortion models. Furthermore, it should be noted that all these distortion models assume ideal circular symmetry.
Learning-based methods can be divided into two kinds. The first is the parameter-based method [19,20,21], which estimates the distortion parameters using convolutional neural networks (CNNs) in terms of the single-parameter division model or the fisheye lens model. It can provide more accurate distortion parameter estimation. However, the networks are still trained on synthesized distorted image datasets derived from specific distortion models, which causes inferior results for other types of distortion models. Recently, some deep-learning distortion correction methods that demand no specific distortion model have been proposed. Liao et al. [22] introduced model-free distortion rectification by introducing a distortion distribution map. Li et al. [23] proposed blind geometric distortion correction using the displacement field between distorted and corrected images. For these methods, different types of distortion models are involved in the synthesized training dataset, and the distortion distribution map or displacement field is obtained from these distortion models. This means that the distortion correction is still built on distortion models, and the employed distortion models are circularly symmetric. However, none of the existing mathematical distortion models can work well for all real lenses with fabrication artifacts.
In addition, there are some model-free distortion correction methods. Munjy [24] and Tecklenburg et al. [25] proposed correction models based on finite elements, where the remaining errors in the sensor space are corrected by finite-element interpolation. However, the interpolation effect is reduced when the measured image points are insufficient. Grompone von Gioi et al. [26] designed a model-free distortion correction method, but it involves considerable computation and is time consuming, because an optimization algorithm and loop validation are used to reach high precision.
Fringe-pattern phase analysis [27,28,29], owing to its highly automated full-field analysis, is widely used in various optical measurement technologies, such as interferometry, digital holography, and moiré fringe measurement. It has also been used for lens distortion determination. Bräuer-Burchardt et al. [30] achieved lens distortion correction by phasogrammetry; the experimental system and image processing are somewhat complicated because both the projector lens distortion and the camera lens distortion are involved. Li et al. [31] eliminated the projector lens distortion by employing the Levenberg–Marquardt algorithm to improve the measurement accuracy of fringe projection profilometry, where the lens distortion is described by a polynomial distortion model. We previously employed the phase analysis of a one-dimensional measuring fringe-pattern and polynomial fitting for simple lens distortion elimination, assuming that the distortion is ideally circularly symmetric [32].
In this paper, a model-free lens distortion correction based on the phase analysis of fringe-patterns is proposed. Unlike the method in [32], the proposed method does not rely on a circular symmetry assumption. To avoid using a distortion model and the circular symmetry assumption, the proposed method uses two sets of directionally orthogonal fringe-patterns to obtain the distortion displacement of all points in the distorted image. Moreover, the distortion center may be displaced from the image center by many factors, such as an offset of the lens center from the sensor center of the camera, a slight tilt of the sensor plane with respect to the lens, or a misalignment of the individual components of a compound lens; it should therefore be measured for the specific setup instead of being assumed to be the image center. In the proposed method, the distortion center is measured automatically from the instantaneous frequency distribution according to the character of the distortion. The theoretical description of distortion measurement based on the phase analysis of sinusoidal fringe-patterns is introduced, and numerical simulation and experimental results show the validity and advantages of the proposed method.

2. Principle of Model-Free Lens Distortion Correction Method

2.1. Lens Distortion

A simple grid chart of negative (barrel) distortion, as shown in Figure 1, is employed to present the theoretical description of lens distortion simply and clearly. The blue lines and the red lines correspond to the lines before and after distortion, respectively. The undistorted point $P$ moves to the distorted point $Q$ after distortion. According to the geometry shown in Figure 1, $\overline{PQ}$ is the distortion displacement $\Delta r$ of the distorted point $Q$, with $\Delta r > 0$ for barrel distortion and $\Delta r < 0$ for pincushion distortion. So, the point $Q(x_Q, y_Q)$ should be corrected to the point $P(x_P, y_P)$, which satisfies:

$$|\Delta r| = \sqrt{\Delta x^2 + \Delta y^2} \qquad (1)$$

with $\Delta x = x_P - x_Q$ and $\Delta y = y_P - y_Q$. For image distortion correction, the most important task is to determine the distortion displacement $\Delta r$, i.e., $\Delta x$ and $\Delta y$, at each point. The symbol denotations are given in the nomenclature in Table 1.

2.2. Phase Analysis of Fringe-Pattern Measuring Template

Two sets of sinusoidal fringe-patterns parallel to the y-axis and x-axis of the display coordinate system, i.e., longitudinal and transverse fringe-patterns, are employed as measuring templates for phase demodulation analysis to obtain the distortion displacements $\Delta x$ and $\Delta y$, respectively. Figure 2 shows the sinusoidal fringe-patterns before and after barrel distortion. The undistorted longitudinal and transverse fringe-patterns are expressed as:

$$\begin{cases} I_x^u(x,y) = A + B\cos[2\pi f_{xo}x + \phi_{xo}] \\ I_y^u(x,y) = A + B\cos[2\pi f_{yo}y + \phi_{yo}] \end{cases} \qquad (2)$$

The corresponding distorted fringe-patterns are:

$$\begin{cases} I_x^d(x,y) = A + B\cos[2\pi f_{xo}x + \phi_x(x,y) + \phi_{xo}] \\ I_y^d(x,y) = A + B\cos[2\pi f_{yo}y + \phi_y(x,y) + \phi_{yo}] \end{cases} \qquad (3)$$

where $A$ is the background intensity; $B/A$ is the visibility of the fringe-pattern; $f_{xo}$ and $f_{yo}$ are the fundamental spatial frequencies of the longitudinal and transverse sinusoidal fringe-patterns, respectively; $\phi_x(x,y)$ and $\phi_y(x,y)$ are the modulated phases caused by the lens distortion; and $\phi_{xo}$ and $\phi_{yo}$ are the initial phases. By analysis of the fringe-patterns, the modulated phase can be obtained point by point, so the distortion displacements $\Delta x$ and $\Delta y$ at each point are determined.
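As an illustration, such measuring templates can be generated with a few lines of NumPy. This is a minimal sketch: the function name, image size, and intensity values are our own illustrative choices, with only the 16-pixel period and four-step phase shifts taken from the simulation in Section 3.

```python
import numpy as np

def make_templates(height=512, width=512, period=16, n_steps=4):
    """Generate n_steps phase-shifted longitudinal (stripes parallel to the
    y-axis) and transverse (stripes parallel to the x-axis) sinusoidal
    fringe-patterns: I = A + B*cos(2*pi*f0*coord + shift)."""
    y, x = np.mgrid[0:height, 0:width]
    f0 = 1.0 / period                                   # fundamental spatial frequency
    A, B = 0.5, 0.5                                     # background and modulation (illustrative)
    shifts = 2 * np.pi * np.arange(n_steps) / n_steps   # 0, pi/2, pi, 3pi/2
    longitudinal = np.stack([A + B * np.cos(2 * np.pi * f0 * x + s) for s in shifts])
    transverse = np.stack([A + B * np.cos(2 * np.pi * f0 * y + s) for s in shifts])
    return longitudinal, transverse
```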

2.2.1. Phase of Distorted Fringe-Pattens

The phase-shifting method [33], which provides high-precision point-to-point phase retrieval from fringe-patterns owing to its excellent spatial localization, is employed for phase demodulation. The intensity distributions of the longitudinal and transverse sinusoidal fringe-patterns after distortion are:

$$\begin{cases} I_{x,n}^d(x,y) = A + B\cos\left[\varphi_x(x,y) + \dfrac{2\pi(n-1)}{N}\right] \\ I_{y,n}^d(x,y) = A + B\cos\left[\varphi_y(x,y) + \dfrac{2\pi(n-1)}{N}\right] \end{cases} \qquad (4)$$

where $\varphi_x(x,y) = 2\pi f_{xo}x + \phi_x(x,y) + \phi_{xo}$ and $\varphi_y(x,y) = 2\pi f_{yo}y + \phi_y(x,y) + \phi_{yo}$ are the phases of the longitudinal and transverse fringe-patterns, respectively; $n = 1, 2, \ldots, N$ and $N = 4$. By employing the four-step phase-shifting method, the wrapped phase distribution can be acquired from the distorted fringe-patterns as:

$$\begin{cases} \varphi_x(x,y) = \arctan\left[\dfrac{I_{x,4}^d(x,y) - I_{x,2}^d(x,y)}{I_{x,1}^d(x,y) - I_{x,3}^d(x,y)}\right] \\ \varphi_y(x,y) = \arctan\left[\dfrac{I_{y,4}^d(x,y) - I_{y,2}^d(x,y)}{I_{y,1}^d(x,y) - I_{y,3}^d(x,y)}\right] \end{cases} \qquad (5)$$

The calculated phase is wrapped into $[-\pi, \pi]$ by the arctangent calculation, so an unwrapping algorithm [34] is performed to obtain the continuous phase distribution of the distorted fringe-pattern. A larger number of phase-shifting steps would reduce the distortion of the cosine signal and improve the precision of phase demodulation.
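A minimal sketch of this demodulation step is given below. Equation (5) maps directly onto a quadrant-aware arctangent; for the unwrapping we use scikit-image's unwrap_phase as one possible algorithm, since the paper does not specify a particular implementation.

```python
import numpy as np
from skimage.restoration import unwrap_phase  # one possible 2D unwrapping algorithm

def demodulate_four_step(I1, I2, I3, I4):
    """Four-step phase-shifting demodulation (Eq. (5)): wrapped phase in
    [-pi, pi] from the four phase-shifted frames, then 2D unwrapping to a
    continuous phase distribution."""
    wrapped = np.arctan2(I4 - I2, I1 - I3)
    return np.asarray(unwrap_phase(wrapped))
```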

2.2.2. Modulated Phase Calculation by Distortion Center Detection

The modulated phases $\phi_x(x,y)$ and $\phi_y(x,y)$, which contain the distortion information, are calculated by subtracting the carrier phases $(2\pi f_{xo}x + \phi_{xo})$ and $(2\pi f_{yo}y + \phi_{yo})$ from $\varphi_x(x,y)$ and $\varphi_y(x,y)$, respectively. Therefore, in order to obtain the modulated phase, the fundamental spatial frequencies $f_{xo}$ and $f_{yo}$ must be detected. According to the character of the distortion, there is no distortion at the distortion center, so the fundamental spatial frequency of the fringe-pattern is detected at the position of the distortion center. It should be noted that the distortion center may be significantly displaced from the center of the image by many factors. In the proposed method, the distortion center is measured automatically from the instantaneous frequency distribution: it is at the position of the minimum instantaneous frequency for barrel distortion and of the maximum instantaneous frequency for pincushion distortion. We therefore take the partial derivatives of the phases $\varphi_x(x,y)$ and $\varphi_y(x,y)$ along the x and y directions to get the instantaneous frequencies $f_x$ and $f_y$, respectively:

$$\begin{cases} f_x(x,y) = \dfrac{1}{2\pi}\dfrac{\partial \varphi_x(x,y)}{\partial x} \\ f_y(x,y) = \dfrac{1}{2\pi}\dfrac{\partial \varphi_y(x,y)}{\partial y} \end{cases} \qquad (6)$$

By judging the variation trend of the instantaneous frequency, the type of distortion can be determined: when the instantaneous frequency increases along the radial direction from the distortion center, it is barrel distortion; otherwise, it is pincushion distortion. Then, by detecting the position of the minimum $f_x$ and $f_y$ along the x and y directions for barrel distortion, or of the maximum $f_x$ and $f_y$ for pincushion distortion, the distortion center $(x_0, y_0)$ is located. The corresponding fundamental frequencies are determined at $(x_0, y_0)$ as:

$$\begin{cases} f_{xo} = \dfrac{1}{2\pi}\left.\dfrac{\partial \varphi_x(x,y)}{\partial x}\right|_{x=x_0,\,y=y_0} \\ f_{yo} = \dfrac{1}{2\pi}\left.\dfrac{\partial \varphi_y(x,y)}{\partial y}\right|_{x=x_0,\,y=y_0} \end{cases} \qquad (7)$$

According to $f_{xo}$ and $f_{yo}$, the carrier phase distributions $2\pi f_{xo}x$ and $2\pi f_{yo}y$ can be calculated, and the modulated phases can be rewritten as $\Delta\varphi_x(x,y)$ and $\Delta\varphi_y(x,y)$:

$$\begin{cases} \Delta\varphi_x(x,y) = \varphi_x(x,y) - [2\pi f_{xo}(x - x_0) + \varphi_x(x_0, y_0)] \\ \Delta\varphi_y(x,y) = \varphi_y(x,y) - [2\pi f_{yo}(y - y_0) + \varphi_y(x_0, y_0)] \end{cases} \qquad (8)$$

where $\varphi_x(x_0, y_0)$ and $\varphi_y(x_0, y_0)$ are the phases of the longitudinal and transverse sinusoidal fringe-patterns at the distortion center.

2.2.3. Relationship of Modulated Phase and Distortion Displacement

The distortion displacements $\Delta x(x,y)$ and $\Delta y(x,y)$ are obtained from the modulated phases as:

$$\begin{cases} \Delta x(x,y) = \dfrac{\Delta\varphi_x(x,y)}{2\pi f_{xo}} \\ \Delta y(x,y) = \dfrac{\Delta\varphi_y(x,y)}{2\pi f_{yo}} \end{cases} \qquad (9)$$

Therefore, the measurement of the distortion displacement is transferred into the calculation of the modulated phase by fringe-pattern analysis. In short, the distortion displacements $\Delta x(x,y)$ and $\Delta y(x,y)$ can be measured point by point through phase demodulation analysis of the distorted sinusoidal fringe-patterns.
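Equations (6)-(9) can be condensed into a short routine such as the following sketch for the barrel-distortion case, where np.gradient stands in for the partial derivatives and the separate minimum searches locate the distortion center as described in Section 2.2.2 (all names are illustrative):

```python
import numpy as np

def displacement_from_phase(phi_x, phi_y):
    """From the unwrapped phases of the longitudinal and transverse patterns:
    instantaneous frequencies (Eq. (6)), distortion-center detection for
    barrel distortion, carrier removal (Eq. (8)), and conversion of the
    modulated phase into displacement (Eq. (9))."""
    fx = np.gradient(phi_x, axis=1) / (2 * np.pi)        # f_x(x, y)
    fy = np.gradient(phi_y, axis=0) / (2 * np.pi)        # f_y(x, y)
    x0 = np.unravel_index(np.argmin(fx), fx.shape)[1]    # column of minimum f_x
    y0 = np.unravel_index(np.argmin(fy), fy.shape)[0]    # row of minimum f_y
    fxo, fyo = fx[y0, x0], fy[y0, x0]                    # fundamental frequencies (Eq. (7))
    rows, cols = phi_x.shape
    x = np.arange(cols)[None, :]
    y = np.arange(rows)[:, None]
    dphi_x = phi_x - (2 * np.pi * fxo * (x - x0) + phi_x[y0, x0])  # Eq. (8)
    dphi_y = phi_y - (2 * np.pi * fyo * (y - y0) + phi_y[y0, x0])
    return dphi_x / (2 * np.pi * fxo), dphi_y / (2 * np.pi * fyo), (x0, y0)  # Eq. (9)
```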

2.2.4. Distortion Correction

The inverse mapping method with bilinear interpolation is employed for image distortion correction. It should be noted that the distortion displacement obtained by Equation (9) corresponds to points on the distorted fringe-pattern. So, we recalculate the distortion displacements $\Delta x'$ and $\Delta y'$ corresponding to points on the corrected fringe-pattern. Firstly, we establish the discrete numerical correspondences of the new distortion displacements $\Delta x'$ and $\Delta y'$ with the points $(x + \Delta x, y + \Delta y)$ on the corrected fringe-pattern. Note that the calculated coordinates of the points $(x + \Delta x, y + \Delta y)$ on the corrected fringe-pattern are generally not integers, so we calculate the distortion displacements $\Delta x'(m,n)$ and $\Delta y'(m,n)$ by bicubic interpolation, where $(m,n)$ are the integer coordinates of the corrected point. The distribution of the distortion displacement corresponding to the points on the corrected fringe-pattern is thus determined, and the image distortion correction can be achieved by the inverse mapping method directly:

$$\begin{cases} x_d^{m,n} = m - \Delta x'(m,n) \\ y_d^{m,n} = n - \Delta y'(m,n) \end{cases} \qquad (10)$$

where $(x_d^{m,n}, y_d^{m,n})$ is the corresponding distorted point. Finally, the bilinear interpolation algorithm is employed for the image interpolation, not only because of its simple and convenient calculation but also for its ability to overcome gray-scale discontinuity.
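Assuming the displacement maps $\Delta x'(m,n)$ and $\Delta y'(m,n)$ on the corrected grid are available, the inverse mapping of Equation (10) with bilinear interpolation can be sketched with SciPy, where map_coordinates with order=1 performs the bilinear sampling:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_image(distorted, dx_c, dy_c):
    """Inverse mapping (Eq. (10)): each integer pixel (m, n) of the corrected
    image samples the distorted image at (m - dx_c, n - dy_c) with bilinear
    interpolation; m is the column (x) and n the row (y) index."""
    rows, cols = dx_c.shape
    n, m = np.mgrid[0:rows, 0:cols]
    x_d = m - dx_c
    y_d = n - dy_c
    # map_coordinates takes (row, column) coordinate arrays
    return map_coordinates(distorted, [y_d, x_d], order=1, mode='constant')
```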

3. Numerical Simulation

Numerical simulation is performed to verify the validity of the proposed method. Firstly, two sets of longitudinal and transverse sinusoidal fringe-patterns of $512 \times 512$ pixels with phase shifts of $0, \pi/2, \pi, 3\pi/2$ are employed as measuring patterns. The spatial period of the undistorted fringe-pattern is 16 pixels. Then, the single-parameter division model [17] with distortion parameter $\lambda = -1 \times 10^{-6}$, given by Equation (11), is employed to generate the barrel-distorted fringe-patterns:

$$r_u = \dfrac{r_d}{1 + \lambda r_d^2} \qquad (11)$$

where $r_u$ and $r_d$ are the Euclidean distances of the undistorted and distorted points to the distortion center, respectively. Moreover, the distortion center is shifted away from the image center $(256, 256)$ to $(273, 289)$. Figure 3 shows the corresponding distorted fringe-patterns. It should be noted that the proposed method is model-free; the single-parameter division model employed here, which could be replaced by any other distortion model, is used only to generate a simulated distorted image.
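For reference, such a distorted test image can be synthesized by sampling the undistorted image at the undistorted radius that Equation (11) assigns to each distorted pixel. The sketch below assumes a negative λ for barrel distortion, consistent with the model as written; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def distort_division(image, lam=-1e-6, center=(273, 289)):
    """Synthesize a barrel-distorted image with the single-parameter division
    model r_u = r_d / (1 + lam * r_d**2) (Eq. (11)): every distorted pixel
    samples the undistorted image at its undistorted position."""
    x0, y0 = center
    rows, cols = image.shape
    y, x = np.mgrid[0:rows, 0:cols].astype(float)
    scale = 1.0 / (1.0 + lam * ((x - x0) ** 2 + (y - y0) ** 2))  # r_u / r_d
    x_u = x0 + (x - x0) * scale
    y_u = y0 + (y - y0) * scale
    return map_coordinates(image, [y_u, x_u], order=1, mode='constant')
```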
The analysis process and the corresponding results can be described as follows.
Step 1: Employ the four-step phase-shifting method for phase demodulation and perform the unwrapping algorithm to obtain the phase distributions of the distorted longitudinal and transverse fringe-patterns. Figure 4a,b show the corresponding wrapped phases.
Step 2: Calculate the instantaneous frequencies $f_x$ and $f_y$. The distortion is determined to be barrel distortion because the instantaneous frequency increases along the radial direction from the distortion center. By detecting the minimum $f_x$ along the x direction, we find that the distortion center is at the 273rd column; similarly, by detecting the minimum $f_y$ along the y direction, we find that it is at the 289th row. So, the distortion center is at $(273, 289)$. Figure 5a,b show $f_x$ and $f_y$, where the red dotted lines mark the 289th row and the 273rd column, respectively. Figure 5c,d show the corresponding distributions of $f_x$ at the 289th row and $f_y$ at the 273rd column. The fundamental frequency is $f_{xo} = f_{yo} = 0.0625$, and the phases $\varphi_x(x_0, y_0)$ and $\varphi_y(x_0, y_0)$ at the distortion center are obtained.
Step 3: Modulated phase calculation. According to Equation (8), the modulated phases $\Delta\varphi_x(x,y)$ and $\Delta\varphi_y(x,y)$ are obtained, as shown in Figure 6a,b. The distortion displacements $\Delta x(x,y)$ and $\Delta y(x,y)$ are then obtained point by point according to Equation (9), as shown in Figure 6c,d. The distortion displacement $\Delta r$ can be obtained by Equation (1), and the maximum error of $\Delta r$ is 0.24 pixels. To further validate the proposed method, distortion parameter estimation according to the single-parameter division model is performed. The estimated distortion parameter is $\lambda = -0.9920 \times 10^{-6}$, with a relative error of 0.8%.
Step 4: Distortion displacement map calculation. We establish the discrete numerical correspondences of the new distortion displacements $\Delta x'$ and $\Delta y'$ with the points $(x + \Delta x, y + \Delta y)$ on the corrected fringe-pattern. Table 2 lists some of the distortion displacements of points on the distorted fringe-pattern and of the corresponding points on the corrected fringe-pattern. Since the calculated coordinates $(x + \Delta x, y + \Delta y)$ on the corrected fringe-pattern are not integers, we calculate the distortion displacements $\Delta x'(m,n)$ and $\Delta y'(m,n)$ by bicubic interpolation, where $(m,n)$ are the integer coordinates of the corrected point.
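This resampling step can be sketched with SciPy's scattered-data interpolation. Note that griddata's 'cubic' option is a Clough-Tocher scheme rather than true bicubic interpolation, so this is only an approximation of the procedure described in the text:

```python
import numpy as np
from scipy.interpolate import griddata

def displacement_on_corrected_grid(dx, dy, out_shape):
    """Interpolate the displacement, known at the non-integer corrected
    positions (x + dx, y + dy), onto the integer grid (m, n) of the
    corrected image of shape out_shape = (rows, cols)."""
    rows, cols = dx.shape
    y, x = np.mgrid[0:rows, 0:cols]
    points = np.column_stack(((x + dx).ravel(), (y + dy).ravel()))
    nc, mc = np.mgrid[0:out_shape[0], 0:out_shape[1]]   # nc: row, mc: column
    dx_c = griddata(points, dx.ravel(), (mc, nc), method='cubic')
    dy_c = griddata(points, dy.ravel(), (mc, nc), method='cubic')
    return dx_c, dy_c
```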
Step 5: According to the distortion displacement map, image distortion correction is performed by inverse mapping with bilinear interpolation.
A numerical simulation with a distorted checkerboard is also performed. Figure 7a shows the distorted checkerboard image with the same distortion parameter $\lambda = -1 \times 10^{-6}$ and distortion center $(273, 289)$. Figure 7b is the corrected result by the proposed method, where the red points represent the corners. Figure 8 is the corresponding corners image, where the red asterisks represent the corners of the distorted checkerboard image and the blue points represent the corners of the corrected checkerboard image. The distortion displacement at the top-left point is $\Delta r = 27.30$ pixels. The curvature radius of the red line formed by the leftmost points on the distorted checkerboard image is $2.2148 \times 10^3$ pixels, while that of the corresponding blue line on the corrected checkerboard image is $1.4731 \times 10^5$ pixels. This means the circular arc is corrected to be nearly straight.
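The curvature radius used here as a straightness metric can be obtained by fitting a circle to the corner points. The paper does not state its fitting procedure; the least-squares (Kåsa) circle fit below is one simple possibility:

```python
import numpy as np

def curvature_radius(points):
    """Least-squares (Kasa) circle fit to an (N, 2) array of corner points;
    the fitted radius serves as the straightness metric: the larger the
    radius, the closer the points lie to a straight line."""
    x, y = np.asarray(points, dtype=float).T
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)  # circle: x^2+y^2 = 2cx*x+2cy*y+c
    return np.sqrt(c + cx ** 2 + cy ** 2)                # radius of the best-fit circle
```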

4. Experiment and Results

The experimental setup shown in Figure 9 is employed to perform distortion correction of a wide-angle lens. A flat-panel liquid crystal display (LCD) is used to display two sets of longitudinal and transverse sinusoidal fringe-patterns with phase shifts of $0, \pi/2, \pi, 3\pi/2$. The images of these fringe-patterns are captured by a charge-coupled device (CCD) camera with a wide-angle lens. The LCD plane can be regarded as an ideal plane, and the optical axis of the camera is perpendicular to the LCD plane. The distorted fringe-patterns captured by the camera are shown in Figure 10. Figure 11a,b show the intensity distributions of the central row and column of the longitudinal and transverse fringe-patterns with a phase shift of $\pi/2$, respectively.
By performing the four-step phase-shifting analysis, the wrapped phases of the distorted longitudinal and transverse fringe-patterns are obtained, as shown in Figure 12a,b. The phase of the distorted fringe-pattern can be demodulated well even where the intensity of the fringe-patterns is low. The corresponding unwrapped phase is obtained by the unwrapping algorithm. First, we take the partial derivatives of the phases of the distorted longitudinal and transverse fringe-patterns to get the instantaneous frequencies $f_x$ and $f_y$, respectively. The distortion type is determined to be barrel distortion because the instantaneous frequency increases along the radial direction. Then, the distortion center is located at $(1224, 1008)$ according to the distributions of the instantaneous frequencies $f_x$ and $f_y$. Considering the fluctuation of the analyzed phase caused by experimental noise, we calculate the phase of the undistorted fringe-pattern by numerical linear fitting over the central 25 points of the phase of the distorted longitudinal and transverse fringe-patterns, respectively, instead of calculating it from the instantaneous frequency and phase at the distortion center alone. The points $\{\varphi_x(x_0-12, y_0), \ldots, \varphi_x(x_0, y_0), \ldots, \varphi_x(x_0+12, y_0)\}$ and $\{\varphi_y(x_0, y_0-12), \ldots, \varphi_y(x_0, y_0), \ldots, \varphi_y(x_0, y_0+12)\}$ are employed for the fitting. The fundamental frequencies are $f_{xo} = f_{yo} = 0.0133$. The modulated phase distributions $\Delta\varphi_x(x,y)$ and $\Delta\varphi_y(x,y)$ are obtained as shown in Figure 13a,b, respectively. Figure 13c,d show the numerical distortion displacement maps $\Delta x'(m,n)$ and $\Delta y'(m,n)$ of size $2496 \times 2984$ pixels. Finally, by inverse mapping with bilinear interpolation, the image distortion correction is achieved.
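The linear fit over the central 25 phase samples might look as follows for the longitudinal pattern; the transverse case is analogous (function and variable names are ours, not the authors'):

```python
import numpy as np

def carrier_from_center(phi_x, x0, y0, half=12):
    """Estimate the undistorted carrier phase of the longitudinal pattern by a
    linear fit over the 2*half + 1 central samples of the unwrapped phase
    along the row y0, instead of using a single-point instantaneous frequency."""
    cols = np.arange(x0 - half, x0 + half + 1)
    slope, intercept = np.polyfit(cols, phi_x[y0, cols], 1)
    fxo = slope / (2 * np.pi)                        # fundamental frequency f_xo
    carrier = slope * np.arange(phi_x.shape[1]) + intercept
    return fxo, carrier
```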
Figure 14 shows the experiment with checkerboard images, where Figure 14a is the distorted image and Figure 14b is the corrected image by the proposed model-free method, with the red points representing the corners. Figure 15 is the corresponding corners image, where the red asterisks represent the corners of the distorted checkerboard image and the blue points represent the corners of the corrected checkerboard image. The distortion displacement at the top-left point is $\Delta r = 198.94$ pixels. The curvature radius of the red line formed by the leftmost points on the distorted checkerboard image is $3.2457 \times 10^3$ pixels, while that of the corresponding blue line on the corrected checkerboard image is $6.8656 \times 10^4$ pixels. The larger the curvature radius, the closer the curve is to a straight line, i.e., the better the correction.
Firstly, we perform distortion correction by the plumb-line method with the single-parameter division model [12] for comparison. The red line shown in Figure 15 is employed for the estimation of the distortion parameter; the estimated value is $\lambda = -2.1469 \times 10^{-7}$. According to Equation (11), the distortion displacement can then be numerically calculated. The corrected image is shown in Figure 14c, where the red points represent the corners. The curvature radius of the red line is $3.2821 \times 10^4$ pixels, compared with $6.8656 \times 10^4$ pixels by the proposed method. Moreover, in the corrected image of Figure 14c, the squares in the central region are not the same size as those in the external region. We select two white squares, one in the central region and one in the external region, to show the difference: the pixel count of the square within the central green rectangular region is 46,988 pixels, while that of the square within the external blue rectangular region is 57,490 pixels. By the proposed method, the pixel counts of the two corresponding squares are 49,033 and 48,460 pixels, as shown in Figure 14b. This means that the distortion parameter λ of the single-parameter division model estimated from this one characteristic circular arc does not fit the whole image.
On the other hand, in order to compare with methods employing specific distortion models, we use the method in [32] for distortion displacement detection. We take the distortion center as the origin of the coordinate system and perform numerical curve fitting of the discrete distortion displacement from the 1224th to the 2556th point of the 1008th row with three different distortion models: the even-order polynomial model with one and with two distortion parameters, and the single-parameter division model. The even-order polynomial model [16] is described as:

$$r_u = r_d\left(1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \lambda_3 r_d^6 + \cdots\right) \qquad (12)$$
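Fitting the λ parameters of Equation (12) to the measured radial displacement Δr = r_u − r_d can be sketched with SciPy's curve_fit; this is an illustrative sketch under our own naming, not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_even_polynomial(r_d, delta_r, n_params=2):
    """Fit lambda_1..lambda_k of the even-order polynomial model (Eq. (12)) to
    the measured radial displacement delta_r = r_u - r_d, using
    delta_r = r_d * (lam1*r_d**2 + lam2*r_d**4 + ...)."""
    def model(r, *lams):
        return r * sum(l * r ** (2 * (i + 1)) for i, l in enumerate(lams))
    popt, _ = curve_fit(model, r_d, delta_r, p0=[0.0] * n_params)
    return popt
```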
Figure 16 shows the curve-fitting results, where the black line is the analyzed radial distortion displacement, the green line is the fit of the even-order polynomial model with $\{\lambda_1 = 1.6186 \times 10^{-7}\}$, the red line is the fit of the even-order polynomial model with $\{\lambda_1 = 2.1680 \times 10^{-8}, \lambda_2 = 2.2025 \times 10^{-13}\}$, and the blue line is the fit of the single-parameter division model with $\{\lambda = -1.7198 \times 10^{-7}\}$. The fit of the even-order polynomial model with two distortion parameters is the best.
Figure 17 shows the corresponding correction results of the checkerboard image, where Figure 17a,b are by the even-order polynomial model with one and two distortion parameters, respectively, and Figure 17c is by the single-parameter division model. The curvature radii of the line formed by the leftmost points on the corrected checkerboard images are $6.9263 \times 10^3$, $2.3763 \times 10^4$, and $1.1747 \times 10^4$ pixels, respectively, compared with $6.8656 \times 10^4$ pixels by the proposed method. The pixel counts of the above-mentioned squares in the central and external regions of the image corrected by the even-order polynomial model with two distortion parameters are 48,094 and 48,694 pixels, as shown in Figure 17b. We find that the curve-fitting results depend greatly on the distortion model: distortion determination may fail when an unsuitable model is used or when too few distortion parameters are estimated. However, the more distortion parameters there are, the more complicated the solution of the inverse process becomes.
Furthermore, indoor and outdoor scenes are also employed in the experiment to show the practicality of the proposed method. Figure 18 and Figure 19 show the corresponding distorted and corrected images, respectively. The experimental results show that the distorted images are corrected effectively by the proposed model-free method.

5. Discussion

Experimental correction results of the checkerboard image by different methods are compared with those of the proposed model-free correction method. The original straight line is distorted into a circular arc with a curvature radius of $3.2457 \times 10^3$ pixels, as shown in Figure 15. By the plumb-line method with the single-parameter division model; the phase analysis method with the even-order polynomial model with one and two distortion parameters and with the single-parameter division model; and the proposed model-free method, the curvature radii of the corresponding corrected lines are $3.2821 \times 10^4$, $6.9263 \times 10^3$, $2.3763 \times 10^4$, $1.1747 \times 10^4$, and $6.8656 \times 10^4$ pixels, respectively. The circular arc is corrected to be straightest by the proposed method, which provides a superior result and confirms that distortion determination may fail when an unsuitable model is used. Moreover, comparing the curvature radii of the corrected lines, the result of the plumb-line method appears better than that of the phase analysis method with the two-parameter even-order polynomial model. The reason is that the plumb-line distortion parameter is estimated from the distorted circular arc at exactly this position. However, in the checkerboard image corrected by the plumb-line method, the central square of 46,988 pixels differs from the external square of 57,490 pixels, which means the estimated distortion parameter does not fit the whole image. Therefore, more characteristic points and lines, or more complicated algorithms or distortion models, would have to be taken into account; yet the more distortion parameters there are, the more complicated the solution of the inverse process becomes. For the proposed model-free method, all points of the distorted fringe-pattern are employed to establish the distortion displacement map, which requires no distortion model, so the image is corrected point by point with a more effective and satisfactory result.
In the experiment of distortion displacement measurement, errors caused by a non-ideal LCD plane and by imperfect perpendicularity between the optical axis of the camera and the LCD plane should be considered.

6. Conclusions

In this paper, a model-free lens distortion correction method based on the distortion displacement map obtained by phase analysis of fringe-patterns is proposed. For image distortion correction, the most important task is to determine the distortion displacement. So, the mathematical relationship between the distortion displacement and the modulated phase of the fringe-pattern is first established in theory. Then, two sets of longitudinal and transverse fringe-patterns are employed for phase demodulation analysis to obtain the distortion displacements $\Delta x$ and $\Delta y$, respectively, by the phase-shifting method. The distortion displacement map can be determined point by point for the whole distorted image to achieve distortion correction, which remains effective even when the circular symmetry condition is not satisfied. Moreover, the method detects the radial distortion type and the distortion center automatically from the instantaneous frequency, which is important for obtaining an optimal result. The correction results of the numerical simulation, the experiments, and the comparisons show the effectiveness and superiority of the proposed method.
There are several prospects for further work. Firstly, the relationship between the modulated phase and the distortion displacement described by the proposed method still holds for mixed distortion with both radial and tangential components. However, if the tangential distortion is severe, the distortion center is no longer at the position of the minimum or maximum instantaneous frequency, so how to determine the distortion center automatically in this case should be considered. Secondly, the optimal frequency of the measuring fringe-patterns for accurate modulated phase analysis should be studied. Thirdly, practical applications of the proposed method should be implemented.

Author Contributions

Conceptualization, J.Z. and J.W.; software, J.W., S.M. and P.Q.; investigation, W.Z.; writing, J.W. and J.Z.; supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61875074 and 61971201; the Natural Science Foundation of Guangdong Province, grant number 2018A030313912; and the Open Fund of the Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications (Jinan University).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Collins, T.; Bartoli, A. Planar structure-from-motion with affine camera models: Closed-form solutions, ambiguities and degeneracy analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1237–1255. [Google Scholar] [CrossRef]
  2. Guan, H.; Smith, W.A.P. Structure-from-motion in spherical video using the von Mises-Fisher distribution. IEEE Trans. Image Process. 2017, 26, 711–723. [Google Scholar] [CrossRef]
  3. Herrera, J.L.; Del-Blanco, C.R.; Garcia, N. Automatic depth extraction from 2D images using a cluster-based learning framework. IEEE Trans. Image Process. 2018, 27, 3288–3299. [Google Scholar] [CrossRef]
  4. Wang, Y.; Deng, W. Generative model with coordinate metric learning for object recognition based on 3D models. IEEE Trans. Image Process. 2018, 27, 5813–5826. [Google Scholar] [CrossRef] [Green Version]
  5. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24. [Google Scholar] [CrossRef]
  6. Ahmed, M.; Farag, A. Non-metric calibration of camera lens distortion: Differential methods and robust estimation. IEEE Trans. Image Process. 2005, 14, 1215–1230. [Google Scholar] [CrossRef]
  7. Cai, B.; Wang, Y.; Wu, J.; Wang, M.; Li, F.; Ma, M.; Chen, X.; Wang, K. An effective method for camera calibration in defocus scene with circular gratings. Opt. Lasers Eng. 2019, 114, 44–49. [Google Scholar] [CrossRef]
  8. Swaminathan, R.; Nayar, S. Non metric calibration of wide-angle lenses and polycameras. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1172–1178. [Google Scholar] [CrossRef]
  9. Hartley, R.; Kang, S.B. Parameter-free radial distortion correction with center of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321. [Google Scholar] [CrossRef]
  10. Kukelova, Z.; Pajdla, T. A minimal solution to radial distortion autocalibration. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2410–2422. [Google Scholar] [CrossRef]
  11. Gao, Y.; Lin, C.; Zhao, Y.; Wang, X.; Wei, S.; Huang, Q. 3-D surround view for advanced driver assistance systems. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 19, 320–328. [Google Scholar] [CrossRef]
  12. Bukhari, F.; Dailey, M.N. Automatic radial distortion estimation from a single image. J. Math. Imaging Vis. 2013, 45, 31–45. [Google Scholar] [CrossRef]
  13. Alemán-Flores, M.; Alvarez, L.; Gómez, L.; Santana-Cedrés, D. Automatic lens distortion correction using one-parameter division models. Image Process. Line 2014, 4, 327–343. [Google Scholar] [CrossRef]
  14. Santana-Cedrés, D.; Gómez, L.; Alemán-Flores, M. An iterative optimization algorithm for lens distortion correction using two-parameter models. Image Process. Line 2016, 5, 326–364. [Google Scholar] [CrossRef] [Green Version]
  15. Li, L.; Liu, W.; Xing, W. Robust radial distortion correction from a single image. In Proceedings of the 2017 IEEE 15th International Conference on Dependable, Autonomic and Secure Computing, 15th International Conference on Pervasive Intelligence and Computing, 3rd International Conference on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), Orlando, FL, USA, 6–10 November 2017; pp. 766–772. [Google Scholar]
  16. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–886. [Google Scholar]
  17. Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; pp. I125–I132. [Google Scholar]
  18. Kannala, J.; Brandt, S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340. [Google Scholar] [CrossRef] [Green Version]
  19. Rong, J.; Huang, S.; Shang, Z.; Ying, X. Radial lens distortion correction using convolutional neural networks trained with synthesized images. In Asian Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 35–49. [Google Scholar]
  20. Yin, X.; Wang, X.; Yu, J.; Zhang, M.; Fua, P.; Tao, D. FishEyeRecNet: A multi-context collaborative deep network for fisheye image rectification. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 475–490. [Google Scholar]
  21. Xue, Z.; Xue, N.; Xia, G.; Shen, W. Learning to Calibrate Straight Lines for Fisheye Image Rectification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1643–1651. [Google Scholar]
  22. Liao, K.; Lin, C.; Zhao, Y.; Xu, M. Model-Free Distortion Rectification Framework Bridged by Distortion Distribution Map. IEEE Trans. Image Process. 2020, 29, 3707–3717. [Google Scholar] [CrossRef]
  23. Li, X.; Zhang, B.; Sander, P.V.; Liao, J. Blind Geometric Distortion Correction on Images through Deep Learning. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4850–4859. [Google Scholar]
  24. Munjy, R.A.H. Self-Calibration using the finite element approach. Photogramm. Eng. Remote Sens. 1986, 52, 411–418. [Google Scholar]
  25. Tecklenburg, W.; Luhmann, T.; Heidi, H. Camera Modelling with Image-variant Parameters and Finite Elements. In Optical 3-D Measurement Techniques V; Wichmann Verlag: Heidelberg, Germany, 2001. [Google Scholar]
  26. Grompone von Gioi, R.; Monasse, P.; Morel, J.M.; Tang, Z. Towards high-precision lens distortion correction. In Proceedings of the International Conference on Image Processing, ICIP 2010, Hong Kong, China, 12–15 September 2010; pp. 4237–4240. [Google Scholar]
  27. Takeda, M.; Mutoh, K. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Appl. Opt. 1983, 22, 3977–3982. [Google Scholar] [CrossRef]
  28. Zhong, J.; Weng, J. Phase retrieval of optical fringe patterns from the ridge of a wavelet transform. Opt. Lett. 2005, 30, 2560–2562. [Google Scholar] [CrossRef]
  29. Weng, J.; Zhong, J.; Hu, C. Digital reconstruction based on angular spectrum diffraction with the ridge of wavelet transform in holographic phase-contrast microscopy. Opt. Express 2008, 16, 21971–21981. [Google Scholar] [CrossRef] [PubMed]
  30. Braeuer-Burchardt, C. Correcting lens distortion in 3D measuring systems using fringe projection. Proc. SPIE Int. Soc. Opt. Eng. 2005, 5962, 155–165. [Google Scholar]
  31. Li, K.; Bu, J.; Zhang, D. Lens distortion elimination for improving measurement accuracy of fringe projection profilometry. Opt. Lasers Eng. 2016, 85, 53–64. [Google Scholar] [CrossRef]
  32. Zhou, W.; Weng, J.; Peng, J.; Zhong, J. Wide-angle lenses distortion calibration using phase demodulation of phase-shifting fringe-patterns. Infrared Laser Eng. 2020, 49, 20200039. [Google Scholar] [CrossRef]
  33. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59. [Google Scholar] [CrossRef]
  34. Ghiglia, D.C.; Pritt, M.D. Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software; Wiley: Hoboken, NJ, USA, 1998; pp. 181–215. [Google Scholar]
Figure 1. Distortion schematic diagram for the negative (barrel) distortion, where the blue and red lines correspond to the fringe-pattern stripes before and after distortion, respectively.
Figure 2. Sinusoidal fringe-patterns before and after barrel distortion. (a) Undistorted longitudinal fringe-pattern; (b) Undistorted transverse fringe-pattern; (c) Distorted longitudinal fringe-pattern; (d) Distorted transverse fringe-pattern.
Figure 3. Simulated distorted fringe-patterns of size 512 × 512 pixels by the single-parameter division model with distortion parameter $\lambda = -1 \times 10^{-6}$ and distortion center (273, 289). The longitudinal fringe-patterns with phase shifts of $0, \pi/2, \pi, 3\pi/2$ are in the first row, and the transverse fringe-patterns with phase shifts of $0, \pi/2, \pi, 3\pi/2$ are in the second row.
Figure 4. Wrapped phase. (a) Wrapped phase of the distorted longitudinal fringe-pattern; (b) Wrapped phase of the distorted transverse fringe-pattern.
Figure 5. Instantaneous frequency. (a) $f_x$; (b) $f_y$; (c) $f_x$ at the 289th row with the minimum point at the 273rd column; (d) $f_y$ at the 273rd column with the minimum point at the 289th row.
Figure 6. Modulated phase and distortion displacement. (a) $\Delta\varphi_x(x,y)$; (b) $\Delta\varphi_y(x,y)$; (c) $\Delta x(x,y)$; (d) $\Delta y(x,y)$.
Figure 7. Analyzed results of the checkerboard image, where the red points represent the corners. (a) Distorted checkerboard image with distortion parameter $\lambda = -1 \times 10^{-6}$ and distortion center (273, 289); (b) Corrected image.
Figure 8. Corners image with the red asterisks representing the corners of the distorted checkerboard image and the blue points representing the corners of the corrected checkerboard image. The curvature radius of the red line is $2.2148 \times 10^3$ pixels, and that of the blue line is $1.4731 \times 10^5$ pixels.
Figure 9. Experimental setup. Liquid crystal display (LCD): FunTV D49Y; wide-angle lens: Theia MY125M, FOV 137°; charge-coupled device (CCD) camera: PointGrey CM3-U3-50S5M-CS, 2048 × 2448 pixels, 3.45 μm pixel size.
Figure 10. Distorted fringe-patterns of size 2048 × 2448 pixels. The longitudinal fringe-patterns with phase shifts of $0, \pi/2, \pi, 3\pi/2$ are in the first row, and the transverse fringe-patterns with phase shifts of $0, \pi/2, \pi, 3\pi/2$ are in the second row.
Figure 11. Intensity distribution of the distorted fringe-patterns. (a) Intensity of the central row of the longitudinal fringe-pattern with a phase shift of $\pi/2$; (b) Intensity of the central column of the transverse fringe-pattern with a phase shift of $\pi/2$.
Figure 12. Wrapped phase. (a) Wrapped phase of the distorted longitudinal fringe-pattern; (b) Wrapped phase of the distorted transverse fringe-pattern.
Figure 13. Modulated phase and distortion displacement map. (a) $\Delta\varphi_x(x,y)$ of size 2048 × 2448 pixels; (b) $\Delta\varphi_y(x,y)$ of size 2048 × 2448 pixels; (c) $\Delta x'(m,n)$ of size 2496 × 2984 pixels; (d) $\Delta y'(m,n)$ of size 2496 × 2984 pixels.
Figure 14. Experimental results of checkerboard images. (a) Distorted checkerboard image of size 2048 × 2448 pixels; (b) Corrected image of size 2496 × 2984 pixels by the proposed model-free method, with the red points representing the corners. The pixel count of the square within the central green rectangular region is 49,033 pixels, and that of the square within the external blue rectangular region is 48,460 pixels. (c) Corrected image by the plumb-line method. The curvature radius of the red line is $3.2821 \times 10^4$ pixels. The pixel count of the square within the central green rectangular region is 46,988 pixels, and that of the square within the external blue rectangular region is 57,490 pixels.
Figure 15. Corners image with the red asterisks representing the corners of the distorted checkerboard image and the blue points representing the corners of the corrected checkerboard image, corresponding to Figure 14a,b. The curvature radius of the red line is $3.2457 \times 10^3$ pixels, and that of the blue line is $6.8656 \times 10^4$ pixels.
Figure 16. Radial distortion displacement and curve-fitting results.
Figure 17. Experimental results of checkerboard images. (a) Corrected image by the one-parameter even-order polynomial model, with a green-line curvature radius of $6.9263 \times 10^3$ pixels. (b) Corrected image by the two-parameter even-order polynomial model, with a red-line curvature radius of $2.3763 \times 10^4$ pixels. The pixel count of the square within the central green rectangular region is 48,094 pixels, and that of the square within the external blue rectangular region is 48,694 pixels. (c) Corrected image by the single-parameter division model, with a blue-line curvature radius of $1.1747 \times 10^4$ pixels.
Figure 18. Experimental results of the indoor scene. (a) Distorted image of size 2048 × 2448 pixels; (b) Corrected image of size 2496 × 2984 pixels.
Figure 19. Experimental results of the outdoor scene. (a) Distorted image of size 2048 × 2448 pixels; (b) Corrected image of size 2496 × 2984 pixels.
Table 1. Nomenclature.

$I_x^u$; $I_y^u$: undistorted longitudinal and transverse fringe-patterns
$I_x^d$; $I_y^d$: distorted longitudinal and transverse fringe-patterns
$I_{x,n}^d$; $I_{y,n}^d$: distorted longitudinal and transverse fringe-patterns with phase shift
$\varphi_x$; $\varphi_y$: phase of distorted longitudinal and transverse fringe-patterns
$\phi_x$, $\Delta\varphi_x$; $\phi_y$, $\Delta\varphi_y$: modulated phase of distorted longitudinal and transverse fringe-patterns
$\phi_{xo}$; $\phi_{yo}$: initial phase of distorted longitudinal and transverse fringe-patterns
$f_x$; $f_y$: instantaneous frequency of distorted longitudinal and transverse fringe-patterns
$f_{xo}$; $f_{yo}$: fundamental frequency of distorted longitudinal and transverse fringe-patterns
$\Delta x$; $\Delta y$: distortion displacement of points on the distorted fringe-pattern
$\Delta x'$; $\Delta y'$: distortion displacement of points on the corrected fringe-pattern
$(x_d^{m,n}, y_d^{m,n})$: point on the distorted image corresponding to the point on the corrected image, with $(m, n)$ integer
Table 2. Some calculated distortion displacements (pixels).

Distortion displacement of points on the distorted fringe-pattern:
$\Delta x(x,y)$: 10.92 at (197, 121); 11.06 at (198, 121); 11.20 at (199, 121)
$\Delta y(x,y)$: 6.71 at (197, 121); 6.76 at (198, 121); 6.81 at (199, 121)

Distortion displacement of points on the corrected fringe-pattern:
$\Delta x(x+\Delta x, y+\Delta y)$: 10.92 at (207.92, 127.71); 11.06 at (209.06, 127.76); 11.20 at (210.20, 127.81)
$\Delta y(x+\Delta x, y+\Delta y)$: 6.71 at (207.92, 127.71); 6.76 at (209.06, 127.76); 6.81 at (210.20, 127.81)

Distortion displacement of integer points on the corrected fringe-pattern:
$\Delta x'(m,n)$: 10.94 at (208, 128); 11.06 at (209, 128); 11.19 at (210, 128)
$\Delta y'(m,n)$: 6.73 at (208, 128); 6.78 at (209, 128); 6.82 at (210, 128)