Article

Iterative Camera Calibration Method Based on Concentric Circle Grids

Liang Wei, Ju Huo and Lin Yue

1 School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China
2 Signal and Communication Research Institute, China Academy of Railway Sciences, Beijing 100081, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(5), 1813; https://doi.org/10.3390/app14051813
Submission received: 20 January 2024 / Revised: 16 February 2024 / Accepted: 20 February 2024 / Published: 22 February 2024

Abstract: A concentric circle target is commonly used in vision measurement systems for its detection accuracy and robustness. To enhance camera calibration accuracy, this paper proposes an improved calibration method that utilizes concentric circle grids as the calibration target. The method involves accurately locating the imaged center and optimizing the camera parameters. The imaged concentric circle center obtained by cross-ratio invariance is not affected by perspective projection, which ensures the location accuracy of the feature point. Subsequently, the impact of lens distortion on camera calibration is comprehensively investigated. The sub-pixel coordinates of the imaged centers are fed into the iterative calibration method, and the camera parameters are updated. Through simulations and real experiments, the proposed method effectively reduces the residual error and improves the accuracy of the camera parameters.

1. Introduction

Camera calibration is a crucial technology in the fields of vision measurement [1,2,3,4,5], robot navigation [6,7,8,9,10,11], and 3D reconstruction [12,13,14,15,16]. To acquire precise measurement information, accurate estimation of the camera parameters is essential. Camera calibration parameters comprise intrinsic and extrinsic parameters. The intrinsic parameters include the principal point coordinates, the effective focal length, and the distortion coefficients, which characterize the camera itself. The extrinsic parameters include the translation vector and the rotation matrix, which represent the relative pose between the world coordinate system and the camera coordinate system. Calibration schemes can be classified into two categories: self-calibration [17,18] and target-based calibration. Self-calibration methods obtain the camera parameters by establishing multivariate equations from corresponding feature points across different viewpoints. These methods are flexible and do not require calibration targets. However, they are only effective in well-textured scenes. Moreover, in scenarios where camera motion constraints are absent, self-calibration has been proven inadequate for engineering applications. Additionally, this approach can hardly achieve precise sub-pixel extraction of feature points in calibration images. In contrast, target-based calibration methods yield reliable calibration results and can be applied to a variety of scenes.
Calibration targets can be divided into the following three categories: 1D targets [19,20,21], 2D targets [22,23,24], and 3D targets [25,26]. The 1D target consists of several collinear points. Due to the ease of capturing feature points on the target using multiple cameras simultaneously, it is commonly used in multi-camera calibration. However, the 1D target lacks strong spatial geometric constraints, resulting in low calibration accuracy. Traditional 3D targets typically consist of two or three orthogonal planes, which aid in achieving higher calibration accuracy. However, designing such targets may be challenging and expensive. Especially in the case of some large measurement scenes, this calibration method is more restricted by space. The sphere is used as a simple 3D target due to its strong contour continuity, which helps compensate for distortion caused by large viewing angles [27]. It is important to note that the size and distance of the sphere significantly affect calibration results. Additionally, there are only a few feature points involved in the calibration process. Compared to 1D and 3D targets, designing a 2D target is relatively easier. This is because calibration patterns can be printed on a planar board, allowing for the easy manufacturing of high-precision targets.
Zhang’s chessboard calibration method [28] is considered one of the most representative 2D target-based calibration methods. This approach realizes the conversion between the world coordinate system and the image coordinate system by extracting Harris corners from chessboard images. The initial values of the camera parameters are obtained using a linear model. Subsequently, the reprojection error function is established, and the Levenberg–Marquardt algorithm is utilized for optimization. However, the detection accuracy of chessboard corner points is easily affected by image noise, distortion, and resolution. Circular feature points, on the other hand, offer the advantages of easy recognition and noise resistance, making them commonly employed in camera calibration and measurement [29,30,31,32,33,34]. Under perspective projection, a circle is imaged as an ellipse, and the projection of the space circle's center does not coincide with the center of that ellipse. To address this issue, many methods have been investigated to accurately locate the imaged circle center. Jiang et al. [35] introduced a constructive geometric method for estimating the imaged center and optimizing it using homology constraints. The method iteratively finds the solutions without any parameter information; nevertheless, its efficiency and stability still need to be verified. Based on the characteristics of tangent lines and the Hough transform, Ying et al. [36] utilized four tangent lines of the concentric circle to determine a straight line on which the projected center lies. By finding several such lines, their intersection point can be identified as the projected center. In [37], the imaged circle center and the vanishing line were recovered by exploring the properties of the common self-polar triangle of concentric circles, but this method requires computing the inverse and eigenvectors of a matrix. Kim et al. [38] investigated the geometric and algebraic constraints associated with the projected concentric circle and introduced a rank-1 constraint to determine the center by solving a quartic equation; however, this approach may suffer from numerical instability. Zhang et al. [39] formulated the solution of the imaged circle center as a first-order polynomial eigenvalue problem by considering the pole–polar relationship of the concentric circle. Compared with the earlier work of Jiang, this technique reduces computational complexity. To minimize the perspective projection error, Shao et al. [40] determined the position of the imaged circle center by computing the eigenvectors of the concentric circle projection matrix.
The aforementioned studies address the problem of center deviation of circular feature points under linear perspective projection. However, due to lens distortion, the reprojection error function established in calibration is nonlinear [41,42]. Thus, it is crucial to consider the influence of lens distortion on calibration accuracy. Hartley [43] proposed a calibration method that takes radial distortion into account, which is fast and not affected by the local minimum problem. While the radial component of lens distortion is dominant, it is coupled with the tangential component, and thus both should be considered. Ricolfe-Viala [44] corrected the images and calculated the lens distortion separately from the intrinsic and extrinsic parameters of the camera; however, this algorithm is comparatively complex. Yang et al. [41] discussed the impact of lens distortion and presented a robust geometric camera calibration method, in which the compensation reduces the distance between the model and the actual position by adjusting all camera parameters. Zhao et al. [41] analyzed lens distortion in camera imaging and designed a template consisting of a line through the center of the concentric circle. Orthogonal vanishing points were utilized to solve the camera's intrinsic parameters. Additionally, they optimized the positions of the distorted points to approach the ideal points, thereby enhancing the accuracy of camera calibration.
In circle-grid target-based calibration, two factors govern the accuracy of camera calibration: the high-precision location of the circle center and the optimization of the camera parameters. To address the above limitations, this paper proposes an improved camera calibration method that combines a concentric circle center location algorithm with an iterative compensation algorithm for the calibration parameters. The imaged concentric circle center obtained by cross-ratio invariance is not affected by perspective projection, which ensures the location accuracy of the feature point. In addition, to achieve higher calibration accuracy, an iterative compensation framework is developed to refine the calibration results.
The remainder of this paper is organized as follows. Section 2 presents the concentric circle center location method, which is based on the invariance of the cross-ratio, and introduces a novel iterative approach for camera calibration that takes into account the influence of lens distortion. To validate the robustness and effectiveness of the method, simulations and practical experiments are conducted, as presented in Section 3. Section 4 discusses the differences and improvements of the proposed method. Finally, Section 5 gives the conclusion.

2. Materials and Methods

2.1. Concentric Circle Center Location Method Based on Cross-Ratio Invariance

The pinhole camera model is the most commonly used in computer vision, as shown in Figure 1. The relationship between a space point $P_w(x_w, y_w, z_w)$ in the world coordinate system and its projection $P_u(u, v)$ on the image plane can be expressed as follows [28]:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K} \begin{bmatrix} \mathbf{R} & \mathbf{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 & 0 \\ 0 & f_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{T} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{1}$$

where $s$ is a scale factor; $\mathbf{K}$ is the intrinsic parameter matrix of the camera; $f_u$ and $f_v$ denote the effective focal lengths along the $u$ and $v$ axes (in pixels); $(u_0, v_0)$ is the principal point of the camera; and $\mathbf{R}$ and $\mathbf{T}$ respectively represent the $3 \times 3$ rotation matrix and the $3 \times 1$ translation vector between the world coordinate system and the camera coordinate system.
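As a concrete illustration, the projection in Equation (1) takes only a few lines of Python. This is a minimal sketch; the numeric values below are the illustrative parameters used later in the simulation section, not calibration output.

```python
import numpy as np

def project_point(K, R, T, P_w):
    """Project a 3D world point onto the image plane via Equation (1)."""
    P_c = R @ P_w + T        # world -> camera coordinates
    uv = K @ P_c             # apply the intrinsic matrix
    return uv[:2] / uv[2]    # divide out the scale factor s

# Illustrative values (f_u = f_v = 1507 px, principal point (1045, 1010)):
K = np.array([[1507.0,    0.0, 1045.0],
              [   0.0, 1507.0, 1010.0],
              [   0.0,    0.0,    1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 1500.0])
print(project_point(K, R, T, np.array([10.0, 20.0, 0.0])))
```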
Since the feature points lie on the planar calibration target, $z_w = 0$. Let $\mathbf{r}_i$ denote the $i$-th column of the rotation matrix $\mathbf{R}$. According to Equation (1), we have:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K} \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{r}_3 & \mathbf{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 0 \\ 1 \end{bmatrix} = \mathbf{K} \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} \tag{2}$$

Therefore, Equation (2) can be expressed as:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{H} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} \tag{3}$$

where $\mathbf{H}$ is the homography matrix, $\mathbf{H} = \mathbf{K} \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{T} \end{bmatrix}$.
On the planar target, let the circle center be at the origin of the world coordinate system and $r$ be the radius. Then, the circle can be represented in matrix form as:

$$\mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -r^2 \end{bmatrix} \tag{4}$$

Under perspective projection, the circle is imaged as an ellipse, and we have:

$$s \mathbf{E} = \mathbf{H}^{-\mathrm{T}} \mathbf{C} \mathbf{H}^{-1} \tag{5}$$

where $\mathbf{E} = \begin{bmatrix} A & C/2 & D/2 \\ C/2 & B & E/2 \\ D/2 & E/2 & F \end{bmatrix}$ is the conic coefficient matrix of the ellipse.
According to Equations (3)–(5), the parameters of the conic coefficient matrix can be obtained:

$$\begin{cases} A = h_{11}^2 + h_{21}^2 - h_{31}^2 r^2 \\ B = h_{12}^2 + h_{22}^2 - h_{32}^2 r^2 \\ C = 2\left(h_{11} h_{12} + h_{21} h_{22} - h_{31} h_{32} r^2\right) \\ D = 2\left(h_{11} h_{13} + h_{21} h_{23} - h_{31} h_{33} r^2\right) \\ E = 2\left(h_{12} h_{13} + h_{22} h_{23} - h_{32} h_{33} r^2\right) \\ F = h_{13}^2 + h_{23}^2 - h_{33}^2 r^2 \end{cases} \tag{6}$$

And the ellipse center $(e_x, e_y)$ is:

$$e_x = \frac{CE - 2BD}{4AB - C^2} = \frac{m_1 r^2 + n_1}{m r^2 + n}, \qquad e_y = \frac{CD - 2AE}{4AB - C^2} = \frac{m_2 r^2 + n_2}{m r^2 + n} \tag{7}$$

The factors $m, m_1, m_2, n, n_1, n_2$ can be calculated from the entries of the homography matrix $\mathbf{H}$. When $r$ is set to 0, let $(c_x, c_y)$ be the imaged circle center. The slope $k$ of the straight line determined by $(e_x, e_y)$ and $(c_x, c_y)$ can be calculated as follows:

$$k = \frac{e_y - c_y}{e_x - c_x} = \frac{m_2 n - n_2 m}{m_1 n - n_1 m} \tag{8}$$

In Equation (8), $k$ is a constant that does not depend on the radius of the circle. Therefore, the centers of the ellipses projected from concentric circles and the imaged center are collinear.
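For reference, the ellipse center in Equation (7) follows directly from the fitted conic coefficients by solving the vanishing-gradient condition of the conic. The sketch below assumes the conic is written as $Ax^2 + Cxy + By^2 + Dx + Ey + F = 0$, matching the matrix $\mathbf{E}$ above.

```python
def ellipse_center(A, B, C, D, E):
    """Center of the conic A*x^2 + C*x*y + B*y^2 + D*x + E*y + F = 0 (Equation (7))."""
    den = 4.0 * A * B - C * C          # nonzero for a non-degenerate ellipse
    e_x = (C * E - 2.0 * B * D) / den
    e_y = (C * D - 2.0 * A * E) / den
    return e_x, e_y
```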
Figure 2 shows a concentric circle feature point and its projection on the image plane. $P_{ue1}$ and $P_{ue2}$ represent the centers of the two ellipses, and $P_u$ is the projection of the concentric circle center $P_w$. A line $l_u$ is established through $P_u$, $P_{ue1}$, and $P_{ue2}$; this line intersects the two ellipses at four points $P_{u1}$, $P_{u2}$, $P_{u3}$, and $P_{u4}$. By the properties of projective geometry, there is a line $l_w$ that passes through $P_w$ and intersects the concentric circles at four points $P_{w1}$, $P_{w2}$, $P_{w3}$, and $P_{w4}$. $P_{w\infty}$ is the point at infinity on the line $l_w$, and its image $P_{u\infty}$ is the vanishing point of $l_u$. The points $P_{w1}, P_{w2}, P_{w3}, P_{w4}, P_{w\infty}$ are in perspective correspondence with $P_{u1}, P_{u2}, P_{u3}, P_{u4}, P_{u\infty}$.
With the properties of harmonic conjugate points, the cross-ratio equations can be developed as follows:

$$\left(P_{w1}, P_{w2}; P_w, P_{w\infty}\right) = \left(P_{w3}, P_{w4}; P_w, P_{w\infty}\right) = -1 \tag{9}$$

Because the cross-ratio remains unchanged under projective transformation, we obtain:

$$\left(P_{u1}, P_{u2}; P_u, P_{u\infty}\right) = \left(P_{u3}, P_{u4}; P_u, P_{u\infty}\right) = -1 \tag{10}$$

In Equation (10), $P_{u1}$, $P_{u2}$, $P_{u3}$, $P_{u4}$ can be obtained by solving the equations of the line and the ellipses, and $P_u$ is the imaged center; of the two resulting points, we choose the one that lies inside the projected concentric circles. Under perspective projection, when the image plane and the circle target plane are not parallel, a circle is imaged as an ellipse, and the ellipse center deviates from the projection of the circle center; this deviation is called the eccentricity error. The eccentricity error is related only to the angle between the image plane and the target plane. Figure 3 shows the mathematical model of the concentric circle, where $r_{in}$ and $r_{out}$ are the radii of the concentric circles and $err = \left| P_u P_{ue1} \right|$ is the eccentricity error.
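To make the cross-ratio step concrete, the four intersection points can be parameterized by scalar positions along the line $l_u$; each harmonic condition in Equation (10) then becomes one equation in the unknown parameters of $P_u$ and $P_{u\infty}$. The sketch below works under that parameterization; the details (including which pair of points comes from the inner ellipse) are implementation assumptions the paper does not spell out.

```python
import numpy as np

def imaged_center_on_line(t1, t2, t3, t4):
    """Solve Equation (10) for the imaged center along the line l_u.

    t1..t4 are scalar parameters of the four line-ellipse intersection
    points (t1, t2 assumed to come from the inner ellipse). Each harmonic
    condition (a, b; c, d) = -1 expands to
        2*c*d - (a + b)*(c + d) + 2*a*b = 0,
    giving two linear equations in S = c + d and P = c*d.
    """
    s1, p1 = t1 + t2, t1 * t2
    s2, p2 = t3 + t4, t3 * t4
    S = 2.0 * (p1 - p2) / (s1 - s2)    # S = c + d
    P = 0.5 * s1 * S - p1              # P = c * d
    c, d = np.real_if_close(np.roots([1.0, -S, P]))
    # The imaged center lies between the inner-ellipse intersections;
    # the other root is the vanishing point.
    return (c, d) if min(t1, t2) < c < max(t1, t2) else (d, c)
```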
The process of locating the imaged center of a concentric circle can be summarized in the following steps:
(I)
Apply the sub-pixel edge detection algorithm to fit the projection conics and estimate the coefficient matrices.
Traditional edge detection methods include the Roberts, Laplace, Prewitt, and Canny operators. These methods localize edges only at the pixel level and cannot meet the requirements of accurate location. To satisfy the high-precision location requirements of the camera calibration system, we use a Canny–Zernike algorithm to detect the edge points. Firstly, the Canny operator is used to preliminarily locate the edge points. Then, the Zernike moment algorithm is applied for sub-pixel edge extraction, which improves the location accuracy. The centers and coefficient matrices of the ellipses are obtained using the direct least-squares method (a sketch of such a conic fit is given after this list).
(II)
Identify the intersection points between the line that traverses the ellipses’ centers and the two ellipses.
Because the centers of the ellipses projected from concentric circles and the imaged center are collinear, we connect the two ellipse centers to establish a straight line. By combining the line equation with the projection conic equations, we obtain four intersection points.
(III)
Solve the location of the imaged center according to Equation (10).
Using the principle of harmonic conjugate points and cross-ratio invariance, the solution for the imaged center is determined.
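A common formulation of the least-squares conic fit used in Step (I) minimizes the algebraic error of $Ax^2 + Cxy + By^2 + Dx + Ey + F = 0$ over the edge points. The SVD-based sketch below is one plain variant, not necessarily the exact implementation used by the authors.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic fit to edge points; returns (A, C, B, D, E, F).

    pts is an (N, 2) array of sub-pixel edge coordinates. The algebraic
    residual ||M @ theta|| is minimized subject to ||theta|| = 1, whose
    minimizer is the right singular vector of M with the smallest
    singular value.
    """
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    A, C, B, D, E, F = Vt[-1]          # coefficients of the fitted conic
    return A, C, B, D, E, F
```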

2.2. Iterative Camera Calibration Algorithm Considering Lens Distortion

In camera calibration, lens distortion causes a circle to project as a distorted ellipse rather than a true ellipse. The center of the distorted ellipse, determined by fitting an ellipse to the boundary points, may differ from the center of the perspective ellipse. As shown in Figure 4, the distorted ellipse differs significantly from the perspective ellipse. Consequently, $P_u$ does not coincide with the actual projection $\hat{P}_u$ of the concentric circle center.
The distorted image coordinates of a feature point on the normalized plane can be expressed as:

$$\begin{cases} x_d = x_c + x_c \left(k_1 r_c^2 + k_2 r_c^4\right) + 2 P_1 x_c y_c + P_2 \left(r_c^2 + 2 x_c^2\right) \\ y_d = y_c + y_c \left(k_1 r_c^2 + k_2 r_c^4\right) + P_1 \left(r_c^2 + 2 y_c^2\right) + 2 P_2 x_c y_c \end{cases} \tag{11}$$

where $(x_c, y_c)$ are the undistorted image coordinates of the feature point; $k_1, k_2$ are the radial distortion coefficients; $P_1, P_2$ are the tangential distortion coefficients; and $r_c^2 = x_c^2 + y_c^2$, where $r_c$ is the distance from the image point to the principal point on the normalized plane.
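A direct transcription of Equation (11), assuming normalized (undistorted) coordinates as input:

```python
def distort(x_c, y_c, k1, k2, p1, p2):
    """Apply the radial + tangential distortion model of Equation (11)."""
    r2 = x_c * x_c + y_c * y_c      # squared radius on the normalized plane
    radial = k1 * r2 + k2 * r2 * r2
    x_d = x_c + x_c * radial + 2.0 * p1 * x_c * y_c + p2 * (r2 + 2.0 * x_c * x_c)
    y_d = y_c + y_c * radial + p1 * (r2 + 2.0 * y_c * y_c) + 2.0 * p2 * x_c * y_c
    return x_d, y_d
```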
The undistorted coordinates $(x_c, y_c)$ can be calculated using:

$$\lambda \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{12}$$

Then, the real camera model can be expressed by:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K} \begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix} \tag{13}$$
The Zhang calibration method obtains the camera parameters by minimizing the reprojection error:

$$J = \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| P_{uij} - \hat{P}_{uij}\left(\mathbf{K}, \mathbf{D}, \mathbf{R}_i, \mathbf{T}_i, P_{wj}\right) \right\|^2 \tag{14}$$

where $P_{uij}$ is the measured imaged center of the $j$-th feature point on the $i$-th calibration image, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$; $\hat{P}_{uij}(\mathbf{K}, \mathbf{D}, \mathbf{R}_i, \mathbf{T}_i, P_{wj})$ is the projection of the concentric circle center $P_{wj}$; and $\mathbf{D}$ represents the distortion coefficients.
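In practice, Equation (14) is a nonlinear least-squares problem. The sketch below shows the shape of the optimization using scipy; `unpack` and `project` are hypothetical helpers standing in for the parameter packing and for Equations (11)–(13), which the paper does not specify at code level.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, observed, world_pts, unpack, project):
    """Residuals of Equation (14): measured minus predicted imaged centers.

    `unpack` splits the flat parameter vector into (K, D, per-image poses),
    and `project` applies Equations (11)-(13) to the world centers; both
    are hypothetical helpers.
    """
    K, D, poses = unpack(params)
    predicted = project(K, D, poses, world_pts)   # shape (n, m, 2)
    return (observed - predicted).ravel()

# Levenberg-Marquardt refinement, as in the Zhang method:
# result = least_squares(reprojection_residuals, params0, method="lm",
#                        args=(observed, world_pts, unpack, project))
```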
The intrinsic and extrinsic parameters of the camera are first obtained by computing the homography matrices $\mathbf{H}$ of several calibration images, and the Levenberg–Marquardt optimization algorithm is then used to solve Equation (14).
This section presents a method for iteratively adjusting the calibration parameters. Given the radii $r_{in}, r_{out}$ and the centers $P_{wj}$ of the concentric circles, the calibration consists of the following steps:
Step 1: Calculate the imaged concentric circle centers $P_{uij}$ by Equation (10) and establish the correspondence between $P_{uij}$ and $P_{wj}$. According to the Zhang calibration method [28], the initial camera parameters $K_0, D_0$ are determined.
Step 2: Select k uniformly distributed points on each circle and calculate the projection points.
Step 3: Fit ellipses using the projection points belonging to the same circle and obtain the fitted concentric circle center $(u_{ij}^f, v_{ij}^f)$ according to Equation (10). Meanwhile, the projected imaged center $(u_{ij}^P, v_{ij}^P)$ of $P_{wj}$ is calculated using Equations (11)–(13).
Step 4: Calculate the position deviation $(\Delta u_{ij}, \Delta v_{ij})$ between $(u_{ij}^f, v_{ij}^f)$ and $(u_{ij}^P, v_{ij}^P)$, and compensate $P_{uij}$ accordingly.
Step 5: Update the calibration values $K, D$ using the optimized feature points $\hat{P}_{uij} = (\hat{u}_{ij}, \hat{v}_{ij})$.
Step 6: Repeat Steps 2 to 5 until the difference $\Delta$ between two consecutive iterations is smaller than a threshold or the number of iterations reaches the maximum limit. In the iterative camera calibration algorithm, the position deviation threshold is set to 0.02 pixels and the maximum number of iterations is set to 10.
In Algorithm 1 (pseudocode of the iterative camera calibration algorithm), we assume that there are n calibration images with m feature points on each image. The Camera_calibration function implements the Zhang calibration method; the Calculate_deviation function computes the position deviation of Steps 2 to 4.
Algorithm 1. Iterative camera calibration
Input: P_uij(u_ij, v_ij), P_wj(x_wj, y_wj), r_in, r_out
Output: K, D
1: K_0, D_0 ← Camera_calibration(u_ij, v_ij, x_wj, y_wj)
2: t ← 0; Δu_ij^0 ← 0, Δv_ij^0 ← 0 for i = 1, 2, …, n, j = 1, 2, …, m
3: repeat
4:   Δu_ij^(t+1), Δv_ij^(t+1) ← Calculate_deviation(K_t, D_t, x_wj, y_wj, r_in, r_out)
5:   (û_ij, v̂_ij) ← (u_ij + Δu_ij^(t+1), v_ij + Δv_ij^(t+1))
6:   K_(t+1), D_(t+1) ← Camera_calibration(û_ij, v̂_ij, x_wj, y_wj)
7:   Δ ← max_{i,j} ‖(Δu_ij^(t+1) − Δu_ij^t, Δv_ij^(t+1) − Δv_ij^t)‖
8:   t ← t + 1
9: until Δ < threshold or t > limit
10: return K, D
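As a concrete sketch of Algorithm 1, the loop below uses OpenCV's cv2.calibrateCamera as a stand-in for Camera_calibration; calculate_deviation is a hypothetical helper implementing Steps 2 to 4 (point sampling, projection, ellipse refitting, and center comparison), and the threshold and iteration limit follow the values given above.

```python
import numpy as np
import cv2

def iterative_calibration(obj_pts, img_pts, img_size, calculate_deviation,
                          threshold=0.02, max_iters=10):
    """Sketch of Algorithm 1.

    obj_pts / img_pts follow cv2.calibrateCamera conventions: per image,
    one (m, 3) float32 array of world centers and one (m, 1, 2) float32
    array of imaged centers. calculate_deviation is a hypothetical helper
    returning per-image center deviations with the same shape as img_pts.
    """
    _, K, D, _, _ = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
    prev_dev = [np.zeros_like(p) for p in img_pts]
    for _ in range(max_iters):
        dev = calculate_deviation(K, D, obj_pts)            # Steps 2-4
        corrected = [p + d for p, d in zip(img_pts, dev)]   # Step 5
        _, K, D, _, _ = cv2.calibrateCamera(
            obj_pts, corrected, img_size, K, D,
            flags=cv2.CALIB_USE_INTRINSIC_GUESS)
        delta = max(float(np.abs(d - pd).max())
                    for d, pd in zip(dev, prev_dev))
        if delta < threshold:                               # Step 6
            break
        prev_dev = dev
    return K, D
```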
Figure 5 shows the flow chart of the iterative camera calibration method in this paper. Firstly, we fit the projection conics of the ellipses and the line that traverses the ellipses’ centers. By combining the line equation and the projection conic equations, four intersection points are calculated. Then, according to the cross-ratio invariance of collinear points, the location of the imaged centers is obtained. Furthermore, based on the Zhang calibration method, the initial camera parameters are determined. Ultimately, we calculate the position deviation and compensate for the imaged center to update the calibration values repeatedly.
In this paper, we use the concentric circle grid as the calibration target, which provides more constraints for center location. Compared to previous research, this paper makes the following contributions. Firstly, an imaged center compensation method for concentric circles is proposed, which provides a detailed procedure for calculating the position deviation under lens distortion. Secondly, a generalized compensation framework is proposed, which can refine the calibration results.

3. Experimental Results and Analysis

In this section, the advantages of the concentric circle center location method and the iterative camera calibration method are verified by simulations and actual experiments.

3.1. Synthetic Experiments

We first tested the effect of the deflection angle between the object plane and the image plane on circle center location accuracy. The camera settings were u_0 = 1045 pixels, v_0 = 1010 pixels, f_u = 1507 pixels, and f_v = 1507 pixels, and the radii of the simulated concentric circle feature point were 10 mm and 20 mm. The pitch angle θ_x, yaw angle θ_y, and rotation angle θ_z represented the object plane deflections within the range of −1 rad to 1 rad around the x-axis, y-axis, and z-axis of the camera coordinate system, with a step of 0.1 rad. The translation vector T was set to (0, 0, 1500)^T, and the rotation matrix R was calculated from θ_x, θ_y, and θ_z. After selecting 10 points on each circle, the projection points were calculated based on the camera model and used to fit the projection conics. The boundary points and the fitted ellipses in the simulation images were thereby obtained. The program was implemented in Python. The residual error between the calculated projection and the real projection of the circle center was used as the evaluation criterion for the location method. Figure 6a, Figure 7a, and Figure 8 show the residual errors in the u and v directions for the conventional method [34] and the proposed method.
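For reproducibility, the boundary points used to fit the projection conics can be generated as follows. This is an illustrative sketch mirroring the settings above (it omits noise and lens distortion), not the authors' simulation code.

```python
import numpy as np

def circle_boundary_projections(K, R, T, radius, n_pts=10):
    """Sample n_pts points on a world circle (z_w = 0) centered at the
    origin and project them through the pinhole model of Equation (1)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    P_w = np.stack([radius * np.cos(angles),
                    radius * np.sin(angles),
                    np.zeros_like(angles)])   # shape (3, n_pts)
    P_c = R @ P_w + T[:, None]                # world -> camera coordinates
    uv = K @ P_c
    return (uv[:2] / uv[2]).T                 # shape (n_pts, 2)
```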
As indicated in Figure 6a and Figure 7a, the conventional method is strongly affected by the deflection angles of the object plane around the x-axis and y-axis; the maximum residual error reaches 0.39 pixels and 0.45 pixels, respectively. Figure 6b and Figure 7b present the residual error of the proposed method, which is essentially independent of the deflection angle. In Figure 8, the influence of the deflection angle around the z-axis is negligible for both methods.
Subsequently, the rotation angles were all set to 1 rad. Gaussian noise with zero mean and a standard deviation σ varying between 0 and 2 pixels was added to the simulated image. The final result was obtained by averaging the errors of 100 independent experiments at each noise level.
As we can see from Figure 9, the location accuracy of the two methods decreases with the increase in image noise. Compared with the conventional method, the proposed method can ensure the center location accuracy under a large rotation angle and high image noise, and the residual error of the proposed method is always less than 0.26 pixels.
Then, a concentric circle planar calibration target was generated. The camera parameters were kept unchanged, and the lens distortion was set to k_1 = −0.1, k_2 = 0.08, P_1 = −0.0005, P_2 = −0.0005 for slight distortion and to k_1 = −0.4, k_2 = 0.3, P_1 = −0.002, P_2 = −0.002 for heavy distortion. We simulated the calibration images under the same poses and Gaussian noise. On this basis, the Zhang method, the conventional method, and the proposed method were compared. The calibration results are listed in Table 1 and Table 2. The proposed method does not exhibit a noticeable improvement in the presence of slight lens distortion; as the lens distortion becomes heavier, the proposed approach yields increasingly more accurate results.

3.2. Experiments with Real Images

To verify the accuracy of the circle center location method, a real template was designed, as shown in Figure 10, together with the center location results and an enlarged view. A '+' sign marks the center of the concentric circle feature point on the plane, with radii of 10 mm and 20 mm. The camera and the template were separated by a distance of 1500 mm, and the rotation angle was roughly 1 rad. Real images were taken using a 4M140MCX camera with a resolution of 2048 × 2048 pixels, a pixel size of d_x = d_y = 5.5 μm, and a focal length of f = 12.5 mm.
In Figure 10, the green '+' marks the imaged center obtained using the conventional method, while the red '+' marks the imaged center obtained using the proposed method. The projection determined using our approach aligns closely with the actual value.
Then, the concentric circle grid was printed on an A4 sheet of paper and pasted onto a board. The board was moved to capture images at five different poses, and the estimated intrinsic parameters are shown in Table 3.
The reprojection error is the usual standard for evaluating camera calibration accuracy. Feature points were reprojected into image space, and the resulting reprojection errors are shown in Figure 11. The reprojection errors of the proposed method are clearly more concentrated.
The mean reprojection errors for the Zhang calibration method, the conventional calibration method, and the proposed method are 0.092 pixels, 0.0791 pixels, and 0.0523 pixels, respectively. The results indicate that the proposed method offers clear benefits in enhancing calibration precision under the same conditions.

4. Discussion

Concentric circle targets are widely used in camera calibration due to their high detection accuracy. To improve calibration accuracy, traditional methods mainly focus on the high-precision location of the imaged center. However, the position deviation of concentric circle centers caused by lens distortion was commonly ignored by previous researchers. In the proposed method, the position deviation is calculated to refine the image points iteratively, thus improving the calibration accuracy. Compared with previous research, the proposed method greatly improves camera calibration accuracy and can provide more accurate camera parameters for computer vision tasks.

5. Conclusions

This paper proposes an iterative camera calibration method that considers lens distortion, using concentric circle grids as the calibration target. The cross-ratio equations were established based on the principle of collinearity, and a dedicated compensation framework was used to reduce the influence of lens distortion on calibration accuracy. Simulations and real experiments confirm that the proposed method significantly enhances both location precision and calibration performance. In the real experiments, the proposed method achieves an average reprojection error of 0.0523 pixels, which is 43.15% and 33.88% lower than the reprojection errors of the Zhang calibration method and the conventional calibration method, respectively.

Author Contributions

Writing—original draft, L.W.; writing—review and editing, J.H. and L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61473100.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of the data. Data can be obtained from the first author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Guo, S.; Zhao, Z.; Guo, L.; Wei, M. A method for measuring the absolute position and attitude parameters of a moving rigid body using a monocular camera. Appl. Sci. 2023, 13, 11863.
2. Cui, J.S.; Min, C.W.; Feng, D.Z. Research on pose estimation for stereo vision measurement system by an improved method: Uncertainty weighted stereopsis pose solution method based on projection vector. Opt. Express 2020, 28, 5470–5491.
3. Guo, K.; Ye, H.; Gu, J.; Chen, H. A novel method for intrinsic and extrinsic parameters estimation by solving perspective-three-point problem with known camera position. Appl. Sci. 2021, 11, 6014.
4. Liu, S.D.; Xing, C.C.; Zhou, G.H. Measuring precision analysis of binocular vision system in remote three-dimensional coordinate measurement. Laser Optoelectron. Prog. 2021, 58, 1415007.
5. Ibrahim, M.; Wagdy, M.; AlHarithi, F.S.; Qahtani, A.M.; Elkilani, W.S.; Zarif, S. An efficient method for document correction based on checkerboard calibration pattern. Appl. Sci. 2022, 12, 9014.
6. Guan, W.P.; Huang, L.Y.; Wen, S.S.; Yan, Z.H.; Liang, W.L.; Yang, C.; Liu, Z.Y. Robot localization and navigation using visible light positioning and SLAM fusion. J. Light. Technol. 2021, 39, 7040–7051.
7. Song, S.; Chandraker, M.; Guest, C.C. High accuracy monocular SFM and scale correction for autonomous driving. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 730–743.
8. Li, J.S.; Chu, J.K.; Zhang, R.; Hu, H.P.; Tong, K.; Li, J. Biomimetic navigation system using a polarization sensor and a binocular camera. J. Opt. Soc. Am. A 2022, 39, 847–854.
9. Khan, M.F.; Dannoun, E.M.A.; Nofal, M.M.; Mursaleen, M. Significance of camera pixel error in the calibration process of a robotic vision system. Appl. Sci. 2022, 12, 6406.
10. Mueggler, E.; Rebecq, H.; Gallego, G.; Delbruck, T.; Scaramuzza, D. The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. Int. J. Rob. Res. 2017, 36, 142–149.
11. Alhmiedat, T.; Marei, A.M.; Messoudi, W.; Albelwi, S.; Bushnag, A.; Bassfar, Z.; Alnajjar, F.; Elfaki, A.O. A SLAM-based localization and navigation system for social robots: The Pepper robot case. Machines 2023, 11, 158.
12. Herrera-Granda, E.P.; Torres-Cantero, J.C.; Rosales, A.; Peluffo-Ordóñez, D.H. A comparison of monocular visual SLAM and visual odometry methods applied to 3D reconstruction. Appl. Sci. 2023, 13, 8837.
13. Cheng, T.; Li, D.G. Image reconstruction based on a single-pixel camera and left optimization. J. Opt. Technol. 2022, 89, 269–276.
14. Kim, S.; Park, Y. 3D reconstruction of celadon from a 2D image: Application to path tracing and VR. Appl. Sci. 2023, 13, 6848.
15. Xu, L.; Bai, L.; Li, L. The effect of 3D image virtual reconstruction based on visual communication. Wirel. Commun. Mob. Comput. 2022, 2022, 6404493.
16. Xu, D.; Xing, M.D.; Xia, X.G.; Sun, G.C.; Fu, J.X.; Su, T. A multi-perspective 3D reconstruction method with single perspective instantaneous target attitude estimation. Remote Sens. 2019, 11, 1277.
17. Guan, B.; Yu, Y.; Su, A.; Shang, Y.; Yu, Q. Self-calibration approach to stereo cameras with radial distortion based on epipolar constraint. Appl. Opt. 2019, 58, 8511–8521.
18. Sun, J.; Cheng, X.; Fan, Q. Camera calibration based on two-cylinder target. Opt. Express 2019, 27, 29319–29331.
19. Duan, Y.; Yu, Y.L.; Li, P.; Jiang, S.Y. High-precision camera calibration based on a 1D target. Opt. Express 2022, 12, 36873–36888.
20. Lv, Y.W.; Liu, W.; Xu, X.P. Methods based on 1D homography for camera calibration with 1D objects. Appl. Opt. 2018, 57, 2155–2164.
21. Jiang, T.; Cheng, X.; Cui, H. Calibration method for binocular vision with large FOV based on normalized 1D homography. Optik 2020, 202, 163556.
22. Yu, J.; Liu, Y.; Zhang, Z.H.; Gao, F.; Gao, N.; Meng, Z.Z.; Jiang, X.Q. High-accuracy camera calibration method based on coded concentric ring center extraction. Opt. Express 2022, 30, 42454–42469.
23. Yang, S.R.; Liu, M.; Yin, S.B.; Guo, Y.; Ren, Y.J.; Zhu, J.G. An improved method for location of concentric circles in vision measurement. Measurement 2017, 100, 243–251.
24. Bu, L.B.; Huo, H.T.; Liu, X.Y.; Bu, F.L. Concentric circle grids for camera calibration with considering lens distortion. Opt. Lasers Eng. 2021, 140, 106527.
25. Yin, Y.L.; Zhu, H.B.; Yang, P.; Yang, Z.H.; Liu, K.; Fu, H.W. High-precision and rapid binocular camera calibration method using a single image per camera. Opt. Express 2022, 30, 118781–118799.
26. Abedi, F.; Yang, Y.; Liu, Q. Group geometric calibration and rectification for circular multi-camera imaging system. Opt. Express 2018, 26, 30596–30613.
27. Zhang, X.W.; Ren, Y.F.; Zhen, G.Y.; Shan, Y.H.; Chu, C.Q.; Liang, F. Camera calibration method for solid spheres based on triangular primitives. Precis. Eng.-J. Int. Soc. Precis. Eng. Nanotechnol. 2020, 65, 91–102.
28. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
29. Zhao, Z.J.; Liu, Y.C. Applications of projected circle centers in camera calibration. Mach. Vis. Appl. 2010, 21, 301–307.
30. Wei, L.; Zhang, G.Y.; Huo, J.; Xue, M.Y. Novel camera calibration method based on invariance of collinear points and pole–polar constraint. J. Syst. Eng. Electron. 2023, 34, 744–753.
31. Liang, S.X.; Zhao, Y. Camera calibration based on the common pole-polar properties between two coplanar circles with various positions. Appl. Opt. 2020, 59, 5167–5178.
32. Hao, F.; Su, J.; Shi, J.; Zhu, C.; Song, J.; Hu, Y. Conic tangents based high precision extraction method of concentric circle centers and its application in camera parameters calibration. Sci. Rep. 2021, 11, 20686.
33. Yu, Z.Y.; Shen, G.T.; Zhao, Z.Y.; Wu, Z.W.; Liu, Y. An improved method of concentric circle positioning in visual measurement. Opt. Commun. 2023, 544, 129620.
34. Cui, J.S.; Huo, J.; Yang, M. The circular mark projection error compensation in camera calibration. Optik 2015, 126, 2458–2463.
35. Jiang, G.; Quan, L. Detection of concentric circles for camera calibration. In Proceedings of the 10th IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005.
36. Ying, X.H.; Zha, H.B. An efficient method for the detection of projected concentric circles. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007.
37. Huang, H.F.; Zhang, H.; Cheung, Y.M. The common self-polar triangle of concentric circles and its application to camera calibration. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
38. Kim, J.S.; Gurdjos, P.; Kweon, I.S. Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 637–642.
39. Zhang, B.W.; Li, Y.F.; Chen, S.Y. Concentric-circle-based camera calibration. IET Image Process 2012, 6, 870–876.
40. Shao, M.W.; Dong, J.Y.; Madessa, A.H. A new calibration method for line-structured light vision sensors based on concentric circle feature. J. Eur. Opt. Soc. Rapid Publ. 2019, 15, 1.
41. Yang, X.L.; Fang, S.P. Eccentricity error compensation for geometric camera calibration based on circular features. Meas. Sci. Technol. 2014, 25, 025007.
42. Shen, Y.J.; Zhang, X.; Cheng, W.; Zhu, L.M. Quasi-eccentricity error modeling and compensation in vision metrology. Meas. Sci. Technol. 2018, 29, 045006.
43. Hartley, R.I.; Kang, S.B. Parameter-free radial distortion correction with centre of distortion estimation. In Proceedings of the 10th IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005.
44. Ricolfe-Viala, C.; Sanchez-Salmeron, A.J. Robust metric calibration of non-linear camera lens distortion. Pattern Recognit. 2010, 43, 1688–1699.
Figure 1. Pinhole camera model.
Figure 2. Concentric circle feature point.
Figure 3. Mathematical model of the concentric circle.
Figure 4. Difference between the distorted ellipse and the perspective ellipse.
Figure 5. Flow chart of the iterative camera calibration method.
Figure 6. Effect of pitch angle on location accuracy. (a) The conventional and the proposed method; (b) enlarged view of the proposed method.
Figure 7. Effect of yaw angle on location accuracy. (a) The conventional and the proposed method; (b) enlarged view of the proposed method.
Figure 8. Effect of rotation angle on location accuracy.
Figure 9. Effect of Gaussian noise on location accuracy.
Figure 10. Circle center location on the real template.
Figure 11. Reprojection error distributions using different methods. (a) Zhang calibration method; (b) conventional calibration method; (c) proposed calibration method.
Table 1. Results of calibration using various methods in the presence of slight lens distortion (pixel).

Parameter | Ground Truth | Zhang | Conventional | Proposed
f_x | 1507 | 1506.989 | 1506.9983 | 1507
f_y | 1507 | 1506.9889 | 1506.9952 | 1507
u_0 | 1045 | 1044.9253 | 1044.9268 | 1045.014
v_0 | 1010 | 1010.1394 | 1010.1385 | 1010.0848
k_1 | −0.1 | −0.0998 | −0.1001 | −0.1
k_2 | 0.08 | 0.0749 | 0.0823 | 0.08
P_1 | −0.0005 | −0.0005 | −0.0005 | −0.0005
P_2 | −0.0005 | −0.0005 | −0.0005 | −0.0005
Table 2. Results of calibration using various methods in the presence of heavy lens distortion (pixel).

Parameter | Ground Truth | Zhang | Conventional | Proposed
f_x | 1507 | 1507.0557 | 1507.0552 | 1507
f_y | 1507 | 1507.0402 | 1507.0396 | 1507
u_0 | 1045 | 1044.932 | 1044.9329 | 1044.9838
v_0 | 1010 | 1010.2373 | 1010.2359 | 1010.1539
k_1 | −0.4 | −0.3993 | −0.4005 | −0.4
k_2 | 0.3 | 0.2815 | 0.3079 | 0.3
P_1 | −0.002 | −0.002 | −0.002 | −0.002
P_2 | −0.002 | −0.002 | −0.002 | −0.002
Table 3. Evaluating intrinsic parameters using various methods in a real experiment (pixel).

Parameter | Zhang | Conventional | Proposed
f_x | 2281.2359 | 2248.0639 | 2260.9497
f_y | 2280.7966 | 2247.8106 | 2260.8361
u_0 | 1040.5597 | 1037.8423 | 1029.0437
v_0 | 1020.5549 | 1022.6404 | 1026.9882
k_1 | −0.1532 | −0.1539 | −0.1544
k_2 | 0.0835 | 0.1723 | 0.1791
P_1 | −0.0004 | −0.0004 | −0.0006
P_2 | −0.0007 | −0.0005 | −0.0006