Article

A Stable, Efficient, and High-Precision Non-Coplanar Calibration Method: Applied for Multi-Camera-Based Stereo Vision Measurements

1 State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
2 China North Engine Research Institute, Tianjin 30040, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(20), 8466; https://doi.org/10.3390/s23208466
Submission received: 12 August 2023 / Revised: 10 October 2023 / Accepted: 12 October 2023 / Published: 14 October 2023
(This article belongs to the Section Sensing and Imaging)

Abstract: Traditional non-coplanar calibration methods, represented by Tsai’s method, are difficult to apply in multi-camera-based stereo vision measurements because of insufficient calibration accuracy, inconvenient operation, etc. Based on projective theory and matrix transformation theory, a novel mathematical model is established to characterize the transformation from targets’ 3D affine coordinates to cameras’ image coordinates. On this basis, novel non-coplanar calibration methods for both monocular and binocular camera systems are proposed in this paper. To further improve the stability and accuracy of the calibration methods, a novel circular feature point extraction method based on a regional Otsu algorithm and a radial section scanning method is proposed to precisely extract the circular feature points. Experiments verify that our novel calibration methods are easy to operate and have better accuracy than several classical methods, including Tsai’s and Zhang’s. The intrinsic and extrinsic parameters of multi-camera systems can be calibrated simultaneously by our methods. Our novel circular feature point extraction algorithm is stable and highly precise, and can effectively improve calibration accuracy for both coplanar and non-coplanar methods. Real stereo measurement experiments demonstrate that the proposed calibration method and feature extraction method have high accuracy and stability, and can further serve complicated shape and deformation measurements, for instance, stereo-DIC measurements.

1. Introduction

Camera calibration is the necessary process of determining the unknown basic parameters of camera imaging and the transformation parameters from the world coordinate system to the camera coordinate system. The mapping relationship between the 3D world coordinates of targets’ feature points and the 2D image coordinates can be used to acquire the unknown parameters based on an ideal camera imaging model [1]. Precise calibration parameters of a camera-based measurement system are the prerequisite for image-based 3D reconstruction, because the calibration parameters directly participate in the mapping process from 3D reference coordinates to 2D image coordinates and in the reverse remapping process [2,3]. Therefore, developing a high-precision camera calibration method is of great significance.
According to the geometry characteristics of the calibration targets, the existing camera calibration methods can be divided into the following three categories: 3D stereo target calibration methods, 2D planar target calibration methods, and self-calibration methods.
The representative traditional 3D stereo methods are the Direct Linear Transformation (DLT) method developed by Abdel-Aziz [4] and the “two-stage” non-coplanar method developed by Tsai [5]. DLT bridges the gap between photogrammetry and computer vision. Shi [6] extended DLT and proposed a DLT-Lines method wherein the camera’s intrinsic and extrinsic parameters can be extracted linearly from the matrix and then refined, together with the distortion coefficients, by non-linear optimization. Tsai gave a two-step calibration method based on the radial alignment constraint (RAC). Tsai’s method has a moderate amount of calculation and high accuracy, but the implementation of his non-coplanar method is cumbersome. Moreover, the extrinsic parameters calibrated by Tsai’s method are inaccurate, and the tangential distortion parameters of lenses cannot be calibrated because of the RAC model’s insufficiency. J. Zhang et al. [7] and Zheng et al. [8] noticed the drawbacks of Tsai’s method and proposed some corrections for it. However, their research is still not impeccable and cannot be efficiently applied to multi-camera calibration tasks. The deficiencies of Tsai’s method and some incomplete improvements will be further discussed in Section 2.3.
2D planar target calibration methods are also called coplanar calibration methods [5,8,9,10,11]. The representative coplanar method was proposed by Zhang [9]. As a milestone in camera calibration, Zhang’s method is easy to use in practical calibration and provides sufficient accuracy for most applications. Zhang’s method takes images of 2D targets in multi-viewing multiplane positions, and then calculates the intrinsic and extrinsic parameters and distortion coefficients through linear initial parameter solving and non-linear optimization. Zhu et al. [10], Sels et al. [11], Chen et al. [12], and many other scholars further developed Zhang’s method. Most of them made improvements by changing the calibration patterns, increasing the number of targets’ feature points, and increasing the extraction accuracy of feature points in images. It must be admitted that this research has improved calibration accuracy and the uncertainty of the calibrated parameters to a certain extent. However, these improvements mostly come at the expense of applying complicated, time-consuming feature extraction algorithms. Meanwhile, there is little innovation in the calibration model and mathematical operation process of Zhang’s original method.
Self-calibration methods only use the corresponding relationships between surrounding images and the images taken during movement to perform the calibration. Hartley [13] and Maybank and Faugeras et al. [14] first proposed the idea of camera self-calibration. Due to the unstable characteristics of natural features and feature extraction algorithms, self-calibration methods can hardly maintain high accuracy and robustness. Li et al. [15], Li et al. [16], and other scholars tried to use manually selected features to replace natural textural features. Calibration accuracy is relatively improved, but at the huge expense of time-consuming feature extraction and increased mathematical complexity. In general, existing self-calibration methods have poor accuracy, low efficiency, and low robustness, so they are difficult to use in high-precision stereo measurements. This paper focuses on developing a stable, efficient, and accurate calibration method for multi-camera-based high-precision stereo measurements. As a result, we will not discuss self-calibration methods in the rest of this paper.
Camera sensing systems used for stereo measurements can be roughly divided into two types. The first type is the monocular camera measurement system. A single camera cannot directly acquire spatial depth information, but a monocular camera system often contains an extra feature projection device, e.g., line-structured light camera-based sensors, laser light structure camera-based sensors, etc. Calibration for structured-light-based monocular camera measurement systems mainly includes two processes: intrinsic parameter calibration for the monocular camera, and extrinsic parameter calibration for the connecting structure of the monocular camera and the light projection device. Calibration for structured-light-based monocular camera measurement has been studied by many scholars [17,18]. This paper focuses on the calibration of the second type, the multi-camera-based stereo measurement system. A binocular camera system is the most typical and fundamental multi-camera system. The calibration of a binocular camera system also contains two steps: separate intrinsic parameter calibration for the two single cameras, and calibration of the extrinsic structural parameters between these two cameras.
Camera-based stereo measurements are built on the 3D reconstruction of features from images. The 3D reconstruction of image features mainly contains three steps, i.e., feature extraction, stereo matching of features, and stereo reconstruction of features based on triangulation. Different kinds of image features suit different feature extraction and matching methods. For discrete and limited artificial geometrical features, i.e., points, circles, corners, lines, etc., scholars use specific geometrical feature extraction algorithms to extract and match the features. For local features with strong scale, orientation, and illumination invariance, local feature detectors and descriptors, e.g., the scale invariant feature transform (SIFT) [19], speeded up robust features (SURF) [20], oriented FAST and rotated BRIEF (ORB) [21], etc., have been introduced to detect, describe, and match the related local key points. For full-field measurements of surface shape, displacement, deformation, etc., if the accuracy demands are lower, Block Matching (BM), Semi-Global Block Matching (SGBM), Graph Cut (GC), Dynamic Programming (DP), etc., can be used to perform global matching. If the accuracy demands are higher, digital image correlation (DIC) has been demonstrated as one of the most effective technologies to quantitatively extract full-field displacement and strain responses of materials, components, structures, or biological tissues through correlation-based matching strategies [22].
In this paper, we propose a novel stable, efficient, and accurate non-coplanar calibration method which can be well applied to camera-based stereo vision measurements. This method significantly corrects existing non-coplanar calibration methods’ weaknesses, i.e., cumbersome operation, insufficient accuracy, poor adaptability to different measurement scenes, etc. Specifically, the contributions of this study are summarized as follows:
  • This paper establishes a novel improved affine coordinate correction mathematical model for non-coplanar calibration. A novel calibration method based on this model is established for both monocular and binocular camera systems. Simulations and real experiments verified that our novel methods have better accuracy, stability, and efficiency than the compared methods.
  • To further improve the accuracy and stability of existing calibration methods, a novel simple circle feature point extraction algorithm, based on the combination of local Otsu thresholding and gradient-based radial section scanning for edges, is proposed in this paper. Simulations and real experiments demonstrated that our algorithm outperforms the traditional OpenCV algorithm in extraction accuracy and in stability under illumination and viewing angle changes.
  • Real all-process 3D reconstruction experiments of both discrete feature points and the full-field region of interest (ROI) of an object’s surface have been carried out, covering the stereo system’s calibration, feature extraction, stereo matching, and final stereo reconstruction. The experiments demonstrate the feasibility of our calibration methods in real measurement scenes, and stereo measurements with this paper’s calibration parameters have better accuracy than those with the compared methods’ parameters.
The rest of this paper is organized as follows. Section 2 formulates the related works, models, strategies, and restrictions of current camera calibration methods and stereo measurements. Section 3 and Section 4 present the methodology of this paper’s research. The experiments and results are presented in Section 5. Finally, Section 6 concludes the paper and indicates future directions.

2. Related Works and Problems Formulation

2.1. Mathematical Model and Some Developments of Tsai’s Non-Coplanar Calibration Method

We have summarized Tsai’s non-coplanar mathematical model [5] in Table 1. Based on the radial alignment constraint (RAC), an overdetermined equation can be used to solve seven independent intermediate variables $[a_1\ a_2\ \cdots\ a_7]$. Then, with two orthogonality constraints of the rotation matrix, $[s_x,\ R_{3\times3},\ T_x,\ T_y]$ can be solved first. Correspondingly, the initial values of $[T_z,\ f]$ can be solved linearly from the calculated parameters and another overdetermined equation. At last, nonlinear optimization is used to refine part of the calibration parameters and the radial distortion coefficients. Calibration of a binocular camera system with Tsai’s method is the combination of two separate calibrations of single cameras.
Tsai’s calibration method has a simple mathematical model and is very easy to implement. The algorithm is efficient because most of the parameters are computed from linear overdetermined equations and very few elements are brought into non-linear optimization. However, Tsai’s calibration model has some drawbacks. Firstly, the RAC constraints take no account of the tangential distortion of the lens. J. Zhang et al. [7] and Xu et al. [23] introduced the tangential distortion model into Tsai’s method and significantly improved the calibration accuracy. Tang et al. [24] also considered tangential distortion with Tsai’s model and verified that Tsai’s method has better computational efficiency than Zhang’s method.
Despite the better efficiency of Tsai’s method, fewer scholars and engineers apply it in real measurements than apply Zhang’s method. This is because Tsai’s method is less accurate and inconvenient to implement, as detailed in Section 2.3 and Section 2.5.

2.2. Mathematical Model and Recent Development of Coplanar Calibration Method

As a milestone of calibration methods, Zhang’s coplanar calibration method has been applied in many computer vision tasks for its convenience, accuracy, and stability. The mathematical model in [9] is summarized in Table 2.
$\tilde{P}_I$ and $\tilde{P}_W$ respectively represent the matched homogeneous coordinates in the 2D image pixel coordinate system and the world coordinate system. The calibration process of coplanar methods is based on the transformation from 2D world coordinates to 2D image pixel coordinates, which can be described by a 3 × 3 homography matrix, shown as H3×3 in Table 2. However, one H3×3 matrix can only supply two constraints for the linear solution of the parameters, yet there are five intrinsic parameters to be solved. Thus, at least two images from different orientations are needed to evaluate the four initial values of the intrinsic parameters if γ = 0 is imposed. Then, the initial intrinsic parameters are used to calculate the rotation and translation vectors between each planar target’s world coordinate system and the camera system. It is worth mentioning that if the initial KZhang and H3×3 are directly used to compute R3×3, the resulting R3×3 cannot strictly satisfy the orthogonality properties of a rotation matrix. Singular value decomposition is used in [9] to approximate the relatively best rotation matrix. Then, the R3×3 rotation matrices are transferred to R3×1 rotation vectors. Since the DOF of a rotation is three, R3×1 rotation vectors obviously better meet the demands of the follow-up nonlinear optimization. R3×1 and R3×3 are related by the Rodrigues formula. Using the R3×1 vector rather than the R3×3 matrix in nonlinear optimization avoids the problem of insufficient orthogonality of the rotation matrix.
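The two housekeeping steps just described — orthogonalizing an approximate rotation matrix by SVD and converting it to an R3×1 vector by the Rodrigues formula — can be sketched as follows (a minimal illustration with an assumed input matrix, not code from [9]):

```python
import numpy as np
import cv2

def nearest_rotation(M):
    """Project an approximate rotation matrix onto the closest true rotation
    (in the Frobenius-norm sense) via singular value decomposition."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against an improper (reflected) result
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

M = np.array([[0.99, -0.12, 0.01],   # assumed near-rotation, e.g. from K^-1 * H
              [0.11,  0.98, 0.05],
              [0.00, -0.06, 1.02]])
R = nearest_rotation(M)
rvec, _ = cv2.Rodrigues(R)           # 3x1 vector used in the optimization
R_back, _ = cv2.Rodrigues(rvec)      # the conversion is invertible: R_back ≈ R
```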
Recently, scholars have developed different calibration methods based on Zhang’s calibration model. Yin et al. [25] improved binocular calibration accuracy by timing correction of two consecutive frames; Cheng et al. [26] used perspective correction and phase estimation together to increase the accuracy of control point localization and consequently of camera calibration; Chen et al. [27] applied sub-pixel edge detection and cross-ratio invariance to refine the circular control points’ image positions and thus increase calibration accuracy; Wang et al. [28] extended Pascal’s theorem to the affine plane to obtain constraints on the circular points in images and used properties of the circle and the line at infinity to calibrate the camera’s intrinsic parameters; Dong et al. [29] developed a confidence-based camera calibration method with modified census transform for chessboard patterns, which is effective in achieving accurate calibration results; Zhang et al. [30] designed particular stereo targets with multiple feature planes to help simultaneously identify the intrinsic and extrinsic parameters of a camera system from a single captured image.
Obviously, scholars focus more on improving feature point extraction methods and accuracy than on the mathematical model of coplanar methods. This research tendency partly reflects that the accuracy of Zhang’s calibration model has been confirmed by most scholars; improvements can therefore only emerge in aspects other than the mathematical model.

2.3. Deficiencies of Tsai’s Calibration Model and Recent Research of Non-Coplanar Calibration Model Based on Affine Coordinate Correction

Zheng et al. [8] pointed out that an uncorrected sliding direction of the planar calibration target greatly influences the accuracy of Tsai’s non-coplanar method. This inference can be confirmed by the simulation in Figure 1. Assume there is an uncorrected yaw angle between the sliding direction and the ideal world coordinate system decided by the planar target and its normal vector direction. If the sliding shifts are erroneously assumed to be zw in the world system, Tsai’s method will produce the results in Figure 1. The absolute value of the focal length’s relative error increases to 20.47% when the yaw angle between the sliding direction and the planar target increases to 4°, and it continues to grow with the angle.
Moreover, Zheng et al. [8] also pointed out that Tsai’s mathematical model cannot obtain a strictly orthogonal rotation matrix, for the following reason: the first step of linear initial value solving uses seven independent intermediate variables to acquire $[s_x,\ R_{3\times3},\ T_x,\ T_y]$ of six DOF. This is essentially an overdetermined equation solving procedure and can only obtain an approximate solution. Assume r1, r2 are the first two rows of R3×3. In this procedure, Tsai can only use the two individual inner product properties of r1 and r2 (their unit norms) to calculate sx and Ty. To obtain an orthonormal matrix this way, the orthogonality property $r_1 \cdot r_2 = 0$ should also be applied to ensure the orthogonality of R3×3. However, Tsai’s method missed this constraint and did not make any compensation in the following nonlinear optimization.
Zheng et al. [8] proposed a novel non-coplanar method based on an affine coordinate correction (ACC) model, as shown in Table 3. After its mathematical characteristics, this method is called the ACC method in this paper.
The ACC method introduces a 2D normalized vector $\eta = (\eta_x, \eta_y)^T$ to correct the planar target’s two axes, and a 3D normalized vector $\beta = (\beta_x, \beta_y, \beta_z)^T$ to correct the sliding direction, which is nominally perpendicular to the plane. The optical center is assumed to be fixed at the center of the image. Since the number of intermediate parameters is equal to the DOF of the parameters to be calibrated, the 11 intermediate parameters solved from overdetermined equations are taken into nonlinear optimization together with the distortion coefficients. With enough orthonormality constraints of the rotation matrix and the properties of the normalized vectors, the final parameters can easily obtain analytical solutions from the optimized intermediate parameters.
The ACC calibration model works well and obtains accurate calibration results for a monocular camera system. However, this model cannot fit the calibration of a binocular camera system well, as shown in Table 4.
Firstly, Table 3 illustrates that the DOF of the intermediate parameters of a single camera is 11. If the ACC model is extended to a binocular camera system as in Table 4, one obtains two separate intermediate matrices whose summed DOF is 22. But the physical meanings of η and β dictate that they should be identical for the two cameras when operating a non-coplanar calibration for a binocular system. This means that the DOF of the parameters to be calibrated reduces to 19, and the original unconstrained nonlinear optimization of the intermediate parameters cannot be applied to the binocular system. Constraints from the rotation matrix and the correction vectors were introduced as penalty constraints to construct the nonlinear optimization. To further simplify the binocular calibration based on the ACC model and maintain the orthogonality of the rotation matrix, Zheng et al. [8] chose to calibrate the intrinsic parameters of each single camera first, in order to reduce the DOF of the parameters to be calibrated and introduce enough penalty constraints.
Unfortunately, this mathematical compromise did not advance the calibration accuracy, but rather introduced the extra workload of determining the penalty coefficient of each penalty constraint.
To overcome the above problems existing in either Tsai’s method or the ACC method, this paper proposes a novel improved affine coordinate correction (IACC) calibration method for both monocular and binocular camera systems.

2.4. Local Spatial Optimality of Calibration Parameters

No matter what method is used to evaluate the accuracy of calibration parameters, the optimality of these parameters is only mathematical, and only approximately physical. This approximation makes calibration parameters lack strict physical significance. It means that the calibration parameters from different calibration methods only have optimum properties in particular time and space domains.
The above analysis indicates that measurement objects at different positions within the FOV of a camera system have their own optimal calibration parameters for the best measurement results. Experience suggests that the better the calibration targets cover the measurement position, the better-fitting the acquired calibration parameters are for the measurements. These deductions are reflected in Figure 2. This kind of local optimality may also occur in the time domain; however, compared with the spatial case, current camera sensor hardware is quite stable over short time intervals. Thus, this paper focuses on the local spatial optimality of calibration parameters. The experiments in Section 5.5 support these deductions.
The depth of field (DOV) within the FOV of a camera system can be roughly evaluated from the nominal focal length of the lens, the aperture, the diameter of the permissible circle of confusion, and the object distance. Measurement objects can then be placed within the range of the DOV. To acquire the best measurement accuracy, the calibration targets’ feature points should come close to the measurement position and cover the limited measurement depth as much as possible. It is worth noting that a larger depth covered by the planar targets is not necessarily better than a smaller depth accurately matched to the measured object’s actual 3D extent, especially for high-precision close-range photogrammetry.
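As a rough illustration of this evaluation, the sketch below applies the standard thin-lens hyperfocal approximation (a textbook formula, not one from this paper); the lens and distance values are assumptions loosely matching the 12 mm lenses used in Section 5:

```python
def depth_of_field(f_mm, N, c_mm, u_mm):
    """Rough near/far limits of the depth of field for focal length f,
    f-number N, permissible circle of confusion c, and object distance u."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                       # hyperfocal distance
    near = u_mm * (H - f_mm) / (H + u_mm - 2 * f_mm)
    far = u_mm * (H - f_mm) / (H - u_mm) if u_mm < H else float("inf")
    return near, far

# assumed: 12 mm lens at f/8, c = 2 pixels = 10.6 um, object distance 0.5 m
near, far = depth_of_field(12.0, 8.0, 0.0106, 500.0)
print(f"usable depth roughly spans {near:.0f}-{far:.0f} mm from the camera")
```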

2.5. Implementation and Restrictions of Coplanar Calibration Methods and Non-Coplanar Calibration Methods

As shown in Figure 3a, coplanar methods for monocular camera systems need images of planar targets from various orientations (at least two). If the planar targets were set in mono-viewing positions, coplanar methods would lose efficacy. As shown in Figure 3b, non-coplanar methods take images of planar targets from one fixed orientation; at least two images and one known shift of the targets are needed to carry out a continued calibration process. Correspondingly, the implementation processes of binocular camera system calibration by coplanar and non-coplanar methods are shown in Figure 3c,d. Basically, a non-coplanar method is an improved calibration method using a virtual 3D stereo target. To guarantee the geometric accuracy of the patterns’ world coordinates, extra equipment (Tsai’s method) or a mathematical model (the ACC and IACC methods) must be introduced to correct the world coordinates. This may bring extra workload, but the mathematical approach is obviously more efficient than manual adjustment with extra instruments.
In most actual measurements, the rough 3D information of the measured object is known, and the structure of the multi-camera measurement system is specially designed for it. If the depth’s changing range is not too large, it is better to calibrate over that range than over the whole FOV. On this occasion, non-coplanar calibration methods are more applicable than coplanar methods, since tiny shifts of the planar target in one fixed direction can generate large numbers of feature points for calibration. By contrast, considering the size of planar targets and the demand for multi-viewing images, an equivalent number of feature points requires, on most occasions, a larger depth range than the measured object’s depth range. Otherwise, minor changes of the planar target’s inclination angle may not allow coplanar methods to obtain correct calibration parameters. Thus, the restriction of coplanar methods comes from the demand for image acquisition from different viewing angles, and the restriction of non-coplanar methods comes from the measurement and correction of the sliding shifts.

2.6. Strategies of Enhancing Calibration Performance by Improving Feature Points Extraction Algorithms

Alongside calibration models, improvements in the 3D fabrication of calibration targets and in the corresponding 2D image feature extraction can directly enhance calibration results.
A review of scholars’ strategies for improving calibration performance by enhancing 2D image feature extraction algorithms is given in Appendix A.
This paper concentrates on improving the extraction accuracy and stability for symmetric circles. A novel extraction algorithm for symmetric circle patterns is proposed in this paper, which has better extraction accuracy and better stability under illumination and target orientation changes than OpenCV’s traditional algorithm. Correspondingly, calibration accuracy and stability can be further enhanced by using the proposed algorithm.

3. Novel Calibration Mathematical Model

3.1. Present Novel Improved Affine Coordinate Correction Mathematical Model for Non-Coplanar Calibration

Based on the analysis in Section 2.1, Section 2.2, Section 2.3 and Section 2.4, a novel improved affine coordinate correction (IACC) mathematical model for non-coplanar calibration is proposed. As shown in Table 5, $\tilde{P}_I$ and $\tilde{P}_p$ respectively represent the matched homogeneous coordinates in the 2D image pixel coordinate system and in the 3D affine coordinate system built from the uncorrected planar target’s two axes and the 1D sliding direction.

3.2. Coordinate Space Transformation from Target Affine Space to Orthogonal World Coordinate Space

The left part of Figure 4 shows the corrections of the calibration target’s affine coordinates in the ACC model, in which the normalized 2D vector η corrects the planar target’s vertical and horizontal axes, and the normalized 3D vector β corrects the sliding direction, into the orthogonal world coordinate system Ow-XwYwZw. Since η and β are introduced to describe the skews of the planar target’s two axes and the stage’s sliding direction, $\eta = (\eta_x, \eta_y)^T$ and $\beta = (\beta_x, \beta_y, \beta_z)^T$ should remain unchanged if the planar target and sliding direction remain fixed. However, when we ran calibration experiments with the ACC method for different cameras with the same sliding stage and planar target (kept fixed), η and β did not always stay the same. The change of η is more remarkable than that of β when switching between cameras. It is more likely that the 1 DOF from η should be transferred to characterize some intrinsic property of the individual cameras. In fact, our planar targets are fabricated from optical glass with a high-precision (close to 1 μm) lithography process, which means η should be infinitely close to $(0, 1)^T$.
Based on this physical reality, we propose our novel improved affine coordinate correction model, as shown in the right part of Figure 4. In our calibration model, the planar target’s two axes are considered strictly orthogonal, which means the original η is fixed to $(0, 1)^T$. As shown in the right part of Figure 4, Ow-XwYwZw is the ideal orthogonal 3D world coordinate system of the planar target. The normalized 3D vector β remains in our calibration model to correct the sliding direction to the ideal orthogonal one.
Correspondingly, the transformation equation should be adjusted as follows:

$$
\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
=
\begin{bmatrix} 1 & \eta_x & \beta_x \\ 0 & \eta_y & \beta_y \\ 0 & 0 & \beta_z \end{bmatrix}
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix}
\;\Longrightarrow\;
\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & \beta_x \\ 0 & 1 & \beta_y \\ 0 & 0 & \beta_z \end{bmatrix}
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix}
\tag{1}
$$
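A minimal numeric sketch of Equation (1) (with an assumed β, not a calibrated one) makes the correction concrete: the in-plane coordinates pass through unchanged, while the slide amount Zp is redistributed along the corrected direction:

```python
import numpy as np

beta = np.array([0.087, 0.0, 0.9962])        # assumed normalized sliding direction
B = np.array([[1.0, 0.0, beta[0]],
              [0.0, 1.0, beta[1]],
              [0.0, 0.0, beta[2]]])

P_p = np.array([10.0, 20.0, 5.0])            # (X_p, Y_p) on the target, Z_p = slide
P_w = B @ P_p                                # corrected orthogonal world coordinates
print(P_w)                                   # -> [10.435, 20.0, 4.981]
```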
Figure 5 shows the pinhole imaging model of the camera. O-XYZ is the camera coordinate system, of which the unit is mm. OR-UV is the camera image sensor’s two-dimensional pixel coordinate system, of which the origin point is in the upper left corner of the image sensor and the unit is Pixel. Ou-xuyu is the image-plane coordinate system, of which the unit is mm.
According to the theory of rigid transformation, the transformation relationship between the camera coordinate system O-XYZ and the calibration target object world coordinate system Ow-XwYwZw can be expressed as:
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T
\tag{2}
$$
in which $R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}$ and $T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}$.
The transformation of points’ coordinates between system OR-UV and O-XYZ can be expressed by Equation (3):
$$
\rho \tilde{P}_I = \rho \begin{bmatrix} U \\ V \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & \gamma & U_0 \\ 0 & f_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
\tag{3}
$$
The focal length in (3) is expressed as fx and fy, which respectively give its equivalent pixel counts in the sensor’s horizontal and vertical directions. Because homogeneous coordinate transformations are used, ρ in this paper denotes a proportionality coefficient of the transformation and has no strict physical meaning.
Thus, the ideal process from the affine space coordinate system Op-XpYpZp to the camera image sensor’s two-dimensional pixel coordinate system OR-UV can be expressed as:
$$
\rho \left( \tilde{P}_I - \tilde{P}_c \right)
= \rho \begin{bmatrix} U - U_0 \\ V - V_0 \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & \gamma & 0 \\ 0 & f_y & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_1 & r_2 & r_3 & T_x \\ r_4 & r_5 & r_6 & T_y \\ r_7 & r_8 & r_9 & T_z \end{bmatrix}
\begin{bmatrix} 1 & 0 & \beta_x & 0 \\ 0 & 1 & \beta_y & 0 \\ 0 & 0 & \beta_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_p \\ Y_p \\ Z_p \\ 1 \end{bmatrix}
= A_{3 \times 4} \tilde{P}_p
\tag{4}
$$
Compared with the ACC model in Table 3, our novel IACC model introduces γ to describe the skewness of the image sensor’s two axes. If the actual included angle between the image sensor’s two axes is θ, then in physical meaning γ = fy′ cot θ, and the fy in (4) is the effective value fy = fy′/sin θ, where fy′ denotes the true vertical focal length. The physical meaning of γ illustrates that when θ is close to 90°, γ should be close to 0. Thus, many scholars and engineers choose to regard γ as 0 in actual scenes, given the manufacturing level of current industrial cameras.
But what really appeals to us is that γ supplies 1 DOF for the intermediate matrix A3×4, which we lost when we abandoned η, and γ is an intrinsic parameter of the camera. We will further explain the significance of γ for IACC in Section 4.1.

3.3. Processing of Lens’ Distortion

There are two main types of lens distortion errors due to inevitable processing and assembly errors, i.e., radial distortion and tangential distortion.
The radial distortion is symmetrical about the main optical axis of the camera, and its mathematical model can be expressed as:
$$
\begin{cases}
\delta_{xr} = x'_u \left( k_1 q^2 + k_2 q^4 + k_3 q^6 + \cdots \right) \\
\delta_{yr} = y'_u \left( k_1 q^2 + k_2 q^4 + k_3 q^6 + \cdots \right)
\end{cases}
\tag{5}
$$
in which $q = \sqrt{x_u'^2 + y_u'^2}$, and $(x'_u, y'_u)$ is the ideal (undistorted) normalized coordinate in the image-plane coordinate system Ou-xuyu, whose distortion center is Ou. k1, k2, … are the radial distortion coefficients, of which generally only the first two orders play a major role.
The tangential distortion is not symmetrical about the main optical axis of the camera lens, and its mathematical model is:
$$
\begin{cases}
\delta_{xt} = p_1 \left( q^2 + 2 x_u'^2 \right) + 2 p_2 x'_u y'_u \\
\delta_{yt} = 2 p_1 x'_u y'_u + p_2 \left( q^2 + 2 y_u'^2 \right)
\end{cases}
\tag{6}
$$
In this formula, p1 and p2 are the first two tangential distortion coefficients.
In the image-plane system Ou-xuyu, the mathematical relationship between the ideal imaging point’s normalized coordinate $(x'_u, y'_u)$ and the actual imaging point’s normalized coordinate $(x'_d, y'_d)$ can be expressed by Equation (7). Note that the subscript u in this paper represents ideal coordinate values, the subscript d represents coordinate values with distortion, the superscript “ ′ ” represents normalized coordinates, and the superscript “^” represents ideal image point coordinates from reprojection.
$$
\begin{cases}
x'_d = x'_u + \delta_{xr} + \delta_{xt} \\
y'_d = y'_u + \delta_{yr} + \delta_{yt}
\end{cases}
\tag{7}
$$
Combining Equation (3) with the known calibrated parameters, the ideal image point’s coordinates $(\hat{U}_d, \hat{V}_d)$ in OR-UV can be expressed as:
$$
\begin{cases}
\hat{U}_d = f_x x'_d + \gamma y'_d + U_0 \\
\hat{V}_d = f_y y'_d + V_0
\end{cases}
\tag{8}
$$
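As a compact numeric illustration of Equations (5)–(8), the sketch below distorts an ideal normalized point and projects it to pixels; the coefficient values are the ones assumed for the simulated camera in Section 5.1 (with k3 taken as 0), not calibrated results:

```python
def distort_and_project(xu, yu, k1, k2, k3, p1, p2, fx, fy, gamma, U0, V0):
    """Apply Eqs. (5)-(7) to an ideal normalized point, project via Eq. (8)."""
    q2 = xu**2 + yu**2                              # q^2
    radial = k1 * q2 + k2 * q2**2 + k3 * q2**3      # radial factor of Eq. (5)
    dxr, dyr = xu * radial, yu * radial
    dxt = p1 * (q2 + 2 * xu**2) + 2 * p2 * xu * yu  # tangential terms, Eq. (6)
    dyt = 2 * p1 * xu * yu + p2 * (q2 + 2 * yu**2)
    xd, yd = xu + dxr + dxt, yu + dyr + dyt         # distorted normalized, Eq. (7)
    U = fx * xd + gamma * yd + U0                   # pixel coordinates, Eq. (8)
    V = fy * yd + V0
    return U, V

U, V = distort_and_project(0.1, -0.05, -0.005, 0.005, 0.0, 0.001, 0.001,
                           2255.0, 2254.8, 0.05, 640.0, 512.0)
```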

4. Key Procedures of IACC Calibration Method

4.1. Initial Value Linear Solving and Parameters Separation Method

The relationship between A3×4 from Equation (4) and parameters to be calibrated can be expressed as:
$$
A_{3 \times 4} =
\begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ a_5 & a_6 & a_7 & a_8 \\ a_9 & a_{10} & a_{11} & 1 \end{bmatrix}
=
\begin{bmatrix}
\dfrac{f_x r_1 + \gamma r_4}{T_z} & \dfrac{f_x r_2 + \gamma r_5}{T_z} & \dfrac{f_x (\beta_x r_1 + \beta_y r_2 + \beta_z r_3) + \gamma (\beta_x r_4 + \beta_y r_5 + \beta_z r_6)}{T_z} & \dfrac{f_x T_x + \gamma T_y}{T_z} \\[1ex]
\dfrac{f_y r_4}{T_z} & \dfrac{f_y r_5}{T_z} & \dfrac{f_y (\beta_x r_4 + \beta_y r_5 + \beta_z r_6)}{T_z} & \dfrac{f_y T_y}{T_z} \\[1ex]
\dfrac{r_7}{T_z} & \dfrac{r_8}{T_z} & \dfrac{\beta_x r_7 + \beta_y r_8 + \beta_z r_9}{T_z} & 1
\end{bmatrix}
\tag{9}
$$
A3×4 supplies 11 DOF, and the DOF of the final parameters to be calibrated (the intrinsic and extrinsic parameters, except for Pc = (U0, V0)) is also 11. Theoretically, the analytical solutions of the camera’s intrinsic and extrinsic parameters can be obtained directly from a1~a11 by algebraic solving.
Firstly, assume Pc is at the image center, and solve the initial values of a1~a11 by the linear least squares method. With Equations (4) and (9), there are:
$$
\begin{cases}
U_i - U_0 = \dfrac{a_1 X_{pi} + a_2 Y_{pi} + a_3 Z_{pi} + a_4}{a_9 X_{pi} + a_{10} Y_{pi} + a_{11} Z_{pi} + 1} \\[2ex]
V_i - V_0 = \dfrac{a_5 X_{pi} + a_6 Y_{pi} + a_7 Z_{pi} + a_8}{a_9 X_{pi} + a_{10} Y_{pi} + a_{11} Z_{pi} + 1}
\end{cases}
\tag{10}
$$
With N pairs of corresponding calibration feature points, we can obtain the least squares solution $a = [a_1\ a_2\ a_3\ a_4\ a_5\ a_6\ a_7\ a_8\ a_9\ a_{10}\ a_{11}]^T$.
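A minimal sketch of this linear step (not the authors’ code): multiplying Equation (10) through by its denominator makes each point pair contribute two rows that are linear in a1~a11, which a standard least squares solver handles directly:

```python
import numpy as np

def solve_a(P_p, UV, U0, V0):
    """Linear initial values of a1..a11 from Equation (10).
    P_p: (N, 3) affine coordinates; UV: (N, 2) extracted pixel coordinates."""
    rows, rhs = [], []
    for (Xp, Yp, Zp), (U, V) in zip(P_p, UV):
        u, v = U - U0, V - V0
        # u * (a9*Xp + a10*Yp + a11*Zp + 1) = a1*Xp + a2*Yp + a3*Zp + a4
        rows.append([Xp, Yp, Zp, 1, 0, 0, 0, 0, -u * Xp, -u * Yp, -u * Zp])
        rhs.append(u)
        rows.append([0, 0, 0, 0, Xp, Yp, Zp, 1, -v * Xp, -v * Yp, -v * Zp])
        rhs.append(v)
    a, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return a  # [a1, ..., a11]
```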
With enough orthogonality constraints and the properties of the normalized vector β, shown in the left part of Equation (11), the right part of Equation (11) can be deduced as follows:
$$
\begin{cases}
r_1^2 + r_4^2 + r_7^2 = 1 \\
r_2^2 + r_5^2 + r_8^2 = 1 \\
r_3^2 + r_6^2 + r_9^2 = 1 \\
r_1 r_2 + r_4 r_5 + r_7 r_8 = 0 \\
r_2 r_3 + r_5 r_6 + r_8 r_9 = 0 \\
r_1 r_3 + r_4 r_6 + r_7 r_9 = 0 \\
\beta_x^2 + \beta_y^2 + \beta_z^2 = 1
\end{cases}
\;\Longrightarrow\;
\begin{cases}
a_1^2 \dfrac{T_z^2}{f_x^2} + a_5^2 \dfrac{T_z^2 \gamma^2 + f_x^2 T_z^2}{f_x^2 f_y^2} - 2 a_1 a_5 \dfrac{T_z^2 \gamma}{f_x^2 f_y} + a_9^2 T_z^2 = 1 \\[1ex]
a_2^2 \dfrac{T_z^2}{f_x^2} + a_6^2 \dfrac{T_z^2 \gamma^2 + f_x^2 T_z^2}{f_x^2 f_y^2} - 2 a_2 a_6 \dfrac{T_z^2 \gamma}{f_x^2 f_y} + a_{10}^2 T_z^2 = 1 \\[1ex]
a_3^2 \dfrac{T_z^2}{f_x^2} + a_7^2 \dfrac{T_z^2 \gamma^2 + f_x^2 T_z^2}{f_x^2 f_y^2} - 2 a_3 a_7 \dfrac{T_z^2 \gamma}{f_x^2 f_y} + a_{11}^2 T_z^2 = 1 \\[1ex]
a_1 a_2 \dfrac{T_z^2}{f_x^2} + a_5 a_6 \dfrac{T_z^2 \gamma^2 + f_x^2 T_z^2}{f_x^2 f_y^2} - (a_1 a_6 + a_2 a_5) \dfrac{T_z^2 \gamma}{f_x^2 f_y} + a_9 a_{10} T_z^2 = 0 \\[1ex]
a_1 a_3 \dfrac{T_z^2}{f_x^2} + a_5 a_7 \dfrac{T_z^2 \gamma^2 + f_x^2 T_z^2}{f_x^2 f_y^2} - (a_1 a_7 + a_3 a_5) \dfrac{T_z^2 \gamma}{f_x^2 f_y} + a_9 a_{11} T_z^2 = \beta_x \\[1ex]
a_2 a_3 \dfrac{T_z^2}{f_x^2} + a_6 a_7 \dfrac{T_z^2 \gamma^2 + f_x^2 T_z^2}{f_x^2 f_y^2} - (a_2 a_7 + a_3 a_6) \dfrac{T_z^2 \gamma}{f_x^2 f_y} + a_{10} a_{11} T_z^2 = \beta_y
\end{cases}
\tag{11}
$$
In the calibration process, Tz > 0, fx > 0, fy > 0, and βz > 0 are specified. The analytical solutions of $T_z^2/f_x^2$, $(T_z^2\gamma^2 + f_x^2 T_z^2)/(f_x^2 f_y^2)$, $T_z^2\gamma/(f_x^2 f_y)$, and $T_z^2$ can be obtained by solving the first four equations in Equation (11). From these, the four parameters fx, fy, γ, and Tz can be solved. βx and βy can then be solved from the fifth and sixth equations in Equation (11).
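The back-substitution from the four composite unknowns to (fx, fy, γ, Tz) admits short closed forms; the sketch below is our own algebraic rearrangement of those four relations, not code from the paper:

```python
import numpy as np

def separate_parameters(u1, u2, u3, u4):
    """Recover (fx, fy, gamma, Tz) from the composite unknowns
    u1 = Tz^2/fx^2,  u2 = (Tz^2*g^2 + fx^2*Tz^2)/(fx^2*fy^2),
    u3 = Tz^2*g/(fx^2*fy),  u4 = Tz^2,  with Tz, fx, fy > 0 specified."""
    Tz = np.sqrt(u4)
    fx = Tz / np.sqrt(u1)
    fy = Tz / np.sqrt(u2 - u3**2 / u1)   # gamma eliminated between u2 and u3
    gamma = u3 * fy / u1                 # back-substituted into u3
    return fx, fy, gamma, Tz
```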
In physical meaning, the introduction of γ supplies an extra intrinsic parameter for individual cameras. In mathematical meaning, γ supplies 1 DOF for the intermediate matrix A3×4, which we lost when we abandoned η. Since the DOF of the final parameters (the intrinsic and extrinsic parameters, except for Pc = (U0, V0)) is equal to the DOF of A3×4, the orthogonality of R3×3’s analytical solution can be guaranteed.
Bringing the above parameters back into Equation (9), the remaining extrinsic parameters can be obtained as follows:
$$
\begin{cases}
r_4 = \dfrac{a_5 T_z}{f_y} \\[1ex]
r_7 = a_9 T_z \\[1ex]
r_1 = \dfrac{a_1 T_z - \gamma r_4}{f_x} \\[1ex]
T_y = \dfrac{a_8 T_z}{f_y} \\[1ex]
T_x = \dfrac{a_4 T_z - \gamma T_y}{f_x}
\end{cases}
\qquad
\begin{cases}
r_5 = \dfrac{a_6 T_z}{f_y} \\[1ex]
r_8 = a_{10} T_z \\[1ex]
r_2 = \dfrac{a_2 T_z - \gamma r_5}{f_x} \\[1ex]
\begin{pmatrix} r_3 \\ r_6 \\ r_9 \end{pmatrix} = \begin{pmatrix} r_1 \\ r_4 \\ r_7 \end{pmatrix} \times \begin{pmatrix} r_2 \\ r_5 \\ r_8 \end{pmatrix}
\end{cases}
\tag{12}
$$
So far, the initial values of all final parameters other than the distortion coefficients have been solved linearly. The geometric constraints fully guarantee the orthogonality of the rotation matrix, so there is no need to further approximate its initial value.

4.2. Parameters’ Nonlinear Optimization

Section 4.1 has given the method for solving the parameters’ linear initial values. Combining Equations (4)–(8), the parameters to be calibrated are summarized as follows:
$$
\begin{cases}
\beta = (\beta_x, \beta_y)^T \\[0.5ex]
K_{\mathrm{IACC}} = \begin{bmatrix} f_x & \gamma & 0 \\ 0 & f_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \\[2ex]
P_c = (U_0, V_0) \\[0.5ex]
R_{\mathrm{vector}3\times1} = \mathrm{Rodrigues}\left( \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \right) \\[2ex]
T = (T_x, T_y, T_z)^T \\[0.5ex]
D = (k_1, k_2, p_1, p_2, k_3)^T
\end{cases}
\tag{13}
$$
With the improvement of lens manufacturing processes, the distortion of today’s non-wide-angle camera lenses is very small, so the initial guess of D can simply be set to 0, and the initial guess of (U0, V0) can be set to the center of the collected images. As one of the non-coplanar calibration methods’ advantages, there is only one set of intrinsic and extrinsic parameters for all collected images. Equation (13) contains the Rodrigues transformation, which supplies the interconversion between the R3×3 rotation matrix and the R3×1 vector.
The minimum error squared sum objective function for pixel coordinates can be established:
$$
\begin{cases}
f_{NU_i} = U_i - \hat{U}_{di}\left( K_{\mathrm{IACC}}, P_c, R_{\mathrm{vector}3\times1}, T_{\mathrm{vector}3\times1}, D, \beta_x, \beta_y \right) \\[0.5ex]
f_{NV_i} = V_i - \hat{V}_{di}\left( K_{\mathrm{IACC}}, P_c, R_{\mathrm{vector}3\times1}, T_{\mathrm{vector}3\times1}, D, \beta_x, \beta_y \right) \\[0.5ex]
I\left( K_{\mathrm{IACC}}, P_c, R_{\mathrm{vector}3\times1}, T_{\mathrm{vector}3\times1}, D, \beta_x, \beta_y \right) = \displaystyle\sum_{i=1}^{N} f_{NU_i}^2 + f_{NV_i}^2 = \min
\end{cases}
\tag{14}
$$
in which $(\hat{U}_{di}, \hat{V}_{di})$ is the projection of the target’s feature point from the affine space coordinate system Op-XpYpZp to the camera image sensor’s two-dimensional pixel coordinate system OR-UV, and (Ui, Vi) is the corresponding feature point’s coordinate extracted from the image.
This paper uses the Levenberg-Marquardt algorithm to solve the nonlinear minimization problem of the monocular camera system shown in Equation (14). Experiments verify that our method converges well to the optimum values when the initial guesses of the parameters are well estimated.
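A condensed sketch of this refinement step using SciPy’s Levenberg-Marquardt solver is given below. The parameter packing and the `project` helper are our own illustrative assumptions; they implement Equations (1)–(8) in miniature rather than reproduce the authors’ implementation:

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def project(params, P_p):
    """Reproject affine-space points for a packed IACC parameter vector:
    [fx, fy, gamma, U0, V0, rvec(3), T(3), k1, k2, p1, p2, k3, bx, by]."""
    fx, fy, g, U0, V0 = params[0:5]
    rvec, T = params[5:8], params[8:11]
    k1, k2, p1, p2, k3 = params[11:16]
    bx, by = params[16:18]
    bz = np.sqrt(max(1.0 - bx**2 - by**2, 1e-12))        # beta is normalized
    B = np.array([[1, 0, bx], [0, 1, by], [0, 0, bz]], float)
    R, _ = cv2.Rodrigues(np.asarray(rvec, float).reshape(3, 1))
    Pcam = (R @ B @ P_p.T).T + T                         # camera coordinates
    xu, yu = Pcam[:, 0] / Pcam[:, 2], Pcam[:, 1] / Pcam[:, 2]
    q2 = xu**2 + yu**2
    rad = k1 * q2 + k2 * q2**2 + k3 * q2**3
    xd = xu * (1 + rad) + p1 * (q2 + 2 * xu**2) + 2 * p2 * xu * yu
    yd = yu * (1 + rad) + 2 * p1 * xu * yu + p2 * (q2 + 2 * yu**2)
    return np.column_stack([fx * xd + g * yd + U0, fy * yd + V0])

def residuals(params, P_p, UV):
    """Stack f_NUi, f_NVi of Equation (14) for all N feature points."""
    return (UV - project(params, P_p)).ravel()

# x0 packs the linear initial values from Section 4.1 (D initialized to 0):
# res = least_squares(residuals, x0, args=(P_p, UV), method="lm")
```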

4.3. Binocular Camera System Calibration Method

A binocular camera system can be seen as two related monocular cameras. Thus, one strategy to calibrate a binocular camera system is to combine two related monocular camera calibrations. As shown in Figure 3d, when the binocular camera system takes some mono-view non-coplanar 2D target images in a common viewing field, the collected images can be used to implement the binocular camera system calibration. As with the monocular calibration above, our novel method needs no extra equipment to make the sliding direction perpendicular to the target’s plane.
At first, for each single camera, repeat the process of Equations (10)–(12) to calculate the initial values of the parameters. The parameters to be calibrated in the binocular system can be summarized as:
$$
\text{Left Camera:}
\begin{cases}
\beta = (\beta_x, \beta_y)^T \\[0.5ex]
K_{a\text{-IACC}} = \begin{bmatrix} f_{ax} & \gamma_a & 0 \\ 0 & f_{ay} & 0 \\ 0 & 0 & 1 \end{bmatrix} \\[2ex]
P_{a\text{-}c} = (U_{a0}, V_{a0}) \\[0.5ex]
R_{a\text{-vector}3\times1} = \mathrm{Rodrigues}\left( \begin{bmatrix} r_{a1} & r_{a2} & r_{a3} \\ r_{a4} & r_{a5} & r_{a6} \\ r_{a7} & r_{a8} & r_{a9} \end{bmatrix} \right) \\[2ex]
T_a = (T_{ax}, T_{ay}, T_{az})^T \\[0.5ex]
D_a = (k_{a1}, k_{a2}, p_{a1}, p_{a2}, k_{a3})^T
\end{cases}
\qquad
\text{Right Camera:}
\begin{cases}
\beta = (\beta_x, \beta_y)^T \\[0.5ex]
K_{b\text{-IACC}} = \begin{bmatrix} f_{bx} & \gamma_b & 0 \\ 0 & f_{by} & 0 \\ 0 & 0 & 1 \end{bmatrix} \\[2ex]
P_{b\text{-}c} = (U_{b0}, V_{b0}) \\[0.5ex]
R_{b\text{-vector}3\times1} = \mathrm{Rodrigues}\left( \begin{bmatrix} r_{b1} & r_{b2} & r_{b3} \\ r_{b4} & r_{b5} & r_{b6} \\ r_{b7} & r_{b8} & r_{b9} \end{bmatrix} \right) \\[2ex]
T_b = (T_{bx}, T_{by}, T_{bz})^T \\[0.5ex]
D_b = (k_{b1}, k_{b2}, p_{b1}, p_{b2}, k_{b3})^T
\end{cases}
\tag{15}
$$
The careful reader may notice that βx, βy, and βz have been calculated twice, separately, in the two single cameras’ initial value solving processes. Theoretically, βx, βy, and βz should be identical for the two cameras of a binocular system.
Our solution to this problem is to use either solution of (βx, βy) as the initial value for the binocular system. Then, the other initial values of the final parameters shown in Equation (15) are taken, along with (βx, βy), into the nonlinear optimization procedure.
Then, the minimum error squared sum objective function for pixel coordinates can be established:
$$
\begin{cases}
f_{NU_{ai}} = U_{ai} - \hat{U}_{adi}\left( K_{a\text{-IACC}}, P_{a\text{-}c}, R_{a\text{-vector}3\times1}, T_{a\text{-vector}3\times1}, D_a, \beta_x, \beta_y \right) \\[0.5ex]
f_{NV_{ai}} = V_{ai} - \hat{V}_{adi}\left( K_{a\text{-IACC}}, P_{a\text{-}c}, R_{a\text{-vector}3\times1}, T_{a\text{-vector}3\times1}, D_a, \beta_x, \beta_y \right) \\[0.5ex]
f_{NU_{bi}} = U_{bi} - \hat{U}_{bdi}\left( K_{b\text{-IACC}}, P_{b\text{-}c}, R_{b\text{-vector}3\times1}, T_{b\text{-vector}3\times1}, D_b, \beta_x, \beta_y \right) \\[0.5ex]
f_{NV_{bi}} = V_{bi} - \hat{V}_{bdi}\left( K_{b\text{-IACC}}, P_{b\text{-}c}, R_{b\text{-vector}3\times1}, T_{b\text{-vector}3\times1}, D_b, \beta_x, \beta_y \right) \\[0.5ex]
I\left( K_{a\text{-IACC}}, P_{a\text{-}c}, R_{a\text{-vector}3\times1}, T_{a\text{-vector}3\times1}, D_a, K_{b\text{-IACC}}, P_{b\text{-}c}, R_{b\text{-vector}3\times1}, T_{b\text{-vector}3\times1}, D_b, \beta_x, \beta_y \right) = \displaystyle\sum_{i=1}^{N} f_{NU_{ai}}^2 + f_{NV_{ai}}^2 + f_{NU_{bi}}^2 + f_{NV_{bi}}^2 = \min
\end{cases}
\tag{16}
$$
in which $(\hat{U}_{adi}, \hat{V}_{adi})$ and $(\hat{U}_{bdi}, \hat{V}_{bdi})$ are the projections of the target’s feature points from the affine space coordinate system Op-XpYpZp to the left and right camera image sensors’ two-dimensional pixel coordinate systems ORa-UaVa and ORb-UbVb, respectively. (Uai, Vai) and (Ubi, Vbi) are the corresponding feature points’ coordinates extracted from the images of the two cameras.
As with the monocular calibration, this paper uses the Levenberg-Marquardt algorithm to solve the nonlinear minimization problem of the binocular camera system shown in Equation (16). Experiments verify that our method converges well to the optimum values when the initial guesses of the parameters are well estimated. It is worth mentioning that the present IACC calibration method has good universality and stability for conventional binocular camera systems.
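Structurally, Equation (16) differs from the monocular case only in that the two cameras keep separate intrinsics, extrinsics, and distortion while sharing one (βx, βy). A sketch of the residual stacking, reusing the hypothetical `project` helper from Section 4.2 and assuming 16 per-camera parameters followed by the shared pair:

```python
import numpy as np
from scipy.optimize import least_squares

def binocular_residuals(params, P_p, UV_a, UV_b):
    """Equation (16): two per-camera parameter blocks plus one shared beta."""
    shared_beta = params[-2:]                       # (beta_x, beta_y)
    cam_a = np.concatenate([params[0:16], shared_beta])
    cam_b = np.concatenate([params[16:32], shared_beta])
    res_a = (UV_a - project(cam_a, P_p)).ravel()    # left-camera residuals
    res_b = (UV_b - project(cam_b, P_p)).ravel()    # right-camera residuals
    return np.concatenate([res_a, res_b])

# x0_bino stacks both cameras' linear initial values and one beta estimate:
# res = least_squares(binocular_residuals, x0_bino,
#                     args=(P_p, UV_a, UV_b), method="lm")
```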

4.4. Novel Simple Circle Feature Points Extraction Algorithm with High Accuracy and Stability Based on Local-ROI-OTSU and Radial Section Scanning Method

Datta et al. [31] and other scholars have verified that a circle pattern obtains better calibration accuracy than a chessboard pattern in most instances. Refinement based on an iterative method [31], inverse rendering [32], or image rectification [26], etc., can indeed help improve feature extraction accuracy. However, these strategies are built on prior knowledge of the spatial relationship between the camera and targets, their implementation is not simple, and they do not consider the algorithms’ stability to illumination, rotations, etc.
This paper proposes a simple circle feature point extraction algorithm with high accuracy and stability based on Local-ROI-OTSU and a radial section scanning method. Its introduction and derivation are in Appendix B, and the improvements in accuracy and stability brought by our novel algorithm are verified in Section 5.2, Section 5.3, Section 5.4 and Section 5.5.

5. Experiments Results and Discussion

Several experiments are set up to test the performance of the methods proposed in this paper. The camera model used is the Basler acA1300-60gm, whose resolution is 1280 × 1024 and whose pixel size is 5.3 µm × 5.3 µm, fitted with matching 12 mm lenses. Three classical and typical calibration methods—Tsai’s method [5], Zhang’s method [9], and the ACC method [8]—are used as the contrast methods in the experiments.
Firstly, simulation experiments are carried out in Section 5.1 to analyze the performance of our calibration method with respect to the noise level, the number of planes, and different yaw angles. Stability simulations of the proposed novel circle feature point extraction algorithm are carried out in Section 5.2 to evaluate its stability with respect to illumination and viewing angle changes.
Then, real calibration experiments for monocular cameras are carried out in Section 5.3 and Section 5.4. The intrinsic and extrinsic parameters of the two cameras are calibrated respectively by the proposed IACC method and the three contrast methods, and the accuracy of the resulting parameters is evaluated from multiple aspects.
Further, real calibration experiments for the binocular camera system are carried out in Section 5.5. Zhang’s method, the ACC method, and the proposed IACC method are separately used to calibrate the binocular camera system’s unknown parameters. Then, 3D reconstruction experiments with the calibrated parameters are performed for discrete feature points to test the actual measurement accuracy, comparing the measured distances between points with the actual values. The experiments also verify the deduction in Section 2.4 about the local spatial optimality of calibration parameters.
In Section 5.6, the stereo-DIC method with the calibrated binocular systems is used to carry out full-field stereo measurements based on 3D reconstruction. The results show the feasibility of applying our IACC calibration method to both discrete feature points and full-field surface measurements.
To ensure objectivity, the same calibration targets with different patterns are used in the experiments comparing different calibration methods. The pattern processing accuracy of the targets is 1 μm. Different feature point extraction algorithms are used to obtain the points’ sub-pixel coordinates for the contrast experiments. A Zolix KA50 motorized linear stage with an MC600 controller is used to generate fixed-direction displacements for the non-coplanar methods. An Attocube IDS3010 laser interferometer is used to monitor the 1D out-of-plane shifts of the targets’ plane.

5.1. Performance Simulations of Proposed IACC Calibration Method with Respect to the Noise Level, the Number of Calibration Images, and the Rotation Angle of Targets’ Plane

The simulated camera has the following properties: fx = 2255.0, fy = 2254.8, γ = 0.05, (U0, V0) = (640, 512), (k1, k2) = (−0.005, 0.005), and (p1, p2) = (0.001, 0.001). The target’s plane contains 11 × 8 = 88 feature points, the distance between adjacent feature points is 10 mm, and the image resolution is 1280 × 1024.
Performance with respect to the noise level. In this experiment, we use 10 planes in a mono-viewing multiplane position to simulate the monocular camera calibration. The extrinsic parameters are set as follows: (Rv1, Rv2, Rv3) = (0, 0, 0), (Tx, Ty, Tz) = (−64.0, −51.2, 225.0), and (βx, βy) = (0.087, 0.000). Gaussian noise with 0 mean and standard deviation σ is added to the projected image points, with the noise level varied from 0.1 pixels to 2.0 pixels. For each noise level, we perform 100 independent trials, and the reported results are the averages. As shown in Figure 6b, the relative errors in fx and fy are less than 0.4%, and for most simulated noise levels they are less than 0.3%. The other intrinsic parameters, i.e., γ, U0, and V0, show similar properties to fx and fy; as shown in Figure 6, they have very good accuracy and stability. The intrinsic parameters’ average calibration results are not as sensitive to the noise level as those of Zhang’s method: Reference [9] mentioned that the simulated intrinsic parameters’ errors of Zhang’s method increase linearly with the noise level, and for σ = 0.5, the errors in fx and fy with Zhang’s method are less than 0.3%. Thus, our method shows better stability than Zhang’s when the noise level is less than 2.0 pixels. The extrinsic parameters’ errors also remain within a reasonable range: when the noise level is lower than 1.4 pixels, the rotation angle’s error is less than 0.02° and the max error of (Tx, Ty, Tz) is less than 3 mm. The error of (βx, βy) is less than 0.015, which means the translation direction’s calibration error is less than 0.015°. The distortion coefficients’ errors remain low when the noise level is lower than 2 pixels, especially below 1.4 pixels. It is worth mentioning that the reprojection error of the proposed calibration method converges well on the ground-truth error value we set when the noise level is less than 2.0 pixels, as shown in Figure 6a.
The standard deviation of the parameters’ calibration results is used to characterize the uncertainty of the results. As shown in Figure 6e, the uncertainty of fx and fy keeps increasing with the rising noise level. The other parameters’ uncertainties are not shown in the figure but have similar characteristics to those of fx and fy. Thus, the noise level of images has a direct negative effect on the certainty of the proposed calibration method, which should be noted in practical applications.
Performance with respect to the number of planes. In this experiment, the simulated camera has the same intrinsic and extrinsic parameters as in the noise-level experiment. We vary the number of planes in the mono-viewing multiplane position from 2 to 20. For each number, we perform 100 independent trials, adding independent Gaussian noise with mean 0 and standard deviation 0.5 pixels, and average the results as shown in Figure 7. We can learn from Figure 7b that the average relative errors of fx and fy decrease significantly when the number of planes increases from 2 to 3; they then become quite stable, with the relative error remaining lower than 0.3%. The other intrinsic and extrinsic parameters’ errors show similar properties. The absolute errors of the main distortion coefficients k1 and p1 stay close to 0, which shows favourable stability as the number of planes increases. The error of the high-order radial distortion coefficient k2 seems to change more dramatically, but in numerical terms it is still small and has little effect on the results.
The reprojection error data shown in Figure 7a illustrate that the proposed calibration method converges well on the ground-truth error value, and the number of planes has little effect on the reprojection error. Further, the uncertainty of fx and fy shown in Figure 7e decreases significantly when the number of planes increases from 2 to 7, and then decreases more slowly as the number increases from 7 to 20.
Performance with respect to the rotation angle of the target’s plane. In this experiment, the displacement direction of the calibration target remains parallel to the optical axis of the camera. To examine the influence of the orientation of the target’s plane with respect to the imaging plane, we first set the target’s plane parallel to the imaging plane and then rotate it around the Yw-axis by an angle θ, varied from 10° to 50°. From θ we obtain R_vec(Rv1, Rv2, Rv3) = (0, −θ (rad), 0) and (βx, βy, βz) = (sin θ, 0, cos θ). The other extrinsic and intrinsic parameters remain the same as in the above two experiments. These parameters are used to generate the simulation datasets, with independent Gaussian noise of mean 0 and standard deviation 0.5 pixels added to the projected points. Ten images of simulated feature point pairs are used to calibrate the camera for each angle θ. We repeated this process 100 times and computed the average errors; the results are shown in Figure 8. The data in Figure 8b,d illustrate that the rotation angle has little effect on fx, fy, U0, and V0 when θ increases from 0° to 50°. When θ grows beyond 40°, the relative error of fx grows faster; when θ increases to 50°, the relative error of fx increases to around 0.3%, while the relative error of fy is still less than 0.1%. The rotation angle has a relatively large effect on γ, especially when θ grows beyond 20°. Yet even if the value of γ increases to six, the angle between the image sensor’s two axes in our simulated camera is 89.847°; this is very close to 90°, which can be accepted in real situations. As for the extrinsic parameters, Tx and Ty seem relatively more sensitive to the rotation angle than Tz, which can be explained by the simulated rotation direction. The rotation vector’s simulated value is quite close to the ground-truth value, and the error of the rotation vector is less than 0.1° for most simulated rotation angles. The distortion coefficients’ errors are low, which shows favourable stability with increasing rotation angle.
The reprojection error data shown in Figure 8a illustrate that the proposed calibration method converges well on the ground-truth error value, and the rotation angle has little effect on the reprojection error. Further, the uncertainty of fx and fy shown in Figure 8e increases relatively significantly as the angle increases from 0° to 50°, with the uncertainty of fx increasing more distinctly than that of fy; this can be explained by the simulated rotation direction. The other parameters’ uncertainties are not shown in the figure but have similar characteristics to those of fx and fy. Obviously, an increasing rotation angle may bring more uncertainty into the calibration parameters.
The experiments in Section 5.1 support some useful conclusions:
  • The proposed IACC calibration method can fit different levels of noise in images. From the simulation results, our method shows better accuracy and stability than Zhang’s method. However, an increasing noise level brings more uncertainty into the calibrated parameters. Thus, it is necessary to enhance the certainty of the parameters by reducing the noise level of the feature points’ coordinates.
  • The more images used in the calibration, the less uncertainty the parameters will have. Note that in practice, taking more images means we need more displacement data of the 2D targets, which may bring in new uncertainty. Thus, combined with our simulation experiments, the suggested number of images is around 10.
  • The proposed calibration method can fit 2D target planes at different angles to the image plane. Compared with the simulation data in [8], our improved method shows better accuracy and stability than the ACC method with respect to the rotation angle. However, increasing the angle may make precise feature point extraction harder and increase the uncertainty of the calibration parameters. Thus, images taken at large angles should be avoided; experience and data suggest keeping the angle below 45°.

5.2. Stability Simulations of the Proposed Novel Circle Feature Points Extraction Algorithm

Illumination conditions are very important to visual measurements. The edges of image features may shift by 1–2 pixels when the illumination intensity changes by 10~20%. In actual measurements, it is hard to put forward a uniform standard to evaluate the illumination’s sufficiency and suitability; illumination conditions are often set according to the operators’ experience. Thus, the stability of feature extraction algorithms with respect to illumination change is quite important for high-precision measurements.
A planar target with a symmetric circle pattern and a back light source is chosen as the measurement object. The illumination intensity of the back light source is constant, and engineering parts are used to keep the light source and planar target fixed. To simulate scenes from insufficient to sufficient illumination, we vary the exposure value of the camera from 600 to 3100 and take images from the front of the target. The image at an exposure value of 3100 is taken as the reference image. The present novel circle feature point extraction algorithm in Appendix B and OpenCV’s findCirclesGrid are used separately to extract the circle centers’ pixel coordinates. RMS errors in pixels between the test images and the reference image are used to evaluate the stability of the two algorithms with respect to illumination changes; the result is shown in Figure 9a. We further test the stability of these two algorithms at different viewing angles: holding the camera still, we rotate the target around its central axis by 20° and 45° and repeat the above procedure. The test results are shown in Figure 9b,c respectively.
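A short sketch of this stability metric is given below (file names and the exposure sweep are placeholders); it compares the circle centers extracted from each test exposure against the reference extraction:

```python
import numpy as np
import cv2

pattern = (9, 7)                                    # symmetric circle grid size
ref = cv2.imread("exposure_3100.png", cv2.IMREAD_GRAYSCALE)
ok_ref, ref_centers = cv2.findCirclesGrid(ref, pattern)

for ev in range(600, 3100, 500):                    # sweep of exposure values
    img = cv2.imread(f"exposure_{ev}.png", cv2.IMREAD_GRAYSCALE)
    ok, centers = cv2.findCirclesGrid(img, pattern)
    if not (ok and ok_ref):
        continue                                    # extraction failed: skip
    # RMS deviation in pixels of all centers relative to the reference image
    rms = np.sqrt(np.mean(np.sum((centers - ref_centers) ** 2, axis=2)))
    print(f"exposure {ev}: RMS deviation {rms:.3f} px")
```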
The results in Figure 9 clearly show that our novel circle feature points extraction algorithm performs better under illumination and viewing-angle changes than OpenCV's traditional findCirclesGrid algorithm. Simulations at different angles and illumination levels verify that our novel algorithm can stably extract symmetric circle features under varying viewing angles and illumination conditions.

5.3. Real Monocular Camera Calibration Experiments

Planar chessboard and symmetric circle patterns are chosen as the calibration targets' patterns. First, monocular camera calibration experiments are performed, using the same 2D calibration targets throughout. Machined parts fix the targets, stage, and cameras. The information of the calibration targets' patterns is shown in Table 6.
The stage and targets are used to generate a virtual 3D point array, with machined parts keeping the sliding direction approximately perpendicular to the targets' plane. Images of both patterns are taken at different positions. The feature point extraction functions findChessboardCorners and findCirclesGrid from OpenCV 3.3.0 are used in this section to extract the corresponding image pixel coordinates.
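For reference, a minimal sketch of how these two OpenCV extraction calls are typically invoked (the file name is a placeholder, and the chessboard sub-pixel refinement step is common practice rather than a step stated in this paper):

```python
import cv2

gray = cv2.imread("calib_01.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# 11 x 8 chessboard: detect inner corners, then refine to sub-pixel accuracy
ok, corners = cv2.findChessboardCorners(gray, (11, 8))
if ok:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

# 9 x 7 symmetric circle grid: centers come from the built-in blob detector
ok, centers = cv2.findCirclesGrid(gray, (9, 7), flags=cv2.CALIB_CB_SYMMETRIC_GRID)
```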
First, the 11 × 8 chessboard pattern is used to perform the monocular camera calibration experiments, in which 1760 point-pairs from 20 images are used to generate the datasets in Table 7.
Then, the above datasets are used to calibrate two monocular cameras separately with the four methods mentioned in Table 7. The reprojection RMS errors in pixels, together with the errors in the world system between detected and projected feature points (also a reprojection error, but in mm), are used to evaluate the accuracy of these four methods. Table 8 and Table 9 show the calibration results of the two different cameras by the mentioned four methods with the chessboard datasets in Table 7.
The results in Table 8 and Table 9 show that the present IACC method with the traditional chessboard pattern has better accuracy than Tsai’s method, ACC method, and Zhang’s method.
The other target, with a 9 × 7 symmetric circle pattern, is used to perform the monocular camera calibration experiments with the above four methods. In all, 630 feature point-pairs from 10 images are used to calibrate the unknown parameters. The datasets of the symmetric circle pattern in this section are generated using findCirclesGrid from OpenCV 3.3.0. The calibration results, shown in Table 10 and Table 11, indicate that the present IACC method with the traditional symmetric circle pattern also has better accuracy than Tsai's method, the ACC method, and Zhang's method.
Clearly, Table 8, Table 9, Table 10 and Table 11 also reflect that, across calibration methods, the symmetric circle pattern with OpenCV's findCirclesGrid performs better than the chessboard pattern with findChessboardCorners. The simulation results in Section 5.1 verify that the reprojection error converges approximately to the noise added to the feature points. Certainly, other noise sources exist in actual images, but the accuracy of the feature point extraction algorithm plays the major part in this noise; thus, the reprojection error can reflect the accuracy of the extraction algorithm. From the data in Table 9 and Table 10, findCirclesGrid achieves close to 0.02 pixels, whereas findChessboardCorners only comes close to 0.06 pixels in our real experiments.
Comparing the calibration results for the different cameras in Table 10 and Table 11, we can also notice that the improvement brought by IACC for the Single_R camera is relatively lower than for the Single_L camera. Upon examination, we found some imperceptible stains on the surface of the Single_R camera's imaging sensor, which may affect the quality of the calibration images. The feature extraction algorithm findCirclesGrid is easily affected by these stains because it is a gray-centroid-based blob detection algorithm. Clearly, alongside the calibration model, the accuracy of feature extraction directly affects the accuracy of calibration. Thus, the stability of algorithms across different measurement environments is important, and our new algorithm in Appendix B performs better than findCirclesGrid, as verified by the simulations in Section 5.2 and the real calibration experiments in Section 5.4.
As for distortion coefficients, Tsai's method assumes tangential distortion can be ignored to satisfy the RAC constraint, so it cannot calibrate tangential distortion. The present IACC method, the ACC method, and Zhang's method can calibrate the radial and tangential distortion coefficients through nonlinear optimization. It should be explained that the different definitions of the coefficients in [8] and in this paper cause the differences in values in Table 8, Table 9, Table 10 and Table 11. Both definitions satisfy the physical model and reflect the distortion level. For convenience, the coefficient definition in this paper follows Zhang's method [9].

5.4. Performance of Present New Algorithm in Appendix B for Improving Calibration Accuracy

We further apply the present new algorithm in Appendix B to generate feature point-pair datasets from the same calibration images as in Table 5 and Table 6. The present IACC calibration method and Zhang's method are chosen to verify that our novel circle feature points extraction algorithm can improve the calibration accuracy of both non-coplanar and coplanar calibration methods.
As in Table 10 and Table 11, 630 feature point-pairs from 10 images are used to calibrate the unknown parameters. Δθ is set to 1° and the searching step length in the radial direction is set to 0.1 pixel. The calibration results are shown in Table 12 and Table 13.
The results in Table 12 and Table 13 clearly verify that our novel circle feature points extraction algorithm effectively improves the calibration accuracy of both the coplanar and the non-coplanar method, represented by Zhang's method and the present IACC method. From the previous deduction that the reprojection error converges approximately to the accuracy of the feature points extraction algorithm, we can further reckon that the present new algorithm in Appendix B has better accuracy than findCirclesGrid from OpenCV 3.3.0. The accuracy of our algorithm can reach within 0.02 pixels in actual application.
As mentioned before, implementing coplanar calibration methods requires taking images of 2D targets at multi-viewing multiplane positions, as shown in Figure 3. The data in Table 12 and Table 13 also illustrate the stability of the proposed novel circle feature points extraction algorithm under rotation of the planar targets. As shown in Figure 10, our algorithm and key-point sorting strategy can handle situations where the angular deflection between the planar target and the imaging sensor remains at a reasonable level; usually, this angle should stay below 45° to preserve the accuracy of the calibration results.
Combined with the simulations in Section 5.2, the present new algorithm in Appendix B has better extraction accuracy and stability than the traditional findCirclesGrid algorithm from OpenCV 3.3.0, and it can effectively improve the calibration accuracy of both non-coplanar and coplanar methods.

5.5. Real Binocular Camera System Calibration and 3D Reconstruction Experiments for Discrete Feature Points

Experiments are set up to test the performance of the proposed binocular camera system calibration method. As a supplement, a 3D reconstruction experiment for discrete feature points evaluates the actual measurement accuracy of the binocular measurement system calibrated by the proposed method.
Firstly, the binocular camera system acquires a set of test images, part of which serve as parameter calibration images and the rest as accuracy test images. Then, the distances between different feature points on the target surface are measured and compared with the actual values.
Similar to the monocular calibration process, we fixed a one-dimensional displacement stage with the planar target of high precision on the optical platform, moved the planar target in a fixed direction, and used the laser interferometer to measure the moving shifts of the calibration target in this direction. Then, we established the affine space coordinate sequence of the calibration target. The intrinsic and extrinsic parameters of the binocular system can be calibrated through the target’s image sequence captured by the two cameras simultaneously.
The binocular system is shown in Figure 11. A machined part is designed to fix the two cameras. The designed horizontal baseline between the two cameras is 156 mm, and the included angle between each camera's optical axis and the baseline is about 75°. Five of the ten image-pairs of the symmetric circle pattern acquired at mono-viewing positions are taken to calibrate the parameters of the binocular system; the other five image-pairs are retained for the distance measurement experiment.
After the calculation process described in Section 4.3, the intrinsic and extrinsic parameters of the binocular camera system are calibrated simultaneously. The extrinsic parameters of the two cameras in the system can be expressed in a more universal format by Equation (17):
$$\left\{ \begin{aligned} R_a^{3\times 3} &= \mathrm{Rodrigues}\big(R_{a\text{-}vector}^{3\times 1}\big) \\ R_b^{3\times 3} &= \mathrm{Rodrigues}\big(R_{b\text{-}vector}^{3\times 1}\big) \\ R_{a\text{-}b}^{3\times 3} &= R_b^{3\times 3}\,\big(R_a^{3\times 3}\big)^{-1} \\ T_{a\text{-}b} &= T_b - R_{a\text{-}b}^{3\times 3}\, T_a \end{aligned} \right. \tag{17}$$
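A minimal sketch of Equation (17), assuming the per-camera poses are available as OpenCV-style rotation vectors rvec_a, rvec_b and translation vectors t_a, t_b:

```python
import cv2

def relative_extrinsics(rvec_a, t_a, rvec_b, t_b):
    """Equation (17): turn the two cameras' world-to-camera poses into the
    pose of camera b relative to camera a."""
    R_a, _ = cv2.Rodrigues(rvec_a)   # 3x1 rotation vector -> 3x3 matrix
    R_b, _ = cv2.Rodrigues(rvec_b)
    R_ab = R_b @ R_a.T               # rotation matrices are orthonormal: inverse == transpose
    T_ab = t_b - R_ab @ t_a
    return R_ab, T_ab
```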
The calibration results of the proposed method are shown in the bottom part of Table 14.
It is worth noting that the binocular system calibration method of reference [8] (the ACC method) can only calibrate the extrinsic parameters given previously known monocular intrinsic parameters; it cannot calibrate the intrinsic and extrinsic parameters simultaneously. Moreover, the ACC binocular calibration method introduces a penalty function to ensure the orthogonality of the rotation matrix, which means the penalty factors must be adjusted along with changes in the binocular cameras' structure, the calibration targets' pattern, the monocular calibration accuracy, etc. Above all, in actual applications, this loss of universality makes the ACC method too cumbersome for binocular camera system calibration. The extrinsic parameters of the binocular system calibrated by the ACC method, together with the prerequisite intrinsic parameters, are shown in the top-left of Table 14.
Similar to the ACC method, the complicated procedure of adjusting the target's sliding direction in Tsai's non-coplanar method also makes it inefficient for calibrating either monocular or binocular camera systems with high accuracy. Considering that the accuracy of the previous monocular calibration with Tsai's method under the current conditions is much worse than that of the other three methods, Tsai's method is not taken as a contrast method in this section.
Like the present IACC method, Zhang's method can calibrate the intrinsic and extrinsic parameters of the binocular system simultaneously. Zhang's binocular calibration method, as implemented in the OpenCV 3.3.0 function stereoCalibrate, is chosen as the comparative method. Without loss of generality, five of the ten image-pairs of the symmetric circle pattern from multi-viewing positions are taken to calibrate the parameters of the binocular system with Zhang's method; the other five image-pairs are retained for the distance measurement experiment. The calibration results of Zhang's method are shown in the top-right part of Table 14.
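For reference, a minimal sketch of the stereoCalibrate call used as the comparative method (the point lists, intrinsic guesses, and image_size are assumed inputs produced by the preceding monocular steps):

```python
import cv2

# obj_pts: list of (N, 3) float32 board coordinates, one entry per image-pair;
# img_pts_l / img_pts_r: matched (N, 1, 2) float32 pixel coordinates;
# K1, d1, K2, d2: intrinsics from the monocular calibrations (initial guesses);
# image_size: (width, height) of the calibration images.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, image_size,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS, criteria=criteria)
print("stereo reprojection RMS (px):", rms)
```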
The key reprojection error data in Table 14 show that our binocular calibration method has the best calibration accuracy among the three mentioned methods. It is worth mentioning that, for all three binocular calibration methods, the feature points of the symmetric circle pattern used in the above experiments are extracted by the present new algorithm in Appendix B.
As mentioned in Section 2.4, calibration parameters always present local spatial optimality. Thus, we should not evaluate the calibration accuracy of the binocular camera system in isolation from the actual measurement position; the actual measurement accuracy must be combined with the calibration accuracy to evaluate the calibrated parameters.
Camera-based stereo measurements are built on the 3D reconstruction of features in images acquired by camera-sensing-systems. The 3D reconstruction of image features mainly contains three steps, i.e., features’ extraction, features’ stereo matching, and features’ stereo reconstruction based on triangulation. The accuracy of the camera-based stereo measurements mostly depends on the accuracy of the features’ matching and multi-camera system’s calibration. With precise camera calibration parameters and matching point-pairs’ coordinates, based on the 3D reconstruction algorithm, high-precision measurements can be realized.
Clearance measurements for discrete circular feature points based on 3D reconstruction are set up to further test the accuracy of the calibrated parameters. The clearances to be measured can be divided into two types, as shown in Figure 12. The clearances between feature points on the high-precision planar calibration targets are used as the measurement objects. As shown in Figure 12, a specific image-pair of the 9 × 7 symmetric circle pattern corresponds to a planar target at one specific position in the world coordinate system. For one specific position, each target yields 56 sets of horizontal clearances and 54 sets of vertical clearances.
In this paper, the least squares method is used to perform the 3D reconstruction of the discrete circular feature points, since it directly uses the original matched feature point-pairs' pixel information and the calibrated parameters, without extra image affine transformation or interpolation.
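A minimal sketch of this linear least squares triangulation, assuming distortion-corrected pixel coordinates and 3 × 4 projection matrices built from the calibrated parameters:

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear least squares (DLT) triangulation of one matched point-pair.
    P1, P2: 3x4 projection matrices K[R|t]; uv1, uv2: distortion-corrected
    pixel coordinates (u, v) of the same feature in the two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # solution: right singular vector of the smallest sigma
    X = Vt[-1]
    return X[:3] / X[3]

# a clearance is then the norm of the difference of two reconstructed centers:
# np.linalg.norm(triangulate_point(P1, P2, a1, a2) - triangulate_point(P1, P2, b1, b2))
```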
Different camera parameters calibrated by the mentioned three methods are used to perform the 3D reconstruction of the discrete feature points with the above retained image-pairs. The error data is shown in Table 15 and Table 16.
Data in Table 14, Table 15 and Table 16 illustrate that the parameters of the binocular camera system calibrated by the present IACC method not only have the best calibration accuracy among the three contrast methods but also supply the best measurement accuracy for circular feature points near the positions where the calibration images were acquired. The data in Table 15 and Table 16 also show that the parameters calibrated by each method achieve their best measurement accuracy around the corresponding calibration position, but cannot achieve equivalent accuracy away from it. As the root mean square error data in Table 15 and Table 16 show, the binocular system with IACC's calibration parameters achieves 2.6 μm measurement accuracy using images taken from the mono-viewing multiplane position, but only 59.1 μm using images taken from the multi-viewing multiplane position. Similarly, the binocular system with Zhang's calibration parameters achieves 21.7 μm using images from the multi-viewing multiplane position, but only 52.0 μm using images from the mono-viewing multiplane position. Referring to Figure 13, the mono-viewing and multi-viewing positions in this experiment are clearly distributed at different depths of the same world coordinate system. These results verify the local spatial optimality of the calibration parameters mentioned in Section 2.4.
The parameters from either non-coplanar or coplanar methods achieve the best effect when their calibration positions are close to and cover most of the measurement space. Overall, the present IACC calibration method for binocular camera systems shows prominent advantages in calibration and measurement accuracy over both Zhang's method and the ACC method.
The left camera’s coordinates system is chosen to be the world coordinates system, using IACC’s calibration parameters and feature extraction algorithm, wherein the calculated 3D coordinates of the measured feature points’ centers are drawn in Figure 13. Figure 13 can clearly show the difference of the measured targets’ positions in the world coordinates system.

5.6. Full-Field Stereo Measurement Experiments by Stereo-DIC Technologies with the Proposed Calibration Method

In the last few decades, stereo-Digital Image Correlation (stereo-DIC) has been widely accepted as a powerful and versatile tool for non-contact full-field 3D shape and surface deformation measurement in experimental solid mechanics [22,33]. Stereo-DIC relies on the image correlation analysis of image-pairs obtained from a calibrated stereo-vision system. Stereo-DIC is still far from reaching its full potential. This is mainly due to three major challenges that Sutton and associates [34,35] identified as follows: (1) surface patterning; (2) imaging of the structure (i.e., appropriately selecting lens and stereo-angle); (3) calibrating the stereo-DIC measurement system.
Among the various calibration techniques used in the computer vision community, the two traditional methods presented by Zhang [9] and Tsai [5] are still commonly taken as key methods for stereo-DIC system calibration with 2D and 3D targets, respectively. Research on calibration methods for stereo-DIC systems is still valuable, and we further verify the feasibility of the proposed novel calibration methods in full-field stereo measurements.
The calibrated binocular camera system from Section 5.5 and three acrylic hollow cylinders with artificial speckle patterns are used to carry out full-field 3D shape measurements. The speckle patterns are transferred onto the cylinders' surfaces by hydro transfer printing.
Two classical stereo-DIC methods, Newton–Raphson (NR) [36] method and inverse-compositional Gauss–Newton (IC-GN) [37] algorithm, are used to carry out stereo matching for the speckle patterns’ subsets region.
The Newton–Raphson (NR) method has been integrated into an open-source digital image correlation (DIC) tool DICe [38] from Sandia National Laboratories. Its primary capabilities are computing full-field displacements and strains from sequences of digital images and rigid body motion tracking of objects.
A calibrated binocular camera system is used to take three image-pairs of cylinder objects with different radii as shown in Figure 14.
Calibrated parameters from Zhang's method and the IACC method are separately fed into DICe to supply the basic parameters of the binocular stereo-DIC system. Each image-pair captured at the same moment serves as both the reference image and the deformed image; thus, the calculated displacement and deformation of the objects should theoretically be zero. The measurement results calculated with Zhang's parameters and with IACC's parameters are very close, and their similarity is reflected by the colormaps in Figure 15. Since colormaps only reflect rough tendencies, if the calibration parameters are accurate enough, with the same high-precision DIC matching method one can barely tell the difference between the top and bottom parts of Figure 15.
Figure 15 shows the static measurement results of one cylinder. Figure 15a–c separately demonstrate the measured ROI's z-coordinates, displacement in the x-direction, and normal strain in the x-direction with the binocular system calibrated by the present IACC method. As expected, the calculated displacement and deformation of the measured ROI are very close to 0, and the z-coordinates of the measured ROI accord with the actual situation of the measured position. It is worth noting that the static measured displacement data of the ROI are quite close to zero, with absolute errors mostly within 20 picometers; this error level reflects that both the matching accuracy of DICe and the accuracy of our calibration method remain high. We also note that a region of slightly larger error appears around the circular ring at the middle of the ROI; this also meets expectations because, by design, there is no speckle distribution inside the ring. As a method verified by many scholars for stereo-DIC calibration, Zhang's method supplies calibration parameters that are also used to calculate the same image-pair. The results, shown in Figure 15d–f, exhibit properties similar to those achieved with IACC's parameters.
Then, full-field 3D reconstruction experiments for the matched subsets of surface ROIs from three different cylinders are carried out with our self-designed IC-GN-based program. The IC-GN method, a first-order shape function, and bicubic interpolation are used in our program to complete the correlated subsets' sub-pixel matching, and a seed-point-based neighbor-region-generation calculation path completes the ROI's full-field stereo matching. Stereo rectification of the image-pairs based on our calibration parameters is implemented before the ROI subsets' correlation matching to reduce the deformation of the corrected subsets caused by the different viewing angles of the left and right cameras. The 3D reconstruction results of the ROIs from the different cylinders' surfaces are shown in Figure 16; the results from our program and from DICe are consistent.
The local ROIs’ points cloud data in Figure 17 are used to achieve cylinder fit by the nonlinear least squares method based on the Levenberg-Marquardt algorithm. The cylinder mathematical model can be illustrated as follows:
$$\sqrt{(X_p - x_0)^2 + (Y_p - y_0)^2 + (Z_p - z_0)^2 - \frac{\big[l(X_p - x_0) + m(Y_p - y_0) + n(Z_p - z_0)\big]^2}{l^2 + m^2 + n^2}} = r$$
in which $(x_0, y_0, z_0)$ denotes a point on the cylinder's main axis, $(l, m, n)$ the direction vector of the main axis, $(X_p, Y_p, Z_p)$ any point on the surface of the cylinder, and $r$ the radius of the fitted cylinder.
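A minimal sketch of this Levenberg–Marquardt cylinder fit, assuming pts holds the reconstructed (N, 3) ROI point cloud and using SciPy's least_squares as the optimizer:

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(p, pts):
    """Residual of the cylinder model above: point-to-axis distance minus r.
    p = [x0, y0, z0, l, m, n, r]."""
    c, axis, r = p[:3], p[3:6], p[6]
    axis = axis / np.linalg.norm(axis)
    d = pts - c
    # squared distance to the axis = |d|^2 - (d . axis)^2
    dist = np.sqrt(np.sum(d * d, axis=1) - (d @ axis) ** 2)
    return dist - r

# pts: (N, 3) reconstructed ROI point cloud; p0: rough initial guess
p0 = np.array([0.0, 0.0, 500.0, 0.0, 1.0, 0.0, 100.0])
fit = least_squares(cylinder_residuals, p0, args=(pts,), method="lm")
x0, y0, z0, l, m, n, r = fit.x
```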
The three cylinders are made of acrylic, and their design radii are 75 mm, 100 mm, and 125 mm. The material characteristics mean that the machining accuracy is not very high. The local ROI fitting radius data in Table 17 reflect the local ROIs' curvature radii, which are generally consistent with the design values. The curvature radius of Cylinder #3 is closest to its design value, suggesting that stereo measurements with the binocular camera system work better when the target's curved surface has a larger curvature radius (i.e., less curvature). Obviously, the fitting results from the 3D point clouds separately calculated with Zhang's parameters and IACC's parameters are consistent, and the data in Table 17 can only reflect approximate values of the cylinders' radii. Overall, the fitting results with the present IACC parameters are slightly closer to the cylinders' design values than those with Zhang's parameters. More experiments are needed in the future to check the real full-field measurement accuracy.
As shown in Figure 17, some imperceptible residual adhesive films from the speckle pattern's transfer printing process remain on the surface of Cylinder #2. Clearly, the 3D point clouds from our program have captured this flaw information, which reflects the good 3D reconstruction accuracy of our IC-GN-based program and calibration parameters. The point cloud data of the surface flaws may cause the fitting RMSE of Cylinder #2 in Table 17 to be slightly larger than those of Cylinders #1 and #3.
The experiments in Section 5.5 and Section 5.6 support some useful conclusions:
  • The best calibration positions should cover the potential spatial range of the measurements.
  • The present IACC binocular calibration method has the best calibration accuracy among the three contrast methods, and the measurement accuracy for circular discrete feature points of a binocular system using the present calibration parameters and feature extraction algorithm can reach 2.6 μm.
  • The present IACC calibration method can be further combined with classical stereo-DIC technologies, e.g., the Newton–Raphson (NR) and IC-GN methods, to achieve full-field measurements of surface ROIs.
  • Static measurement and 3D reconstruction experiments have shown the feasibility of the present IACC method for stereo-DIC system calibration. Loading experiments are still needed to quantitatively analyze the measurement accuracy improvement brought by the present IACC calibration method; such quantitative analysis and dynamic loading experiments deserve further research.

5.7. Analysis of the Calibration Efficiency of Both Monocular and Binocular Camera Systems

Although calibration accuracy is important, the efficiency of calibration methods is also significant. Two major factors determine the efficiency of a calibration method: the efficiency of the algorithm, and the efficiency of the calibration implementation, e.g., the process of acquiring feature point-pairs' coordinates, adjusting the stage's sliding direction, individually assigning some factors for a calibration algorithm, etc. Beyond the complexity of the algorithm itself, running efficiency is largely determined by the computer hardware and the programming level, so solely measuring an algorithm's running time to characterize its efficiency is sometimes unfair. For this reason, reference [8] proposed analyzing the complexity of the algorithm itself to reflect efficiency. However, reference [8]'s efficiency evaluation did not fully consider the extra implementation complexity of the methods, nor did it propose a calibration efficiency evaluation method for binocular camera systems.
This paper gives a more comprehensive calibration efficiency evaluation method for both monocular and binocular camera systems, the deduction and analysis of which are appended in Appendix C.
According to the evaluation method in Appendix C, assume 10 images are used for calibration, each containing 63 features. The number of nonlinear iterations is set to 200, and C_operation is set to 1 × 10^8. The complexity of the mentioned four methods can then be quantified as shown in Table 18.
Thus, from the perspective of algorithm efficiency, for the mentioned four monocular calibration methods, Tsai’s method has the best efficiency, followed by ACC, present IACC, and Zhang’s methods. For the mentioned four binocular calibration methods’ algorithm efficiency, Tsai’s method has the best efficiency, followed by present IACC, Zhang’s, and ACC methods.
From the overall efficiency including algorithm and implementation, for the mentioned four monocular calibration methods, Zhang’s method has the best efficiency, followed by present IACC, ACC, and Tsai’s methods. For the mentioned four binocular calibration methods’ overall efficiency, Zhang’s and present IACC methods have similar best efficiency, followed by Tsai’s and ACC methods.
Summarizing the results from Section 5.1, Section 5.2, Section 5.3, Section 5.4, Section 5.5, Section 5.6 and Section 5.7, we offer the following suggestions for choosing among the four mentioned calibration methods:
  • Zhang’s method is the easiest to implement, but the calibration accuracy is not the best. Thus, Zhang’s method is the best choice if there are no extreme demands of high-precision calibration and measurements.
  • The present IACC method for monocular and binocular calibration has the best calibration accuracy and moderate implementation complexity, and is the preferred choice for high-precision calibration and measurements, especially when the structure of the camera system and the measurement position are confirmed in advance. In special scenarios where 2D targets cannot change posture across multi-viewing positions, the present IACC method is the best solution for both accuracy and efficiency.
  • The ACC method can supply accurate calibration parameters for monocular camera systems with good efficiency. However, it is not suitable for binocular camera systems in terms of either accuracy or efficiency.
  • Tsai’s method has distinct defects in mathematical model and is inefficient using planar target with non-coplanar mode. Using real stereo targets will reduce the complexity of Tsai’s method. However, it is still not suitable for high-precision calibration and measurements.
  • The present algorithm in Appendix B can serve all four mentioned methods with high precision and good stability.

6. Conclusions

This paper proposes an Improved Affine Coordinate Correction (IACC) mathematical model which can be well applied to the calibration of both monocular and binocular camera systems. Our novel calibration methods are stable, efficient, and highly precise. Based on the Local-ROI-Otsu and gradient-based edge radial section scanning methods, a novel, simple extraction algorithm for symmetric circle patterns is proposed; it can further improve the accuracy and stability of existing calibration methods.
Performance simulations verify that the present IACC method possesses good accuracy and stability. The present IACC method can accommodate different levels of image noise (0–2 pixels), different numbers of planes (at least 2), and 2D target planes at different viewing angles (0°–50°). A proper number of images (around 10) and a viewing angle of less than 45° keep the parameter uncertainty relatively low while ensuring sufficient calibration accuracy.
Simulation and real performance experiments are set up for the proposed simple novel circle feature points extraction algorithm. Simulations verified that the present new algorithm in Appendix B has better stability with respect to illumination and viewing-angle changes than the traditional algorithm. Real experiments demonstrate that our new algorithm can significantly improve the calibration accuracy of both coplanar and non-coplanar calibration methods. Calibration results show that the feature extraction accuracy of our new circle feature points extraction algorithm is within 0.02 pixels. It is worth mentioning that our algorithm keeps high accuracy and simplicity without any nonlinear iteration or complex rectification.
Real data in Section 5.3, Section 5.4 and Section 5.5 verify that the accuracy of the present IACC method is better than that of Tsai's, the ACC, and Zhang's calibration methods for both monocular and binocular camera systems. The calibration accuracy of the present IACC method for the binocular system is 10 times higher than that of the ACC method and 40% higher than that of Zhang's method. The calibration parameters supplied by our IACC method help a real stereo vision system reach a measurement accuracy within 3 μm for discrete feature points, which is remarkably superior to the parameters supplied by the contrast methods.
Our novel IACC calibration methods have been further applied to binocular-camera-based full-field stereo measurements using two classical stereo-DIC methods. Static measurements of displacements and deformations show that the calibration parameters supplied by the IACC method are feasible for stereo-DIC measurements and have accuracy similar to Zhang's parameters. All-process 3D reconstruction experiments on cylinder surfaces reflect the IACC method's potential for calibrating visual systems for high-precision shape, deformation, and strain measurement of curved surfaces.
At last, a comprehensive calibration efficiency evaluation method for different calibration methods is given in this paper. According to the analysis, the present IACC method has the best calibration accuracy with moderate implementation complexity, and is the preferred choice for high-precision calibration and measurements.
In future research, we will focus on improving stereo rectification strategies for full-field stereo measurements based on high-precision calibration parameters. Quantitative analysis and high-speed dynamic loading experiments for stereo-DIC measurements deserve further research, as do combining the calibration target's pattern with the DIC speckle pattern and combining our calibration methods with iterative refinement of control points.

Author Contributions

Conceptualization, H.Z. and F.D.; methodology, H.Z. and F.D.; software, H.Z.; validation, H.Z., T.L. and J.L.; formal analysis, H.Z.; investigation, H.Z. and Z.C.; resources, H.Z., Z.C. and X.L.; data curation, H.Z.; writing—original draft preparation, H.Z.; writing—review and editing, T.L., J.L., G.N., Z.C. and X.L.; visualization, H.Z.; supervision, F.D.; project administration, F.D.; funding acquisition, F.D. and G.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (U2241265, 52205573, 61971307, 62231011), National Key Research and Development Plan (2020YFB2010800), China Postdoctoral Science Foundation (2022M720106), Joint Foundation of Ministry of Education of China for Equipment Pre-research (8091B022144), National Defense Science and Technology Key Laboratory Fund (6142212210304), Guangdong Province Key Research and Development Plan Project (2020B0404030001), the Fok Ying Tung Education Foundation (171055), Young Elite Scientists Sponsorship Program by Cast of China (2021QNRC001), and the Young Teacher Research Initiation Project of State Key Laboratory (Pilq2304).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data that support the findings of this study are included within this article.

Acknowledgments

The authors would like to express their sincere gratitude to Sandia National Laboratories for the open source providing of digital image correlation tool DICe.

Conflicts of Interest

The authors declare that they have no affiliation with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript.

Appendix A

Alongside calibration models, improvements in calibration targets' 3D production and the corresponding 2D image feature extraction can directly enhance the calibration result. Chen et al. [3] applied a local binary pattern (LBP) coded phase-shifting wedge grating array pattern to improve the specificity and location accuracy of feature points; Zhu et al. [10] used the optimal polarization angle to extract chessboard corner features, which effectively improves calibration accuracy; Sels et al. [11] used Gray-code patterns to reduce calibration uncertainty; Chen et al. [12] used synthetic random speckle patterns and digital image correlation to locate and match control points, which supplies better calibration accuracy and less uncertainty; Liu et al. [39] proposed a novel circular points extraction method based on the Franklin matrix which effectively improves calibration accuracy. Various other methods improve calibration accuracy by enhancing the sub-pixel positioning accuracy of feature points [23,25,26,28,29,30,32,39,40].
Most of the above improvements are based on increasing the number and positioning accuracy of feature points; with these improvements, calibration accuracy and stability are correspondingly enhanced.
As shown in Figure A1, the chessboard pattern and the symmetric circles grid pattern are the most common patterns. Many image processing libraries, such as OpenCV and HALCON, integrate stable and precise feature extraction algorithms for these two kinds of patterns.
Figure A1. These are the patterns of calibration planar targets used in this paper, listed as follows: (a) Chessboard pattern; (b) Symmetric circles grid pattern.

Appendix B

This paper proposes a novel circle feature points extraction algorithm. We use a simple blob detector to quickly find the rough circle centers and radii of the key points and perform ROI segmentation for each key point. Then, a simple strategy sorts the key points into the right order. Further, the Sobel operator is applied to acquire the gradient image, and the ROI of each circle is located according to the rough center and radius information. The Otsu [41] algorithm is operated separately on each ROI of the gradient image to confirm the effective gradient amplitude, and the radial section scanning method acquires the accurate sub-pixel edge of each circle. At last, ellipse fitting calculates and refines the accurate centers. The flow chart of the proposed algorithm is as follows:
Figure A2. This figure shows the flow chart of the proposed circle feature points extraction algorithm.
For a step edge, the first-order derivative of the edge section's grayscale intensity distribution is Gaussian-like. Theoretically, the normalized first-order derivative image of a circle pattern should be as shown in Figure A3:
Figure A3. The left part of this figure shows the grayscale intensity distribution and corresponding first order derivative intensity along with the edge's gradient direction. The right part of this figure shows the grayscale image of the circle pattern and its normalized gradient-intensity image.
For each segmented edge region, the grayscale intensity distribution along the gradient direction should resemble the left part of Figure A3. The symmetry axis coordinate xe of the gradient-intensity curve characterizes the edge point position of this segmented region.
In this paper, the gray centroid method is used to locate xe. The Sobel operator has low computational cost, a simple structure, and high precision; therefore, it is widely applied in remote sensing, image processing, and industrial detection [42]. For convenience, we use two-directional 3 × 3 Sobel operators to separately obtain the gradients in the x- and y-directions, then calculate the gradient intensity as follows:
$$g(x, y) = \sqrt{g_x^2 + g_y^2} \tag{A1}$$
Thus, around each rough edge point, along the gradient direction, the coordinates of each edge point (xe, ye) can be expressed as follows:
$$\left\{ \begin{aligned} x_e &= \frac{\sum_{E} x_i\, g(x_i, y_i)}{\sum_{E} g(x_i, y_i)} \\ y_e &= \frac{\sum_{E} y_i\, g(x_i, y_i)}{\sum_{E} g(x_i, y_i)} \end{aligned} \right. \tag{A2}$$
in which E represents the continuous region around the rough edge point along the gradient direction. The gray centroid method finds the turning point of a Gaussian-like distribution quickly and accurately. Simulations verified that the gray centroid method can locate the turning point of Gaussian-like curves with better location accuracy and computational efficiency than the traditional linear Gaussian fitting method when there is little noise in the original gradient intensity data. The simulation results are shown in Figure A4.
Figure A4. This figure shows the simulation of different edge position extraction methods. The Gray Centroid method has better performance than the Gaussian Fitting method if there is little noise.
The gradient-based gray centroid method has been verified to work well for inspecting the edges of feature points, lines, and curves when little noise interference occurs in the ROIs [43]. This paper further applies the gradient-based gray centroid method to symmetric circle arrays and reduces noise interference by combining the Local-ROI-Otsu and radial section scanning strategies.
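A minimal sketch of these two building blocks (two-directional Sobel gradient intensity and gray-centroid localization), assuming a floating-point grayscale input:

```python
import numpy as np
import cv2

def gradient_intensity(gray):
    """Two-directional 3x3 Sobel gradients combined as g = sqrt(gx^2 + gy^2)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.sqrt(gx * gx + gy * gy)

def gray_centroid(pts, g):
    """Sub-pixel edge location (xe, ye): gradient-weighted centroid of the
    sample points pts (n, 2) along one section, weighted by intensities g."""
    pts = np.asarray(pts, dtype=float)
    w = np.asarray(g, dtype=float)
    return (pts * w[:, None]).sum(axis=0) / w.sum()
```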
For real images captured by cameras, the grayscale intensity distribution is uneven, usually due to illumination conditions, exposure control, or other factors. This unevenness may cause noise in the gradient intensity images, as illustrated in Figure A5a:
Figure A5. (a) shows the grayscale intensity distribution in actual images captured by cameras; uneven illumination, exposure control, and some other factors may introduce noise in both the pattern images and the corresponding gradient intensity images. (b) shows the radial section scanning method used to extract the edge points of each circle pattern in a specific ROI.
Obviously, the introduced noise may cause unexpected errors in finding the edge points. The Otsu algorithm is used to determine the threshold in this paper. In fact, different ROIs of the same image may have different intensity distributions, so we calculate the threshold separately for each ROI. This improves the stability of the Otsu threshold when the intensity of non-ROI regions changes.
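A minimal sketch of the per-ROI Otsu step, assuming grad is the gradient-intensity image and that center and radius come from the blob detector:

```python
import numpy as np
import cv2

def roi_otsu_mask(grad, center, radius):
    """Binarize one circle's square ROI (side = 4 * radius) of the gradient
    image with its own Otsu threshold, so that intensity changes outside the
    ROI cannot bias the threshold."""
    x, y = int(round(center[0])), int(round(center[1]))
    r = int(round(2 * radius))
    roi = grad[max(y - r, 0):y + r, max(x - r, 0):x + r]
    roi8 = cv2.normalize(roi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(roi8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask  # nonzero where the gradient amplitude is "effective"
```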
After eliminating the noise under the threshold, the radial section scanning method is used to extract the edge points of each circle pattern in its specific ROI. As shown in Figure A5b, the blob detector can roughly find the centers of the symmetric circle grid pattern, as well as the radius of each circle. With the rough center and radius, we can simply divide the ROIs for specific circles: the rough center of a circle is taken as the center of a square ROI whose side length is four times the radius.
Taking the rough center as the starting point and Δθ as the interval angle, the circle's edge region is scanned in the radial direction. The radius direction at each Δθ can be regarded as the gradient direction of each segmented edge region. Bilinear interpolation is used to obtain the gradient intensity along the radial direction, and the gray centroid method extracts the edge point's coordinates in this direction. With N pairs of known edge point coordinates, the least squares method with ellipse fitting is used to calculate the center of each circle. It is worth mentioning that the known edge point coordinates can participate in the least squares ellipse fitting with equal weight, because the found edge point coordinates have high sub-pixel accuracy by our method.
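A minimal sketch of the radial section scanning and ellipse-fit refinement, assuming grad is the (already Otsu-thresholded) float32 gradient image:

```python
import numpy as np
import cv2

def scan_circle_edge(grad, center, radius, d_theta_deg=1.0, step=0.1):
    """Radial section scanning: walk outward from the rough center every
    d_theta degrees, sample the gradient image by bilinear interpolation,
    take the gray centroid of each section as one sub-pixel edge point,
    then refine the center with an equal-weight ellipse fit."""
    edge_pts = []
    for theta in np.deg2rad(np.arange(0.0, 360.0, d_theta_deg)):
        radii = np.arange(0.5 * radius, 1.5 * radius, step)  # scan window
        xs = (center[0] + radii * np.cos(theta)).astype(np.float32)
        ys = (center[1] + radii * np.sin(theta)).astype(np.float32)
        g = cv2.remap(grad, xs[None, :], ys[None, :], cv2.INTER_LINEAR).ravel()
        if g.sum() > 0:
            r_e = (radii * g).sum() / g.sum()  # gray centroid along the section
            edge_pts.append([center[0] + r_e * np.cos(theta),
                             center[1] + r_e * np.sin(theta)])
    (cx, cy), _, _ = cv2.fitEllipse(np.asarray(edge_pts, dtype=np.float32))
    return cx, cy
```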
For the demands of actual measurements, the areas of the circles in the targets are not always equal. Because of the search strategy of the algorithm, the order of the key points found by the simple blob detector is not always regular. To solve this problem, this paper gives an easy strategy to sort the key points into a regular order. The typical scene is shown in Figure A6:
Figure A6. This figure shows the process of the proposed easy strategy to sort the key points in regular order.
First, find the coordinates of the four vertexes of the pattern's rectangular region, shown in Figure A6 as Pt-TL, Pt-TR, Pt-BL, and Pt-BR. When the yaw and roll angles between the target's plane and the imaging plane remain relatively low (usually less than 45°), the following equations can be used to search for these four points' pixel coordinates:
$$\left\{ \begin{aligned} Pt_{TL}.x + Pt_{TL}.y &= \min_{Pt_i \in KeyPts}\,(Pt_i.x + Pt_i.y) \\ Pt_{TR}.x - Pt_{TR}.y &= \max_{Pt_i \in KeyPts}\,(Pt_i.x - Pt_i.y) \\ Pt_{BL}.x - Pt_{BL}.y &= \min_{Pt_i \in KeyPts}\,(Pt_i.x - Pt_i.y) \\ Pt_{BR}.x + Pt_{BR}.y &= \max_{Pt_i \in KeyPts}\,(Pt_i.x + Pt_i.y) \end{aligned} \right. \tag{A3}$$
With the known size of the real target's pattern, the homography matrix between the key points' image pixel coordinates and world coordinates can be solved. Then, a perspective transform maps all key points from the image to the world coordinate system, where they are sorted into regular order. Having separately built indexes of the key points in both the pixel and world coordinate systems, we can easily build the remapping relationship between these two indexes and, correspondingly, use it to sort the key points in pixel coordinates into regular order, as shown in Figure A6.
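A minimal sketch of this sorting strategy for a cols × rows grid, assuming pts is the (N, 2) array of rough centers from the blob detector:

```python
import numpy as np
import cv2

def sort_keypoints(pts, cols=9, rows=7):
    """Sort rough blob-detector centers into row-major order: locate the four
    vertexes via the min/max of x+y and x-y, map all points onto an ideal
    grid with a perspective transform, then sort in the rectified plane."""
    pts = np.asarray(pts, dtype=np.float32)
    s, d = pts.sum(axis=1), pts[:, 0] - pts[:, 1]
    corners = np.float32([pts[np.argmin(s)],   # Pt-TL
                          pts[np.argmax(d)],   # Pt-TR
                          pts[np.argmin(d)],   # Pt-BL
                          pts[np.argmax(s)]])  # Pt-BR
    ideal = np.float32([[0, 0], [cols - 1, 0], [0, rows - 1], [cols - 1, rows - 1]])
    H = cv2.getPerspectiveTransform(corners, ideal)
    grid = cv2.perspectiveTransform(pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    order = np.lexsort((np.rint(grid[:, 0]), np.rint(grid[:, 1])))  # rows, then columns
    return pts[order]
```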

Appendix C

The complexity of the algorithm itself is largely determined by the size of the Jacobian matrix and the number of iterations. The evaluation method can be expressed as Equation (A4):
$$\left\{ \begin{aligned} &Complexity\_Algorithm = (n\_intrinsic + n\_extrinsic + n\_distortion)\times n\_rows \times n\_iters \\ &Complexity\_Implementation = n\_m \times C\_operation \\ &T_A \propto Complexity\_Algorithm, \quad T_I \propto Complexity\_Implementation \\ &Efficiency = \frac{1}{T_A + T_I} \end{aligned} \right. \tag{A4}$$
in which n_intrinsic denotes the DOF of the intrinsic parameters to be calibrated, n_extrinsic denotes the DOF of the extrinsic parameters, n_distortion denotes the number of distortion coefficients, n_rows denotes the number of rows of the Jacobian matrix applied in different nonlinear optimization methods, n_iters denotes the number of iterations, C_operation denotes a positive constant representing the normalized complexity of the indispensable operations serving one calibration method, and n_m denotes the quantifiable multiple of C_operation for one calibration method. TA denotes the running time of the algorithm and TI denotes the indispensable operation time of one calibration method. Obviously, TA and TI positively correlate with Complexity_Algorithm and Complexity_Implementation, respectively.
Assume the number of calibration images is N_img, the number of feature points in each image is n_f, and the number of iterations is n_iters. According to the different objective functions of the four contrast methods, in monocular camera system calibration the algorithm complexities of Zhang's method, Tsai's method, the ACC method, and the present IACC method can be expressed as follows:
$$\left\{ \begin{aligned} Complexity\_Algorithm(Zhang) &= (5 + 6\times N\_img + 4)\times 2\times (N\_img\times n\_f)\times n\_iters \\ Complexity\_Algorithm(Tsai) &= (2 + 2)\times (N\_img\times n\_f)\times n\_iters \\ Complexity\_Algorithm(ACC) &= (2 + 9 + 4)\times 2\times (N\_img\times n\_f)\times n\_iters \\ Complexity\_Algorithm(IACC) &= (5 + 8 + 4)\times 2\times (N\_img\times n\_f)\times n\_iters \end{aligned} \right. \tag{A5}$$
The implementation’s complexity of the mentioned four contrast methods, in the monocular camera system’s calibration, according to the mathematical models’ analysis from Section 2.1, Section 2.2, Section 2.3 and Section 3.1, can be expressed as follows:
$$\left\{ \begin{aligned} Complexity\_Implementation(Zhang) &= 1\times C\_operation \\ Complexity\_Implementation(Tsai) &= 3\times C\_operation \\ Complexity\_Implementation(ACC) &= 1.5\times C\_operation \\ Complexity\_Implementation(IACC) &= 1.5\times C\_operation \end{aligned} \right. \tag{A6}$$
In most general scenarios, Zhang's method is the easiest to implement because the calibration target can be placed at any multi-viewing position within the FOV, as long as the planes are not parallel to each other; thus, the n_m of Zhang's monocular calibration method is assigned to 1. The ACC and present IACC monocular calibration methods do not require extra adjustment to make the sliding direction perpendicular to the target's plane; compared with Zhang's method, they only need to additionally record the target's sliding shifts, so their n_m is assigned to 1.5 based on practical experience. Similarly, for Tsai's non-coplanar method, which needs extra adjustment of the sliding direction, the n_m is assigned to 3 based on practical experience.
Clearly, N_img ≥ 2 is necessary for all four methods. With the same N_img, n_f, and n_iters, Tsai's method has better efficiency than the other three because fewer parameters are considered in the nonlinear optimization. The ACC method and the present IACC method have better efficiency than Zhang's method because only one set of fixed extrinsic parameters is optimized in them, whereas the number of extrinsic parameters in Zhang's method grows with the number of images. The present IACC method is slightly more complex than the ACC method because it further considers the uncertain optical center positions in our nonlinear optimization model.
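A short script reproducing these monocular complexity formulas under the assumptions used for Table 18 (10 images, 63 features, 200 iterations, C_operation = 1 × 10^8):

```python
# Evaluate Equations (A5) and (A6) with the Section 5.7 assumptions.
N_img, n_f, n_iters, C_op = 10, 63, 200, 1e8

algo = {
    "Zhang": (5 + 6 * N_img + 4) * 2 * (N_img * n_f) * n_iters,
    "Tsai":  (2 + 2) * (N_img * n_f) * n_iters,
    "ACC":   (2 + 9 + 4) * 2 * (N_img * n_f) * n_iters,
    "IACC":  (5 + 8 + 4) * 2 * (N_img * n_f) * n_iters,
}
impl = {"Zhang": 1 * C_op, "Tsai": 3 * C_op, "ACC": 1.5 * C_op, "IACC": 1.5 * C_op}

for m in algo:
    print(f"{m:6s} algorithm = {algo[m]:.3e}, implementation = {impl[m]:.3e}")
```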
Correspondingly, according to the mathematical model analysis in Section 2.1, Section 2.2 and Section 3.1, the algorithm and implementation complexities of the mentioned binocular camera system calibration methods can be expressed as:
$$\left\{ \begin{aligned} Bino\_Complexity\_Algorithm(Zhang) &= 2\times Complexity\_Algorithm(Zhang) \\ Bino\_Complexity\_Algorithm(Tsai) &= 2\times Complexity\_Algorithm(Tsai) \\ Bino\_Complexity\_Algorithm(ACC) &= 2\times Complexity\_Algorithm(ACC) + 15\times 4\times (N\_img\times n\_f)\times n\_iters \\ Bino\_Complexity\_Algorithm(IACC) &= 2\times Complexity\_Algorithm(IACC) \end{aligned} \right. \tag{A7}$$
$$\left\{ \begin{aligned} Bino\_Complexity\_Implementation(Zhang) &= 1\times C\_operation \\ Bino\_Complexity\_Implementation(Tsai) &= 3\times C\_operation \\ Bino\_Complexity\_Implementation(ACC) &= 3\times C\_operation \\ Bino\_Complexity\_Implementation(IACC) &= 1.5\times C\_operation \end{aligned} \right. \tag{A8}$$
For Tsai’s method, Zhang’s method, and the present IACC method, the algorithm of the binocular calibration process can be seen as two single cameras simultaneously calibrated with correlated feature point-pairs. However, the ACC method cannot use this strategy to complete binocular calibration. Extra extrinsic parameters of the binocular camera system need to be solely calibrated again after intrinsic parameters present calibration of two single cameras. Since the ACC method introduced penalty function constraints to keep extrinsic parameters’ orthogonality, the operation of individually assigning seven penalty factors for each binocular camera systems will increase the complexity of the binocular calibration. Thus, the n_m of the ACC method is assigned to three, at the same level as Tsai’s method of adjusting the sliding direction. The other three binocular calibration methods keep the same level of implementation complexity as the corresponding monocular calibration methods.
The efficiency of all four contrast monocular and binocular calibration methods can be evaluated by Equations (A4)–(A8). It is worth noting that, under current computer hardware and software conditions, TI in Equation (A4) is mostly larger than TA.

References

  1. Guan, J.; Deboeverie, F.; Slembrouck, M.; van Haerenborgh, D.; van Cauwelaert, D.; Veelaert, P.; Philips, W. Extrinsic Calibration of Camera Networks Using a Sphere. Sensors 2015, 15, 18985–19005.
  2. Poulin-Girard, A.S.; Thibault, S.; Laurendeau, D. Influence of camera calibration conditions on the accuracy of 3D reconstruction. Opt. Express 2016, 24, 2678–2686.
  3. Chen, X.; Song, X.; Wu, J.; Xiao, Y.; Wang, Y.; Wang, Y. Camera calibration with global LBP-coded phase-shifting wedge grating arrays. Opt. Lasers Eng. 2021, 136, 106314.
  4. Abdel-Aziz, Y.I.; Karara, H.M. Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry. Photogramm. Eng. Remote Sens. 2015, 81, 103–107.
  5. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
  6. Shi, Z.C.; Shang, Y.; Zhang, X.F.; Wang, G. DLT-Lines Based Camera Calibration with Lens Radial and Tangential Distortion. Exp. Mech. 2021, 61, 1237–1247.
  7. Zhang, J.; Duan, F.; Ye, S. An Easy Accurate Calibration Technique for Camera. Chin. J. Sci. 1999, 20, 193–196.
  8. Zheng, H.; Duan, F.-j.; Fu, X.; Liu, C.; Li, T.; Yan, M. A Non-Coplanar High-Precision Calibration Method for Cameras Based on Affine Coordinate Correction Model. Meas. Sci. Technol. 2023, 34, 095018.
  9. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  10. Zhu, Z.; Wang, X.; Liu, Q.; Zhang, F. Camera calibration method based on optimal polarization angle. Opt. Lasers Eng. 2019, 112, 128–135.
  11. Sels, S.; Ribbens, B.; Vanlanduit, S.; Penne, R. Camera Calibration Using Gray Code. Sensors 2019, 19, 246.
  12. Chen, B.; Pan, B. Camera calibration using synthetic random speckle pattern and digital image correlation. Opt. Lasers Eng. 2020, 126, 105919.
  13. Hartley, R.I. Estimation of Relative Camera Positions for Uncalibrated Cameras. In ECCV'92: Proceedings of the Second European Conference on Computer Vision, Santa Margherita Ligure, Italy, 19–22 May 1992; Sandini, G., Ed.; Springer: Berlin/Heidelberg, Germany, 1992; pp. 579–587.
  14. Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992, 8, 123–151.
  15. Li, D.; Jia, T.; Wang, Y.; Chen, D.; Wu, C.; Wang, H.; Wu, Y.; Zhang, L. Structured Light Self-Calibration Algorithm Based on Random Speckle. In Proceedings of the 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Suzhou, China, 29 July–2 August 2019; pp. 1299–1304.
  16. Li, G.; Huang, X.; Li, S. A novel circular points-based self-calibration method for a camera's intrinsic parameters using RANSAC. Meas. Sci. Technol. 2019, 30, 055005.
  17. Li, X.; Zhang, W.; Song, G. Calibration Method for Line-Structured Light Three-Dimensional Measurement Based on a Simple Target. Photonics 2022, 9, 218.
  18. Wei, Z.; Cao, L.; Zhang, G. A novel 1D target-based calibration method with unknown orientation for structured light vision sensor. Opt. Laser Technol. 2010, 42, 570–574.
  19. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  20. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  21. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
  22. Pan, B. Digital image correlation for surface deformation measurement: Historical developments, recent advances and future goals. Meas. Sci. Technol. 2018, 29, 082001.
  23. Xu, J. Analyzing and Improving the Tsai Camera Calibration Method in Machine Vision. Comput. Eng. Sci. 2010, 32, 45–48+58.
  24. Tang, S.; Dong, Z.; Feng, W.; Li, Q.; Nie, L. Fast and Accuracy Camera Calibration Based on Tsai Two-Step Method. In Proceedings of the 2021 7th International Conference on Mechatronics and Robotics Engineering (ICMRE), Budapest, Hungary, 3–5 February 2021; pp. 190–194.
  25. Yin, Z.; Ren, X.; Du, Y.; Yuan, F.; He, X.; Yang, F. Binocular camera calibration based on timing correction. Appl. Opt. 2022, 61, 1475–1481.
  26. Cheng, Q.; Huang, P. Camera Calibration Based on Phase Estimation. IEEE Trans. Instrum. Meas. 2023, 72, 1–9.
  27. Chen, H.; Zhuang, J.; Liu, B.; Wang, L.; Zhang, L. Camera calibration method based on circular array calibration board. Syst. Sci. Control Eng. 2023, 11, 2233562.
  28. Wang, X.; Zhao, Y.; Yang, F. Camera calibration method based on Pascal's theorem. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419846406.
  29. Dong, Q.C.; Wang, L.; Feng, J.Q. Confidence-based camera calibration with modified census transform. Multimed. Tools Appl. 2020, 79, 23093–23109.
  30. Zhang, J.; Yu, H.; Deng, H.; Chai, Z.; Ma, M.; Zhong, X. A Robust and Rapid Camera Calibration Method by One Captured Image. IEEE Trans. Instrum. Meas. 2019, 68, 4112–4121.
  31. Datta, A.; Kim, J.-S.; Kanade, T. Accurate Camera Calibration using Iterative Refinement of Control Points. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; IEEE: Kyoto, Japan, 2009; pp. 1201–1208.
  32. Hannemose, M.; Wilm, J.; Frisvad, J.R.; Bodermann, B.; Frenner, K.; Silver, R.M. Superaccurate Camera Calibration Via Inverse Rendering. In Modeling Aspects in Optical Metrology VII; SPIE: Munich, Germany, 2019.
  33. Sutton, M.A.; Orteu, J.-J.; Schreier, H. Image Correlation for Shape, Motion and Deformation Measurements. Basic Concepts, Theory and Applications; Springer Science & Business Media, LLC: New York, NY, USA, 2009.
  34. Ghorbani, R.; Matta, F.; Sutton, M.A. Full-Field Deformation Measurement and Crack Mapping on Confined Masonry Walls Using Digital Image Correlation. Exp. Mech. 2014, 55, 227–243.
  35. Genovese, K.; Chi, Y.; Pan, B. Stereo-camera calibration for large-scale DIC measurements with active phase targets and planar mirrors. Opt. Express 2019, 27, 9040–9053.
  36. Sutton, M.A.; Hild, F. Recent Advances and Perspectives in Digital Image Correlation. Exp. Mech. 2015, 55, 1–8.
  37. Pan, B.; Li, K.; Tong, W. Fast, Robust and Accurate Digital Image Correlation Calculation Without Redundant Computations. Exp. Mech. 2013, 53, 1277–1289.
  38. Turner, D. Digital Image Correlation Engine (DICe) Reference Manual, Sandia Report, SAND2015-10606 O. 2015. Available online: https://github.com/dicengine/dice (accessed on 11 August 2023).
  39. Liu, X.; Tian, J.; Kuang, H.; Ma, X. A Stereo Calibration Method of Multi-Camera Based on Circular Calibration Board. Electronics 2022, 11, 627.
  40. Wang, Y.; Liu, L.; Cai, B.; Wang, K.; Chen, X.; Wang, Y.; Tao, B. Stereo calibration with absolute phase target. Opt. Express 2019, 27, 22254–22267.
  41. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  42. Chang, Q.; Li, X.; Li, Y.; Miyazaki, J. Multi-directional Sobel operator kernel on GPUs. J. Parallel Distrib. Comput. 2023, 177, 160–170. [Google Scholar] [CrossRef]
  43. Duan, F. Study on Fundamental Theories and Applied Technique of Computer Vision Inspection; Tianjin University: Tianjin, China, 1994. [Google Scholar]
Figure 1. Simulated error data with Tsai's method. (a) Relative error of f as the uncorrected yaw angle between the sliding direction and the target's plane increases; (b) reprojection error as the uncorrected yaw angle increases.
Figure 2. The real imaging process, illustrating the empirical rule that measurement accuracy improves when the calibration feature points closely cover the measurement position.
Figure 3. Implementation processes of the coplanar and non-coplanar calibration methods: (a) coplanar monocular calibration; (b) non-coplanar monocular calibration; (c) coplanar binocular calibration; (d) non-coplanar binocular calibration.
Figure 4. Transformation model mapping a target's affine space coordinate system to an orthogonal world coordinate system; the process can be represented as a matrix transformation equation.
Figure 5. Camera pinhole imaging model. The transformations among the different coordinate systems can be represented by matrix transformation equations.
Figure 6. Performance of the proposed calibration method with respect to the noise level: (a) simulated reprojection error; (b) relative error of fx and fy; (c) absolute error of γ; (d) relative error of U0 and V0; (e) uncertainty of fx and fy; (f) absolute error of R_vec and T_vec; (g) absolute error of βx and βy; (h) absolute error of the distortion coefficients.
Figure 7. Performance of the proposed calibration method with respect to the number of planes: (a) simulated reprojection error; (b) relative error of fx and fy; (c) absolute error of γ; (d) relative error of U0 and V0; (e) uncertainty of fx and fy; (f) absolute error of R_vec and T_vec; (g) absolute error of βx and βy; (h) absolute error of the distortion coefficients.
Figure 8. Performance of the proposed calibration method with respect to the rotation angle of the targets' plane: (a) simulated reprojection error; (b) relative error of fx and fy; (c) absolute error of γ; (d) relative error of U0 and V0; (e) uncertainty of fx and fy; (f) absolute error of R_vec and T_vec; (g) absolute error of βx and βy; (h) absolute error of the distortion coefficients.
Figure 9. Stability simulation results of the two compared algorithms under illumination and viewing-angle changes: (a) illumination changes at a viewing angle of 0°; (b) at 20°; (c) at 45°.
Figure 10. Feature-searching ability of the proposed circular feature point extraction algorithm when fitting planar targets at different inclination angles.
Figure 11. The binocular camera system to be calibrated.
Figure 12. Horizontal and vertical point-pair clearances measured on the targets. Each target position provides 56 sets of horizontal clearances and 54 sets of vertical clearances.
Figure 13. 3D reconstruction of the circular feature points' centers: (a) reconstruction with this paper's parameters, target in the mono-viewing multi-plane position; (b) reconstruction with this paper's parameters, target in the multi-viewing multi-plane position.
Figure 14. The calibrated binocular camera system performing static 3D surface shape reconstruction of cylindrical objects with different radii.
Figure 15. (a–c) Static measurement results of the surface's z-coordinates, x-direction displacement, and x-direction normal strain, obtained with the binocular camera system using the present IACC calibration parameters; (d–f) the corresponding results obtained using Zhang's calibration parameters.
Figure 16. 3D reconstructions of the ROIs' matched subsets from three different cylinders: (a) surface ROI on cylinder #1; (b) surface ROI on cylinder #2; (c) surface ROI on cylinder #3.
Figure 17. 3D reconstruction of cylinder #2 with some residual adhesive film retained on the surface of the ROI.
Table 1. The mathematical model and algorithm flow of Tsai's non-coplanar calibration method.

- Mathematical model (Tsai's method): $\rho(\tilde{P}_I - \tilde{P}_C) = K_{\mathrm{Tsai}}\,[R_{3\times3} \; T_{3\times1}]\,\tilde{P}_W$, with intrinsic parameter matrix $K_{\mathrm{Tsai}} = \begin{bmatrix} f s_x & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}$, distortion coefficients $D = (k_1, k_2, k_3)$, and known optic center $\tilde{P}_C = (U_0 \; V_0 \; 1)^T$.
- Linear initial-value solving (RAC constraints): 1st step, $[a_1 \; a_2 \; \cdots \; a_7] \Rightarrow [s_x \; R_{3\times3} \; T_x \; T_y]$; 2nd step, $\Rightarrow [T_z \; f]$.
- Nonlinear optimization: $I(T_z, f, k_1, k_2, k_3) = \min$.
- Nonlinear optimization (binocular camera system): $I(T_{a\_z}, f_a, k_{a1}, k_{a2}, k_{a3}, T_{b\_z}, f_b, k_{b1}, k_{b2}, k_{b3}) = \min$.
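For illustration, the forward projection behind Table 1 can be sketched in a few lines of Python. This is a schematic pinhole model with radial distortion, not Tsai's actual two-step solver; the function name and argument layout are assumptions of this sketch.

```python
import numpy as np

def project_tsai_style(Pw, f, sx, k, R, T, center):
    """Schematic projection with a Tsai-style radial distortion model.
    Pw: (N, 3) world points; R: (3, 3) rotation; T: (3,) translation;
    k = (k1, k2, k3) radial coefficients; center = (U0, V0) known optic center."""
    Pc = Pw @ R.T + T                                  # world -> camera frame
    x, y = Pc[:, 0] / Pc[:, 2], Pc[:, 1] / Pc[:, 2]    # normalized coordinates
    r2 = x**2 + y**2
    d = 1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3    # radial distortion factor
    u = f * sx * (x * d) + center[0]                   # scale factor sx on u only
    v = f * (y * d) + center[1]
    return np.stack([u, v], axis=1)
```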
Table 2. The mathematical model and algorithm flow of Zhang's coplanar calibration method.

- Mathematical model (Zhang's method): $\rho \tilde{P}_I = K_{\mathrm{Zhang}}\,[R_{3\times3} \; T_{3\times1}]\,\tilde{P}_W = H_{3\times3}\,\tilde{P}_W$, with intrinsic parameter matrix $K_{\mathrm{Zhang}} = \begin{bmatrix} f_x & \gamma & U_0 \\ 0 & f_y & V_0 \\ 0 & 0 & 1 \end{bmatrix}$ and distortion coefficients $D = (k_1, k_2, p_1, p_2, k_3)$.
- Linear initial-value solving: $(H_{1\_3\times3}, H_{2\_3\times3}, \dots, H_{N\_3\times3}) \Rightarrow B_{3\times3} \Rightarrow [f_x \; f_y \; \gamma \; U_0 \; V_0] \Rightarrow [R_{1\_\mathrm{vec}} \; T_{1\_\mathrm{vec}} \; \cdots \; R_{N\_\mathrm{vec}} \; T_{N\_\mathrm{vec}}]$, where $R_{i\_\mathrm{vec}}$ and $T_{i\_\mathrm{vec}}$ are $3\times1$ vectors.
- Nonlinear optimization: $I(K_{\mathrm{Zhang}}, D, R_{1\_\mathrm{vec}}, T_{1\_\mathrm{vec}}, \dots, R_{N\_\mathrm{vec}}, T_{N\_\mathrm{vec}}) = \min$.
- Nonlinear optimization (binocular camera system): $I(K_{a\_\mathrm{Zhang}}, D_a, K_{b\_\mathrm{Zhang}}, D_b, R_{a\text{-}b\_\mathrm{vec}}, T_{a\text{-}b\_\mathrm{vec}}, R_{a1\_\mathrm{vec}}, T_{a1\_\mathrm{vec}}, \dots, R_{aN\_\mathrm{vec}}, T_{aN\_\mathrm{vec}}) = \min$.
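Zhang's pipeline in Table 2 is the model implemented by OpenCV's calibrateCamera, which the experiments below use (OpenCV 3.3.0). A minimal usage sketch, assuming img_points holds per-image corner detections and a 1280 × 1024 sensor (implied by the fixed optic center (640, 512) in Tables 8 and 9):

```python
import cv2
import numpy as np

# Planar (Z = 0) object grid for the 11 x 8 chessboard of Table 6 (10 mm pitch).
# img_points is assumed: a list of (88, 1, 2) float32 corner arrays, one per image.
objp = np.zeros((11 * 8, 3), np.float32)
objp[:, :2] = 10.0 * np.mgrid[0:11, 0:8].T.reshape(-1, 2)

obj_points = [objp] * len(img_points)
rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, (1280, 1024), None, None)
print("reprojection RMS (px):", rms)   # comparable to the errors in Tables 8-13
```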
Table 3. The mathematical model and algorithm flow of the ACC non-coplanar calibration method for a monocular camera system.

- Mathematical model (ACC method): $\rho(\tilde{P}_I - \tilde{P}_C) = K_{\mathrm{ACC}}\,[R_{3\times3} \; T_{3\times1}]\,[\eta\beta]\,\tilde{P}_p = M_{3\times4}\,\tilde{P}_p$, with intrinsic parameter matrix $K_{\mathrm{ACC}} = \begin{bmatrix} f/s_x & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}$, distortion coefficients $D = (k_1, k_2, p_1, p_2, k_3)$, and known optic center $\tilde{P}_C = (U_0 \; V_0 \; 1)^T$.
- Linear initial-value solving: $M_{3\times4} = \begin{bmatrix} m_1 & m_2 & m_3 & m_4 \\ m_5 & m_6 & m_7 & m_8 \\ m_9 & m_{10} & m_{11} & 1 \end{bmatrix}$.
- Nonlinear optimization: $I(m_1, \dots, m_{11}, k_1, k_2, p_1, p_2, k_3) = \min$.
- Final parameter separation: $(m_1, \dots, m_{11}) \Rightarrow K_{\mathrm{ACC}}$, $[\eta\beta] = \begin{bmatrix} 1 & \eta_x & \beta_x & 0 \\ 0 & \eta_y & \beta_y & 0 \\ 0 & 0 & \beta_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$, and $[R_{3\times3} \; T_{3\times1}]$.
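The linear initial-value step of Table 3 is a direct linear transformation (DLT): with $m_{12}$ fixed to 1, each 3D–2D correspondence gives two linear equations in $m_1, \dots, m_{11}$. A minimal least-squares sketch, assuming the image coordinates have already been distortion-corrected and shifted by the known optic center:

```python
import numpy as np

def dlt_projection_matrix(Pp, uv):
    """Solve the 3x4 matrix M (with m12 = 1) from target coordinates
    Pp (N, 3) and centered image coordinates uv (N, 2)."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(Pp, uv):
        # u = (m1 X + m2 Y + m3 Z + m4) / (m9 X + m10 Y + m11 Z + 1), etc.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    m, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(m, 1.0).reshape(3, 4)   # m12 fixed to 1
```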
Table 4. The mathematical model and algorithm flow of the ACC non-coplanar calibration method for a binocular camera system (extrinsic parameter calibration only).

- Mathematical model (ACC method): $\rho_a(\tilde{P}_{a\_I} - \tilde{P}_{a\_C}) = K_{a\_\mathrm{ACC}}\,[R_{a\_3\times3} \; T_{a\_3\times1}]\,[\eta\beta]\,\tilde{P}_p = K_{a\_\mathrm{ACC}}\,G_{3\times4}\,\tilde{P}_p$ and $\rho_b(\tilde{P}_{b\_I} - \tilde{P}_{b\_C}) = K_{b\_\mathrm{ACC}}\,[R_{b\_3\times3} \; T_{b\_3\times1}]\,[\eta\beta]\,\tilde{P}_p = K_{b\_\mathrm{ACC}}\,H_{3\times4}\,\tilde{P}_p$, with calibrated intrinsic parameters $(K_{a\_\mathrm{ACC}}, K_{b\_\mathrm{ACC}})$, calibrated distortion coefficients $(D_a, D_b)$, and known optic centers $(\tilde{P}_{a\_C}, \tilde{P}_{b\_C})$.
- Linear initial-value solving: $G_{3\times4} = \begin{bmatrix} g_1 & g_2 & g_3 & g_4 \\ g_5 & g_6 & g_7 & g_8 \\ g_9 & g_{10} & g_{11} & 1 \end{bmatrix}$ and $H_{3\times4} = \begin{bmatrix} h_1 & h_2 & h_3 & h_4 \\ h_5 & h_6 & h_7 & h_8 \\ h_9 & h_{10} & h_{11} & 1 \end{bmatrix}$.
- Penalty constraint construction: $(f_{p1}, f_{p2}, f_{p3}, f_{p4}, f_{p5}, f_{p6}, f_{p7})$.
- Nonlinear optimization: $I(g_1, \dots, g_{11}, h_1, \dots, h_{11}, f_{p1}, \dots, f_{p7}) = \min$.
- Final parameter separation: $(g_1, \dots, g_{11})$ and $(h_1, \dots, h_{11}) \Rightarrow [\eta\beta] = \begin{bmatrix} 1 & \eta_x & \beta_x & 0 \\ 0 & \eta_y & \beta_y & 0 \\ 0 & 0 & \beta_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$, $[R_{a\_3\times3} \; T_{a\_3\times1}]$, and $[R_{b\_3\times3} \; T_{b\_3\times1}]$.
Table 5. Proposed novel improved affine coordinate correction (IACC) mathematical model for non-coplanar calibration.

- Mathematical model (present improved-ACC method): $\rho(\tilde{P}_I - \tilde{P}_C) = K_{\mathrm{IACC}}\,[R_{3\times3} \; T_{3\times1}]\,[\beta]\,\tilde{P}_p = A_{3\times4}\,\tilde{P}_p$, with intrinsic parameter matrix $K_{\mathrm{IACC}} = \begin{bmatrix} f_x & \gamma & 0 \\ 0 & f_y & 0 \\ 0 & 0 & 1 \end{bmatrix}$, distortion coefficients $D = (k_1, k_2, p_1, p_2, k_3)$, and optic center $\tilde{P}_C = (U_0 \; V_0 \; 1)^T$ (initialized as the image center).
- Linear initial-value solving: $A_{3\times4} = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ a_5 & a_6 & a_7 & a_8 \\ a_9 & a_{10} & a_{11} & 1 \end{bmatrix} \Rightarrow K_{\mathrm{IACC}}$, $[\beta] = \begin{bmatrix} 1 & 0 & \beta_x & 0 \\ 0 & 1 & \beta_y & 0 \\ 0 & 0 & \beta_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$, and $[R_{\mathrm{vec}\,3\times1} \; T_{3\times1}]$.
- Nonlinear optimization: $I(K_{\mathrm{IACC}}, P_C, R_{\mathrm{vec}\,3\times1}, T_{\mathrm{vec}\,3\times1}, D, \beta_x, \beta_y) = \min$.
- Nonlinear optimization (binocular camera system): $I(K_{a\text{-}\mathrm{IACC}}, P_{a\text{-}C}, R_{a\text{-}\mathrm{vec}\,3\times1}, T_{a\text{-}\mathrm{vec}\,3\times1}, D_a, K_{b\text{-}\mathrm{IACC}}, P_{b\text{-}C}, R_{b\text{-}\mathrm{vec}\,3\times1}, T_{b\text{-}\mathrm{vec}\,3\times1}, D_b, \beta_x, \beta_y) = \min$.
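As a rough illustration of the nonlinear stage in Table 5, the reprojection functional I(·) can be minimized with a generic least-squares solver. The parameter packing below and the reduction of [β] to a z-dependent shear by (βx, βy) are assumptions of this sketch, not the paper's implementation:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def iacc_residuals(p, Pp, uv_obs):
    """Reprojection residuals for a simplified IACC-style model.
    Illustrative packing: p = [fx, fy, gamma, U0, V0, rvec(3), tvec(3),
    k1, k2, p1, p2, k3, bx, by]. Pp: (N, 3) float64 target coordinates;
    uv_obs: (N, 2) observed image points."""
    fx, fy, gamma, U0, V0 = p[:5]
    rvec, tvec = p[5:8], p[8:11]
    dist = p[11:16]
    bx, by = p[16], p[17]
    Pw = Pp.copy()
    Pw[:, 0] += bx * Pp[:, 2]      # affine correction: x' = x + beta_x * z
    Pw[:, 1] += by * Pp[:, 2]      # affine correction: y' = y + beta_y * z
    K = np.array([[fx, gamma, U0], [0.0, fy, V0], [0.0, 0.0, 1.0]])
    proj, _ = cv2.projectPoints(Pw, rvec, tvec, K, dist)
    return (proj.reshape(-1, 2) - uv_obs).ravel()

# res = least_squares(iacc_residuals, p0, args=(Pp, uv_obs), method='lm')
```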
Table 6. Information of the calibration targets' patterns.

Pattern Type | Number of Columns | Number of Rows | Clearance of Feature Points
Chessboard | 11 | 8 | 10 mm ± 1 μm ¹
Symmetric Circles | 9 | 7 | 15 mm ± 1 μm ¹

¹ This level of accuracy is determined by the manufacturing technique.
Table 7. Parameters of the chessboard data sets built with findChessboardCorners.

Data Set | Number of Feature Point-Pairs | Applied Methods | Type
Single_L_Chessboard_Laser_V20 | 1760 | Tsai's, ACC, and IACC methods | Non-coplanar
Single_R_Chessboard_Laser_V20 | 1760 | Tsai's, ACC, and IACC methods | Non-coplanar
Single_L_Chessboard_ZZY | 1760 | Zhang's method | Coplanar
Single_R_Chessboard_ZZY | 1760 | Zhang's method | Coplanar
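The chessboard data sets above are built from detections like the following; the filename is hypothetical, while the (11, 8) grid and the sub-pixel refinement follow Table 6 and standard OpenCV practice:

```python
import cv2

gray = cv2.imread("single_l_01.bmp", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
found, corners = cv2.findChessboardCorners(gray, (11, 8))    # 11 x 8 grid, per Table 6
if found:
    # refine detections to sub-pixel accuracy before calibration
    term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), term)
```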
Table 8. Single_L camera calibration results using the chessboard pattern, with feature points found by findChessboardCorners from OpenCV 3.3.0.

Pattern type (extraction algorithm): Chessboard (findChessboardCorners). Camera: Single_L.

Parameter | Tsai's | ACC Method | Zhang's | IACC Method
(fx, fy) | (2251.457, 2251.750) | (2252.358, 2252.167) | (2254.887, 2255.056) | (2254.071, 2254.118)
(U0, V0) | (640, 512) | (650, 556) | (647.79, 493.60) | (639.54, 487.16)
(k1, k2) | (1.23 × 10⁻⁸, −9.02 × 10⁻¹⁵) | (1.16 × 10⁻⁸, −7.38 × 10⁻¹⁵) | (−6.47 × 10⁻², 2.70 × 10⁻¹) | (−7.29 × 10⁻², 4.34 × 10⁻¹)
(p1, p2) | – | (−2.62 × 10⁻⁷, 2.63 × 10⁻⁹) | (1.17 × 10⁻³, 5.35 × 10⁻⁴) | (1.40 × 10⁻³, 7.26 × 10⁻⁴)
Reprojection error * (pix) | 0.641 | 0.084 | 0.177 | 0.074
Reprojection error * (mm) | 0.059 | 0.008 | 0.019 | 0.007

* Reprojection error is the key datum for evaluating calibration accuracy.
Table 9. Single_R camera calibration results using the chessboard pattern, with feature points found by findChessboardCorners from OpenCV 3.3.0.

Pattern type (extraction algorithm): Chessboard (findChessboardCorners). Camera: Single_R.

Parameter | Tsai's | ACC Method | Zhang's | IACC Method
(fx, fy) | (2252.373, 2252.480) | (2246.971, 2246.647) | (2247.280, 2247.239) | (2246.966, 2246.797)
(U0, V0) | (640, 512) | (689, 514) | (674.23, 505.23) | (675.70, 511.10)
(k1, k2) | (1.34 × 10⁻⁸, −3.51 × 10⁻¹⁵) | (1.55 × 10⁻⁸, −8.62 × 10⁻¹⁵) | (−7.39 × 10⁻², 1.95 × 10⁻¹) | (−6.94 × 10⁻², 2.85 × 10⁻²)
(p1, p2) | – | (−5.95 × 10⁻⁷, 3.66 × 10⁻⁸) | (1.52 × 10⁻⁴, 1.48 × 10⁻³) | (−1.52 × 10⁻⁵, 1.45 × 10⁻³)
Reprojection error * (pix) | 0.539 | 0.070 | 0.098 | 0.066
Reprojection error * (mm) | 0.049 | 0.006 | 0.010 | 0.006

* Reprojection error is the key datum for evaluating calibration accuracy.
Table 10. Single_L camera calibration results using the symmetric circle pattern, with feature points found by findCirclesGrid from OpenCV 3.3.0.

Pattern type (extraction algorithm): Symmetric circle pattern (findCirclesGrid). Camera: Single_L.

Parameter | Tsai's | Zhang's | ACC Method | IACC Method
(fx, fy) | (2258.761, 2258.832) | (2258.761, 2258.832) | (2253.180, 2253.028) | (2254.551, 2254.486)
(U0, V0) | (640, 512) | (646.36, 493.89) | (646, 541) | (639.47, 478.33)
(k1, k2) | (1.17 × 10⁻⁸, −8.91 × 10⁻¹⁵) | (−5.75 × 10⁻², 2.39 × 10⁻¹) | (1.10 × 10⁻⁸, −7.60 × 10⁻¹⁵) | (−6.61 × 10⁻², 3.38 × 10⁻¹)
(p1, p2) | – | (1.32 × 10⁻³, 4.50 × 10⁻⁴) | (−2.53 × 10⁻⁷, −7.23 × 10⁻⁹) | (1.42 × 10⁻³, 6.62 × 10⁻⁴)
Reprojection error * (pix) | 0.629 | 0.089 | 0.070 | 0.024
Reprojection error * (mm) | 0.064 | 0.011 | 0.007 | 0.002

* Reprojection error is the key datum for evaluating calibration accuracy.
Table 11. Single_R camera calibration results using the symmetric circle pattern, with feature points found by findCirclesGrid from OpenCV 3.3.0.

Pattern type (extraction algorithm): Symmetric circle pattern (findCirclesGrid). Camera: Single_R.

Parameter | Tsai's | Zhang's | ACC Method | IACC Method
(fx, fy) | (2232.061, 2231.872) | (2245.017, 2245.000) | (2244.275, 2244.229) | (2244.373, 2244.342)
(U0, V0) | (640, 512) | (670.80, 503.99) | (659, 494) | (664.59, 491.88)
(k1, k2) | (8.60 × 10⁻⁹, 3.95 × 10⁻¹⁵) | (−5.62 × 10⁻², 2.03 × 10⁻¹) | (1.53 × 10⁻⁸, −1.06 × 10⁻¹⁴) | (−8.26 × 10⁻², 3.93 × 10⁻¹)
(p1, p2) | – | (1.41 × 10⁻⁴, 1.42 × 10⁻³) | (−7.54 × 10⁻⁷, −5.65 × 10⁻⁹) | (5.22 × 10⁻⁵, 1.58 × 10⁻³)
Reprojection error * (pix) | 1.034 | 0.044 | 0.042 | 0.041
Reprojection error * (mm) | 0.109 | 0.005 | 0.004 | 0.004

* Reprojection error is the key datum for evaluating calibration accuracy.
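The circle-grid detections used in Tables 10 and 11 come from OpenCV's findCirclesGrid; a minimal call, assuming a hypothetical filename and the 9 × 7 grid of Table 6:

```python
import cv2

gray = cv2.imread("single_l_circles_01.bmp", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
found, centers = cv2.findCirclesGrid(
    gray, (9, 7), flags=cv2.CALIB_CB_SYMMETRIC_GRID)  # 9 x 7 symmetric grid, per Table 6
```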
Table 12. Single_L camera calibration results using the symmetric circle pattern, with feature points found by the new algorithm presented in Appendix B.

Method | Zhang's | Zhang's | Present IACC | Present IACC
Feature extraction algorithm | findCirclesGrid | New method (Appendix B) | findCirclesGrid | New method (Appendix B)
(fx, fy) | (2258.761, 2258.832) | (2257.609, 2257.477) | (2254.551, 2254.486) | (2254.686, 2254.609)
(U0, V0) | (646.36, 493.89) | (647.60, 493.95) | (639.472, 478.326) | (639.152, 480.773)
(k1, k2) | (−5.75 × 10⁻², 2.39 × 10⁻¹) | (−5.67 × 10⁻², 2.37 × 10⁻¹) | (−6.61 × 10⁻², 3.38 × 10⁻¹) | (−6.14 × 10⁻², 2.72 × 10⁻¹)
(p1, p2) | (1.32 × 10⁻³, 4.50 × 10⁻⁴) | (1.26 × 10⁻³, 6.62 × 10⁻⁴) | (1.42 × 10⁻³, 6.62 × 10⁻⁴) | (1.45 × 10⁻³, 6.57 × 10⁻⁴)
Reprojection error * (pix) | 0.089 | 0.035 | 0.024 | 0.019
Reprojection error * (mm) | 0.011 | 0.004 | 0.002 | 0.002

* Reprojection error is the key datum for evaluating calibration accuracy.
Table 13. Single_R camera calibration results using the symmetric circle pattern, with feature points found by the new algorithm presented in Appendix B.

Method | Zhang's | Zhang's | Present IACC | Present IACC
Feature extraction algorithm | findCirclesGrid | New method (Appendix B) | findCirclesGrid | New method (Appendix B)
(fx, fy) | (2245.017, 2245.000) | (2258.622, 2258.425) | (2244.373, 2244.342) | (2243.868, 2243.764)
(U0, V0) | (670.80, 503.99) | (671.442, 504.214) | (664.590, 491.874) | (663.304, 501.017)
(k1, k2) | (−5.62 × 10⁻², 2.03 × 10⁻¹) | (−5.78 × 10⁻², 2.34 × 10⁻¹) | (−8.26 × 10⁻², 3.93 × 10⁻¹) | (−7.11 × 10⁻², 1.99 × 10⁻¹)
(p1, p2) | (1.41 × 10⁻⁴, 1.42 × 10⁻³) | (1.59 × 10⁻⁴, 1.54 × 10⁻⁴) | (5.22 × 10⁻⁵, 1.58 × 10⁻³) | (5.82 × 10⁻⁵, 1.59 × 10⁻³)
Reprojection error * (pix) | 0.044 | 0.033 | 0.041 | 0.028
Reprojection error * (mm) | 0.005 | 0.004 | 0.004 | 0.003

* Reprojection error is the key datum for evaluating calibration accuracy.
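The extraction algorithm of Appendix B is not reproduced here, but its general flavor can be suggested by a much simpler stand-in: a global Otsu threshold [41] followed by ellipse fitting. The paper's method instead applies Otsu per local region and scans radial sections, which is what yields the illumination and viewing-angle robustness shown in Figure 9; the sketch below is only a baseline for comparison.

```python
import cv2
import numpy as np

def circle_centers_global_otsu(gray):
    """Simplified stand-in for the Appendix B extractor: global Otsu
    threshold plus ellipse fitting (dark circles on a light target)."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    found = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = found[-2]                      # version-agnostic unpacking
    centers = [cv2.fitEllipse(c)[0] for c in contours if len(c) >= 5]
    return np.array(centers, dtype=np.float64)
```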
Table 14. Binocular calibration results separately obtained with the ACC method, Zhang's method, and the present IACC method.

Pattern type: symmetric circle pattern (present new algorithm in Appendix B). Number of image-pairs: 5.

ACC method (non-coplanar):
- $K_a = \begin{bmatrix} 2253.180 & 0 & 646 \\ 0 & 2253.027 & 541 \\ 0 & 0 & 1 \end{bmatrix}$, $K_b = \begin{bmatrix} 2244.275 & 0 & 659 \\ 0 & 2244.229 & 494 \\ 0 & 0 & 1 \end{bmatrix}$
- $D_a = (1 \times 10^{-8},\, 7 \times 10^{-15},\, 6 \times 10^{-14},\, 3 \times 10^{-7},\, 7 \times 10^{-9})$, $D_b = (1 \times 10^{-8},\, 1 \times 10^{-14},\, 8 \times 10^{-15},\, 8 \times 10^{-7},\, 6 \times 10^{-9})$
- $\beta = (2.772 \times 10^{-3} \;\; 1.678 \times 10^{-4})^T$, $\eta_x = -3.133 \times 10^{-3}$
- $R_{a\_b} = \begin{bmatrix} 0.861 & 6.030 \times 10^{-3} & 0.508 \\ 0.005 & 1 & 0.004 \\ 0.508 & 0.001 & 0.861 \end{bmatrix}$, $T_{a\_b} = (121.089 \;\; 4.780 \;\; 32.076)^T$
- Reprojection error *: 0.594 pixels

Zhang's method (coplanar):
- $K_a = \begin{bmatrix} 2249.114 & 0 & 646.522 \\ 0 & 2248.951 & 492.704 \\ 0 & 0 & 1 \end{bmatrix}$, $K_b = \begin{bmatrix} 2242.060 & 0 & 675.820 \\ 0 & 2241.410 & 504.346 \\ 0 & 0 & 1 \end{bmatrix}$
- $D_a = (0.067,\, 0.374,\, 0.001,\, 0.001,\, 0.803)$, $D_b = (0.067,\, 0.145,\, 4 \times 10^{-5},\, 0.001,\, 0.640)$
- $\beta$, $\eta_x$: not applicable
- $R_{a\_b} = \begin{bmatrix} 0.862 & 0.001 & 0.506 \\ 0.002 & 1 & 0.001 \\ 0.506 & 0.002 & 0.862 \end{bmatrix}$, $T_{a\_b} = (121.776 \;\; 0.226 \;\; 31.857)^T$
- Reprojection error *: 0.055 pixels

Present IACC method (non-coplanar):
- $K_a = \begin{bmatrix} 2257.200 & 1.626 & 643.293 \\ 0 & 2255.697 & 487.601 \\ 0 & 0 & 1 \end{bmatrix}$, $K_b = \begin{bmatrix} 2241.096 & 1.633 & 667.915 \\ 0 & 2242.470 & 498.817 \\ 0 & 0 & 1 \end{bmatrix}$
- $D_a = (0.062,\, 0.298,\, 0.001,\, 7 \times 10^{-4},\, 0.392)$, $D_b = (0.066,\, 0.125,\, 2 \times 10^{-4},\, 0.001,\, 0.616)$
- $\beta = (2.772 \times 10^{-3} \;\; 1.678 \times 10^{-4})^T$
- $R_{a\_b} = \begin{bmatrix} 0.863 & 2.215 \times 10^{-4} & 0.506 \\ 0.001 & 1 & 0.001 \\ 0.506 & 0.002 & 0.863 \end{bmatrix}$, $T_{a\_b} = (121.884 \;\; 0.100 \;\; 31.421)^T$
- Reprojection error *: 0.032 pixels

* Reprojection error in pixels is the key datum for evaluating the calibration accuracy of the different methods.
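For comparison with the Zhang's-method column of Table 14, a conventional stereo calibration can be obtained with OpenCV's stereoCalibrate; the input arrays below are assumed to be prepared as in the monocular examples above:

```python
import cv2

# K_l, D_l, K_r, D_r: intrinsics from the monocular step; obj_points,
# img_points_l, img_points_r: matched detections from the 5 image-pairs.
rms, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points_l, img_points_r,
    K_l, D_l, K_r, D_r, (1280, 1024),
    flags=cv2.CALIB_FIX_INTRINSIC)          # keep per-camera intrinsics fixed
print("stereo reprojection RMS (px):", rms)  # cf. 0.055 px (Zhang's) in Table 14
```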
Table 15. 3D reconstruction error data of the present calibration method, the ACC method, and Zhang's method, with the target in the mono-viewing multi-plane position.

Horizontal point-pair clearances (280 measured; real clearance 15 mm ± 1 μm):

Calibration Method | Zhang's | ACC | IACC
Reprojection error of calibration | 0.055 pixels (non-measured position) | 0.594 pixels (measured position) | 0.032 pixels (measured position)
Measured average value (mm) | 14.9510 | 14.9952 | 15.0003
Abs of average error (mm) | 0.0490 | 0.0048 | 0.0003
Root mean square error * (mm) | 0.0520 | 0.0211 | 0.0026

Vertical point-pair clearances (270 measured; real clearance 15 mm ± 1 μm):

Calibration Method | Zhang's | ACC | IACC
Reprojection error of calibration | 0.055 pixels (non-measured position) | 0.594 pixels (measured position) | 0.032 pixels (measured position)
Measured average value (mm) | 14.9488 | 15.0139 | 15.0001
Abs of average error (mm) | 0.0512 | 0.0139 | 0.0001
Root mean square error * (mm) | 0.0521 | 0.0300 | 0.0018

* RMS error is the key datum for evaluating the accuracy of distance measurements.
Table 16. 3D reconstruction error data of the present calibration method, the ACC method, and Zhang's method, with the target in the multi-viewing multi-plane position.

Horizontal point-pair clearances (280 measured; real clearance 15 mm ± 1 μm):

Calibration Method | Zhang's | ACC | IACC
Reprojection error of calibration | 0.055 pixels (measured position) | 0.594 pixels (non-measured position) | 0.032 pixels (non-measured position)
Measured average value (mm) | 15.0076 | 15.0508 | 15.0569
Abs of average error (mm) | 0.0076 | 0.0508 | 0.0569
Root mean square error * (mm) | 0.0217 | 0.0575 | 0.0591

Vertical point-pair clearances (270 measured; real clearance 15 mm ± 1 μm):

Calibration Method | Zhang's | ACC | IACC
Reprojection error of calibration | 0.055 pixels (measured position) | 0.594 pixels (non-measured position) | 0.032 pixels (non-measured position)
Measured average value (mm) | 15.0077 | 15.0694 | 15.0521
Abs of average error (mm) | 0.0077 | 0.0694 | 0.0521
Root mean square error * (mm) | 0.0173 | 0.0770 | 0.0544

* RMS error is the key datum for evaluating the accuracy of distance measurements.
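The clearance statistics in Tables 15 and 16 amount to triangulating the circle centers and comparing point-pair distances against the 15 mm ground truth. A sketch for one 9 × 7 target view (projection matrices and matched points assumed given), which reproduces the 56 horizontal and 54 vertical clearances per view noted in Figure 12:

```python
import cv2
import numpy as np

# P_l, P_r: 3x4 projection matrices K [R | T]; pts_l, pts_r: matched,
# undistorted circle centers of one 9 x 7 target view, shape (2, 63), float32.
X_h = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)   # 4 x 63, homogeneous
X = (X_h[:3] / X_h[3]).T.reshape(7, 9, 3)             # row-major 7 x 9 grid

horiz = np.linalg.norm(np.diff(X, axis=1), axis=2)    # 7 * 8 = 56 clearances
vert = np.linalg.norm(np.diff(X, axis=0), axis=2)     # 6 * 9 = 54 clearances
rms_h = np.sqrt(np.mean((horiz - 15.0) ** 2))         # vs. 15 mm ground truth
rms_v = np.sqrt(np.mean((vert - 15.0) ** 2))
```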
Table 17. The fitting results of local ROIs of three cylinders with different radii, where (x0, y0, z0) is a point on the fitted axis, (l, m, n) is the axis direction, and r is the radius.

Calibration parameters from the IACC method:

Cylinder | (x0, y0, z0) | (l, m, n) | r | Number of Points | RMSE
Cylinder #1 | (77.168, −1.588, 246.656) | (−0.003, 1.000, −0.001) | 76.369 | 150,000 | 0.0065
Cylinder #2 | (65.792, 0.095, 290.116) | (0.009, 1.000, −0.007) | 103.510 | 100,000 | 0.0099
Cylinder #3 | (52.377, −14.842, 302.689) | (0.077, 2.692, −0.076) | 124.870 | 150,000 | 0.0063

Calibration parameters from Zhang's method:

Cylinder | (x0, y0, z0) | (l, m, n) | r | Number of Points | RMSE
Cylinder #1 | (75.656, 1.517, 246.340) | (−0.003, 1.000, −0.001) | 76.704 | 150,000 | 0.0066
Cylinder #2 | (64.074, 0.198, 289.784) | (0.010, 1.000, −0.007) | 103.980 | 100,000 | 0.0100
Cylinder #3 | (50.785, −8.227, 302.300) | (0.029, 1.000, −0.028) | 125.469 | 150,000 | 0.0062
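The cylinder fits in Table 17 can be reproduced, in principle, by least-squares minimization of point-to-surface distances; below is a sketch using the same parameterization as the table's columns (the solver choice is an assumption of this sketch):

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(p, pts):
    """Signed distance of each point from the cylinder surface.
    p = [x0, y0, z0, l, m, n, r]: a point on the axis, the axis
    direction, and the radius, matching the columns of Table 17."""
    c, a, r = p[:3], p[3:6], p[6]
    a = a / np.linalg.norm(a)                              # unit axis direction
    v = pts - c
    dist_to_axis = np.linalg.norm(v - np.outer(v @ a, a), axis=1)
    return dist_to_axis - r

# pts: (N, 3) reconstructed ROI points; p0: rough initial guess
# res = least_squares(cylinder_residuals, p0, args=(pts,))
# rmse = np.sqrt(np.mean(res.fun ** 2))   # comparable to the RMSE column
```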
Table 18. Quantified complexity of the four calibration methods.

Method (Calibration Mode) | Complexity of Algorithm | Complexity of Implementation
Tsai (Monocular) | 5.04 × 10⁵ | 3 × 10⁸
Tsai (Binocular) | 1.008 × 10⁶ | 3 × 10⁸
Zhang (Monocular) | 1.7388 × 10⁷ | 1 × 10⁸
Zhang (Binocular) | 3.477 × 10⁷ | 1 × 10⁸
ACC (Monocular) | 3.780 × 10⁶ | 1.5 × 10⁸
ACC (Binocular) | 1.5120 × 10⁷ | 3 × 10⁸
IACC (Monocular) | 4.284 × 10⁶ | 1.5 × 10⁸
IACC (Binocular) | 8.568 × 10⁶ | 1.5 × 10⁸