Article

Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target

Key Laboratory of Precision Opto-mechatronics Technology (Beihang University), Ministry of Education, Beijing 100191, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(7), 1074; https://doi.org/10.3390/s16071074
Submission received: 1 April 2016 / Revised: 4 July 2016 / Accepted: 6 July 2016 / Published: 12 July 2016
(This article belongs to the Section Physical Sensors)

Abstract
Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee of high-precision measurement. We propose a method to calibrate the structural parameters of a BSVS using a double-sphere target. The target, consisting of two identical spheres separated by a known fixed distance, is placed freely in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane, whose normal vector in each of the two camera coordinate frames is obtained by means of an intermediate parallel plane computed from the image points of the sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry, and R and T are then refined by nonlinear optimization. We also provide a theoretical analysis of the error propagated by positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations test the performance of the proposed method with respect to the image noise level, the number of target placements and the depth-scale factor. Experimental results on real data show that the relative measurement error is no more than 0.9‰ at a working distance of 800 mm with a field of view of 250 × 200 mm².

1. Introduction

As one of the main configurations of machine vision sensors, the BSVS acquires 3D scene geometry from a pair of images and has many applications in industrial product inspection, robot navigation, virtual reality, etc. [1,2,3]. Structural parameter calibration is a long-standing and important issue for the BSVS. Current calibration methods can be roughly classified into three categories: methods based on 3D targets, 2D targets and 1D targets. 3D target-based methods [4,5] obtain the structural parameters by placing the target only once in the sensor's field of view. Their disadvantages are that large 3D targets are exceedingly difficult to machine and that it is usually impossible to keep all feature points in the calibration image at the same level of clarity. 2D target-based methods [2,6] require a plane target to be placed freely at least twice in different positions and orientations, and the calibration features of the different placements are unified into a common sensor coordinate frame through the camera coordinate frame. Calibration is therefore more convenient than with 3D targets. However, there are two main weaknesses. One is that repeatedly unifying the calibration features accumulates transformation errors. The other is that when the two cameras form a large viewing angle, or when a multi-camera system must be calibrated, it is difficult to keep all features at the same level of clarity in all cameras simultaneously. In 1D target-based methods [7], which are much more convenient than 2D target-based methods, the target is freely placed no fewer than four times in different positions and orientations. The image points of the calibration features are used to determine the rotation matrix R and the translation vector T, and the scale factor of T is obtained from the known distance constraint. Unfortunately, 1D target-based methods share the weaknesses of 2D target-based methods; moreover, in practice, 1D targets must be placed many times to obtain enough feature points.
The sphere is widely used in machine vision calibration owing to its spatial uniformity and symmetry [8,9,10,11,12,13,14,15,16,17]. Agrawal et al. [11] and Zhang et al. [16] both used spheres to calibrate the intrinsic camera parameters through the relationship between the projected ellipse of a sphere and the dual image of the absolute conic (DIAC). They also mentioned that the structural parameters between two or more cameras could be obtained with a 3D point-cloud registration method; however, this approach requires many feature points to guarantee high accuracy. Wong et al. [17] proposed two methods to recover the fundamental matrix, from which the structural parameters can be deduced when the intrinsic parameters of the two cameras are known. The first uses sphere centres, intersection points and visual points of tangency to compute the fundamental matrix. The second determines the fundamental matrix from the homography matrix and the epipoles, which are computed via a plane-induced homography; however, it requires an extra plane target to transfer the projected ellipse from the first view to the second.
In this paper, we propose a method that uses a double-sphere target to calibrate the structural parameters of a BSVS. The target consists of two identical spheres of unknown radius fixed to a rigid bar, so that the distance between the sphere centres is known. During calibration, the double-sphere target is placed freely at least twice in different positions and orientations. From the projected ellipse of each sphere, the image point of its centre and a so-called depth-scale factor can be calculated. Any three non-collinear sphere centres determine a spatial plane πs; if we have at least three non-parallel planes whose normal vectors are known in both camera coordinate frames, the rotation matrix R can be solved. However, πs cannot be obtained directly. We obtain its normal vector from an intermediate plane parallel to πs, which is recovered from the depth-scale factors and the image points of the sphere centres. From the epipolar geometry, a linear relation between the translation vector T (up to a scale factor) and the image points is derived and solved by SVD; the scale factor is then determined from the known distance constraint. Finally, R and T are jointly refined by the Levenberg-Marquardt algorithm. Owing to the complete symmetry of the sphere, wherever it is placed in the sensor field of view, all cameras can capture equally high-quality images of it, which is essential for calibration consistency, even if the angle between the principal rays of the two cameras is large. Moreover, in multi-camera calibration it is often difficult to make the target features simultaneously visible in all views because of the variety of camera positions and orientations. In general, the cameras are divided into several smaller groups, each group is calibrated separately, and finally all cameras are registered into a common reference coordinate frame [17]. Using the double-sphere target, however, the relationship between all cameras sharing a common view region can be obtained in a single calibration, and the awkward two-camera configuration mentioned above, which frequently occurs in multi-camera calibration, is handled naturally. The double-sphere target therefore reduces the number of calibrations required and makes the calibration procedure simple and efficient.
The remaining sections of this paper are organized as follows: Section 2 briefly describes a few basic properties of the projected ellipse of the sphere. Section 3 elaborates the principles of the proposed calibration method based on the double-sphere target. Section 4 provides detailed analysis of the impact on the image point of the sphere centre when the projected ellipse is not accurately extracted with positional deviation. Section 5 presents computer simulations and real data experiments to verify the proposed method. The conclusions are given in Section 6.

2. Basic Principles

This section describes some related properties of the projected ellipse of a sphere.

2.1. Derivation of the Projected Ellipse

Agrawal et al. [11] and Zhang et al. [14] each give the formula for the projected ellipse of a sphere. We synthesize the two derivation approaches into an easily understood explanation, briefly described as follows:
Consider a camera $P = K[I_{3\times3}\,|\,0] = [K\,|\,0]$ viewing a sphere $Q$ with radius $R_0$ centred at $X = [X_0\ Y_0\ Z_0]^T$ in the camera coordinate frame $O\text{-}XYZ$, where $K$ is the camera intrinsic matrix. $Q$ is expressed as $(X-X_0)^2 + (Y-Y_0)^2 + (Z-Z_0)^2 = R_0^2$.
Denoting $\sqrt{X_0^2+Y_0^2+Z_0^2}$ by $h_0$, we then have:
$$X = [X_0\ Y_0\ Z_0]^T = h_0 d \tag{1}$$
where $d$ is the unit vector of $X$.
Sphere $Q$ is further expressed by the following coefficient matrix:
$$Q = \begin{bmatrix} I_{3\times3} & -X \\ -X^T & X^T X - R_0^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & -X_0 \\ 0 & 1 & 0 & -Y_0 \\ 0 & 0 & 1 & -Z_0 \\ -X_0 & -Y_0 & -Z_0 & h_0^2 - R_0^2 \end{bmatrix} \tag{2}$$
Thus, the dual $Q^*$ of $Q$ is defined as:
$$Q^* = Q^{-1} \tag{3}$$
Next, we obtain the dual $C^*$ of the projected ellipse $C$ of sphere $Q$ under camera $P$ [4]:
$$C^* = P Q^* P^T = KK^T - \frac{h_0^2}{R_0^2} K d d^T K^T \tag{4}$$
Denoting $h_0/R_0$ by $\mu$, from Equation (4) we have:
$$C^* = KK^T - \mu^2 K d d^T K^T = KK^T - oo^T \tag{5}$$
with $o = \mu K d$, which is the image point of the sphere centre $X$.
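As a quick numerical check of Equations (2)-(5), the sketch below (Python with NumPy; the camera and sphere values are our own illustrative assumptions, not data from the paper) builds the quadric $Q$, projects its dual through $P = [K\,|\,0]$, and confirms that the result equals $KK^T - oo^T$ with $o = \mu K d$:

```python
import numpy as np

K = np.array([[5100., 0., 800.], [0., 5100., 600.], [0., 0., 1.]])
X0 = np.array([120., 80., 1000.])     # sphere centre (illustrative)
R0 = 20.0                             # sphere radius (illustrative)

h0 = np.linalg.norm(X0)
d = X0 / h0                           # unit direction of the centre, Equation (1)
mu = h0 / R0                          # depth-scale factor

# Coefficient matrix of the sphere, Equation (2).
Q = np.block([[np.eye(3), -X0[:, None]],
              [-X0[None, :], np.array([[h0**2 - R0**2]])]])

# Dual quadric projected by P = [K | 0], Equations (3) and (4).
P = np.hstack([K, np.zeros((3, 1))])
C_star = P @ np.linalg.inv(Q) @ P.T

o = mu * K @ d                        # image point of the sphere centre
print(np.allclose(C_star, K @ K.T - np.outer(o, o)))  # Equation (5): True
```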

2.2. Derivation of the Image Point of the Sphere Centre

From Equation (5), $C^*$ can also be written as:
$$\rho C^* = \omega - oo^T \tag{6}$$
where $\rho$ is an unknown scale factor and $\omega = KK^T$.
Let $C_1^*, C_2^*$ be the duals of the projected ellipses of spheres $Q_1, Q_2$ under camera $P$, respectively; then we have:
$$\begin{cases} \rho_1 C_1^* = \omega - o_1 o_1^T \\ \rho_2 C_2^* = \omega - o_2 o_2^T \end{cases} \tag{7}$$
where $\rho_1, \rho_2$ are two unknown scale factors, $o_1 = \mu_1 K d_1$, $o_2 = \mu_2 K d_2$, and $\mu_1, \mu_2, d_1, d_2$ have the same meanings as $\mu$ in Equation (5) and $d$ in Equation (1).
Let $X_{Q_1}^O, X_{Q_2}^O$ denote the centres of spheres $Q_1, Q_2$, respectively. These two points and the camera centre $O$ determine a plane; denote the vanishing line of this plane by $l_{12}$. From [14] we know that:
$$C_2^{*-1} C_1^*\, l_{12} = \frac{\rho_2}{\rho_1}\, l_{12} \tag{8}$$
From Equation (8), $l_{12}$ is the eigenvector of the matrix $C_2^{*-1} C_1^*$ corresponding to the eigenvalue $\rho_2/\rho_1$, namely the eigenvector that has two real intersections with each projected ellipse $C_1$ and $C_2$.
Because $l_{12}$ passes through the image points $o_1$ and $o_2$, $l_{12} = o_1 \times o_2$. Hence, if we know three projected ellipses $C_1, C_2, C_3$ of spheres $Q_1, Q_2, Q_3$, the image points $o_1, o_2, o_3$ of the three sphere centres are given by:
$$o_1 = l_{12} \times l_{13}, \quad o_2 = l_{12} \times l_{23}, \quad o_3 = l_{13} \times l_{23} \tag{9}$$
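To make Equations (8) and (9) concrete, the following sketch (Python/NumPy; the camera and sphere values are our own illustrative assumptions) synthesizes three dual conics via Equation (5), recovers a line $l_{ij}$ as the eigenvector of $C_j^{*-1} C_i^*$ that meets both ellipses in two real points, and then recovers a sphere-centre image point by Equation (9):

```python
import numpy as np

def meets_in_two_real_points(l, C):
    """True if the line l intersects the point conic C in two real points."""
    _, _, Vt = np.linalg.svd(l[None, :])
    p1, p2 = Vt[1], Vt[2]            # orthonormal basis of the line l^T x = 0
    a, b, c = p2 @ C @ p2, 2 * p1 @ C @ p2, p1 @ C @ p1
    return b * b - 4 * a * c > 0     # x(t) = p1 + t p2 hits C twice

def vanishing_line(Ci_s, Cj_s, Ci, Cj):
    """Equation (8): the real eigenvector of Cj*^-1 Ci* that has two real
    intersections with both projected ellipses."""
    w, V = np.linalg.eig(np.linalg.inv(Cj_s) @ Ci_s)
    for k in range(3):
        if abs(w[k].imag) < 1e-6:
            l = np.real(V[:, k])
            if meets_in_two_real_points(l, Ci) and meets_in_two_real_points(l, Cj):
                return l
    raise RuntimeError("no valid eigenvector found")

# Synthetic data (illustrative): three spheres of radius R0 seen by camera K.
K = np.array([[5100., 0., 800.], [0., 5100., 600.], [0., 0., 1.]])
R0 = 20.0
centres = [np.array([100., 50., 900.]),
           np.array([-80., 40., 1000.]),
           np.array([30., -60., 950.])]
duals, conics, o_true = [], [], []
for X0 in centres:
    o = K @ X0 / R0                             # o = mu K d
    duals.append(K @ K.T - np.outer(o, o))      # Equation (5)
    conics.append(np.linalg.inv(duals[-1]))     # point conic, up to scale
    o_true.append(o / o[2])

l12 = vanishing_line(duals[0], duals[1], conics[0], conics[1])
l13 = vanishing_line(duals[0], duals[2], conics[0], conics[2])
o1 = np.cross(l12, l13)                         # Equation (9)
print(np.allclose(o1 / o1[2], o_true[0]))       # True
```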

2.3. Computation of the Depth-Scale Factor μ

Motivated by [18], we give a simple method to solve for the depth-scale factor. There exist two mutually orthogonal unit vectors $\bar d_1$ and $\bar d_2$ perpendicular to $d$ in Equation (5). Denoting $[\bar d_1\ \bar d_2\ d]$ by $\bar R$, the dual $C^*$ of the ellipse can also be expressed as:
$$C^* = K(I - \mu^2 d d^T)K^T = K\left(\bar R \bar R^T - \bar R\, \mathrm{diag}\{0, 0, \mu^2\}\, \bar R^T\right)K^T = K \bar R\, \mathrm{diag}\{1, 1, 1-\mu^2\}\, \bar R^T K^T \tag{10}$$
The ellipse $C$ is then given by:
$$\rho_c C = K^{-T} \bar R\, \mathrm{diag}\left\{1, 1, \frac{1}{1-\mu^2}\right\} \bar R^T K^{-1} \tag{11}$$
where $\rho_c$ is an unknown scale factor. If $K$ is known, Equation (11) can be rewritten as:
$$\rho_c K^T C K = \bar R\, \mathrm{diag}\left\{1, 1, \frac{1}{1-\mu^2}\right\} \bar R^T \tag{12}$$
Denoting $K^T C K$ by $A$, we then have:
$$A = \frac{1}{\rho_c} \bar R\, \mathrm{diag}\left\{1, 1, \frac{1}{1-\mu^2}\right\} \bar R^T \tag{13}$$
Since $\bar R$ is an orthogonal matrix, the singular values of $A$ are $1/|\rho_c|$, $1/|\rho_c|$ and $1/[\,|\rho_c|(\mu^2-1)\,]$, and $\mu$ can be obtained by SVD. Because $\mu = h_0/R_0$ and $h_0 > R_0$, $\mu$ is greater than 1. For different spheres with the same radius, $\mu$ is proportional to the corresponding $h_0$.
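In code, this step reduces to a few lines. The helper below is our own sketch (Python/NumPy), assuming the fitted ellipse is available as a 3 × 3 point-conic matrix (any scale) and that $h_0 \gg R_0$, so that the distinct singular value is the smallest one:

```python
import numpy as np

def depth_scale_factor(K, C):
    """Recover mu = h0/R0 from a fitted ellipse C (3x3 point conic, any scale)
    and the intrinsic matrix K via the SVD of A = K^T C K (Equation (13))."""
    s = np.linalg.svd(K.T @ C @ K, compute_uv=False)  # s[0] >= s[1] >= s[2]
    # s[0] ~ s[1] = 1/|rho_c|,  s[2] = 1/(|rho_c| (mu^2 - 1))
    return np.sqrt(1.0 + s[1] / s[2])
```

With noisy contours, $s_0$ and $s_1$ differ slightly; using $s_1$ as above (or the geometric mean of the pair) is a pragmatic choice.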

3. Calibration Principles

3.1. Acquisition of the Rotation Matrix

If $K$ is known, the normalized back-projected vector $d$ of the sphere centre in the camera coordinate frame is:
$$d = \frac{K^{-1} o}{\left\| K^{-1} o \right\|} \tag{14}$$
Denoting $\mu d$ by $D$, then:
$$D = \mu d = \frac{h_0}{R_0} d = \frac{1}{R_0} X_Q^O \tag{15}$$
where $X_Q^O$ is the sphere centre.
From Section 2.3 we can obtain the depth-scale factor $\mu$, and when there are three spheres $Q_1, Q_2, Q_3$ with the same radius centred at $X_{Q_1}^O, X_{Q_2}^O, X_{Q_3}^O$, we can obtain three vectors $D_1, D_2, D_3$ that determine a plane $\overline{D_1 D_2 D_3}$. This plane is parallel to the plane $\pi_s$ formed by $X_{Q_1}^O, X_{Q_2}^O, X_{Q_3}^O$. Therefore, the normal vector $n$ of the plane $\pi_s$ is calculated by:
$$n = (D_2 - D_1) \times (D_3 - D_1) \tag{16}$$
Referring to Figure 1, for the BSVS we can obtain the normal vectors $n_l$ and $n_r$ of the same plane $\pi_s$ in the left camera coordinate frame (LCCF) and the right camera coordinate frame (RCCF), respectively. Thus, the following equation holds:
$$n_r = R\, n_l \tag{17}$$
where $R$ is the rotation matrix between the LCCF and the RCCF.
If there are $m$ spheres with non-coplanar centres and $m \ge 4$, we obtain $\binom{m}{3}$ equations of the form of Equation (17) from which to solve $R$, as sketched below.
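One possible implementation of this step (our own sketch, not the authors' code): compute each $D = \mu d$ by Equations (14) and (15), form a plane normal by Equation (16) for each triple of sphere centres, and solve the stacked system $n_r = R n_l$ as an orthogonal Procrustes problem via SVD. The two normals of each pair must be oriented consistently, e.g., computed from the same ordered triple of spheres in both views.

```python
import numpy as np

def centre_vector(K, o, mu):
    """D = mu * d, Equations (14) and (15); o is the homogeneous image point."""
    d = np.linalg.solve(K, o)
    return mu * d / np.linalg.norm(d)

def plane_normal(D1, D2, D3):
    """Normal of the plane through three sphere centres, Equation (16)."""
    n = np.cross(D2 - D1, D3 - D1)
    return n / np.linalg.norm(n)

def rotation_from_normals(N_l, N_r):
    """Least-squares R with N_r[i] ~ R N_l[i] (orthogonal Procrustes).
    N_l, N_r: (k, 3) arrays of consistently oriented unit normals."""
    U, _, Vt = np.linalg.svd(N_l.T @ N_r)               # 3x3 correlation matrix
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # enforce det(R) = +1
    return Vt.T @ D @ U.T
```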

3.2. Acquisition of the Translation Vector

In the BSVS, suppose that the left camera is $K_l[I_{3\times3}\,|\,0]$ and the right camera is $K_r[R\,|\,T]$, and that $x_l, x_r$ are the image points of a 3D point $X$; then we have:
$$\begin{cases} s_l x_l = K_l [I\,|\,0] X \\ s_r x_r = K_r [R\,|\,T] X \end{cases} \tag{18}$$
where s l , s r are two unknown scale factors, and R, T are the rotation matrix and translation vector of the BSVS, respectively.
Define the skew-symmetric matrix $[T]_\times$ of $T$ as $[T]_\times = \begin{bmatrix} 0 & -T_z & T_y \\ T_z & 0 & -T_x \\ -T_y & T_x & 0 \end{bmatrix}$. Denote $p_l = K_l^{-1} x_l$, $p_r = K_r^{-1} x_r$ and $s = s_r/s_l$; then from Equation (18) we have:
$$[T]_\times R\, p_l = s [T]_\times p_r \tag{19}$$
Denoting $R p_l$ by $\hat p_l$, with $R$ known, we obtain the final expression:
$$p_r^T [T]_\times \hat p_l = 0 \tag{20}$$
Obviously, Equation (20) is homogeneous in $T$. Given at least three pairs of image points of the sphere centres, we can solve for $T$ up to a scale factor $\kappa$, obtaining $T_0$ (see Appendix A for more details); i.e., $T = \kappa T_0$. The factor $\kappa$ is then determined from the known distance $L_0$ between the two sphere centres of the target, together with the fact that the $Z$ coordinate of a sphere centre must be positive. One possible procedure is sketched below.
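One way to fix $\kappa$ (our own sketch; `triangulate` stands for any standard two-view triangulation routine, such as the DLT version sketched in Section 3.3 below): reconstruct the two sphere centres with the unit-norm solution $T_0$, rescale so that their distance equals $L_0$, and flip the sign if the centres end up behind the camera.

```python
import numpy as np

def fix_translation_scale(T0, K_l, K_r, R, pl_pair, pr_pair, L0, triangulate):
    """Return T = kappa * T0, using the known centre distance L0.
    pl_pair/pr_pair: image points of the two sphere centres in one placement."""
    X1 = triangulate(K_l, K_r, R, T0, pl_pair[0], pr_pair[0])
    X2 = triangulate(K_l, K_r, R, T0, pl_pair[1], pr_pair[1])
    kappa = L0 / np.linalg.norm(X1 - X2)  # reconstruction scales linearly with T
    if X1[2] < 0:                         # centres must have positive depth
        kappa = -kappa
    return kappa * T0
```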

3.3. Nonlinear Optimization

The values of R and T obtained above were estimated separately; in this section, we take them as initial values and refine them jointly to obtain more accurate results.
Establish the objective function:
$$\min F(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{2} d\!\left(p_{jl}^i, \hat p_{jl}^i\right) + \sum_{j=1}^{2} d\!\left(p_{jr}^i, \hat p_{jr}^i\right) + \lambda\left(L_i - L_0\right)\right) \tag{21}$$
where $x = \{R, T\}$, $d(\cdot)$ denotes the Euclidean distance, $p_{jl}^i, p_{jr}^i$ are the measured non-homogeneous image coordinates of the sphere centres, $\hat p_{jl}^i, \hat p_{jr}^i$ are the non-homogeneous reprojected image coordinates of the sphere centres, $L_i$ is the computed distance between the two sphere centres, $L_0$ is the known distance between the two sphere centres, $n$ is the number of placements, and $\lambda$ is a weight factor.
To maintain the orthogonality constraint on the rotation matrix, $R$ is parameterized as the Rodrigues vector $r = (r_x, r_y, r_z)^T$, so $x = [r; T]$. Considering the principle of error distribution, $\lambda$ is taken to be 10. The Levenberg-Marquardt optimization algorithm is used to obtain the final values of $R$ and $T$; a sketch of this refinement step follows.
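The following is a sketch of this refinement (ours, not the authors' implementation), using SciPy's Levenberg-Marquardt solver. The sphere centres are re-triangulated from the measured image points at every iteration, and the residual vector stacks the reprojection errors and the weighted distance term of Equation (21).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, T, X):
    """Non-homogeneous projection of a 3D point."""
    x = K @ (R @ X + T)
    return x[:2] / x[2]

def triangulate(K_l, K_r, R, T, pl, pr):
    """Linear (DLT) triangulation of one point from a matched pair."""
    P1 = K_l @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K_r @ np.hstack([R, T[:, None]])
    A = np.vstack([pl[0] * P1[2] - P1[0], pl[1] * P1[2] - P1[1],
                   pr[0] * P2[2] - P2[0], pr[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def refine(R_init, T_init, K_l, K_r, p_l, p_r, L0, lam=10.0):
    """p_l, p_r: (n, 2, 2) measured centre image points
    (n placements, 2 spheres, 2 pixel coordinates)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()   # Rodrigues vector -> R
        T = x[3:]
        res = []
        for i in range(p_l.shape[0]):
            Xs = [triangulate(K_l, K_r, R, T, p_l[i, j], p_r[i, j])
                  for j in range(2)]
            for j in range(2):
                res.extend(project(K_l, np.eye(3), np.zeros(3), Xs[j]) - p_l[i, j])
                res.extend(project(K_r, R, T, Xs[j]) - p_r[i, j])
            res.append(lam * (np.linalg.norm(Xs[0] - Xs[1]) - L0))
        return np.asarray(res)

    x0 = np.concatenate([Rotation.from_matrix(R_init).as_rotvec(), T_init])
    sol = least_squares(residuals, x0, method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```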

3.4. Summary

The implementation procedure of the proposed calibration is as follows:
  • Calibrate the intrinsic parameters of the two cameras.
  • Capture enough images of the double-sphere target in different positions and orientations by moving the target.
  • Extract the subpixel contour points of the projected ellipses using Steger’s method [19], and then perform ellipse fitting [20].
  • Compute the image point of each sphere centre, and then match the image points between the two views.
  • Compute the depth-scale factor μ of each sphere.
  • Solve the structural parameters R and T using the algorithms described in Section 3.1 and Section 3.2.
  • Refine the parameters by solving Equation (21).

4. Error Analysis

The general equation of an ellipse is $Ax^2 + Bxy + Cy^2 + Dx + Ey + 1 = 0$, and the coordinates of the ellipse centre are given by:
$$\begin{cases} x_c = \dfrac{BE - 2CD}{4AC - B^2} \\[6pt] y_c = \dfrac{BD - 2AE}{4AC - B^2} \end{cases} \tag{22}$$
In matrix form the ellipse is written as $C = \begin{pmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & 1 \end{pmatrix}$. The dual $C^*$ of $C$ is given by:
$$C^* = \rho_c^* C^{-1} = \begin{pmatrix} \dfrac{4C - E^2}{4AC - B^2} & \dfrac{DE - 2B}{4AC - B^2} & \dfrac{BE - 2CD}{4AC - B^2} \\[6pt] \dfrac{DE - 2B}{4AC - B^2} & \dfrac{4A - D^2}{4AC - B^2} & \dfrac{BD - 2AE}{4AC - B^2} \\[6pt] \dfrac{BE - 2CD}{4AC - B^2} & \dfrac{BD - 2AE}{4AC - B^2} & 1 \end{pmatrix} \tag{23}$$
where $\rho_c^*$ is an unknown scale factor. Combining Equations (22) and (23), we obtain the following relationship between the ellipse centre $(x_c, y_c)$ and the elements of $C^*$:
$$\begin{cases} C^*(1,3) = x_c \\ C^*(2,3) = y_c \end{cases} \tag{24}$$
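As a quick numerical check of Equations (22)-(24) (with made-up ellipse parameters of our choosing), one can build the conic of an ellipse with a known centre, recover the centre from the coefficients, and confirm that it also appears in the third column of the normalized dual:

```python
import numpy as np

# Ellipse with known centre, semi-axes and tilt (illustrative values).
xc0, yc0, a, b, th = 400.0, 300.0, 120.0, 80.0, np.deg2rad(25)
Rth = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
M = Rth @ np.diag([1 / a**2, 1 / b**2]) @ Rth.T
c0 = np.array([xc0, yc0])
C = np.block([[M, (-M @ c0)[:, None]],
              [(-M @ c0)[None, :], np.array([[c0 @ M @ c0 - 1]])]])
C /= C[2, 2]                                   # normalise so that C(3,3) = 1

A, B, Cc = C[0, 0], 2 * C[0, 1], C[1, 1]
D, E = 2 * C[0, 2], 2 * C[1, 2]
xc = (B * E - 2 * Cc * D) / (4 * A * Cc - B**2)   # Equation (22)
yc = (B * D - 2 * A * E) / (4 * A * Cc - B**2)

C_star = np.linalg.inv(C)
C_star /= C_star[2, 2]                            # fix the scale rho_c*
print(np.allclose([xc, yc], [xc0, yc0]),
      np.allclose([C_star[0, 2], C_star[1, 2]], [xc0, yc0]))  # True True
```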
Many factors affect the extraction of the ellipse contour points, and noise may introduce a positional deviation into the extracted points. We now discuss how the computation of the image point of the sphere centre is influenced under this condition.
Suppose that the shape of the ellipse remains constant and that the ellipse does not rotate; we then use the ellipse centre to represent the position of the ellipse. Let $Q$ denote the sphere, $C$ the projected ellipse of $Q$, and $(x, y)$ the image point of the sphere centre.
To simplify the discussion, consider the case in which the sphere centre lies in the first quadrant of the camera coordinate frame. Because the sphere centre can always be brought into the first quadrant by rotating the camera, the discussion generalizes.
First of all, let us discuss the element $a$ of $C^*$ (note that $C^*(3,3) = 1$). Expanding Equation (5) with the replacements $d \to (d_x\ d_y\ d_z)^T$, $K \to \begin{pmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{pmatrix}$, $o \to \eta\,(x\ y\ 1)^T$ gives:
$$a = \frac{\mu^2 (u_0 d_z + \alpha_x d_x)^2 - (\alpha_x^2 + u_0^2)}{\mu^2 d_z^2 - 1} \tag{25}$$
The sphere is always in front of the camera and its centre lies in the first quadrant of the camera coordinate frame, so $Z > R_0 > 0$, $d_x > 0$, $d_y > 0$, $d_z > 0$ and $u_0 < x < 2u_0$. Based on these conditions we obtain:
$$a > 0 \quad \text{when } X \ge R_0 \tag{26}$$
The details are described in Appendix B.
Next, we discuss the factors that influence the computation of the image point of the sphere centre. Denoting $C^*$ by $\begin{pmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & 1 \end{pmatrix}$, we can obtain the following equation from Equation (5):
$$(2u_0 - d)x^2 - 2(\alpha_x^2 + u_0^2 - a)x + (\alpha_x^2 + u_0^2)d - 2u_0 a = 0 \tag{27}$$
Substituting $d/2 = x_c$, $e/2 = y_c$ (Equation (24)) into Equation (27), we get:
$$(u_0 - x_c)x^2 - (\alpha_x^2 + u_0^2 - a)x + (\alpha_x^2 + u_0^2)x_c - u_0 a = 0 \tag{28}$$
From Equation (28) we obtain:
$$x = \frac{-(\alpha_x^2 + u_0^2 - a) + \sqrt{(\alpha_x^2 + u_0^2 - a)^2 + 4(x_c - u_0)\left[(\alpha_x^2 + u_0^2)x_c - u_0 a\right]}}{2(x_c - u_0)} \tag{29}$$
From Equation (29), computing the partial derivative of $x$ with respect to $x_c$ gives:
$$\frac{\partial x}{\partial x_c} = \frac{\alpha_x^2 + u_0^2 - a}{2(x_c - u_0)^2} \cdot \frac{M - \left[2u_0(x_c - u_0) + (\alpha_x^2 + u_0^2 - a)\right]}{M} \tag{30}$$
where $M = \sqrt{(\alpha_x^2 + u_0^2 - a)^2 + 4(x_c - u_0)\left[(\alpha_x^2 + u_0^2)x_c - u_0 a\right]}$.
Let:
$$\rho_{x x_c} = \frac{\alpha_x^2 + u_0^2 - a}{2(x_c - u_0)^2} \cdot \frac{M - \left[2u_0(x_c - u_0) + (\alpha_x^2 + u_0^2 - a)\right]}{M} \tag{31}$$
We can then deduce that $\rho_{x x_c}$ satisfies $0 < \rho_{x x_c} < 1$ when $\alpha_x/u_0 > \sqrt{3}$ holds (see Appendix C for more details).
Suppose that $\Delta x_c$ is the positional deviation of the fitted ellipse and $\Delta x$ is the resulting computation error of the image point coordinate $x$. Equation (30) is then written as:
$$\Delta x = \rho_{x x_c} \Delta x_c \tag{32}$$
When $\alpha_x/u_0 > \sqrt{3}$ and $X \ge R_0$ hold, $\rho_{x x_c}$ satisfies $0 < \rho_{x x_c} < 1$, which shows that the computation error $\Delta x$ caused by $\Delta x_c$ is attenuated.
Because the extracted ellipse contour points carry a positional deviation, the fitted ellipse has a similar deviation. The following discussion addresses how to reduce the resulting computation error of the image point of the sphere centre.
Firstly, consider the relationship between $a$ and $\mu^2$. From Equation (25) we obtain:
$$\frac{\partial a}{\partial \mu^2} = \frac{\alpha_x^2 d_z^2 \left[1 - 2\dfrac{u_0}{\alpha_x}\dfrac{d_x}{d_z} - \left(\dfrac{d_x}{d_z}\right)^2\right]}{(\mu^2 d_z^2 - 1)^2} \tag{33}$$
When $\alpha_x/u_0 > \sqrt{3}$ holds, we can deduce $\partial a/\partial \mu^2 > 0$. Hence, $a$ is a monotonically increasing function of $\mu$ ($\mu > 0$).
Second, from Equation (28) we have:
$$\frac{\partial x}{\partial x_c} = \frac{\alpha_x^2 + u_0^2 - x^2}{2(x_c - u_0)x + \alpha_x^2 + u_0^2 - a} \tag{34}$$
When $\alpha_x/u_0 > \sqrt{3}$, we can obtain:
$$\frac{\partial x}{\partial x_c} > 0 \tag{35}$$
(see Appendix D for more details).
Suppose again that $\Delta x_c$ is the positional deviation of the fitted ellipse and $\Delta x$ is the computation error it causes. Equation (34) can then be written as:
$$\Delta x = \frac{\alpha_x^2 + u_0^2 - x^2}{2(x_c - u_0)x + \alpha_x^2 + u_0^2 - a}\, \Delta x_c \tag{36}$$
Based on Equations (26), (35) and (36), we can deduce that $\Delta x$ increases with $\mu$.
Similarly, we can obtain:
$$\Delta y = \frac{\alpha_y^2 + v_0^2 - y^2}{2(y_c - v_0)y + \alpha_y^2 + v_0^2 - c}\, \Delta y_c \tag{37}$$
If $Y \ge R_0$ and $\alpha_y/v_0 > \sqrt{3}$ hold, we can deduce that $c$ is a monotonically increasing function of $\mu$, and $\Delta y$ likewise increases with $\mu$.
Finally, under the conditions described above, the computation errors $\Delta x$ and $\Delta y$ both increase with $\mu$: the smaller the value of $\mu$, the smaller the errors. Reducing $h_0$ (the depth of the sphere centre) or increasing $R_0$ (the radius of the sphere) makes $\mu$ smaller and hence reduces $\Delta x$ and $\Delta y$. In this way, the computational accuracy of the image point $(x, y)$ of the sphere centre can be improved.

5. Experiments

5.1. Computer Simulations

Using computer simulations, we analyse the following factors affecting calibration accuracy: (1) the image noise level σ; (2) the number of placements N of the target; and (3) the depth-scale factor μ of the sphere.
Table 1 shows the intrinsic parameters of the two simulated cameras; camera distortion is not considered. The LCCF is taken as the world coordinate frame (WCF), and the structural parameters of the simulated BSVS are set to r = [−0.03, 0.47, 0.07]^T and T = [−490, −49, 100]^T. The working distance of the BSVS is approximately 1000 mm, and the field of view is approximately 240 × 320 mm. The relative deviation between the calibration results and the ground truth is used to evaluate accuracy. The rotation matrix R is expressed as the Rodrigues vector r, so both the rotation vector r and the translation vector T have dimensions 3 × 1. The Euclidean distance between an estimated vector and the corresponding true vector of r or T is taken as the absolute error; the ratio of the absolute error to the norm of the true vector is the relative error, computed as in the snippet below.
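In code form, the error measure used in the following experiments is (our notation):

```python
import numpy as np

def relative_error(v_est, v_true):
    """Euclidean distance to the true vector, over the norm of the true vector."""
    return np.linalg.norm(v_est - v_true) / np.linalg.norm(v_true)

# e.g. relative_error(r_est, np.array([-0.03, 0.47, 0.07]))
#      relative_error(T_est, np.array([-490., -49., 100.]))
```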

5.1.1. Performance w.r.t. the Noise Level and the Number of Placement Times of the Target

In this experiment, Gaussian noise with zero mean and standard deviation σ (0.05–0.50 or 0.05–1.00 pixels) is added to the contour points of the projected ellipses. For each noise level and each number of placements N (2, 3, 4), we perform 200 independent trials; Figure 2 and Figure 3 show the relative errors of R and T under the different conditions. As expected, the errors increase with the noise level. The relative errors of R and T remain below 5% even with the minimum number of placements (N = 2), and they drop drastically as the number of placements increases. For σ = 1 and N = 4, the calibration errors of R and T are less than 1‰. In practical calibration, the noise level is typically below 1 pixel.

5.1.2. Performance w.r.t. the Depth-Scale Factor μ

This experiment studies performance with respect to the depth-scale factor μ, the ratio of the depth h0 of the sphere centre to the sphere radius R0. To keep the orientation unchanged, we vary μ by changing only the sphere radius. Gaussian noise with zero mean and standard deviation σ = 0.50 pixels is added to the contour points of the projected ellipses, and the target is placed N = 3 times. We vary the radius from 4 mm to 36 mm and perform 200 independent trials for each radius. Figure 4 shows that the relative errors decrease as the radius increases (that is, as the depth-scale factor μ decreases). Note that in practice, if the radius is increased too much, the sphere's image may become too large to fit in the image plane.

5.2. Real Data

In the real-data experiments, the BSVS is composed of two AVT-Stingray F504B cameras with 17 mm Schneider lenses and support structures. The image resolution of the cameras is 1600 × 1200 pixels. Figure 5 shows the structure of the BSVS.

5.2.1. Intrinsic Parameters Calibration

The Matlab calibration toolbox and a checkerboard target (see Figure 6) are used to calibrate the intrinsic parameters. There are 10 × 10 corner points on the checkerboard target, and the distance between any two adjacent corner points is 10 mm with 5 µm accuracy. Twenty images in different orientations are captured for the intrinsic calibration of each camera. Table 2 shows the calibration results.

5.2.2. Structural Parameters Calibration

The double-sphere target (see Figure 6) is composed of two spheres of the same radius and a support structure. The distance between the two sphere centres is 149.946 mm with 0.003 mm accuracy. The LCCF is set as the WCF. To explore the optimal number of placements, the double-sphere target is placed freely 28 times in the measurement space, yielding 28 pairs of images. For evaluating calibration accuracy, another 15 pairs of images of the double-sphere target are captured.
We then randomly select 8, 10, 12, 14, 16, 18, 22 and 28 pairs of images for calibration with our method, obtaining several sets of structural parameters. Figure 7 illustrates the contour extraction and ellipse fitting for one pair of target images.
The calibrated BSVS is used to measure the distance between the two sphere centres of the double-sphere target in the additional 15 pairs of measurement images. The root-mean-square (RMS) error of these measured values is taken as the criterion of calibration accuracy. Table 3 shows the results, and Figure 8 displays the relative and absolute RMS errors.
From Figure 8, the errors begin to decrease monotonically once the number of placements exceeds 16. Consequently, the double-sphere target should be placed approximately 16 times in the experiment.
We take the calibration parameters obtained with 18 placements as the final result. Using these parameters, we reconstruct the target positions in space; the results are shown in Figure 9. For comparison, the Matlab toolbox method is also used for structural parameter calibration. Table 4 shows the calibration results of the two methods.

5.2.3. Accuracy Evaluation

To evaluate the calibration accuracy, another 10 pairs of images of the checkerboard target are captured. In addition, the 15 previously captured pairs of images of the double-sphere target are used for accuracy evaluation.
Using the BSVS calibrated by each of the two methods, we measure the distance between the two sphere centres of the double-sphere target and the distance between each pair of adjacent corner points of the checkerboard target. The RMS errors of these measured values are taken as the evaluation criteria of calibration accuracy.
(a) Measure the double-sphere target
Figure 10 displays the measured results of 15 distances, and Table 5 shows a comparison of the errors.
The results in Table 5 show that the RMS error of our algorithm is 0.084 mm with a relative error of approximately 0.06%, while the RMS error of the toolbox method is 0.111 mm with a relative error of approximately 0.07%. Our method is therefore slightly better than the toolbox method at measuring the distance between the two sphere centres, and the standard errors of the measured values show that its results are more stable.
(b) Measure the checkerboard target
The checkerboard target was described above. Because the measured values are too numerous to tabulate, we present them as scatter plots in Figure 11; Table 6 compares the errors. For an intuitive display of the calibration results of our method, we reconstruct the 3D corner points of the checkerboard target; Figure 12 shows the results.
As seen in Table 6, the RMS errors are 0.008 and 0.005 mm, and the relative errors are 0.08% and 0.05%, respectively; for the corner-point distances of the checkerboard, the toolbox method is slightly better. The standard errors show that both methods are reasonably stable.
Overall, our method exhibits calibration accuracy similar to that of the toolbox method, with measurement errors below 0.9‰ for both. The complete symmetry of the sphere effectively avoids the simultaneous-visibility problem of target features in multi-camera calibration.
The toolbox method based on a plane target is a typical approach. However, it requires the plane target to be placed in various orientations to provide enough constraints, which increases the chance of the simultaneous-visibility problem occurring. When the angle between the principal rays of the two cameras is large, it is difficult to capture high-quality images in both views at once, so the calibration accuracy is heavily affected. Figure 13 compares the plane target and the double-sphere target when the angle between the principal rays is large. As seen in Figure 13, the right image of the plane target is so tilted that the corners cannot be accurately extracted, while both images of the double-sphere target have the same high level of clarity and their contours can be accurately extracted. The double-sphere target therefore performs better than the plane target in this configuration.

6. Conclusions

In this paper, we describe a method for calibrating the structural parameters of a BSVS. The method requires a double-sphere target placed a few times in different positions and orientations. We utilize the normal vectors of spatial planes to compute the rotation matrix and a linear algorithm to solve for the translation vector. Simulations demonstrate how the noise level, the number of placements and the depth-scale factor influence the calibration accuracy. Real-data experiments show that when measuring an object approximately 150 mm long the accuracy is 0.084 mm, and when measuring 10 mm it is 0.008 mm.
If the sphere centres are all coplanar, our method fails; the double-sphere target should therefore be placed in different positions and orientations to avoid this degenerate configuration. Because the calibration feature of a sphere is its contour, complete mutual occlusion of the two spheres must also be avoided. As mentioned above, the two spheres should have the same radius; however, if the radii are unequal, our method can still work. If the ratio of the two radii is known, it should be taken into account when recovering the intermediate parallel planes; the other computation procedures remain unchanged. If the ratio is unknown, three projected ellipses of the same sphere should be selected to recover each intermediate parallel plane, and the target must then be placed at least four times. Obviously, such a target provides fewer constraints for solving the rotation matrix than one with a known radius ratio. To calibrate a BSVS with a small common field of view while guaranteeing high accuracy, two spheres with large radii can be coupled to form the double-sphere target. In multi-camera calibration, the double-sphere target avoids the simultaneous-visibility problem and performs well.

Acknowledgments

This work was supported by the Natural Science Foundation of Beijing (No. 3142012), the National Key Scientific Instrument and Equipment Development Project (No. 2012YQ140032), and the Supported Program for Young Talents (No. YMF-16-BJ-J-17). We appreciate the constructive comments received from the reviewers.

Author Contributions

The work presented here was performed in collaboration between two authors. Both authors have contributed to, seen and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BSVS: binocular stereo vision sensor
LCCF: left camera coordinate frame
RCCF: right camera coordinate frame
WCF: world coordinate frame
RMS: root mean square

Appendix A

This appendix provides the solution of Equation (20). Expanding Equation (20),
$$\begin{pmatrix} p_{rx} & p_{ry} & p_{rz} \end{pmatrix} \begin{bmatrix} 0 & -T_z & T_y \\ T_z & 0 & -T_x \\ -T_y & T_x & 0 \end{bmatrix} \begin{pmatrix} \hat p_{lx} \\ \hat p_{ly} \\ \hat p_{lz} \end{pmatrix} = 0 \tag{A1}$$
Equation (A1) can be written as
$$\begin{bmatrix} \hat p_{ly} p_{rz} - \hat p_{lz} p_{ry} & \hat p_{lz} p_{rx} - \hat p_{lx} p_{rz} & \hat p_{lx} p_{ry} - \hat p_{ly} p_{rx} \end{bmatrix} \begin{bmatrix} T_x & T_y & T_z \end{bmatrix}^T = 0 \tag{A2}$$
From Equation (A2), we have
$$A_i T = 0 \tag{A3}$$
where $A_i$ is a 1 × 3 coefficient row. Given at least three pairs of image points, we obtain an equation $AT = 0$, where $A$ is the coefficient matrix formed by stacking the rows $A_i$. Using SVD, we obtain the solution $T_0$ of $AT = 0$.
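In code, the construction of $A$ and the SVD solve are brief; the sketch below (our conventions) uses the fact that the coefficient row of Equation (A2) is exactly the cross product $\hat p_l \times p_r$:

```python
import numpy as np

def translation_up_to_scale(p_l_hat, p_r):
    """Solve A T = 0 (Equation (A3)) in the least-squares sense.
    p_l_hat, p_r: (n, 3) arrays of R p_l and p_r for n >= 3 centre pairs.
    Returns the unit-norm T0; sign and scale are fixed afterwards using
    the known centre distance L0 (Section 3.2)."""
    A = np.cross(p_l_hat, p_r)       # row i = p_hat_l_i x p_r_i
    return np.linalg.svd(A)[2][-1]   # right singular vector of smallest sigma
```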

Appendix B

In this appendix, the sign of $a$ in Equation (25) is discussed. Because $Z > R_0 > 0$ holds, we have
$$\mu^2 d_z^2 - 1 = \frac{h_0^2}{R_0^2} d_z^2 - 1 = \frac{Z^2}{R_0^2} - 1 > 0 \tag{B1}$$
Because $d_x > 0$, $d_y > 0$, $d_z > 0$ and $u_0 < x < 2u_0$, we can write
$$\mu^2 (u_0 d_z + \alpha_x d_x)^2 - (\alpha_x^2 + u_0^2) = (\mu u_0 d_z + \mu \alpha_x d_x)^2 - (\alpha_x^2 + u_0^2) = \left(u_0 \frac{Z}{R_0} + \alpha_x \frac{X}{R_0}\right)^2 - (\alpha_x^2 + u_0^2) \tag{B2}$$
If $X \ge R_0$, then $X/R_0 \ge 1$; therefore
$$\mu^2 (u_0 d_z + \alpha_x d_x)^2 - (\alpha_x^2 + u_0^2) > 0 \tag{B3}$$
Now we have
$$a > 0 \quad \text{when } X \ge R_0$$

Appendix C

In this appendix, we determine the range of $\rho_{x x_c}$ in Equation (31). Because the ellipse centre is close to the projected image point of the sphere centre, we can approximate $(x_c - u_0)/\alpha_x \approx (x - u_0)/\alpha_x = d_x/d_z$.
Denote
$$\rho_1 = \frac{M - \left[2u_0(x_c - u_0) + (\alpha_x^2 + u_0^2 - a)\right]}{M}, \qquad \rho_2 = \frac{\alpha_x^2 + u_0^2 - a}{2(x_c - u_0)^2}$$
so that $\rho_{x x_c} = \rho_1 \rho_2$.
Considering $\rho_1$, we can obtain
$$\rho_1 = \frac{\sqrt{\Phi} - \left[\dfrac{2u_0}{x - u_0} + \dfrac{\alpha_x^2 + u_0^2 - a}{(x - u_0)^2}\right]}{\sqrt{\Phi}}, \qquad \Phi = \left(\frac{\alpha_x^2 + u_0^2 - a}{(x - u_0)^2}\right)^2 + \frac{4(\alpha_x^2 + u_0^2)}{(x - u_0)^2} + \frac{4(\alpha_x^2 + u_0^2 - a)\,u_0}{(x - u_0)^3} \tag{C1}$$
and then deduce
$$\frac{\alpha_x^2 + u_0^2 - a}{(x - u_0)^2} = \frac{\mu^2 d_z^2}{\mu^2 d_z^2 - 1} \cdot \frac{1 - 2\dfrac{u_0}{\alpha_x}\dfrac{d_x}{d_z} - \left(\dfrac{d_x}{d_z}\right)^2}{\left(\dfrac{d_x}{d_z}\right)^2} = \frac{Z^2}{Z^2 - R_0^2}\left[\left(\frac{d_z}{d_x}\right)^2 - 2\frac{u_0}{\alpha_x}\frac{d_z}{d_x} - 1\right] \tag{C2}$$
Denote $d_z/d_x$ by $m$. Because $0 < d_x/d_z < u_0/\alpha_x$ holds, $m$ satisfies $m > \alpha_x/u_0$. In general, $Z \gg R_0$, so $Z^2/(Z^2 - R_0^2) \approx 1$. Replacing $d_z/d_x$ with $m$ in Equation (C2), we have
$$\frac{\alpha_x^2 + u_0^2 - a}{(x - u_0)^2} = m^2 - 2\frac{u_0}{\alpha_x}m - 1 \tag{C3}$$
The other terms of Equation (C1) reduce to
$$\frac{\alpha_x^2 + u_0^2}{(x - u_0)^2} = \left(1 + \frac{u_0^2}{\alpha_x^2}\right)m^2 \quad \text{and} \quad \frac{u_0}{x - u_0} = \frac{u_0}{\alpha_x}m \tag{C4}$$
Combining Equations (C3) and (C4), $\rho_{x x_c}$ can be written as
$$\rho_{x x_c} = \rho_1 \rho_2 = \frac{1}{m^2 + 1}\left(m^2 - 2\frac{u_0}{\alpha_x}m - 1\right) \tag{C5}$$
For the quadratic polynomial $m^2 - 2(u_0/\alpha_x)m - 1$ in $m$ ($m > \alpha_x/u_0 > 0$), if $\alpha_x/u_0$ satisfies $\alpha_x/u_0 > \sqrt{3}$, the polynomial is always greater than zero.
Consequently, $\rho_{x x_c}$ satisfies $0 < \rho_{x x_c} < 1$ when $\alpha_x/u_0 > \sqrt{3}$.

Appendix D

In this appendix, we determine the sign of $\partial x/\partial x_c$ in Equation (34). For the numerator of Equation (34), because $u_0 < x < 2u_0$ and $\alpha_x/u_0 > \sqrt{3}$,
$$\alpha_x^2 + u_0^2 - x^2 > 0 \tag{D1}$$
always holds. For the denominator, we have
$$2(x_c - u_0)x + \alpha_x^2 + u_0^2 - a = \frac{\mu^2 \alpha_x^2 d_z^2 \left[1 - 2\dfrac{u_0}{\alpha_x}\dfrac{d_x}{d_z} - \left(\dfrac{d_x}{d_z}\right)^2\right]}{\mu^2 d_z^2 - 1} + 2(x_c - u_0)x \tag{D2}$$
If $\alpha_x/u_0 > \sqrt{3}$, then $1 - 2(u_0/\alpha_x)(d_x/d_z) - (d_x/d_z)^2 > 0$ holds. Because $x_c$ is close to $x$ and $x_c - u_0 > 0$, we have
$$2(x_c - u_0)x + \alpha_x^2 + u_0^2 - a > 0 \tag{D3}$$
Consequently, when $\alpha_x/u_0 > \sqrt{3}$, we have $\partial x/\partial x_c > 0$.

References

  1. Gao, H. Computer Binocular Stereo Vision; Publishing House of Electronics Industry: Beijing, China, 2012.
  2. Steger, C.; Ulrich, M.; Wiedemann, C. Machine Vision Algorithms and Applications; Tsinghua University Press: Beijing, China, 2008.
  3. Zhang, G. Visual Measurement; Science Press: Beijing, China, 2008.
  4. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
  5. Ma, S.; Zhang, Z. Computer Vision: Theory and Algorithms; Science Press: Beijing, China, 1998.
  6. Bouguet, J.Y. Camera Calibration Toolbox for Matlab. 2010. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 1 May 2015).
  7. Zhou, F.; Zhang, G.; Wei, Z.; Jiang, J. Calibrating binocular vision sensor with one-dimensional target of unknown motion. J. Mech. Eng. 2006, 42, 92–96.
  8. Penna, M.A. Camera calibration: A quick and easy way to determine the scale factor. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 1240–1245.
  9. Daucher, N.; Dhome, M.; Lapreste, J. Camera calibration from spheres images. In Proceedings of the European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994; pp. 449–454.
  10. Teramoto, H.; Xu, G. Camera calibration by a single image of balls: From conics to the absolute conic. In Proceedings of the 5th Asian Conference on Computer Vision, Melbourne, Australia, 23–25 January 2002; pp. 499–506.
  11. Agrawal, M.; Davis, L.S. Camera calibration using spheres: A semi-definite programming approach. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 782–789.
  12. Wong, K.Y.K.; Mendonça, P.R.S.; Cipolla, R. Camera calibration from surfaces of revolution. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 147–161.
  13. Ying, X.; Zha, H. Linear approaches to camera calibration from sphere images or active intrinsic calibration using vanishing points. In Proceedings of the Tenth IEEE International Conference on Computer Vision, Beijing, China, 15–21 October 2005; pp. 596–603.
  14. Zhang, H.; Zhang, G.; Wong, K.Y.K. Camera calibration with spheres: Linear approaches. In Proceedings of the International Conference on Image Processing, Genova, Italy, 11–14 September 2005; pp. 1150–1153.
  15. Zhang, G.; Wong, K.-Y.K. Motion estimation from spheres. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 1238–1243.
  16. Zhang, H.; Wong, K.Y.K.; Zhang, G. Camera calibration from images of spheres. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 499–502.
  17. Wong, K.-Y.K.; Zhang, G.; Chen, Z. A stratified approach for camera calibration using spheres. IEEE Trans. Image Process. 2011, 20, 305–316.
  18. Jia, J. Study on Some Vision Geometry Problems in Multi-Cameras System; Xidian University: Xi’an, China, 2013.
  19. Steger, C. Unbiased Extraction of Curvilinear Structures from 2D and 3D Images; Utz, Wiss.: Munich, Germany, 1998.
  20. Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480.
Figure 1. Using a double-sphere target to calibrate a binocular stereo vision system.
Figure 2. Relative errors vs. the noise level of the image points when N = 2.
Figure 3. Relative errors vs. the noise level of the image points when N = 3 and 4.
Figure 4. Relative errors vs. the sphere radius of the double-sphere target.
Figure 5. Structure of the BSVS.
Figure 6. Checkerboard target and double-sphere target.
Figure 7. Extraction and ellipse fitting of images: (a,b) are the original images; (c,d) are the processed images.
Figure 8. Relative errors and absolute errors of the RMS errors for different numbers of placements.
Figure 9. 3D reconstruction of the spatial positions of the target by our method.
Figure 10. Results of measurements by the two methods.
Figure 11. Results of measuring the checkerboard target by the two methods.
Figure 12. Reconstructed 3D points by our method.
Figure 13. Images of the plane target and the double-sphere target: (a,c) are the left images; (b,d) are the right images.
Table 1. Intrinsic parameters of the simulation cameras.

| Camera | Principal Distance (pixels) | Principal Point (pixels) | Skew | Resolution (pixels) |
|---|---|---|---|---|
| Left Camera | (5100.0, 5100.0) | (800.0, 600.0) | 0 | 1600 × 1200 |
| Right Camera | (5100.0, 5100.0) | (800.0, 600.0) | 0 | 1600 × 1200 |
Table 2. Intrinsic parameters of the cameras.

| Camera | αx | αy | γ | u0 | v0 | k1 | k2 |
|---|---|---|---|---|---|---|---|
| Left Camera | 5086.806 | 5086.827 | 0 | 787.205 | 595.726 | −0.243 | 1.662 |
| Right Camera | 5087.828 | 5087.638 | 0 | 831.764 | 562.411 | −0.240 | 0.330 |
Table 3. Comparison of measured values (mm).

| Placements | 8 | 10 | 12 | 14 | 16 | 18 | 22 | 28 |
|---|---|---|---|---|---|---|---|---|
| Image 1 | 149.916 | 149.923 | 149.906 | 149.953 | 149.898 | 149.950 | 149.879 | 149.928 |
| Image 2 | 149.769 | 149.811 | 149.800 | 149.817 | 149.776 | 149.799 | 149.727 | 149.785 |
| Image 3 | 149.930 | 149.877 | 149.852 | 149.885 | 149.842 | 149.915 | 149.904 | 149.900 |
| Image 4 | 149.843 | 149.895 | 149.934 | 149.972 | 149.939 | 149.847 | 149.870 | 149.883 |
| Image 5 | 149.887 | 149.926 | 149.959 | 150.000 | 149.970 | 149.897 | 149.908 | 149.920 |
| Image 6 | 149.911 | 149.913 | 149.939 | 149.971 | 149.971 | 149.922 | 149.932 | 149.915 |
| Image 7 | 149.925 | 149.999 | 150.010 | 149.957 | 149.931 | 149.856 | 149.914 | 149.926 |
| Image 8 | 149.854 | 149.904 | 149.908 | 149.854 | 149.825 | 149.769 | 149.844 | 149.840 |
| Image 9 | 150.068 | 150.087 | 150.082 | 150.028 | 149.995 | 149.961 | 150.058 | 150.036 |
| Image 10 | 149.934 | 149.909 | 149.888 | 149.841 | 149.800 | 149.809 | 149.920 | 149.878 |
| Image 11 | 149.896 | 149.969 | 149.980 | 150.027 | 149.973 | 149.953 | 149.872 | 149.948 |
| Image 12 | 149.987 | 150.049 | 150.047 | 150.084 | 150.009 | 150.009 | 149.956 | 150.028 |
| Image 13 | 149.998 | 150.045 | 150.036 | 150.074 | 149.992 | 150.009 | 149.963 | 150.031 |
| Image 14 | 149.940 | 149.896 | 149.879 | 149.928 | 149.864 | 149.918 | 149.926 | 149.925 |
| Image 15 | 149.914 | 149.866 | 149.850 | 149.892 | 149.846 | 149.901 | 149.899 | 149.893 |
| RMS (mm) | 0.073 | 0.075 | 0.079 | 0.080 | 0.084 | 0.084 | 0.079 | 0.071 |
Table 4. Comparison of the structural parameters.

| Method | rx | ry | rz | tx | ty | tz |
|---|---|---|---|---|---|---|
| Our Method | −0.0046 | 0.5993 | 0.0302 | −473.95 | −5.622 | 121.70 |
| Toolbox Method | −0.0071 | 0.6014 | 0.0290 | −474.97 | −7.181 | 122.36 |
Table 5. Comparison of measurement accuracy of the double-sphere target (mm).

| Method | RMS | Relative Error (%) | Standard Error |
|---|---|---|---|
| Our method | 0.084 | 0.056 | 0.073 |
| Toolbox method | 0.111 | 0.074 | 0.091 |
Table 6. Comparison of the measurement accuracy of the checkerboard target (mm).

| Method | RMS | Relative Error (%) | Standard Error |
|---|---|---|---|
| Our method | 0.008 | 0.084 | 0.0055 |
| Toolbox method | 0.005 | 0.053 | 0.0053 |
