Article

Calibration of Binocular Vision Sensors Based on Unknown-Sized Elliptical Stripe Images

1 Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
2 School of Naval Architecture & Ocean Engineering, Huazhong University of Science & Technology, 1037 Luoyu Road, Wuhan 430074, China
3 Hubei Key Laboratory of Naval Architecture & Ocean Engineering Hydrodynamics (HUST), Huazhong University of Science & Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(12), 2873; https://doi.org/10.3390/s17122873
Submission received: 12 October 2017 / Revised: 29 November 2017 / Accepted: 6 December 2017 / Published: 13 December 2017
(This article belongs to the Section Physical Sensors)

Abstract: Most of the existing calibration methods for binocular stereo vision sensors (BSVS) depend on a high-accuracy target with feature points that are difficult and costly to manufacture. Under complex light conditions, optical filters are used for BSVS, but they affect imaging quality. Hence, using a high-accuracy target with certain-sized feature points for calibration is not feasible under such conditions. To solve these problems, a calibration method based on unknown-sized elliptical stripe images is proposed. With known intrinsic parameters, the proposed method adopts elliptical stripes located on parallel planes as a medium to calibrate the BSVS online. In comparison with common calibration methods, the proposed method avoids utilizing a high-accuracy target with certain-sized feature points. Therefore, the proposed method is not only easy to implement but also practical for calibrating a BSVS equipped with an optical filter. Changing the size of the elliptical stripes projected on the target makes the method applicable to different fields of view and distances. Simulation and physical experiments are conducted to validate the efficiency of the proposed method. When the field of view is approximately 400 mm × 300 mm, the proposed method reaches a calibration accuracy of 0.03 mm, which is comparable with that of Zhang's method.

1. Introduction

Calibration of stereo vision sensors is an essential step of vision measurement [1,2,3]. Vision sensors with high calibration accuracy usually guarantee high measurement accuracy. Vision measurement is mainly conducted to complete the 3D reconstruction of the measured objects. According to the measuring principle, vision measurement systems can be divided into three major categories: (1) line-structured light measurement systems; (2) binocular stereo vision measurement systems; (3) multi-camera stereo vision measurement systems. When adopting the line-structured light method, the extraction accuracy of the light stripe center affects the measurement accuracy [4,5]. Light scattering occurs when the projection angle between the light plane and the object is relatively large; as a result, calibration and measurement accuracy decline. The multi-camera stereo vision measurement system can implement online vision measurement with a large field of view and multiple viewpoints, and it is equivalent to multiple pairs of binocular stereo vision sensors (BSVSs) [6]. Therefore, research on the calibration of BSVS is of great significance.
To date, research on the calibration of BSVS has mainly focused on different forms of high-accuracy targets, including 1D [7,8], 2D [9], and 3D targets [10,11]. Zhao et al. [12] proposed a method based on a 1D target with two feature points of known distance. Compared with Zhang's method [13], which is also based on a 1D target, Zhao's method not only improves the calibration accuracy of the intrinsic parameters but also implements the extrinsic parameter calibration of BSVS. Zhang's method [14], which uses a planar checkerboard target, has made a remarkable impact on the study of camera calibration. Other methods using rectification error optimization [15] and perpendicularity compensation [16] have been proposed to improve calibration accuracy.
To achieve high-accuracy calibration under complex circumstances, different forms of targets have been utilized in the calibration of BSVS. A calibration method based on a spot laser and a parallel planar target was proposed to improve calibration under complex light conditions [17]. This method does not rely on feature points with known distance or size; however, only one spot is projected on the target in each shot, resulting in low efficiency in online measurement. Given that random noise is inevitable, this method also cannot guarantee high accuracy due to the location uncertainty of feature points in a picture. Wu et al. [19] proposed a global calibration method based on vanishing features of a specially designed target constructed of two mutually orthogonal groups of parallel lines with known lengths. Zhang et al. [18] proposed a novel method based on spherical target images of certain size, which implements synchronous calibration of a multi-camera system. At present, a spherical target of extremely high quality is hard to manufacture. Considering noise, non-ideal light conditions, and other factors, using a spherical target for calibration does not guarantee high accuracy [20,21].
The abovementioned methods share a common requirement: an accurately known distance between feature points or an accurately known target size, and the accuracy of these requisite sizes greatly affects the calibration accuracy of BSVS. To solve this problem, this study introduces a novel calibration method that does not rely on specific feature points and works efficiently under complex conditions. The proposed method adopts a ring laser to project an elliptical stripe onto a parallel planar target. During calibration, Zhang's method is first utilized to obtain the intrinsic parameters of the two cameras. The elliptical stripes are then used as the medium to solve the extrinsic parameters. Finally, the optimal calibration results are obtained via non-linear optimization.
The remainder of this paper is organized as follows. Section 2 mainly describes the mathematical model of BSVS, the algorithm principles, realization procedure, and other details of the proposed method. Section 3 discusses other expansive forms of the proposed method, as well as its relevant performance under complex lighting conditions. Section 4 presents the simulation and real data experiments conducted to validate the effectiveness of the proposed method. Section 5 states the conclusions of our work.

2. Principle and Methods

2.1. Mathematical Model of BSVS

As shown in Figure 1, the coordinate systems of the left and right cameras are $O_{c1}x_{c1}y_{c1}z_{c1}$ and $O_{c2}x_{c2}y_{c2}z_{c2}$, respectively. $\tilde{p}_L = [u_L \; v_L \; 1]^T$ and $\tilde{p}_R = [u_R \; v_R \; 1]^T$ are the homogeneous coordinates of the undistorted images of point $P$ in the image coordinate systems of the left and right cameras, respectively. The transformation matrix from the left camera coordinate system to the right camera coordinate system is $T_{LR} = \begin{bmatrix} R_{LR} & t_{LR} \\ 0 & 1 \end{bmatrix}$, where $R_{LR}$ and $t_{LR}$ are the rotation matrix and translation vector, respectively. $r_{LR}$ is the Rodrigues representation of the rotation matrix $R_{LR}$.
A laser spot $P$ is observed by the BSVS. The binocular stereo vision model is used to calculate the homogeneous 3D coordinates $q_L = [x_L \; y_L \; z_L \; 1]^T$ of point $P$ in $O_{c1}x_{c1}y_{c1}z_{c1}$:

$$\begin{cases} \rho_L \tilde{p}_L = K_L [\, I_{3\times3} \;\; 0_{3\times1} \,]\, q_L \\ \rho_R \tilde{p}_R = K_R [\, R_{LR} \;\; t_{LR} \,]\, q_L \end{cases} \quad (1)$$

where $K_L$ and $K_R$ are the intrinsic parameter matrices of the left and right cameras, respectively, with $K = \begin{bmatrix} a_x & \gamma & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$, where $u_0$ and $v_0$ are the coordinates of the principal point, $a_x$ and $a_y$ are the scale factors along the image $u$ and $v$ axes, and $\gamma$ is the skew of the two image axes.
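To make the model concrete, the following sketch triangulates a point from the two projections in Equation (1) by linear least squares (the standard DLT formulation). It is a minimal illustration in Python/NumPy under the paper's pinhole model; the function name and the DLT construction are our own, not taken from the paper.

```python
import numpy as np

def triangulate(p_l, p_r, K_L, K_R, R_LR, t_LR):
    """Linearly triangulate one point from the model in Equation (1).

    p_l, p_r: undistorted pixel coordinates (u, v) in the left/right images.
    Returns the 3D point in the left camera frame O_c1 x_c1 y_c1 z_c1.
    """
    # Projection matrices of Equation (1); the left camera is the reference.
    P_L = K_L @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_R = K_R @ np.hstack([R_LR, t_LR.reshape(3, 1)])
    # Each view contributes two rows of the homogeneous system A q = 0.
    A = np.vstack([
        p_l[0] * P_L[2] - P_L[0],
        p_l[1] * P_L[2] - P_L[1],
        p_r[0] * P_R[2] - P_R[0],
        p_r[1] * P_R[2] - P_R[1],
    ])
    # The smallest right singular vector is the homogeneous solution.
    q = np.linalg.svd(A)[2][-1]
    return q[:3] / q[3]
```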

2.2. Algorithm Principle

The calibration process of the proposed method is shown in Figure 2. In our case, a single ring laser projector and a double parallel planar target are utilized to generate the elliptical stripes, as illustrated in Figure 2. The distance between the two parallel planes is known.
As shown in Figure 2, $Q_j\,(j = 1, 2)$ are the two elliptical stripes projected on the two parallel planes. $Q_j = \begin{bmatrix} 1/\beta_j^2 & 0 & 0 \\ 0 & 1/\alpha_j^2 & 0 \\ 0 & 0 & -1 \end{bmatrix}$ is the matrix expression of the elliptical stripe in space, where $2\alpha_j$ is the major axis and $2\beta_j$ is the minor axis of the $j$-th ellipse. We let $O_j x_j y_j z_j$ denote the coordinate frame of the $j$-th ellipse in space: its $y$-axis is the major axis of $Q_j$, its $x$-axis is the minor axis of $Q_j$, and its origin is the center of $Q_j$. The projections of $Q_j$ in the left and right cameras are denoted as $e_{Lj}$ and $e_{Rj}$, respectively. $R_{Lj}$ and $t_{Lj}$ are the rotation matrix and translation vector from $O_j x_j y_j z_j$ to $O_{c1}x_{c1}y_{c1}z_{c1}$; $R_{Rj}$ and $t_{Rj}$ are those from $O_j x_j y_j z_j$ to $O_{c2}x_{c2}y_{c2}z_{c2}$; and $R_{LR}$ and $t_{LR}$ are those from $O_{c1}x_{c1}y_{c1}z_{c1}$ to $O_{c2}x_{c2}y_{c2}z_{c2}$. All of the coordinate frames generated by the intersection of the parallel planes and the conical surface projected by the single ring laser projector are parallel to each other, that is, $R_{L1} = R_{L2}$ and $R_{R1} = R_{R2}$. Notably, the two elliptical stripes captured in each case have the following properties:
  • The ratios of the minor axis to the major axis, $k = \beta_j/\alpha_j$, are equal.
  • The minor and major axes of one elliptical stripe are parallel to those of the other elliptical stripe; accordingly, the orientation angles of the two elliptical stripes are equal.

2.2.1. Solving $R_{LR}$

In Equation (2), $e_{Lj}$ and $Q_j$ are $3 \times 3$ matrices. According to the foundations of multiple-view geometry [22], the relationship between $e_{Lj}$ and $Q_j$ is as follows:

$$\begin{cases} p^T e_{Lj}\, p = 0 \\ q^T Q_j\, q = 0 \end{cases} \quad (2)$$

where $p = [u \; v \; 1]^T$ is the undistorted homogeneous image coordinate of a point on the $j$-th elliptical stripe under $O_{c1}x_{c1}y_{c1}z_{c1}$, and $q = [x \; y \; 1]^T$ is the homogeneous coordinate of a point on the $j$-th elliptical stripe under $O_j x_j y_j$.
Combining Equation (2) and the camera model, we have:

$$\rho_j Q_j = \left( K_L [\, r_1 \;\; r_2 \;\; t_{Lj} \,] \right)^T e_{Lj}\, K_L [\, r_1 \;\; r_2 \;\; t_{Lj} \,] \quad (3)$$

where $\rho_j$ is a non-zero scale factor, and $r_i$ denotes the $i$-th column of the rotation matrix $R_{Lj}$ (recall that $R_{L1} = R_{L2}$). $K_L$ is the intrinsic parameter matrix of the left camera and is obtained using Zhang's method.
According to Equation (3), the relationship between $e_{Lj}$ and $Q_j$ is obtained as Equation (4):

$$\rho_j Q_j = \rho_j \begin{pmatrix} 1/\beta_j^2 & 0 & 0 \\ 0 & 1/\alpha_j^2 & 0 \\ 0 & 0 & -1 \end{pmatrix} = \begin{pmatrix} r_1^T W_j r_1 & r_1^T W_j r_2 & r_1^T W_j t_{Lj} \\ r_2^T W_j r_1 & r_2^T W_j r_2 & r_2^T W_j t_{Lj} \\ t_{Lj}^T W_j r_1 & t_{Lj}^T W_j r_2 & t_{Lj}^T W_j t_{Lj} \end{pmatrix} \quad (4)$$

where $W_j = K_L^T e_{Lj} K_L$.
For the two elliptical stripes located on the target, we have two equations of the form of Equation (4). According to the properties of the matrices in Equation (4), the equations for the two elliptical stripes can be decomposed into the following 12 equations:

$$\begin{aligned} & r_1^T W_1 r_1 = \rho_1/\beta_1^2; \quad r_1^T W_2 r_1 = \rho_2/\beta_2^2; \quad r_2^T W_1 r_2 = \rho_1/(k\beta_1^2); \quad r_2^T W_2 r_2 = \rho_2/(k\beta_2^2); \\ & r_1^T W_1 r_2 = 0; \quad r_1^T W_2 r_2 = 0; \quad r_1^T W_1 t_{L1} = 0; \quad r_1^T W_2 t_{L2} = 0; \\ & r_2^T W_1 t_{L1} = 0; \quad r_2^T W_2 t_{L2} = 0; \quad t_{L1}^T W_1 t_{L1} = -\rho_1; \quad t_{L2}^T W_2 t_{L2} = -\rho_2 \end{aligned} \quad (5)$$
Establishing simultaneous equations with the first six equations in Equation (5) and utilizing the orthonormality of $r_1$ and $r_2$, we have:

$$\begin{aligned} & r_1^T W_1 r_1 = k\, r_2^T W_1 r_2; \quad r_1^T W_2 r_1 = k\, r_2^T W_2 r_2; \quad r_1^T W_1 r_2 = 0; \quad r_1^T W_2 r_2 = 0; \\ & r_1^T r_1 = 1; \quad r_2^T r_2 = 1; \quad r_1^T r_2 = 0 \end{aligned} \quad (6)$$
Non-linear optimization is adopted to solve Equation (6), after which $r_1$ and $r_2$ are obtained directly. According to $R_{L1} = R_{L2} = [\, r_1 \;\; r_2 \;\; r_1 \times r_2 \,]$, we obtain $R_{L1}$ and $R_{L2}$. Similarly, the solution of $R_{R1} = R_{R2}$ can be determined.
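As a concrete illustration of this step, the sketch below solves Equation (6) with SciPy's least-squares solver. It is an assumed implementation: the paper does not specify the solver, and we treat the axis ratio $k$ as an extra unknown, since its estimation is left implicit in the text.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_r1_r2(W1, W2, k0=1.0):
    """Solve Equation (6) for the first two columns of R_L1 = R_L2.

    W1, W2: the matrices W_j = K_L^T e_Lj K_L of the two image ellipses.
    The axis ratio k is treated here as an extra unknown, initialised at k0.
    """
    def residuals(x):
        r1, r2, k = x[:3], x[3:6], x[6]
        return np.array([
            r1 @ W1 @ r1 - k * (r2 @ W1 @ r2),  # first pair of Eq. (6)
            r1 @ W2 @ r1 - k * (r2 @ W2 @ r2),
            r1 @ W1 @ r2,                       # conjugacy constraints
            r1 @ W2 @ r2,
            r1 @ r1 - 1.0,                      # orthonormality of r1, r2
            r2 @ r2 - 1.0,
            r1 @ r2,
        ])

    x0 = np.concatenate([np.eye(3)[0], np.eye(3)[1], [k0]])
    x = least_squares(residuals, x0, method="lm").x
    r1, r2 = x[:3], x[3:6]
    return np.column_stack([r1, r2, np.cross(r1, r2)])  # [r1 r2 r1 x r2]
```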
Taking the target as a medium, the transformation matrix is obtained as follows:

$$\begin{bmatrix} R_{LR} & t_{LR} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{R1} & t_{R1} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_{L1} & t_{L1} \\ 0 & 1 \end{bmatrix}^{-1} \quad (7)$$

According to Equation (7), the final expression of $R_{LR}$ is given in Equation (8):

$$R_{LR} = R_{R1} R_{L1}^{-1} \quad (8)$$
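In code, this composition is a single line; since rotation matrices are orthonormal, the inverse in Equation (8) is simply a transpose (a sketch, with $R_{R1}$ and $R_{L1}$ taken from the previous step):

```python
# Equation (8): the inverse of a rotation matrix is its transpose.
R_LR = R_R1 @ R_L1.T
```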

2.2.2. Solving $t_{LR}$

Establishing simultaneous equations with the last four zero-valued equations in Equation (5) yields the following expression:

$$\begin{cases} r_1^T W_1 t_{L1} = 0 \\ r_2^T W_1 t_{L1} = 0 \\ r_1^T W_2 t_{L2} = 0 \\ r_2^T W_2 t_{L2} = 0 \end{cases} \quad (9)$$

Given that Equation (9) has the typical form $AX = 0$, a unique non-zero solution for $t_{L1}$ and $t_{L2}$ cannot be obtained by solving Equation (9) directly. Upon analyzing Equation (9), $t_{L1}$ and $t_{L2}$ correspond to the centers of the elliptical stripes, that is, the coordinates of the origins of $O_1 x_1 y_1 z_1$ and $O_2 x_2 y_2 z_2$, respectively. Supposing that $\tilde{t}_{L1}$ and $\tilde{t}_{L2}$ are the unit vectors from the origin of $O_{c1}x_{c1}y_{c1}z_{c1}$ to the origins of $O_1 x_1 y_1 z_1$ and $O_2 x_2 y_2 z_2$, respectively, we have:

$$\begin{cases} \tilde{t}_{L1} = \left( W_1^T r_1 \times W_1^T r_2 \right) / \left\| W_1^T r_1 \times W_1^T r_2 \right\| \\ \tilde{t}_{L2} = \left( W_2^T r_1 \times W_2^T r_2 \right) / \left\| W_2^T r_1 \times W_2^T r_2 \right\| \end{cases} \quad (10)$$
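A minimal sketch of Equation (10), assuming NumPy and the $W_j$, $r_1$, $r_2$ computed above (the function name is ours). It relies on the identity $(W^T r_1) \times (W^T r_2) \propto W^{-1}(r_1 \times r_2)$, which gives the direction $t$ satisfying $r_1^T W t = r_2^T W t = 0$ in Equation (9):

```python
import numpy as np

def centre_direction(W, r1, r2):
    """Unit vector toward an ellipse centre, per Equation (10)."""
    d = np.cross(W.T @ r1, W.T @ r2)
    return d / np.linalg.norm(d)
```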
Similarly, the rotation matrices $R_{R1} = R_{R2}$ and the unit vectors $\tilde{t}_{R1}$, $\tilde{t}_{R2}$ can be solved according to the abovementioned method.

Let $\tilde{t}_{LR}$ denote the unit vector from the origin of $O_{c1}x_{c1}y_{c1}z_{c1}$ to the origin of $O_{c2}x_{c2}y_{c2}z_{c2}$. As shown in Figure 3, $\tilde{t}_{L1}$, $\tilde{t}_{R1}$ and $\tilde{t}_{LR}$ lie on a plane.
According to the coplanarity constraint, we have:

$$\left( \tilde{t}_{R1} \times \hat{t}_{L1} \right)^T \tilde{t}_{LR} = 0 \quad (11)$$

where $\hat{t}_{L1} = R_{LR} \tilde{t}_{L1}$ is $\tilde{t}_{L1}$ expressed in the right camera frame. Letting $v = \left( \tilde{t}_{R1} \times \hat{t}_{L1} \right)^T$, the coplanarity constraint can be rewritten as a homogeneous equation in $\tilde{t}_{LR}$:

$$v\, \tilde{t}_{LR} = 0 \quad (12)$$
If $n$ sets of images of the target are observed, stacking $n$ equations of the form of Equation (12) gives:

$$V \tilde{t}_{LR} = 0 \quad (13)$$

where $V$ is an $n \times 3$ matrix. If $n \geq 3$, a unique solution for $\tilde{t}_{LR}$ can be obtained up to a scale factor; normalizing this solution yields the unit vector $\tilde{t}_{LR}$. Given that $t_{LR} = k_{LR} \tilde{t}_{LR}$, Equation (1) can be rewritten as follows:

$$\begin{cases} \tilde{x}_L = \tilde{z}_L u_L / f_L \\ \tilde{y}_L = \tilde{z}_L v_L / f_L \\ \tilde{z}_L = \dfrac{f_L \left( f_R \tilde{t}_x - u_R \tilde{t}_z \right)}{u_R \left( r_7 u_L + r_8 v_L + f_L r_9 \right) - f_R \left( r_1 u_L + r_2 v_L + f_L r_3 \right)} \end{cases} \quad (14)$$

where $\tilde{t}_{LR} = [\tilde{t}_x, \tilde{t}_y, \tilde{t}_z]^T$, $f_L$ and $f_R$ are the focal lengths of the left and right cameras, and $r_1, \dots, r_9$ are the elements of $R_{LR}$ in row-major order.
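Stepping back to Equation (13), the stacked homogeneous system is conveniently solved with an SVD; below is a sketch (our own formulation, with the per-position unit centre directions of Section 2.2.2 as inputs):

```python
import numpy as np

def solve_t_direction(t_R_list, t_L_list, R_LR):
    """Direction of t_LR from the coplanarity constraints, Equation (13).

    t_R_list / t_L_list: unit centre directions per target position, in the
    right / left camera frames respectively.
    """
    # One row of V per position: (t_R x R_LR t_L)^T, as in Equation (12).
    V = np.array([np.cross(t_R, R_LR @ t_L)
                  for t_R, t_L in zip(t_R_list, t_L_list)])
    # The null-space direction is the smallest right singular vector of V.
    t = np.linalg.svd(V)[2][-1]
    return t / np.linalg.norm(t)
```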
According to Equation (14), we obtain the coordinates of a feature point in 3D reconstruction up to a scale factor $k$, that is, $[x_L, y_L, z_L] = k [\tilde{x}_L, \tilde{y}_L, \tilde{z}_L]$, where $[x_L, y_L, z_L]$ are the actual coordinates of the feature point and $[\tilde{x}_L, \tilde{y}_L, \tilde{z}_L]$ are its normalized coordinates up to the scale factor $k$. To solve $k$, we reconstruct the 3D coordinates of all the feature points lying on the ellipses in $O_{c1}x_{c1}y_{c1}z_{c1}$. Using the plane fitting method, the coefficients of the two plane equations of the target can be determined as follows:

$$\begin{cases} a_1 \tilde{x}_{L1} + b_1 \tilde{y}_{L1} + c_1 \tilde{z}_{L1} + \tilde{d}_1 = 0 \\ a_2 \tilde{x}_{L2} + b_2 \tilde{y}_{L2} + c_2 \tilde{z}_{L2} + \tilde{d}_2 = 0 \end{cases} \quad (15)$$

where $[a_1, b_1, c_1, \tilde{d}_1]$ and $[a_2, b_2, c_2, \tilde{d}_2]$ denote the coefficients of the two plane equations of the target when the scale factor $k$ is unknown.
Similarly, the plane equations at actual scale can be determined by fitting the reconstructed 3D coordinates of all the feature points as follows:

$$\begin{cases} a_1 x_{L1} + b_1 y_{L1} + c_1 z_{L1} + d_1 = 0 \\ a_2 x_{L2} + b_2 y_{L2} + c_2 z_{L2} + d_2 = 0 \end{cases} \quad (16)$$

where $d_1 = k \tilde{d}_1$ and $d_2 = k \tilde{d}_2$.
Given that the two planes of the target are parallel to each other, the actual distance $D$ between the two planes equals the absolute value of the difference between the distances from the origin of the left camera to the two planes. According to Equation (15), these distances can be solved up to the scale factor $k$. Thus, we have the normalized distance $\tilde{D}$ as follows:

$$\tilde{D} = \left| \frac{|\tilde{d}_1|}{\sqrt{a_1^2 + b_1^2 + c_1^2}} - \frac{|\tilde{d}_2|}{\sqrt{a_2^2 + b_2^2 + c_2^2}} \right| \quad (17)$$

Considering that the actual distance $D$ between the two planes is known, the scale factor $k$ is inferred as:

$$k = D / \tilde{D} \quad (18)$$

The final scale factor $k$ is taken as the average of the scale factors estimated over all target positions. Thus, $t_{LR}$ is obtained as follows:

$$t_{LR} = k\, \tilde{t}_{LR} \quad (19)$$
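The scale recovery of Equations (15)-(18) can be sketched as follows. The least-squares plane fit shown here is a standard SVD fit and stands in for whatever plane fitting method is used in practice; with a unit normal, $\sqrt{a^2+b^2+c^2} = 1$, so Equation (17) reduces to a difference of offsets.

```python
import numpy as np

def scale_from_planes(pts1, pts2, D):
    """Recover the scale factor k of Equations (17)-(18).

    pts1, pts2: normalised (up-to-scale) 3D reconstructions of the stripe
    points on the two parallel target planes; D: the known plane distance.
    """
    def fit_plane(pts):
        # Least-squares plane through pts: unit normal n and offset d
        # such that n . p + d = 0 for points p on the plane.
        c = pts.mean(axis=0)
        n = np.linalg.svd(pts - c)[2][-1]
        return n, -n @ c

    (n1, d1), (n2, d2) = fit_plane(pts1), fit_plane(pts2)
    # |d| is the camera-origin-to-plane distance for a unit normal;
    # Equation (17) is the absolute difference of the two distances.
    D_tilde = abs(abs(d1) - abs(d2))
    return D / D_tilde  # Equation (18)
```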

2.2.3. Non-Linear Optimization

Calibration errors exist due to random noise and other disturbances. Hence, non-linear optimization is utilized to obtain the optimal calibration results. We randomly sample several feature points from one stripe; the matching point of each sample is taken as the intersection of the corresponding epipolar line with the stripe in the other image.
To improve the calibration accuracy, the target is placed at different positions. For the $i$-th position, let $O_{1i}x_{1i}y_{1i}z_{1i}$ and $O_{2i}x_{2i}y_{2i}z_{2i}$ be the target coordinate frames of the two parallel planes. For the feature points located on the two target planes, we reconstruct their 3D coordinates under the corresponding target coordinate frame. The ellipse fitting method is then adopted to obtain $Q_{1i}$ and $Q_{2i}$, from which we can solve the major axes $\alpha_{1i}$ and $\alpha_{2i}$, the minor axes $\beta_{1i}$ and $\beta_{2i}$, and the orientation angles $\theta_{1i}$ and $\theta_{2i}$. According to the properties of the elliptical stripes, the first objective function is established as follows:

$$e_1(a) = \min \sum_{i=1}^{n} \left( \left| \frac{\alpha_{1i}}{\beta_{1i}} - \frac{\alpha_{2i}}{\beta_{2i}} \right| + \left| \theta_{1i} - \theta_{2i} \right| \right) \quad (20)$$

where $a = (R_{LR}, t_{LR}, R_{Li1}, t_{Li1}, t_{Li2})$; $R_{Li1}$ and $t_{Li1}$ are the rotation matrix and translation vector, respectively, from the left camera coordinate system to $O_{1i}x_{1i}y_{1i}z_{1i}$ at each position; $t_{Li2}$ is the translation vector from the left camera coordinate system to $O_{2i}x_{2i}y_{2i}z_{2i}$; and $n$ denotes the number of positions.
At each position, we reconstruct the 3D coordinates of the feature points under the coordinate system of the BSVS. The plane fitting method is then utilized to obtain the equations of the left plane $\Pi_{Li}$ and the right plane $\Pi_{Ri}$. Therefore, we obtain the second objective function based on the measured and actual plane distances:

$$e_2(a) = \min \sum_{i=1}^{n} \left| \mathrm{Dist}(\Pi_{Li}, \Pi_{Ri}) - D \right| \quad (21)$$

where $\mathrm{Dist}(\Pi_1, \Pi_2)$ is the distance between two planes under the coordinate system of the BSVS, and $D$ is the actual distance between the two parallel target planes.
According to the coplanarity constraint introduced in Section 2.2.2, we have the third objective function:

$$e_3(a) = \min \sum_{i=1}^{n} \left( \left| \left( \tilde{t}_{Ri1} \times \hat{t}_{Li1} \right)^T t_{LR} \right| + \left| \left( \tilde{t}_{Ri2} \times \hat{t}_{Li2} \right)^T t_{LR} \right| \right) \quad (22)$$

where $\tilde{t}_{Ri1}$, $\tilde{t}_{Ri2}$ and $\hat{t}_{Li1} = R_{LR}\tilde{t}_{Li1}$, $\hat{t}_{Li2} = R_{LR}\tilde{t}_{Li2}$ are the unit direction vectors toward the two ellipse centers at the $i$-th position, defined as in Section 2.2.2.
Thereafter, the final objective function is established as follows:

$$e(a) = e_1(a) + e_2(a) + e_3(a) \quad (23)$$

Thus, the optimal solution of $R_{LR}$ and $t_{LR}$ under the maximum likelihood criterion can be solved via non-linear optimization methods (e.g., the Levenberg-Marquardt algorithm [23]).
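A skeleton of this refinement is sketched below using SciPy's Levenberg-Marquardt solver. The parametrization and the `reconstruct_terms` callback are assumptions for illustration: the callback stands in for the reconstruction and fitting steps described above, which must be re-run at each iterate to evaluate the $e_1$ and $e_2$ terms.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(x0, centre_dirs, reconstruct_terms):
    """Sketch of the non-linear refinement of Equation (23).

    x0: initial guess [r_LR (Rodrigues vector), t_LR] as a 6-vector.
    centre_dirs: per-position pairs (t_R, t_L) of unit centre directions.
    reconstruct_terms(R_LR, t_LR): callback returning the e1 and e2
        residuals of Equations (20) and (21) for the current extrinsics.
    """
    def residuals(x):
        R_LR = Rotation.from_rotvec(x[:3]).as_matrix()
        t_LR = x[3:6]
        res = list(reconstruct_terms(R_LR, t_LR))         # e1, e2 terms
        for t_R, t_L in centre_dirs:                      # e3: coplanarity
            res.append(np.cross(t_R, R_LR @ t_L) @ t_LR)
        return np.array(res)

    return least_squares(residuals, x0, method="lm")
```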

3. Discussion

The two geometric properties of the projected elliptical stripes introduced in Section 2.2 comprise the core idea of the proposed method. Notably, various means are available to obtain the elliptical stripes, such as using different forms of lasers or a projector to project elliptical stripes onto the target planes. In each case, equations of the form of Equation (5) are available to solve the rotation matrix and translation vector of the BSVS. The calibration form used in this study is the simplest form of the proposed method. As long as the axes of the projected light cones remain parallel to each other, the elliptical stripes exhibit the two geometric properties, whether or not the divergence angle of the projection device is constant. Figure 4 shows several calibration forms for the proposed method.
The lasers shown in Figure 4 are easy to purchase, and a laser with suitable wavelength and pattern can be chosen according to the actual conditions. A BSVS is usually equipped with an optical filter, so capturing an ordinary target clearly is difficult. The proposed method adopts images of strong laser stripes; thus, it works much better under complex light conditions such as strong light, dim light, and non-uniform light. In comparison with common methods, the proposed method is more suitable for outdoor online calibration.

4. Experiment

4.1. Simulation Experiment

Simulations are performed to validate the efficiency of the proposed method. Image noise, the distance between the two target planes, and the size of the projected elliptical stripe considerably affect calibration accuracy when the BSVS is calibrated using the proposed method. Hence, the simulations are designed around these factors. The simulation conditions are as follows: camera resolution of 1628 pixels × 1236 pixels, focal length of 16 mm, field of view of approximately 400 mm × 300 mm, target placement approximately 600 mm away from the BSVS, $r_{LR} = [0.0084, 0.6822, 0.0416]^T$, and $t_{LR} = [-449.6990, -5.6238, 180.8245]^T$ mm. Calibration accuracy is evaluated using the root mean square errors (RMSEs) of $r_x$, $r_y$, $r_z$, $t_x$, $t_y$ and $t_z$, as well as the deviation between the reconstructed and actual 3D coordinates of the feature points.

4.1.1. Impact of Image Noise on Calibration Accuracy

In this experiment, the distance between the two target planes is 60 mm. The target is placed at 15 different positions in each trial, and a total of 100 independent trials are performed at each noise level. Gaussian noise with zero mean and a standard deviation from 0.1 to 1 pixel, at an interval of 0.1 pixel, is added to the feature points. As shown in Figure 5, the calibration error grows approximately linearly with increasing image noise. In general, the calibration accuracy remains high even at relatively high noise levels.
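A sketch of this noise protocol is given below; the `calibrate` driver in the commented loop is a placeholder for the full pipeline of Section 2, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(points, sigma):
    """Add zero-mean Gaussian pixel noise to an array of feature points."""
    return points + rng.normal(0.0, sigma, size=points.shape)

def rmse(errors):
    """Root mean square error over a set of deviations."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# Protocol: 100 independent trials at each noise level.
# for sigma in np.arange(0.1, 1.01, 0.1):
#     errs = [calibrate(perturb(pts, sigma)) for _ in range(100)]
#     print(sigma, rmse(errs))
```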

4.1.2. Impact of Distance between Two Target Planes on Calibration Accuracy

In this experiment, the target is placed at 15 different positions in each trial, and a total of 100 independent trials are performed at each distance level. Gaussian noise with zero mean and a standard deviation of 0.5 pixel is added to the feature points. The distance between the two target planes ranges from 10 mm to 100 mm at an interval of 10 mm. As shown in Figure 6a,b, the RMSEs of $r_x$, $t_x$, $t_y$ and $t_z$ decrease as the distance level increases, whereas the RMSEs of $r_y$ and $r_z$ increase. As shown in Figure 6c, the calibration accuracy improves remarkably as the distance level rises in the range of 10-40 mm but gradually degrades as the distance level increases in the range of 40-100 mm. Hence, increasing the plane distance does not always improve calibration accuracy. High accuracy can be obtained when the ratio of the field of view to the distance between the two target planes is about 10 (400 mm/40 mm).

4.1.3. Impact of Elliptical Stripe Size on Calibration Accuracy

In this experiment, the distance between the two target planes is 60 mm. The target is placed at 15 different positions in each trial, and a total of 100 independent trials are performed at each size level. Gaussian noise with zero mean and a standard deviation of 0.5 pixel is added to the feature points. The ratio of the major axis to the minor axis of the elliptical stripe in space is 1.1, and the minor axis length ranges from 100 mm to 280 mm at an interval of 20 mm. As shown in Figure 7a,b, the RMSEs of the extrinsic parameters decrease as the size level increases. However, according to the reconstruction errors shown in Figure 7c, the calibration accuracy improves substantially as the size level rises in the range of 100-160 mm but gradually degrades as the size level increases in the range of 160-280 mm. Hence, for the proposed method, the most accurate extrinsic parameters do not necessarily yield the best reconstruction accuracy. From Figure 7c, the proposed method reaches its optimal calibration accuracy when the ratio of the field of view to the minor axis length is approximately 2.5 (400 mm/160 mm).

4.2. Physical Experiment

Zhang's method is widely used in camera calibration due to its convenience and efficiency; hence, we compare the proposed method with it. In practice, Zhang's method is flexible, and even a printed checkerboard paper is feasible for calibration. The calibration errors of Zhang's method mainly come from two sources, namely, the manufacturing error of the target and the location error of the image feature points [24]. An important requirement of the checkerboard target is that the side length of each grid must be uniform and known; the calibration accuracy decreases drastically when the target accuracy is not high. The normal checkerboard target and the light-emitting planar checkerboard target are the most commonly used targets for Zhang's method, but manufacturing a checkerboard with high accuracy is difficult. In contrast, the double parallel planar target can easily be produced with high quality at low cost, and the laser is easily obtained.

The calibration accuracy of Zhang's method relies heavily on the extraction accuracy of the feature points of the target. Under non-ideal lighting conditions, the calibration images for Zhang's method are of poor quality compared with those of the proposed method. Since the proposed method adopts strong laser stripes, clear and stable calibration images are easy to obtain. The Steger method is used in the proposed method to extract the laser stripes; it is precise and stable under changing illumination and is widely used in complex situations and outdoor measurements. The following experiments are conducted to further prove the validity and stability of the proposed method and to show its superiority under complex circumstances.

4.2.1. Performance of Different Targets in Complex Light Environments

In this section, the advantages and disadvantages of the proposed method and Zhang's method are evaluated under complex lighting conditions, such as dim light and strong light. In the following experiments, a normal planar checkerboard target and a light-emitting planar checkerboard target are used in Zhang's method, and a double parallel planar target is used in the proposed method.
Calibration images obtained in a good light environment with the proposed method and Zhang's method are shown in Figure 8. As shown in Figure 8, all the characteristic points and light stripes on the three targets can be extracted.
Calibration images obtained in a dim light environment with the proposed method and Zhang's method are shown in Figure 9. Generally, better calibration images can be obtained by increasing the exposure time or the aperture. However, even with an increased exposure time or aperture, clear images of the characteristic points on the normal checkerboard target cannot be obtained in dim light. The light-emitting planar checkerboard target and the double parallel planar target remain feasible under dim lighting conditions; consequently, the proposed method has certain advantages in dim light environments. As shown in Figure 9, the characteristic points and light stripes on the light-emitting planar checkerboard target and the double parallel planar target can be extracted.
Calibration images obtained in a strong sunlight environment with the proposed method and Zhang's method are shown in Figure 10. As shown in Figure 10, most characteristic points on the normal checkerboard target are difficult to obtain because of the strong light. Strong light also causes serious reflection on the surface of the light-emitting planar checkerboard target, and as a result, the characteristic points in the reflection area cannot be extracted precisely. The proposed method adopts strong laser stripes for calibration, and these stripes remain clear and stable in strong light environments. Obviously, the proposed method performs better than Zhang's method.
According to the above experiments, the checkerboard targets are not feasible under complex lighting conditions, and Zhang's method performs poorly in strong light environments. On the contrary, the proposed method guarantees high accuracy and stability under complex lighting conditions.

4.2.2. Extrinsic Calibration of BSVS

Two sets of physical experiments are performed, one with the proposed method and one with Zhang's method, so that the two methods can be compared directly.
As shown in Figure 11, two cameras are equipped with the same 16 mm optical lens. The resolution of the camera is 1628 pixels × 1236 pixels, the measurement distance is 600 mm, and the field of view is approximately 400 mm × 300 mm. The resolution of the projector (Dell, M110, Dell Computer Corporation, Round Rock, TX, USA) is 1280 pixels × 800 pixels.
The MATLAB toolbox in [25] is adopted to complete the intrinsic and extrinsic parameter calibrations of the BSVS. A light-emitting planar checkerboard target is used in the physical experiments. The target has 10 × 10 feature points, and the target accuracy is 5 µm. The intrinsic parameters of the two cameras calibrated using Zhang's method are shown in Table 1.
The calibration process consists of the following steps: (1) the intrinsic and extrinsic parameters of the BSVS are calibrated using Zhang's method; (2) the proposed method is then applied using the intrinsic parameters calibrated by Zhang's method. The production accuracy of the double parallel planar target is 0.02 mm, and the distance between the two target planes is 60.27 mm. The target is placed at 15 positions in each trial.
The Steger method [26] is adopted to extract the centers of the light stripes. Thereafter, the corresponding ellipses are obtained by the ellipse fitting method [27]. Figure 12 shows the results of processing the light stripes in the image, and Figure 13 shows the images used in the two methods.
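For the ellipse fitting step, the sketch below uses OpenCV's `cv2.fitEllipse`, which performs a direct least-squares fit in the spirit of [27]. The stripe-centre extraction itself (the Steger detector [26]) is assumed to be done upstream, since OpenCV has no built-in Steger implementation.

```python
import numpy as np
import cv2

def fit_stripe_ellipse(centre_points):
    """Fit an ellipse to extracted stripe-centre points (cf. Figure 12).

    centre_points: Nx2 float array of sub-pixel stripe centres (N >= 5,
    as required by cv2.fitEllipse).
    """
    pts = centre_points.astype(np.float32).reshape(-1, 1, 2)
    # OpenCV returns the centre, the full axis lengths, and the rotation
    # angle in degrees.
    (cx, cy), (ax1, ax2), angle_deg = cv2.fitEllipse(pts)
    # Convert to semi-axes and radians for use in the calibration model.
    return (cx, cy), (ax1 / 2.0, ax2 / 2.0), np.deg2rad(angle_deg)
```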
Table 2 shows the comparison of the extrinsic parameters calibrated via the two methods. In general, the effects of the two extrinsic calibration methods show no significant difference.

4.2.3. Evaluation of the Proposed Method

To further evaluate the proposed method, the light-emitting planar checkerboard target is placed at five positions in front of the BSVS. The feature points are the corner points of the target, namely, the vertices of each grid on the target. Each grid is a small square with a side length of 10 mm. The target accuracy is 1 µm, so the relative uncertainty of the grid side length is ±0.01%; the grid side length is therefore highly accurate. At each position, the 3D coordinates of the feature points on the target are reconstructed based on the two methods. Table 3 shows the reconstruction results of five feature points at one of these positions.
The measured distance $d_m$ between feature points is computed using the 3D reconstruction coordinates. The actual distance between the feature points in the target coordinate frame, denoted as $d_t$, can be calculated from the known grid side length. The deviation between the measured distance $d_m$ and the actual distance $d_t$ is taken as the reconstruction error $\Delta d$. Figure 14a shows the distribution of the data over different reconstruction error levels, and Figure 14b illustrates the box chart of the statistical analysis of the reconstruction errors.
From Figure 14a, most of the reconstruction errors based on Zhang's method are relatively low. In the box chart, the two short horizontal lines above and below each error bar represent the maximum and minimum values of the data, respectively. As shown in Figure 14b, the deviation between the minimum reconstruction error and zero is relatively large for the proposed method. The small rectangle in each error bar denotes the mean of the data; compared with Zhang's method, the mean reconstruction error of the proposed method deviates considerably from zero. The error bar shows the distribution of the data, and its lower and upper boundaries represent the 25% and 75% quantiles, respectively. Along the ordinate, the error bar of the proposed method is relatively longer than that of Zhang's method. For Zhang's method, the reconstruction error is more symmetric about zero, which means that the reconstruction errors are mainly close to zero. The reconstruction RMSEs of the proposed method and Zhang's method are 0.03 mm and 0.02 mm, respectively. In terms of calibration accuracy, the proposed method is comparable with Zhang's method.
Stability is important for evaluating a calibration method. Hence, 10 sets of repetitive experiments are performed to validate the efficiency of the proposed method. In each set, 15 groups of images are randomly selected to calibrate the BSVS for each method. Repeatability analyses of the calibration parameters and calibration accuracy are then conducted. Figure 15 compares the repeatability of the calibration results of the two methods.
In Figure 15, the black asterisks represent the calibration parameters, the purple curves are the fitted normal distribution curves of the calibration parameters, and the thin purple horizontal lines represent the mean calibration parameters. The shape of a normal distribution curve correlates with the standard deviation of the data: the curve is narrow and high when the standard deviation is low, whereas a curve with a relatively high standard deviation is flat and low. As shown in Figure 15b,f, the lengths of the error bars of the proposed method are close to those of Zhang's method, and the fitted normal distribution curves are similar in shape. Hence, the stability of the proposed method is basically the same as that of Zhang's method. It can be observed from Figure 15c-e that the dispersion of the calibration results of the proposed method is higher, although the proposed method performs better in stability in Figure 15a. The accuracy of a calibration method is determined by the entire set of extrinsic parameters, so the efficiency of a calibration method cannot be well evaluated from one parameter alone. To further assess the stability of the proposed method, we calculate the RMS of the reconstruction errors as the calibration accuracy of the two methods; the distribution of the calibration accuracy is then analyzed as shown in Figure 16.
In Figure 16, the error bars represent the distribution of the calibration accuracy of the two methods, and the black asterisks are the individual calibration accuracy data. From the data, the calibration accuracy of Zhang's method is approximately 0.02 mm, and that of the proposed method is close to 0.03 mm; the majority of the calibration accuracy data of the proposed method are below 0.03 mm. Along the ordinate, the error bar of the proposed method is approximately twice as long as that of Zhang's method, so the accuracy data of Zhang's method are relatively concentrated. The thin purple horizontal lines represent the mean calibration accuracy; the mean calibration accuracy of Zhang's method is close to 0.015 mm, approximately half that of the proposed method. In addition, the fitted normal distribution curve of Zhang's method is relatively narrow and high, implying that its calibration accuracy is highly stable. Based on the above analysis, Zhang's method performs slightly better in stability and calibration accuracy, while the stability and calibration accuracy of both methods remain relatively high.
The performance of the proposed method is thus slightly worse than that of Zhang's method. However, several measures can be taken during calibration to further improve accuracy and stability, such as using multi-planar targets, projecting multiple elliptical stripes, and adopting enhanced non-linear optimization methods. Moreover, the proposed method can use feature points that are not captured by the two cameras simultaneously. In general, the proposed method is slightly inferior to Zhang's method but performs fairly well in practice, and it is convenient, flexible, and suitable for dynamic online calibration of BSVS.

5. Conclusions

This paper presents an extrinsic calibration method based on unknown-sized elliptical stripe images. The proposed method avoids using a high-accuracy target with certain-sized feature points. Strong light stripes are the core of the proposed method, making it suitable for calibration under complex circumstances; in addition, the method performs well when calibrating with an optical filter. The proposed method comes in various forms by flexibly combining the target and the elliptical stripes, thereby guaranteeing relatively high calibration accuracy under different conditions. In practice, the planar target can easily be produced with high quality at low cost, and the laser is easily obtained. Several physical experiments validate the efficiency of the proposed method. In conclusion, the proposed method is valuable for the practical extrinsic calibration of BSVS.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (NSFC) (51575033, 51679101) and National Key Scientific Instrument and Equipment Development Projects of China (2012YQ140032).

Author Contributions

Zhen Liu, Suining Wu and Yang Yin conceived the article, conducted the experiments, constructed the graphs and wrote the paper. Jinbo Wu helped establish mathematical model.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, Y.J.; Gao, F.; Ren, H.Y.; Zhang, Z.H.; Jiang, X.Q. An Iterative Distortion Compensation Algorithm for Camera Calibration Based on Phase Target. Sensors 2017, 17, 1188.
  2. Li, Z.X.; Wang, K.Q.; Zuo, W.M.; Meng, D.Y.; Zhang, L. Detail-Preserving and Content-Aware Variational Multi-View Stereo Reconstruction. IEEE Trans. Image Process. 2016, 25, 864–877.
  3. Poulin-Girard, A.S.; Thibault, S.; Laurendeau, D. Influence of camera calibration conditions on the accuracy of 3D reconstruction. Opt. Express 2016, 24, 2678–2686.
  4. Lilienblum, E.; Al-Hamadi, A. A Structured Light Approach for 3-D Surface Reconstruction With a Stereo Line-Scan System. IEEE Trans. Instrum. Meas. 2015, 64, 1258–1266.
  5. Liu, Z.; Li, X.J.; Yin, Y. On-site calibration of line-structured light vision sensor in complex light environments. Opt. Express 2015, 23, 29896–29911.
  6. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In Proceedings of the IEEE International Conference on Computer Vision & Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 519–528.
  7. Wu, F.C.; Hu, Z.Y.; Zhu, H.J. Camera calibration with moving one-dimensional objects. Pattern Recognit. 2005, 38, 755–765.
  8. Qi, F.; Li, Q.H.; Luo, Y.P.; Hu, D.C. Camera calibration with one-dimensional objects moving under gravity. Pattern Recognit. 2007, 40, 343–345.
  9. Douxchamps, D.; Chihara, K. High-accuracy and robust localization of large control markers for geometric camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 376–383.
  10. Heikkila, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077.
  11. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
  12. Zhao, Y.; Li, X.F.; Li, W.M. Binocular vision system calibration based on a one-dimensional target. Appl. Opt. 2012, 51, 3338–3345.
  13. Zhang, Z.Y. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899.
  14. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  15. Bradley, D.; Heidrich, W. Binocular Camera Calibration Using Rectification Error. In Proceedings of the 2010 Canadian Conference on Computer and Robot Vision, Ottawa, ON, Canada, 31 May–2 June 2010.
  16. Jia, Z.Y.; Yang, J.H.; Liu, W.; Liu, F.J.Y.; Wang, Y.; Liu, L.L.; Wang, C.N.; Fan, C.; Zhao, K. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system. Opt. Express 2015, 23, 15205–15223.
  17. Liu, Z.; Yin, Y.; Liu, S.P.; Chen, X. Extrinsic parameter calibration of stereo vision sensors using spot laser projector. Appl. Opt. 2016, 55, 7098–7105.
  18. Zhang, H.; Wong, K.-Y.K.; Zhang, G.Q. Camera calibration from images of spheres. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 499–503.
  19. Wu, X.L.; Wu, S.T.; Xing, Z.H.; Jia, X. A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features. Sensors 2016, 16, 838.
  20. Ying, X.H.; Zha, H.B. Geometric interpretations of the relation between the image of the absolute conic and sphere images. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2031–2036.
  21. Wong, K.-Y.K.; Zhang, G.Q.; Chen, Z.H. A stratified approach for camera calibration using spheres. IEEE Trans. Image Process. 2011, 20, 305–316.
  22. Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: New York, NY, USA, 2003.
  23. Moré, J.J. The Levenberg-Marquardt Algorithm: Implementation and Theory. Lect. Notes Math. 1978, 630, 105–116.
  24. Liu, Z.; Wu, Q.; Chen, X.; Yin, Y. High-accuracy calibration of low-cost camera using image disturbance factor. Opt. Express 2016, 24, 24321–24336.
  25. Bouguet, J.-Y. Camera Calibration Toolbox for Matlab. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html (accessed on 29 July 2017).
  26. Steger, C. An Unbiased Detector of Curvilinear Structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125.
  27. Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480.
Figure 1. Binocular stereo vision model.
Figure 2. Calibration process of the binocular stereo vision sensor.
Figure 3. Process to solve $\tilde{t}_{LR}$.
Figure 4. Combination forms of laser and target. (a) Single ring stripe laser and parallel planar target; (b) Single ring stripe laser and parallel planar target; (c) Concentric double ring stripe laser and parallel planar target; (d) Multiple ring stripe laser and parallel planar target.
Figure 5. RMSEs of the extrinsic parameters based on the proposed method. (a) RMSEs of rx, ry and rz at different noise levels; (b) RMSEs of tx, ty and tz at different noise levels; (c) RMSEs of the 3D coordinates of the feature points at different noise levels.
Figure 6. RMSEs of the extrinsic parameters based on the proposed method. (a) RMSEs of rx, ry and rz at different distance levels; (b) RMSEs of tx, ty and tz at different distance levels; (c) RMSEs of the 3D coordinates of the feature points at different distance levels.
Figure 7. RMSEs of the extrinsic parameters based on the proposed method. (a) RMSEs of rx, ry and rz at different minor axis length levels; (b) RMSEs of tx, ty and tz at different minor axis length levels; (c) RMSEs of the 3D coordinates of the characteristic points at different minor axis length levels.
Figure 8. Calibration images based on two methods in the good light environment. (a) Calibration images of the normal checkerboard target; (b) Calibration images of the light-emitting checkerboard target; (c) Calibration images of the double parallel planar target.
Figure 9. Calibration images based on two methods in the dim light environment. (a) Calibration images of the normal checkerboard target; (b) Calibration images of the light-emitting checkerboard target; (c) Calibration images of the double parallel planar target.
Figure 10. Calibration images based on two methods in the strong sunlight environment. (a) Calibration images of the normal checkerboard target; (b) Calibration images of the light-emitting checkerboard target; (c) Calibration images of the double parallel planar target.
Figure 11. Stereo vision sensor and target.
Figure 12. Result of processing the light stripes in the image. (a) Extraction of the center of the light stripes; (b) Ellipses obtained by ellipse fitting.
Figure 13. Images used in calibration via two methods. (a) Images used in calibration via the proposed method; (b) Images used in calibration via Zhang's method.
Figure 14. Reconstruction errors of the light-emitting planar target via two methods. (a) Number of point pairs at different reconstruction error levels via two methods; (b) Statistical distributions of the reconstruction error of the feature point pairs via two methods.
Figure 15. Repeatability of calibration results via two methods. (a) Repeatability of rx via two methods; (b) Repeatability of ry via two methods; (c) Repeatability of rz via two methods; (d) Repeatability of tx via two methods; (e) Repeatability of ty via two methods; (f) Repeatability of tz via two methods.
Figure 16. Repeatability of calibration accuracy error via two methods.
Table 1. Intrinsic parameter calibration results of the left and right cameras by Zhang's method.

| Camera | fx | fy | u0 (pixel) | v0 (pixel) | γ | k1 (mm−2) | k2 (mm−4) |
|---|---|---|---|---|---|---|---|
| Left camera | 3672.23 | 3672.87 | 833.11 | 631.99 | 8.46 × 10−5 | −0.11 | −0.05 |
| Right camera | 3673.59 | 3672.85 | 821.11 | 632.18 | −1.59 × 10−5 | −0.13 | 0.92 |
Table 2. Comparison of the extrinsic parameters.

| Method | rx | ry | rz | tx (mm) | ty (mm) | tz (mm) |
|---|---|---|---|---|---|---|
| Proposed method | 0.0084 | 0.6822 | 0.0416 | −449.6990 | −5.6238 | 180.8245 |
| Zhang's method | 0.0082 | 0.6845 | 0.0421 | −450.5520 | −5.7329 | 183.8668 |
Table 3. Comparison of the 3D reconstruction results.

| Index | Proposed Method x (mm) | Proposed Method y (mm) | Proposed Method z (mm) | Zhang's Method x (mm) | Zhang's Method y (mm) | Zhang's Method z (mm) |
|---|---|---|---|---|---|---|
| 1 | 100.430 | −40.851 | 578.504 | 100.550 | −40.899 | 579.197 |
| 2 | 100.732 | −30.883 | 577.749 | 100.856 | −30.921 | 578.464 |
| 3 | 101.028 | −20.922 | 577.016 | 101.157 | −20.949 | 577.753 |
| 4 | 91.072 | −30.768 | 575.206 | 91.185 | −30.806 | 575.923 |
| 5 | 91.676 | −10.858 | 573.712 | 91.798 | −10.872 | 574.472 |
