Article

A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features

School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(6), 838; https://doi.org/10.3390/s16060838
Submission received: 7 April 2016 / Revised: 23 May 2016 / Accepted: 1 June 2016 / Published: 8 June 2016
(This article belongs to the Section Physical Sensors)

Abstract

This paper presents a global calibration method for widely distributed vision sensors in ring-topologies. A planar target with two mutually orthogonal groups of parallel lines is required for each camera. Firstly, the relative pose of each camera and its corresponding target is found from the vanishing points and lines. Next, an auxiliary camera is used to find the relative poses between neighboring pairs of calibration targets. Then the relative pose from each target to the reference target is initialized by a chain of transformations, followed by nonlinear optimization based on the constraint of ring-topologies. Lastly, the relative poses between the cameras are found from the relative poses of the calibration targets. Synthetic data, simulation images and real experiments all demonstrate that the proposed method is reliable and accurate. The error accumulated through multiple coordinate transformations can be adjusted effectively by the proposed method. In the real experiment, eight targets are located in an area of about 1200 mm × 1200 mm. The accuracy of the proposed method is about 0.465 mm when the number of coordinate transformations reaches its maximum. The proposed method is simple and can be applied to different camera configurations.

1. Introduction

Vision sensors offer flexibility and high precision. Distributed vision sensors (DVS) are widely used because they provide larger fields of view (FOVs). Calibration is an important step for most camera applications. Calibration of a DVS typically has two stages: intrinsic calibration, which can be done separately for each camera, and global calibration, which estimates the relative poses between the camera frames and the global coordinate frame (GCF). The information extracted from each camera can then be integrated into the GCF. Generally, the coordinate frame of the reference camera is selected as the GCF. However, vision sensors are usually widely distributed to achieve better coverage. Since two adjacent cameras usually have small or no overlapping FOV, the global calibration of a DVS becomes of prime importance.
A DVS can be calibrated with high-precision 3D measurement equipment. Lu et al. [1] constructed a measurement system and achieved calibration of a non-overlapping DVS using two theodolites and a planar target. Calibration methods that do not require expensive equipment have also been investigated. Peng et al. [2] proposed an approach that omits the translation vectors between cameras, owing to the loss of depth information during camera projection [3]. It assumes that all the cameras have approximately the same optical center. However, this assumption is not appropriate when the relative distances between cameras are not small compared with the distances to the captured scene. In addition, feature detection and matching, such as the scale invariant feature transform (SIFT) [4], is not reliable in poorly textured environments due to the lack of distinctive feature points [2].
Global calibration methods for DVS with overlapping FOV cannot be applied in the non-overlapping case. Most global calibration methods for DVS with non-overlapping FOV are based on mirror reflections [5,6,7], rigidity constraints of calibration targets [8,9,10], movements of the platform [11] or an auxiliary camera [12]. For general distributed vision sensors, it is hard to ensure that each camera has a clear sight of the targets through mirror reflections, especially in complex environments. Liu et al. [9] proposed a global calibration method that places multiple targets in front of the vision sensors at least four times. Bosch et al. [10] used a poster to determine the relative poses of multiple cameras in two steps; it requires that different parts of the poster be observed by at least two cameras at the same time so that SIFT features can be utilized. However, it is not flexible to use a long one-dimensional target [8], rigidly connected targets [9] or a large poster [10] for the calibration of widely distributed cameras. Pagel [11] achieved extrinsic calibration of a multi-camera rig with non-overlapping FOV by moving the platform, but it requires that at least two adjacent targets be visible and that two cameras see a target at the same time. Sun et al. [12] used an auxiliary camera to observe all the sphere targets. However, all the targets can hardly be observed by one camera at the same time when the vision sensors are widely distributed.
Structure from motion [13] solves a problem similar to the global calibration of a DVS. The difference is that global calibration transforms local coordinate frames into the GCF, while structure from motion estimates the locations of 3D points from multiple images [14]. Fitzgibbon et al. [15] recover the 3D scene structure together with the 3D camera positions from a closed image sequence; compared with open sequences, a closed image sequence contains additional constraints. Zhang et al. [16] propose an incremental motion estimation algorithm to deal with long image sequences.
Generally, one calibration target is selected as the reference target. Compared with employing the auxiliary camera to capture all the targets in one image, capturing neighboring pairs of targets is more suitable for widely distributed cameras. The relative poses between neighboring pairs of targets can be solved separately. Then the relative pose between each target and the reference target can be obtained by chainwise coordinate transformations. However, the error accumulates as the number of transformations increases. When dealing with a DVS that provides a view of the surrounding scene, as in [2,11], the vision sensors are usually configured in ring-topologies to achieve better coverage of the surroundings. The first sensor adjoins the last one to form a closed chain. Thus a closed image sequence of neighboring targets can be acquired by the auxiliary camera.
Line features are more stable than point features in detection and matching [17]. The principle of perspective projection indicates that an infinite scene line is mapped onto the image plane as a line terminating in a vanishing point. Vanishing points and vanishing lines are distinguishing features of perspective projection [18]. Xu et al. [19] proposed a pose estimation method based on the vanishing lines of a T-shaped target. Wang [20] used a target with three equally spaced parallel lines to estimate the rotation matrix by moving the target to at least three different positions. Wei et al. [21,22] calibrated a line-structured light vision sensor and a binocular vision sensor with a planar target containing several parallel lines. Two mutually orthogonal groups of parallel lines are common in urban environments, such as crossroads and building facades, and can be used as calibration targets. Even if they are absent in the scene, targets with two mutually orthogonal groups of parallel lines can be employed.
In this paper, we focus on the calibration of widely distributed cameras in ring-topologies. A planar target with two mutually orthogonal groups of parallel lines is allocated to each camera. The vanishing line of the target plane is obtained from two vanishing points. Then the relative pose between each camera and its corresponding target is initialized and refined based on vanishing features and the known line length. A closed image sequence of neighboring pairs of calibration targets is acquired by repeated operations of the auxiliary camera. Then the relative poses between two adjacent targets can be obtained and the transformation matrix from each target to the reference target is initialized in a chainwise manner. In order to adjust the accumulated error due to the chain of transformations, a global calibration method is proposed to optimize relative poses of the targets based on the constraint of the ring-type structure. Finally, using the targets as media, the optimal relative poses between each camera and the reference camera are obtained.
The rest of the paper is organized as follows: preliminary work is introduced in Section 2. The proposed global calibration method is described in Section 3. Accuracy analysis of different factors’ effects is given in Section 4. Synthetic data, simulation images and real data experiments are carried out in Section 5. The conclusions are given in Section 6.

2. Preliminaries

2.1. Coordinate Frame Definition

In this paper, the camera coordinate frame is used as the vision sensor coordinate frame. Assuming the DVS consists of M cameras, CkCF (1 ≤ k ≤ M) denotes the coordinate frame of camera k. AiCF (1 ≤ i ≤ M) denotes the coordinate frame of the auxiliary camera when it captures the two adjacent targets (i, j). The origins of CCF and ACF are fixed at the respective optical centers. IkCF (1 ≤ k ≤ M) denotes the image coordinate frame of camera k in pixels. The origin of ICF is fixed at the center of the image plane.
As shown in Figure 1, the target is constructed of two mutually orthogonal groups of parallel lines with a known length $L_1$ and distance $L_2$. $P_m^k$ and $l_m^k$ denote the mth corner point and the mth feature line of target k, 1 ≤ m ≤ 6. TkCF (1 ≤ k ≤ M) denotes the coordinate frame of target k. $l_6^k$ and $l_1^k$ coincide with the x-axis and the z-axis of the target, respectively. The y-axis is determined by the right-hand rule. ECF represents the ground coordinate frame. The origin of ECF is fixed on the ground. The plane $x_e o_e z_e$ lies in the ground plane. The y-axis of ECF is determined by the right-hand rule.

2.2. Measurement Model

In this paper, a two-dimensional image point is denoted by $p = [u, v]^T$ and a three-dimensional spatial point by $P = [X, Y, Z]^T$. $\tilde{p}$ and $\tilde{P}$ are the corresponding homogeneous points, $\tilde{p} = [p^T, 1]^T$ and $\tilde{P} = [P^T, 1]^T$. The projection of a spatial point in TCF onto the image plane is described as:
$$ s\tilde{p} = [K \mid 0_{3\times1}]\, T\, \tilde{P}, \quad K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T = \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}, \quad R = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix} \qquad (1) $$
where K is the intrinsic parameter matrix, fx and fy are the equivalent focal lengths in the horizontal and vertical directions, respectively, and (u0, v0) is the principal point. T denotes the transformation matrix between targets and cameras, R is a 3 × 3 rotation matrix, and t is a 3 × 1 translation vector. The rotation matrix can be expressed in terms of Y-X-Z Euler angles: yaw angle φ, pitch angle θ and roll angle ϕ:
$$ R(\varphi,\theta,\phi) = \begin{bmatrix} \cos\phi\cos\varphi + \sin\theta\sin\phi\sin\varphi & \cos\theta\sin\phi & \cos\varphi\sin\theta\sin\phi - \cos\phi\sin\varphi \\ \cos\phi\sin\theta\sin\varphi - \cos\varphi\sin\phi & \cos\theta\cos\phi & \sin\phi\sin\varphi + \cos\phi\cos\varphi\sin\theta \\ \cos\theta\sin\varphi & -\sin\theta & \cos\theta\cos\varphi \end{bmatrix} \qquad (2) $$
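As an illustration (not part of the original paper), the following minimal NumPy sketch builds the rotation matrix of Equation (2) from the three Euler angles; the function name is ours:

    import numpy as np

    def rotation_yxz(yaw, pitch, roll):
        """Rotation matrix of Equation (2), parameterised by the Y-X-Z Euler
        angles: yaw (phi), pitch (theta) and roll."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        return np.array([
            [cr * cy + sp * sr * sy, cp * sr, cy * sp * sr - cr * sy],
            [cr * sp * sy - cy * sr, cp * cr, sr * sy + cr * cy * sp],
            [cp * sy,                -sp,     cp * cy],
        ])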
In this paper, definitions of the transformation matrices are shown in Table 1.

3. The Principle of Global Calibration

The principle of global calibration is shown in Figure 2. In this paper, we choose camera 1 as the reference camera as well as target 1 as the reference target. The main process of the proposed global calibration method works as follows:
  • Intrinsic calibration is done separately for each camera using the J. Bouguet Camera Calibration Toolbox, which is based on Zhang's calibration method [23,24]. The intrinsic parameters are treated as fixed and the cameras' poses remain unchanged during the calibration.
  • Place the planar targets in each camera's FOV. The symmetry axis of each target is set to point approximately toward its corresponding camera. Image Ik denotes target k captured by camera k. An image sequence I = {Ik | 1 ≤ k ≤ M} is obtained.
  • Use the auxiliary camera to capture neighboring pairs of the targets. As shown in Figure 2a, image $\tilde{I}_i$ denotes the two adjacent targets (i, j) captured by the auxiliary camera. A closed image sequence $\tilde{I} = \{\tilde{I}_i \mid 1 \leq i \leq M\}$ is acquired, where j = i + 1 if i < M and j = 1 if i = M.
  • All the images are rectified to compensate for cameras’ distortion based on the intrinsic calibration results. The linear equation of parallel lines in the image plane can be obtained from the feature points extracted by Steger’s method [25].
  • Compute the transformation matrix $T_k^{tc}$ based on the undistorted image sequence I.
  • Compute the transformation matrix $T_{i,j}^{tt}$ of two adjacent targets based on the undistorted image sequence $\tilde{I}$.
  • Calculate the initial value of $T_{k,1}^{tt}$ (2 ≤ k ≤ M) by multiple coordinate transformations, as shown in Figure 2b. Then $T_{k,1}^{tt}$ (2 ≤ k ≤ M) is refined by the global nonlinear optimization.
  • Compute the transformation matrix $T_{k,1}^{cc}$ (2 ≤ k ≤ M). The calibration is completed.

3.1. Solving $T_k^{tc}$

3.1.1. Feature Extraction

The feature points on a line can be extracted by Steger's method [25], denoted by $p_i = [u_i, v_i]^T$, where 1 ≤ i ≤ s and s is the number of feature points. The projection of a line onto the image plane is also a straight line. The equation of a line in the image can be expressed as $au + bv + c = 0$.
Let $A = [p_1\ p_2\ \cdots\ p_s]^T$ and $w = [1\ 1\ \cdots\ 1]^T$, where A is an s × 2 matrix and w is an s × 1 vector. The relation of a, b and c can then be obtained by the least squares method:
$$ \begin{bmatrix} a \\ b \end{bmatrix} = -c\, (A^T A)^{-1} A^T w \qquad (3) $$
The linear equation of line $l_m^k$ (1 ≤ m ≤ 6) can thus be found by the above method. The coordinates of $\tilde{p}_m^k$ are then obtained from line intersections. As shown in Figure 1b, the two vanishing points $v_1$ and $v_2$ can be found from lines $l_1^k$, $l_3^k$ and lines $l_5^k$, $l_6^k$, respectively. The linear equation of the vanishing line is then obtained, denoted by:
$$ \tilde{a} u + \tilde{b} v + \tilde{c} = 0 \qquad (4) $$
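For illustration, a minimal NumPy sketch of the line fit of Equation (3) and the vanishing point and vanishing line construction of this subsection; the helper names are ours, and c is fixed to an arbitrary non-zero scale (which assumes the line does not pass through the image origin):

    import numpy as np

    def fit_line(points):
        """Least-squares fit of a*u + b*v + c = 0 to extracted feature points
        (Equation (3)); points is an (s, 2) array of [u, v] rows."""
        A = np.asarray(points, dtype=float)
        w = np.ones(len(A))
        c = 1.0
        a, b = -c * np.linalg.solve(A.T @ A, A.T @ w)
        return np.array([a, b, c])

    def vanishing_point(line_a, line_b):
        """Homogeneous intersection of two image lines (e.g. l1 with l3, l5 with l6)."""
        return np.cross(line_a, line_b)

    def vanishing_line(v1, v2):
        """Vanishing line through the two vanishing points."""
        return np.cross(v1, v2)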

3.1.2. Computing the Vanishing Line

As shown in Figure 1b, two groups of parallel lines converge at vanishing points v1 and v2 in the image plane, respectively. The line crossing v1 and v2 is the vanishing line. The equations of two non-parallel lines in TkCF are:
$$ a_i x + c_i z + d_i = 0, \quad y = 0 \quad (i = 1, 2) \qquad (5) $$
where $a_1 c_2 - a_2 c_1 \neq 0$.
Let $\tilde{V}_1$ and $\tilde{V}_2$ be the points at infinity on the two lines, $\tilde{V}_i = [-c_i, 0, a_i, 0]^T$, $i = 1, 2$ (the elaboration is given in Appendix A). We have:
$$ s_i \tilde{v}_i = [K \mid 0_{3\times1}]\, T_k^{tc}\, \tilde{V}_i \quad (i = 1, 2) \qquad (6) $$
where K denotes the intrinsic parameter matrix of camera k.
Combining Equations (1) and (6), we have:
$$ \tilde{v}_i = \left[\, f_x \frac{-c_i R_{11} + a_i R_{13}}{-c_i R_{31} + a_i R_{33}} + u_0,\ \ f_y \frac{-c_i R_{21} + a_i R_{23}}{-c_i R_{31} + a_i R_{33}} + v_0,\ \ 1 \,\right]^T \qquad (7) $$
The vanishing line l can be computed as $l = \tilde{v}_1 \times \tilde{v}_2$. With Equation (7), we have:
$$ l = \begin{bmatrix} f_y (R_{23}R_{31} - R_{21}R_{33}) \\ f_x (R_{11}R_{33} - R_{13}R_{31}) \\ f_x f_y (R_{21}R_{13} - R_{11}R_{23}) + f_x v_0 (R_{13}R_{31} - R_{11}R_{33}) + f_y u_0 (R_{21}R_{33} - R_{23}R_{31}) \end{bmatrix} \qquad (8) $$
Combining Equations (2) and (8), the linear equation of the vanishing line is expressed as:
$$ \frac{\sin\phi}{f_x}(u - u_0) + \frac{\cos\phi}{f_y}(v - v_0) - \tan\theta = 0 \qquad (9) $$

3.1.3. Computing the Rotation Matrix of $T_k^{tc}$

Combining Equations (4) and (9), the roll angle ϕ and pitch angle θ can be obtained:
$$ \begin{cases} \phi = \tan^{-1}\!\left[ f_x \tilde{a} / (f_y \tilde{b}) \right] \\[4pt] \theta = \tan^{-1}\!\left( -\dfrac{\tilde{c}}{\tilde{a}} \dfrac{\sin\phi}{f_x} - \dfrac{\sin\phi}{f_x} u_0 - \dfrac{\cos\phi}{f_y} v_0 \right) \end{cases} \qquad (10) $$
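A small sketch of Equation (10), assuming the vanishing line coefficients (ã, b̃, c̃) and the intrinsic parameters are given; the function name is ours:

    import numpy as np

    def roll_pitch_from_vanishing_line(a, b, c, fx, fy, u0, v0):
        """Roll and pitch of the target relative to the camera from the
        vanishing line a*u + b*v + c = 0 (Equation (10))."""
        roll = np.arctan(fx * a / (fy * b))
        pitch = np.arctan(-(c / a) * np.sin(roll) / fx
                          - np.sin(roll) / fx * u0
                          - np.cos(roll) / fy * v0)
        return roll, pitch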
Vanishing points are determined by the directions of the parallel lines [18]; we have:
$$ \tilde{v}_i = K d_i \quad (i = 1, 2) \qquad (11) $$
where di is the 3 × 1 direction vector of the line in CCF.
$l_1^k$ and $l_6^k$ coincide with the z-axis and the x-axis of TkCF, respectively. Thus:
$$ \begin{cases} d_1 / \| d_1 \| = R\, [0\ 0\ 1]^T \\ d_2 / \| d_2 \| = R\, [1\ 0\ 0]^T \end{cases} \qquad (12) $$
According to the orthogonality constraint of a rotation matrix, the rotation matrix R of $T_k^{tc}$ can be obtained:
$$ R = \begin{bmatrix} \dfrac{d_2}{\|d_2\|} & \dfrac{d_1 \times d_2}{\|d_1\|\,\|d_2\|} & \dfrac{d_1}{\|d_1\|} \end{bmatrix} \qquad (13) $$
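A compact sketch of Equations (11)-(13); the sign of each direction vector must still be chosen so that the reconstructed target lies in front of the camera, a check omitted here:

    import numpy as np

    def rotation_from_vanishing_points(K, v1, v2):
        """Rotation of T_k^tc from the two vanishing points (Equations (11)-(13)).
        v1, v2 are homogeneous vanishing points of the z-axis and x-axis line groups."""
        Kinv = np.linalg.inv(K)
        d1 = Kinv @ v1                      # direction of the z-axis lines in CCF
        d2 = Kinv @ v2                      # direction of the x-axis lines in CCF
        n1, n2 = np.linalg.norm(d1), np.linalg.norm(d2)
        # Equation (13): columns are the target x, y, z axes expressed in CCF
        return np.column_stack([d2 / n2, np.cross(d1, d2) / (n1 * n2), d1 / n1])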

3.1.4. Computing the Translation Vector of $T_k^{tc}$

$P_7^k$ is a virtual point in the target plane. As the vector $\overrightarrow{P_1^k P_2^k} \perp d_2$, the projections of $\overrightarrow{P_1^k P_7^k}$ and $\overrightarrow{P_2^k P_7^k}$ onto $d_2$ are equal. We have:
$$ d_2^T\, \overrightarrow{P_1^k P_7^k} = d_2^T\, \overrightarrow{P_2^k P_7^k} \qquad (14) $$
Combining Equations (1) and (14), we have:
$$ \frac{z_1}{z_2} = \frac{d_2^T K^{-1} \tilde{p}_2^k}{d_2^T K^{-1} \tilde{p}_1^k} \qquad (15) $$
where $z_1$ and $z_2$ are the z coordinates of $P_1^k$ and $P_2^k$ in CkCF, respectively, and $\tilde{p}_1^k$ and $\tilde{p}_2^k$ are the known image coordinates of the corner points.
In addition, the length of segment $P_1^k P_2^k$ is known:
$$ \left\| z_1 K^{-1} \tilde{p}_1^k - z_2 K^{-1} \tilde{p}_2^k \right\| = L_1 \qquad (16) $$
Combining Equations (15) and (16), $z_1$ and $z_2$ can be found, and thus the coordinates of $P_1^k$ in CkCF are obtained. Since $P_1^k$ is the origin of TkCF, the translation vector t of $T_k^{tc}$ is:
$$ t = z_1 K^{-1} \tilde{p}_1^k \qquad (17) $$
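A sketch of Equations (14)-(17), assuming the pixel coordinates of the two corners, the direction d2 and the length L1 are given; the names are ours:

    import numpy as np

    def translation_from_known_length(K, p1, p2, d2, L1):
        """Translation of T_k^tc from the known segment length L1.
        p1, p2: pixel coordinates of corners P1, P2; d2: direction of the
        x-axis line group in the camera frame."""
        Kinv = np.linalg.inv(K)
        r1 = Kinv @ np.array([p1[0], p1[1], 1.0])   # K^-1 * p1~, i.e. P1 up to depth z1
        r2 = Kinv @ np.array([p2[0], p2[1], 1.0])   # K^-1 * p2~, i.e. P2 up to depth z2
        ratio = (d2 @ r2) / (d2 @ r1)               # z1 / z2 from Equation (15)
        z2 = L1 / np.linalg.norm(ratio * r1 - r2)   # Equation (16): ||z1*r1 - z2*r2|| = L1
        z1 = ratio * z2
        return z1 * r1                              # Equation (17): t = z1 * K^-1 * p1~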

3.1.5. Nonlinear Optimization

Let $\tilde{P}_m^k$ (1 ≤ m ≤ 6) be the homogeneous coordinates of $P_m^k$ in TkCF and $\tilde{p}_m^k$ the corresponding coordinates in the image $I_k$. We have:
$$ s_m^k \tilde{p}_m^k = [K \mid 0_{3\times1}]\, T_k^{tc}\, \tilde{P}_m^k \qquad (18) $$
Assuming that the image points are corrupted by independent and identically distributed Gaussian noise, the maximum likelihood estimate is obtained by minimizing the sum of squared distances between the observed feature lines and the re-projected corner points. $T_k^{tc}$ (1 ≤ k ≤ M) are refined separately by minimizing the following function using the Levenberg-Marquardt algorithm [26]:
$$ f(\Omega) = \sum_{m=1}^{6} \left[ d^2(\tilde{p}_m^k, l_m^k) + d^2(\tilde{p}_m^k, l_n^k) \right] \qquad (19) $$
$$ n = \begin{cases} m - 1, & \text{if } m \geq 2 \\ 6, & \text{if } m = 1 \end{cases} \qquad (20) $$
where $\Omega = T_k^{tc}$; $l_m^k$ and $l_n^k$ denote the projections of lines $l_m^k$ and $l_n^k$ onto the image $I_k$ (i.e., the observed feature lines), respectively, and $d(\cdot)$ denotes the distance between a point and a line. R of $T_k^{tc}$ is parameterized using the Rodrigues formula [27].
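A minimal sketch of the refinement of Equation (19) using SciPy's Levenberg-Marquardt solver; the data layout (six corners, two observed image lines per corner) and all names are our assumptions, not the authors' code:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def point_line_distance(p, line):
        """Distance from pixel p = [u, v] to the image line [a, b, c] (a*u + b*v + c = 0)."""
        a, b, c = line
        return (a * p[0] + b * p[1] + c) / np.hypot(a, b)

    def refine_pose(K, corners_target, observed_lines, rvec0, t0):
        """Refine T_k^tc by minimising Equation (19). corners_target is a (6, 3)
        array of corner coordinates in T_kCF; observed_lines[m] holds the two
        fitted image lines meeting at corner m; rvec0, t0 come from the
        closed-form solution of Sections 3.1.3-3.1.4."""
        def residuals(x):
            R = Rotation.from_rotvec(x[:3]).as_matrix()
            t = x[3:6]
            res = []
            for Pm, lines in zip(corners_target, observed_lines):
                q = K @ (R @ Pm + t)
                p = q[:2] / q[2]                   # re-projected corner (Equation (18))
                res.extend(point_line_distance(p, l) for l in lines)
            return res
        sol = least_squares(residuals, np.hstack([rvec0, t0]), method="lm")
        return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:6]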

3.2. Initializing $T_{k,1}^{tt}$

Generally, the target pair (i, j) is visible in the image $\tilde{I}_i$, where:
$$ j = \begin{cases} i + 1, & \text{if } i \leq M - 1 \\ 1, & \text{if } i = M \end{cases} \qquad (21) $$
As shown in Figure 2a, $T_{i,i}^{ta}$ and $T_{j,i}^{ta}$ are the transformation matrices from target i and target j to the auxiliary camera, respectively. $T_{i,i}^{ta}$ and $T_{j,i}^{ta}$ can be initialized and refined separately by the methods described in Section 3.1. Then the initial value of $T_{i,j}^{tt}$ can be calculated by:
$$ T_{i,j}^{tt} = (T_{j,i}^{ta})^{-1}\, T_{i,i}^{ta} \qquad (22) $$
The initial value of $T_{k,1}^{tt}$ (2 ≤ k ≤ M) can be obtained by the minimum number of chainwise coordinate transformations:
$$ T_{k,1}^{tt} = \begin{cases} \left[ T_{k-1,k}^{tt}\, T_{k-2,k-1}^{tt} \cdots T_{2,3}^{tt}\, T_{1,2}^{tt} \right]^{-1}, & \text{if } k \leq (M/2) \\[4pt] T_{M,1}^{tt}\, T_{M-1,M}^{tt} \cdots T_{k+1,k+2}^{tt}\, T_{k,k+1}^{tt}, & \text{if } k > (M/2) \end{cases} \qquad (23) $$
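A sketch of the chainwise initialization of Equations (22) and (23), assuming the pairwise transforms are stored as 4 × 4 homogeneous matrices indexed by i; the function and variable names are ours:

    import numpy as np

    def chain_to_reference(T_pair, M, k):
        """Initial T_{k,1}^tt via the minimum number of chained transforms
        (Equation (23)). T_pair[i] is the 4x4 matrix T_{i,j}^tt with
        j = i + 1 for i < M and j = 1 for i = M (Equation (22))."""
        T = np.eye(4)
        if k <= M / 2:
            # forward chain: T_{k,1} = (T_{k-1,k} ... T_{2,3} T_{1,2})^{-1}
            for i in range(k - 1, 0, -1):
                T = T @ T_pair[i]
            return np.linalg.inv(T)
        # backward chain around the ring: T_{k,1} = T_{M,1} T_{M-1,M} ... T_{k,k+1}
        for i in range(M, k - 1, -1):
            T = T @ T_pair[i]
        return T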

3.3. Global Calibration of the Targets

According to the camera model, we have:
$$ \begin{cases} s_m^{i,i}\, \tilde{p}_m^{i,i} = [K \mid 0_{3\times1}]\, T_{j,i}^{ta}\, (T_{j,1}^{tt})^{-1}\, T_{i,1}^{tt}\, \tilde{P}_m^i \\[4pt] s_m^{j,i}\, \tilde{p}_m^{j,i} = [K \mid 0_{3\times1}]\, T_{i,i}^{ta}\, (T_{i,1}^{tt})^{-1}\, T_{j,1}^{tt}\, \tilde{P}_m^j \end{cases} \qquad (24) $$
where K denotes the intrinsic matrix of the auxiliary camera, and $\tilde{p}_m^{i,i}$ and $\tilde{p}_m^{j,i}$ denote the re-projections of $P_m^i$ and $P_m^j$ onto the image $\tilde{I}_i$, respectively.
Assuming the image points are corrupted by independent and identically distributed Gaussian noise, $T_{k,1}^{tt}$ (2 ≤ k ≤ M) can be optimized by minimizing the following function using the Levenberg-Marquardt algorithm [26]:
$$ f(\Omega) = \sum_{i=1}^{M} \sum_{m=1}^{6} \left[ d^2(\tilde{p}_m^{i,i}, l_m^{i,i}) + d^2(\tilde{p}_m^{i,i}, l_n^{i,i}) + d^2(\tilde{p}_m^{j,i}, l_m^{j,i}) + d^2(\tilde{p}_m^{j,i}, l_n^{j,i}) \right] \qquad (25) $$
where $\Omega = (T_{2,1}^{tt}, \ldots, T_{M,1}^{tt})$ and $T_{1,1}^{tt} = I_{4\times4}$. $l_m^{i,i}$ and $l_m^{j,i}$ denote the projections of lines $l_m^i$ and $l_m^j$ onto the image $\tilde{I}_i$, respectively. R of $T_{k,1}^{tt}$ (2 ≤ k ≤ M) is parameterized by the Rodrigues formula. A good starting point for the optimization is provided by Equations (22) and (23). (m, n) and (i, j) are subject to Equations (20) and (21), respectively.
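For clarity, a sketch of the re-projection used inside Equation (24); the global optimization then stacks, over all i and all corners, the point-to-line distances of these re-projections (and the symmetric ones for target j) and minimizes them with Levenberg-Marquardt as in the single-target sketch of Section 3.1.5. The function and argument names are ours:

    import numpy as np

    def reproject_pair_corner(K_aux, T_ta_j, T_i1, T_j1, P_i):
        """Re-project a corner of target i into the auxiliary image I~_i through
        the ring transforms (first line of Equation (24)):
        p ~ [K|0] T_{j,i}^ta (T_{j,1}^tt)^(-1) T_{i,1}^tt P~.
        All transforms are 4x4 homogeneous matrices; P_i is the corner in
        T_iCF as a homogeneous 4-vector."""
        P_cam = T_ta_j @ np.linalg.inv(T_j1) @ T_i1 @ P_i
        q = K_aux @ P_cam[:3]
        return q[:2] / q[2]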

3.4. Solving $T_{k,1}^{cc}$

After the global calibration of the targets, the transformation matrix from each camera to the reference camera can be found:
$$ T_{k,1}^{cc} = T_1^{tc}\, T_{k,1}^{tt}\, (T_k^{tc})^{-1} \quad (2 \leq k \leq M) \qquad (26) $$
where $T_k^{tc}$ (1 ≤ k ≤ M) are the results of Equation (19), and $T_{k,1}^{tt}$ (2 ≤ k ≤ M) are the optimization results of Equation (25).

4. Accuracy Analysis of Different Factors’ Effects

In this section, analysis of several factors’ effects on the accuracy of the proposed method is performed by synthetic data experiments. The auxiliary camera’s intrinsic parameters are fx = fy = 512, u0 = 512, v0 = 384. The image resolution is 1024 pixel × 768 pixel.
The cameras’ positions are represented by the coordinates of the cameras’ origins in ECF. The cameras’ orientations are denoted by the Euler angles (φ, θ, ϕ) from ECF to CCF. Targets are placed on the ground for convenience. The targets’ positions are represented by the coordinates of the targets’ origins in ECF. The targets’ orientations are denoted by the yaw angle φ from positive z-axis of ECF to the symmetry axis of the target.
dR and dt denote the 2-norms of the differences between the estimated and true rotation vectors and translation vectors, respectively. The RMS errors of dR and dt are used to evaluate the accuracy. The number of points that emulate each feature line is equal to the line length in pixels. Gaussian noise with zero mean and different noise levels is added to the image coordinates of the feature line points. The analysis of the factors' effects is as follows.
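A small sketch of these error measures, assuming the estimated and true poses are given as rotation matrices and translation vectors; the rotation-vector conversion uses SciPy and the function name is ours:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def pose_errors(R_est, t_est, R_true, t_true):
        """dR and dt of Section 4: 2-norms of the differences between the
        estimated and true rotation vectors and translation vectors."""
        drot = (Rotation.from_matrix(R_est).as_rotvec()
                - Rotation.from_matrix(R_true).as_rotvec())
        dR = np.linalg.norm(drot)
        dt = np.linalg.norm(np.asarray(t_est) - np.asarray(t_true))
        return dR, dt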

4.1. Accuracy vs. the Pitch Angle of Camera Relative to the Target

The image sequence $\tilde{I}$ is acquired by the auxiliary camera. The pitch angle of the auxiliary camera relative to the target plane is one of the factors affecting the calibration accuracy. In this experiment, two adjacent targets are captured by the auxiliary camera at different pitch angles. The targets' positions in ECF are [−450, 0, 300]T and [450, 0, 300]T, respectively. The yaw angles of the targets relative to ECF are −18° and 18°, respectively. The two targets are symmetric about the plane $y_e o_e z_e$. The optical axis of the auxiliary camera lies in the symmetry plane $y_e o_e z_e$. The error of $T_{1,2}^{tt}$ obtained by Equation (22) is used to evaluate the effect of the pitch angle. Gaussian noise with σ = 0.2 pixel is added. L1 = 500 mm, L2 = 200 mm. For each level of pitch angle θ, 100 independent trials are performed.
From Figure 3, the RMS errors of rotation and translation are roughly U-shaped. When θ→−90°, the optical axis of the auxiliary camera is perpendicular to the target plane; the vanishing points tend to infinity, which leads to higher errors. When θ→0°, the number of extracted feature points decreases, which also leads to higher errors. It is therefore ideal to capture target pairs at about θ = −40°.

4.2. Accuracy vs. the Yaw Angle Difference between Two Adjacent Targets

In this experiment, ∆φ denotes the yaw angle difference between the symmetry axes of two adjacent targets. ∆φ varies according to the cameras' distribution. We again use the error of $T_{1,2}^{tt}$ calculated by Equation (22) to evaluate the effect of ∆φ.
The positions of the two targets are the same as those in Section 4.1, while ∆φ varies from 0 to 85°. The two targets remain symmetric about the plane $y_e o_e z_e$ and the auxiliary camera lies in the symmetry plane. Gaussian noise with σ = 0.2 pixel is added. L1 = 500 mm, L2 = 200 mm. For each level, 100 independent trials are performed.
From Figure 4, both the rotation and translation errors rise as ∆φ increases. When ∆φ > 80°, the errors increase sharply. This is because when ∆φ→90°, one group of parallel lines of each target becomes parallel to the image plane; the vanishing points then tend to infinity, which leads to large errors. It is therefore necessary to avoid ∆φ→90° during the calibration.

4.3. Accuracy vs. the Distance of Parallel Lines

In this experiment, we again use the error of $T_{1,2}^{tt}$ obtained by Equation (22) to evaluate the effect of the parallel line distance. The poses of the targets are the same as those in Section 4.1. The pitch angle of the auxiliary camera relative to the target plane is set to −40°. L1 = 500 mm and L2 varies from 100 mm to 400 mm. Gaussian noise with different levels is added to the image points. For each distance level, 100 independent trials are performed.
From Figure 5, it can be seen that the error increases linearly with the noise level and decreases with increasing distance between the parallel lines. This is because the difference among the slopes of the intersecting lines grows as the distance between the parallel lines increases, and the calculation error of the vanishing points is inversely proportional to this difference, as proven in [21].

5. Experimental Results

5.1. Experiment of a Use-Case

Numerous situations require a system that provides a real-time view of the surroundings [28]. One of the typical cases is the operations on aerial vehicles. In this experiment, eight cameras are used to simulate a DVS mounted on an unmanned aerial vehicle (UAV), as shown in Figure 6. The proposed method is compared with other typical methods by both synthetic data and images simulated by 3ds Max software.
The intrinsic parameters of the eight cameras are fx = fy = 796.44, u0 = 512, v0 = 384. The intrinsic parameters of the auxiliary camera and the image resolution are the same as those in Section 4. The positions and orientations of the cameras are listed in Table 2. Each target is placed on the ground in its corresponding camera's FOV. All the targets have the same size, L1 = 500 mm, L2 = 200 mm.

5.1.1. Description of the Calibration Methods

There are many calibration methods for multiple cameras. Here five typical methods are described as follows and summarized in Table 3:
Method 1: This method is similar to the proposed method, except that the corner points $\tilde{p}_m^k$ are extracted by the corner extraction engine of the J. Bouguet Camera Calibration Toolbox [23], rather than from the intersections of feature lines.
Method 2: This method is similar to the proposed method, except that planar checkerboards with 12 × 12 grids are used as the calibration targets. The side length of each square is 50 mm.
Method 3: The calibration targets and the extraction of the corner points $\tilde{p}_m^k$ are the same as in the proposed method. Instead of capturing neighboring target pairs, the auxiliary camera captures all the targets in one image frame, so the relative poses between the targets can be computed directly.
Method 4: This method is similar to Method 3, except that corner points p ˜ m k are obtained by the corner extraction method used in Method 1.
Method 5: This method is similar to Method 3, except that planar checkerboards of Method 2 are used as the calibration targets.
In order to illustrate the effect of the global calibration, two sub-methods are considered: the chainwise calibration method and the global calibration method. The only difference between them is whether the global optimization of Section 3.3 is used. $T_{k,1}^{tt}$ (2 ≤ k ≤ M) of the chainwise method are obtained directly from multiple coordinate transformations by Equation (23), while the global calibration method applies the additional global optimization of Equation (25).

5.1.2. Synthetic Data Experiment

In this experiment, the RMS error of $T_{k,1}^{cc}$ (2 ≤ k ≤ 8) is used to evaluate the accuracy. Gaussian noise with σ = 0.2 pixel is added. For each method, 100 independent trials are performed. From Figure 7, the proposed method outperforms Methods 1–5. Figure 7a,b show that the error accumulates with the coordinate transformations and peaks at camera 5, where the number of transformations is largest. The methods based on the constraint of ring-topologies can effectively reduce the accumulated error, especially for cameras that are far away from the reference.
Methods 3–5 do not suffer from the accumulated error issue because all the targets are visible in one image frame. However, due to limitations of image resolutions, the accuracy of the pose estimation decreases with the increasing number of the targets observed in one image.
Compared with line-feature algorithms, point-feature algorithms are more sensitive to image noise. Figure 7 shows that Methods 1 and 4 perform worse than the other methods. Further discussion is given in Section 5.3.

5.1.3. Accuracy vs. the Image Noise Level

In this experiment, the RMS error of $T_{k,1}^{cc}$ (2 ≤ k ≤ 8) is used to evaluate the effect of the noise level. The synthetic data are the same as those in Section 5.1.2. Gaussian noise from 0.0 to 1.0 pixel is added. For each noise level, 100 independent trials are performed.
From Figure 8, the RMS error increases linearly with the noise level. It also shows that the proposed method is superior to other methods. If the noise level of the real DVS is less than 0.5 pixels, the RMS errors of rotation and translation of all the cameras are less than 0.05 deg and 1.1 mm, respectively, which is acceptable for common applications.

5.1.4. Experiments Based on Simulation Images

As shown in Figure 9, we use the 3ds Max software to simulate image sequences. The parameters of the cameras and the targets are the same as those in Section 5.1.2. The feature lines are obtained from feature points extracted by Steger's method [25]. The error of $T_{k,1}^{cc}$ (2 ≤ k ≤ 8) is used to evaluate the accuracy.
Figure 10 shows that errors of rotation and translation accumulate with the increasing times of coordinate transformations. The proposed method can reduce the accumulated error due to multiple coordinate transformations. Further discussion is given in Section 5.3.

5.2. Real Data Experiment

As shown in Figure 11a, eight targets are located in an area about 1200 mm × 1200 mm. As the relative poses of the cameras with non-overlapping FOV are mainly determined by the relative poses of the targets, the RMS errors of point pair distances between eight targets are used as calibration errors in the real experiments.
The distance between a point pair $P_m^k$ and $P_m^l$ is computed according to the calibration result and is called the measured distance, $d_m$. The targets are also calibrated in the same way by a calibrated Canon 60D digital camera, and the distances of the same point pairs obtained with it are used as the ground truth $d_t$, owing to its relatively high accuracy.
The distance error is computed as $\Delta d = d_m - d_t$. For the proposed method and Methods 1, 3 and 4, the RMS error of $\Delta d(P_1^1 P_1^k)$, $\Delta d(P_2^1 P_2^k)$ and $\Delta d(P_6^1 P_6^k)$ is used to evaluate the accuracy. For Methods 2 and 5, five point pairs are randomly selected and the RMS of their distance errors is computed as the calibration error.
The auxiliary camera is a 1/3-in Sony CCD image sensor (ICX673) with a 3.6 mm lens. The image resolution is 720 pixel × 432 pixel. The target parameters are L1 = 135 mm and L2 = 70 mm. The image resolution of the Canon camera is 1920 pixel × 1280 pixel. The intrinsic parameters of the sensors are calibrated using Bouguet's calibration toolbox [23], as shown in Table 4.
Figure 12 shows that the proposed method achieves the best accuracy. The RMS errors of the point pair distances between target 5 and the reference target are 0.465 mm, 0.828 mm, 3.94 mm, 3.83 mm, 1.92 mm and 21.6 mm for the proposed method and Methods 1–5, respectively. The proposed method is superior to the other methods.

5.3. Discussion

Due to the restriction of image resolution, the accuracy of the pose estimation decreases as the number of targets observed in one image increases. There is therefore a trade-off between the available features of the target projections and the error accumulated through the chain of transformations. The experimental results show that the accumulated error can be effectively adjusted by the constraint of ring-topologies. For a vision sensor such as the Sony CCD used here, capturing all the targets at once is not a wise choice: the benefit of directly calculating the relative poses of the targets is cancelled out by the rise in feature extraction errors.
Moreover, it is not convenient to capture all the targets in some applications, because the auxiliary camera should be far away from the widely distributed targets. As shown in Figure 11b, in order to capture all the checkerboards in one image frame, the targets are pasted on the wall.
The results of the simulation images show that the accuracy of Methods 2 and 5 is close to or even better than that of the proposed method. However, Methods 2 and 5 achieve the worst accuracy in the real experiments. Figure 13 shows that the simulation images are very sharp and clear, which greatly benefits the corner extraction of checkerboards; real images are rarely so ideal.
These results indicate that the proposed method is accurate and robust, especially when dealing with real images. Methods 2 and 5 are not stable against the image quality. However, there is a gap among the results of synthetic data, simulation images and real experiments. There may be some reasons for this.
Firstly, there is a measurement error during feature extraction. In our method, the line extraction algorithm is a commonly used method with acceptable accuracy and good generality; line extraction algorithms with higher accuracy would further improve the results, which will be studied in the future. Secondly, the targets used in the real experiments are printed on paper and may not be strictly planar, which also leads to measurement errors.
In addition, the Sony CCD vision sensor is not a professional high-precision vision sensor, which is usually used for security cameras and radio controlled vehicles. High-resolution vision sensors can be used to improve the accuracy.

6. Conclusions

In this article, we have developed a new global calibration method for vision sensors in ring-topologies. Line-based calibration targets are placed in each camera's FOV. Firstly, the relative poses of the cameras and targets are initialized and refined based on vanishing features and the known line length. Next, in order to overcome the small or absent overlapping FOV between adjacent cameras, an auxiliary camera is used to capture neighboring targets. The relative poses of the targets are initialized in a chainwise manner, followed by a nonlinear optimization that minimizes the squared distances between the observed feature lines and the re-projected corner points. Then the transformation matrix between each camera and the reference camera is determined.
The factors that affect the calibration accuracy are analyzed by synthetic data experiments. Synthetic data, simulation images and real data experiments all demonstrate that the proposed method is accurate and robust to image noise. The accumulated error can be adjusted effectively based on the constraint of ring-topologies. The real data experiments indicate that the measurement accuracy of the farthest camera by the proposed method is about 0.465 mm in an area of about 1200 mm × 1200 mm.
The poses of the targets need not be known in advance and can be adjusted according to the distribution of the cameras. The targets do not need to be placed at multiple positions; a single placement is enough. Our method is simple and flexible and can be applied to different configurations of multiple cameras. It is well suited for the on-site calibration of widely distributed cameras.
In this paper, we focus on the calibration of DVS in ring-topologies, which provides an additional constraint. When dealing with DVS in open topologies, accumulated errors cannot be adjusted in this way. In addition, the vanishing points tend to infinity when the feature lines are parallel to the image plane, which leads to higher errors, so the angle between the parallel lines and the image plane should be kept within a certain range.
Restricted by hardware conditions, experiments using eight sensors mounted on a UAV are temporarily lacking. We plan to apply our method to the calibration of multiple vision sensors mounted on a vehicle in the future. Methods based on feature lines in indoor and outdoor environments, instead of planar targets, will also be investigated.

Acknowledgments

This work is supported by the Industrial Technology Development Program under Grant B1120131046.

Author Contributions

The work presented in this paper has been done in collaboration of all authors. Xiaolong Wu conceived the method, designed the experiments and wrote the paper. Sentang Wu was the project leader and in charge of the direction and supervision. Zhihui Xing and Xiang Jia performed the experiments and analyzed the data. All authors discussed the results together and reviewed the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DVS: Distributed vision sensors
FOV: Field of view
GCF: Global coordinate frame
SIFT: Scale invariant feature transform
CCF: Camera coordinate frame
ACF: Auxiliary camera coordinate frame
ICF: Image coordinate frame
TCF: Target coordinate frame
ECF: Ground coordinate frame
RMS: Root mean square
CCD: Charge coupled device

Appendix A

A plane in 3D space can be represented by the equation $ax + by + cz + d = 0$; thus, a plane may be represented by the vector $p = [a, b, c, d]^T$. A 3D spatial point with homogeneous coordinates $x = [x_1, x_2, x_3, x_4]^T$ lies in the plane p if and only if $x^T p = 0$.
Homogeneous vectors $[x_1, x_2, x_3, x_4]^T$ with $x_4 \neq 0$ correspond to finite points in $\mathbb{R}^3$. Points with last coordinate $x_4 = 0$ are known as points at infinity. A point at infinity can be written as $x_\infty = [x_1, x_2, x_3, 0]^T$. Note that $x_\infty$ lies in the plane at infinity, denoted by the vector $p_\infty = [0, 0, 0, 1]^T$, because $x_\infty^T p_\infty = 0$.
From Equation (5), note that $[-c_i, 0, a_i, 0][a_i, 0, c_i, d_i]^T = 0$ and $[-c_i, 0, a_i, 0][0, 1, 0, 0]^T = 0$; thus the line $a_i x + c_i z + d_i = 0$, $y = 0$ intersects the plane at infinity in the point at infinity $[-c_i, 0, a_i, 0]^T$.

References

  1. Lu, R.S.; Li, Y.F. A global calibration method for large-scale multi-sensor visual measurement systems. Sens. Actuators A Phys. 2004, 116, 384–393. [Google Scholar] [CrossRef]
  2. Peng, X.M.; Bennamoun, M.; Wang, Q.B.; Ma, Q.; Xu, Z.Y. A low-cost implementation of a 360 degrees vision distributed aperture system. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 225–238. [Google Scholar] [CrossRef]
  3. Bazargani, H.; Laganiere, R. Camera calibration and pose estimation from planes. IEEE Instrum. Measur. Mag. 2015, 18, 20–27. [Google Scholar] [CrossRef]
  4. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  5. Kumar, R.K.; Ilie, A.; Frahm, J.M.; Pollefeys, M. Simple calibration of non-overlapping cameras with a mirror. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–7.
  6. Hesch, J.A.; Mourikis, A.I.; Roumeliotis, S.I. Mirror-based extrinsic camera calibration. In Algorithmic Foundation of Robotics VIII; Springer: Berlin, Germany, 2009; pp. 285–299. [Google Scholar]
  7. Takahashi, K.; Nobuhara, S.; Matsuyama, T. A new mirror-based extrinsic camera calibration using an orthogonality constraint. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 1051–1058.
  8. Liu, Z.; Zhang, G.J.; Wei, Z.Z.; Sun, J.H. Novel calibration method for non-overlapping multiple vision sensors based on 1D target. Opt. Lasers Eng. 2011, 49, 570–577. [Google Scholar] [CrossRef]
  9. Liu, Z.; Zhang, G.J.; Wei, Z.Z.; Sun, J.H. A global calibration method for multiple vision sensors based on multiple targets. Measur. Sci. Technol. 2011, 22, 125102. [Google Scholar] [CrossRef]
  10. Bosch, J.; Gracias, N.; Ridao, P.; Ribas, D. Omnidirectional underwater camera design and calibration. Sensors 2015, 15, 6033–6065. [Google Scholar] [CrossRef] [PubMed]
  11. Pagel, F. Calibration of non-overlapping cameras in vehicles. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium (IV), San Diego, CA, USA, 21–24 June 2010; pp. 1178–1183.
  12. Sun, J.H.; He, H.B.; Zeng, D.B. Global calibration of multiple cameras based on sphere targets. Sensors 2016, 16, 14. [Google Scholar] [CrossRef] [PubMed]
  13. Ullman, S. The interpretation of structure from motion. Proc. R. Soc. Lond. B Biol. Sci. 1979, 203, 405–426. [Google Scholar] [CrossRef] [PubMed]
  14. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Science & Business Media: London, UK, 2010. [Google Scholar]
  15. Fitzgibbon, A.W.; Zisserman, A. Automatic camera recovery for closed or open image sequences. In Computer Vision—ECCV'98; Springer: Berlin, Germany, 1998; pp. 311–326. [Google Scholar]
  16. Zhang, Z.; Shan, Y. Incremental motion estimation through modified bundle adjustment. In Proceedings of the 2003 International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003.
  17. Ly, D.S.; Demonceaux, C.; Vasseur, P.; Pégard, C. Extrinsic calibration of heterogeneous cameras by line images. Mach. Vis. Appl. 2014, 25, 1601–1614. [Google Scholar] [CrossRef]
  18. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK/New York, NY, USA, 2003. [Google Scholar]
  19. Xu, G.L.; Qi, X.P.; Zeng, Q.H.; Tian, Y.P.; Guo, R.P.; Wang, B.A. Use of land’s cooperative object to estimate UAV’s pose for autonomous landing. Chin. J. Aeronaut. 2013, 26, 1498–1505. [Google Scholar] [CrossRef]
  20. Wang, X.L. Novel calibration method for the multi-camera measurement system. J. Opt. Soc. Korea 2014, 18, 746–752. [Google Scholar] [CrossRef]
  21. Wei, Z.Z.; Shao, M.W.; Zhang, G.J.; Wang, Y.L. Parallel-based calibration method for line-structured light vision sensor. Opt. Eng. 2014, 53, 033101. [Google Scholar] [CrossRef]
  22. Wei, Z.; Liu, X. Vanishing feature constraints calibration method for binocular vision sensor. Opt. Express 2015, 23, 18897–18914. [Google Scholar] [CrossRef] [PubMed]
  23. Bouguet, J.-Y. Camera Calibration Toolbox for Matlab. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html (accessed on 28 March 2016).
  24. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  25. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125. [Google Scholar] [CrossRef]
  26. Moré, J.J. The levenberg-marquardt algorithm: Implementation and theory. In Numerical Analysis; Springer: Berlin, Germany, 1978; pp. 105–116. [Google Scholar]
  27. Diebel, J. Representing attitude: Euler angles, unit quaternions, and rotation vectors. Matrix 2006, 58, 1–35. [Google Scholar]
  28. Rose, M.K.; Chamberlain, J.; LaValley, D. Real-Time 360° Imaging System for Situational Awareness; SPIE: Orlando, FL, USA, 2009. [Google Scholar]
Figure 1. Planar target with two mutually orthogonal groups of parallel lines: (a) Planform of target k; (b) Perspective projection of target k onto the image plane.
Figure 2. (a) Two adjacent targets (i, j) captured by the auxiliary camera; (b) The principle of the global calibration.
Figure 3. Error vs. the pitch angle of camera relative to the target plane: (a) RMS error of rotation vs. the pitch angle; (b) RMS error of translation vs. the pitch angle.
Figure 4. Error vs. the yaw angle difference between two adjacent targets: (a) RMS error of rotation vs. the yaw angle difference; (b) RMS error of translation vs. the yaw angle difference.
Figure 5. Error vs. the distance of parallel lines: (a) RMS error of rotation vs. the distance; (b) RMS error of translation vs. the distance.
Figure 6. Eight cameras mounted on UAV.
Figure 7. Calibration error of each method. (a,b) RMS errors of rotation vector and translation vector of the proposed method, Method 1 and 2; (c,d) RMS errors of rotation vector and translation vector of Methods 3–5.
Figure 8. Calibration error vs. the noise level: (a) RMS error of rotation vs. the image noise; (b) RMS error of translation vs. the image noise.
Figure 9. Simulation images: (a) Target 1 captured by camera 1; (b) Target 2 and Target 3 captured by the auxiliary camera.
Figure 10. Calibration error of each method: (a,b) Errors of rotation vector and translation vector of the proposed method, Methods 1 and 2; (c,d) Errors of rotation vector and translation vector of Methods 3, 4 and 5.
Figure 11. Global calibration of eight targets: (a) Eight targets and the auxiliary camera in the real experiment; (b) The auxiliary camera captures eight checkerboards in one image frame.
Figure 12. The distance error of each method. (a) RMS errors of the proposed method, Methods 1 and 2; (b) RMS errors of Methods 3, 4 and 5.
Figure 13. Simulation image and real image. (a) Checkerboards simulated by software; (b) Checkerboards captured by the auxiliary camera.
Table 1. Definition of the transformation matrices.
$T_k^{tc}$ | The transformation matrix from TkCF to CkCF
$T_{i,i}^{ta}$ | The transformation matrix from TiCF to AiCF
$T_{j,i}^{ta}$ | The transformation matrix from TjCF to AiCF
$T_{i,j}^{tt}$ | The transformation matrix from TiCF to TjCF
$T_{i,j}^{cc}$ | The transformation matrix from CiCF to CjCF
Table 2. The positions and orientations of the cameras.
Camera ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
x (mm) | 100 | 600 | 600 | 200 | −200 | −600 | −600 | −100
y (mm) | −150 | −150 | −150 | −150 | −150 | −150 | −150 | −150
z (mm) | 0 | −400 | −500 | −1000 | −1000 | −500 | −400 | 0
φ (deg) | 28 | 64 | 125 | 158 | 202 | 235 | 296 | 332
θ (deg) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ϕ (deg) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Table 3. Summary of the calibration methods
Methods | Calibration Targets | Auxiliary Camera Capture
The proposed method | Line-feature targets | Neighboring target pairs
Method 1 | Point-feature targets | Neighboring target pairs
Method 2 | Checkerboards | Neighboring target pairs
Method 3 | Line-feature targets | All targets in one image
Method 4 | Point-feature targets | All targets in one image
Method 5 | Checkerboards | All targets in one image
Table 4. Intrinsic parameters of the vision sensors
Sensor | fx | fy | u0 | v0 | k1 | k2 | p1 | p2
Sony | 592.73225 | 460.22694 | 343.24159 | 225.49825 | −0.40175 | 0.15735 | −0.00009 | 0.00111
Canon | 1561.29373 | 1560.15359 | 972.49782 | 597.06160 | −0.19178 | 0.13171 | −0.00190 | −0.00129
