Article

Study of the Error Caused by Camera Movement for the Stereo-Vision System

Yan Liu, Zhendong Ge, Yingtao Yuan, Xin Su, Xiang Guo, Tao Suo and Qifeng Yu *

1 College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
2 Institute of Intelligent Optical Measurement and Detection, Shenzhen University, Shenzhen 518060, China
3 School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072, China
4 International Research Laboratory of Impact Dynamics and Its Engineering Application, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(20), 9384; https://doi.org/10.3390/app11209384
Submission received: 7 September 2021 / Revised: 30 September 2021 / Accepted: 1 October 2021 / Published: 9 October 2021
(This article belongs to the Section Optics and Lasers)

Abstract
The stereo-vision system plays an increasingly important role in various fields of research and applications. However, inevitable slight movements of the cameras under harsh working conditions can significantly influence the 3D measurement accuracy. This paper focuses on the effect of camera movements on stereo-vision 3D measurement. The camera movements are divided into four categories, viz., identical translation, identical rotation, relative translation, and relative rotation. Error models of the 3D coordinate and distance measurements are established, and experiments were performed to validate the mathematical models. The results show that the 3D coordinate error caused by identical translations increases linearly with the change in the positions of both cameras, whereas the distance measurement is not affected. For identical rotations, the 3D coordinate error, introduced only in the rotating plane, is proportional to the rotation angle within 10°, while the distance error is zero. For relative translation, both the coordinate and distance errors increase linearly with the change in the relative positions. For relative rotation, the relationship between the 3D coordinate error and the rotation angle follows a nonlinear, sine-cosine-like trend, and the impact of the relative rotation angle on the distance measurement accuracy is not monotonic. Among the four categories, relative rotation is the dominant error source: even for a rotation angle of 10°, the resultant maximum coordinate error is up to 2000 mm, and the distance error reaches 220%. The results presented are recommended as practice guidelines to reduce the measurement errors.

1. Introduction

In recent years, the stereo-vision system has been widely used in many fields, e.g., industrial manufacturing and inspection [1,2,3,4], experimental mechanics [5,6,7,8], structural health monitoring [9,10,11], medical diagnosis [12] and aerospace engineering [13,14,15]. To obtain accurate 3D information, the two cameras in a stereo-vision system should remain stationary during the whole test period, because most stereo-vision systems calibrate the camera parameters at the beginning of the measurement [16,17,18]. Any slight camera movement during the tests can cause the extrinsic parameters (the positions and orientations of the cameras) to vary, thereby decreasing the measurement accuracy.
However, it is often impractical for the two cameras in a stereoscopic system to stay stationary under harsh working conditions, particularly in an outdoor environment. For example, when performing deflection measurements by digital image correlation outdoors for the inspection of bridge structures [19], changes in the position and orientation of the stereo cameras often occur owing to factors such as wind, oscillations, and the lack of stability of the ground. When stereo-vision systems are employed in autonomous land navigation [20], road roughness can cause unpredictable camera movements. Strong vibrations induced by a high-speed heavy steam hammer are an unavoidable factor that compromises measurement accuracy when stereo vision is used to measure the ram speed while forming workpieces [21]. Furthermore, stereoscopic camera systems have already been demonstrated for wing deformation measurements; nevertheless, such systems have to cope with camera movements induced by high aerodynamic loads and mechanical vibration during a measurement. In particular, it is difficult to avoid these movements in in-flight applications [22], since high vibration levels occur during dynamic flight maneuvers.
In all of the above cases, one common issue is camera movement, which adversely affects the applicability and versatility of the vision method. Sohn and Kehtarnavaz [20] used mini-max and minimum-mean-square estimators to analyze the error caused by camera movements in a monocular vision-based tracking system, but the effects of camera movement on the 3D measurement of stereo-vision systems were not addressed. Many researchers used monocular camera systems to measure structural displacements parallel to the imaging plane in outdoor environments [23,24,25,26]. They found that it is difficult to keep the monitoring camera stationary during the entire process, which affected the accuracy of the displacement measurements. Several compensation methods [26,27] have been developed to compensate for the camera movements. For instance, to address this challenge, Yoon et al. [28] calculated and removed the nonstationary motion of the camera by tracking background feature points to obtain accurate absolute displacements. However, these methods consider structural displacements to be in-plane translations, and the out-of-plane displacement of the structure is neglected. Zhang et al. [29] presented a hybrid inertial vision-based system that can measure three-dimensional structural displacements using a monocular camera. This system does not require the camera to remain stationary during the measurements; instead, the camera movements, i.e., rotations and translations during the measurement process, are compensated by using a stationary calibration target in the field of view and an attached tilt sensor.
Kirmse [30] performed an in-flight wing deformation measurement on a Cobra VUT100 airplane using the stereo image pattern correlation technique and found that relative camera movement led to a decalibration of the stereo system, which significantly affected the precision of the measured 3D position. Xu et al. [31] developed a semi-physical simulation system to investigate the effect of camera parameter errors on 3D reconstruction accuracy in a stereo-vision system. They pointed out that extrinsic parameter error has a relatively large impact on reconstruction accuracy by affecting the baseline distance and the angle between the cameras. In these works, only the relative movement between the cameras was studied, without considering the movement of the two cameras together (identical movement). Yang et al. [32] analyzed the impact of system structure parameters and camera calibration parameters on location accuracy in a binocular stereo-vision system; however, this work also did not include the error caused by identical movement. Satoru and Hiroki [19] performed bridge deflection measurement outdoors using digital image correlation with a single camera, where the effect of camera movement is corrected by a perspective transformation. However, the 2D DIC setup requires a perpendicular alignment of the camera towards the planar specimen, which limits its applicability. Kim et al. [33] developed marker-based displacement measurement models for correcting the systematic errors caused by camera movements, but under the assumption that the structural displacement occurs on a two-dimensional plane. Bier and Luchowski [34] investigated the influence of input data errors on the quality of reconstructed 3D points in the stereo-vision process, but they only considered camera calibration and 3D reconstruction algorithms that employ singular value decomposition (SVD). Reu [35] presented a study of the influence of calibration errors on the uncertainty in 3D position and object motion for the stereo-DIC system using the Monte Carlo approach; however, this “black-box” method lacks the support of an analytically derived model. Wang et al. [36] analyzed the depth measurement error of binocular stereo vision and proposed some strategies to reduce the errors, but they did not provide error models for the 3D coordinates and distance. Su et al. [37] proposed a new method to calibrate the stereo-DIC system automatically with feature correspondences in an unconstrained scene, but the performance was evaluated with respect to feature matching errors rather than camera movement errors.
The aforementioned studies primarily focus on analyzing the errors caused by camera movements in monocular vision systems, or investigate camera movement errors using simulated or numerically altered images. Few works attempt to quantify the effect of both identical and relative camera movement in a stereo-vision system on 3D measurement by both analytical and experimental approaches. The contributions of this paper are as follows: the calculation models of the 3D coordinate and distance measurement errors are derived for both identical and relative movement; the error characteristics of each case are verified and analyzed experimentally, and relative rotation is identified as the dominant factor; and some strategies are proposed to reduce the measurement errors caused by camera movement.
The rest of this paper is organized as follows. In Section 2, the principle of stereo-vision 3D measurement is introduced. In Section 3, the measurement error models for different types of camera movements are established. In Section 4, four groups of experiments are performed to validate our analysis. Finally, the conclusions of this study are given in Section 5.

2. Principle

As shown in Figure 1, the stereo-vision system consists of two cameras, which capture images of the same object from different positions at the same time. For any spatial point, the 3D world coordinate is denoted by $P(X_w, Y_w, Z_w)$, and the pixel coordinates are $p_l(u_l, v_l)$ in the left camera and $p_r(u_r, v_r)$ in the right camera. The left and right camera coordinate systems are defined as $C_l X_{cl} Y_{cl} Z_{cl}$ and $C_r X_{cr} Y_{cr} Z_{cr}$, respectively, and their origins $C_l$ and $C_r$ are located at the optical centers.
According to the pinhole imaging principle [38], we have
$$\left\{\begin{aligned} \begin{bmatrix} X_{cl}/Z_{cl} \\ Y_{cl}/Z_{cl} \\ 1 \end{bmatrix} &= \begin{bmatrix} x_{nl} \\ y_{nl} \\ 1 \end{bmatrix} = A_l^{-1} \begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} \\ \begin{bmatrix} X_{cr}/Z_{cr} \\ Y_{cr}/Z_{cr} \\ 1 \end{bmatrix} &= \begin{bmatrix} x_{nr} \\ y_{nr} \\ 1 \end{bmatrix} = A_r^{-1} \begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix} \end{aligned}\right. \tag{1}$$
where $(x_{nl}, y_{nl}) = (X_{cl}/Z_{cl}, Y_{cl}/Z_{cl})$ and $(x_{nr}, y_{nr}) = (X_{cr}/Z_{cr}, Y_{cr}/Z_{cr})$ denote the image coordinates after focal-length normalization, and $A_l$ and $A_r$ are the intrinsic matrices.
Without loss of generality, the world coordinate system coincides with the left camera coordinate system. Thus, the imaging model of the stereo-vision system can be described as
$$\left\{\begin{aligned} Z_w \begin{bmatrix} x_{nl} \\ y_{nl} \\ 1 \end{bmatrix} &= \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} \\ Z_{cr} \begin{bmatrix} x_{nr} \\ y_{nr} \\ 1 \end{bmatrix} &= \left[\, R_{lr} \mid T_{lr} \,\right] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \end{aligned}\right., \quad \text{with } R_{lr} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \text{ and } T_{lr} = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix} \tag{2}$$
where $R_{lr}$ and $T_{lr}$ denote the rotation matrix and translation vector from the left camera coordinate system to the right camera coordinate system. From Equation (2), $P(X_w, Y_w, Z_w)$ in the world coordinate system can be derived as
$$\left\{\begin{aligned} X_w &= x_{nl} Z_w \\ Y_w &= y_{nl} Z_w \\ Z_w &= \frac{T_x - x_{nr} T_z}{x_{nr}\,(r_{31} x_{nl} + r_{32} y_{nl} + r_{33}) - (r_{11} x_{nl} + r_{12} y_{nl} + r_{13})} = \frac{T_y - y_{nr} T_z}{y_{nr}\,(r_{31} x_{nl} + r_{32} y_{nl} + r_{33}) - (r_{21} x_{nl} + r_{22} y_{nl} + r_{23})} \end{aligned}\right. \tag{3}$$
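As a concrete illustration of Equation (3), the following minimal Python/NumPy sketch triangulates a world point from normalized image coordinates. All rig values (baseline, toe-in angle, point position) are assumed purely for illustration and are not taken from the paper's experiments.

```python
import numpy as np

def triangulate(xnl, ynl, xnr, R, T):
    """Recover P(Xw, Yw, Zw) from normalized image coordinates via
    Equation (3). R (3x3) and T (3,) map the left camera frame
    (= world frame) to the right camera frame."""
    h = np.array([xnl, ynl, 1.0])
    Zw = (T[0] - xnr * T[2]) / (xnr * (R[2] @ h) - R[0] @ h)
    return np.array([xnl * Zw, ynl * Zw, Zw])

# Assumed rig: right camera 100 mm to the right of the left camera,
# toed in by 5 degrees about the Y-axis (illustrative values only).
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([-100.0, 0.0, 0.0])

P = np.array([50.0, -30.0, 1000.0])        # ground-truth point (mm)
xnl, ynl = P[0] / P[2], P[1] / P[2]        # left view (world = left frame)
Pr = R @ P + T                             # the same point in the right frame
xnr = Pr[0] / Pr[2]

print(triangulate(xnl, ynl, xnr, R, T))    # -> [  50.  -30. 1000.]
```

Projecting a known point into both views and recovering it exactly confirms that the two equivalent expressions for $Z_w$ are consistent with the extrinsics convention of Equation (2).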

3. Camera Movement Errors

In the stereo-vision system, $A_l$, $A_r$, $R_{lr}$ and $T_{lr}$ have been calibrated before the measurement. It is generally assumed that the intrinsic parameters ($A_l$, $A_r$) are unaffected by slight camera movements. However, the extrinsic parameters ($R_{lr}$, $T_{lr}$) and the global world coordinate system, which coincides with the left camera coordinate system, are altered. The movement of a camera is composed of both translation and rotation, which correspond to changes of position and orientation. To clarify how the 3D coordinate error varies with the changes in the positions and orientations of the two cameras, we divide the camera movements into four categories, as below.
(1)
Identical translations of the left and right cameras, when the positions of their optical centers have the same changes.
(2)
Identical rotations of the two cameras when the orientations of two cameras have the same changes.
(3)
Relative translation when the position of the left camera relative to that of the right camera is changed.
(4)
Relative rotation when the orientation of the left camera relative to that of the right camera is changed.

3.1. Identical Translations

Figure 2 shows the relationship between a pair of camera coordinate systems before and after identical translations. In this case, the global world coordinate system experiences a rigid body translation. The translation amount is denoted by $\Delta T_w = [\Delta T_{wx}, \Delta T_{wy}, \Delta T_{wz}]^T$. Let the world coordinates of any point before and after the identical translations be $X_w(X_w, Y_w, Z_w)$ and $X_w'(X_w', Y_w', Z_w')$, respectively. $\Delta T_w$ is considered to be the error introduced into the initial 3D coordinate $X_w$. Then, the 3D coordinate error $\Delta X_w$ is described as follows.
$$\Delta X_w = \begin{bmatrix} X_w' \\ Y_w' \\ Z_w' \end{bmatrix} - \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} = \begin{bmatrix} \Delta T_{wx} \\ \Delta T_{wy} \\ \Delta T_{wz} \end{bmatrix} \tag{4}$$
From Equation (4), the 3D coordinate error $\Delta X_w$ is equal to the identical translation $\Delta T_w$.
Suppose that any two points on the measured object are defined as $X_{w1}(X_{w1}, Y_{w1}, Z_{w1})$ and $X_{w2}(X_{w2}, Y_{w2}, Z_{w2})$ in the initial world coordinate system. After the camera movements, $X_{w1}'(X_{w1}', Y_{w1}', Z_{w1}')$ and $X_{w2}'(X_{w2}', Y_{w2}', Z_{w2}')$ are the corresponding points in the new world coordinate system. The distances between the two selected points before and after the identical translations are expressed as
$$\left\{\begin{aligned} L_w &= \sqrt{(X_{w1}-X_{w2})^2 + (Y_{w1}-Y_{w2})^2 + (Z_{w1}-Z_{w2})^2} \\ L_w' &= \sqrt{(X_{w1}'-X_{w2}')^2 + (Y_{w1}'-Y_{w2}')^2 + (Z_{w1}'-Z_{w2}')^2} \end{aligned}\right. \tag{5}$$
For the two pairs of points, viz., $X_{w1}$ and $X_{w1}'$, and $X_{w2}$ and $X_{w2}'$, the following relations can be obtained from Equation (4):
$$\left\{\begin{aligned} X_{w1}' &= X_{w1} + \Delta T_{wx} \\ Y_{w1}' &= Y_{w1} + \Delta T_{wy} \\ Z_{w1}' &= Z_{w1} + \Delta T_{wz} \end{aligned}\right. \qquad \left\{\begin{aligned} X_{w2}' &= X_{w2} + \Delta T_{wx} \\ Y_{w2}' &= Y_{w2} + \Delta T_{wy} \\ Z_{w2}' &= Z_{w2} + \Delta T_{wz} \end{aligned}\right. \tag{6}$$
Combining Equations (5) and (6) yields the following expression
$$L_w' = L_w \tag{7}$$
It was concluded that the distance between any two points remains unchanged after identical translations.
Displacement and strain form an important group of parameters concerning the deformation measurement. Displacement represents the change of position of an object in a certain period of time. Strain refers to the ratio of the change in length of a small line segment to the original length. From Equations (4) and (7), the error in measuring the displacement should be equal to the amount of identical translations, while the strain measurement is not affected.
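A short numeric sketch of Equations (4)–(7), with an assumed translation vector, makes both conclusions tangible: every reconstructed point inherits the full identical translation as a coordinate error, while inter-point distances, and hence strains, are untouched.

```python
import numpy as np

P1 = np.array([0.0, 0.0, 1000.0])   # two points on the object (mm)
P2 = np.array([25.0, 0.0, 1000.0])
dT = np.array([4.0, 0.0, 2.0])      # assumed identical camera translation (mm)

# Equation (4): each point's 3D coordinate error equals dT ...
P1p, P2p = P1 + dT, P2 + dT
print(P1p - P1, P2p - P2)           # both equal dT

# ... and Equation (7): the distance between the points is preserved,
# so length-ratio quantities such as strain are unaffected.
print(np.linalg.norm(P1 - P2), np.linalg.norm(P1p - P2p))   # 25.0, 25.0
```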

3.2. Identical Rotations

The identical rotations of the two cameras in a stereo-vision system imply that the global world coordinate system experiences a rigid body rotation, as shown in Figure 3. The 3D coordinate error $\Delta X_w$ can be expressed as
$$\begin{bmatrix} \Delta X_w \\ \Delta Y_w \\ \Delta Z_w \end{bmatrix} = \begin{bmatrix} X_w' \\ Y_w' \\ Z_w' \end{bmatrix} - \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} = \Delta R_w \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} - \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} \tag{8}$$
where $\Delta R_w$ denotes the transformation from the initial world coordinate system to the final world coordinate system owing to the identical rotations of both cameras.
The rotation matrix $\Delta R_w$ is a 3 × 3 unit orthogonal matrix and depends on only three independent variables. Therefore, $\Delta R_w$ can be represented as
$$\Delta R_w = \begin{bmatrix} \cos\tilde{\gamma} & -\sin\tilde{\gamma} & 0 \\ \sin\tilde{\gamma} & \cos\tilde{\gamma} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\tilde{\alpha} & -\sin\tilde{\alpha} \\ 0 & \sin\tilde{\alpha} & \cos\tilde{\alpha} \end{bmatrix} \begin{bmatrix} \cos\tilde{\beta} & 0 & \sin\tilde{\beta} \\ 0 & 1 & 0 \\ -\sin\tilde{\beta} & 0 & \cos\tilde{\beta} \end{bmatrix} \tag{9}$$
where $(\tilde{\alpha}, \tilde{\beta}, \tilde{\gamma})$ are defined as the rotation angles by which the initial world coordinate system is transformed to the orientation that coincides with the final world coordinate system.
The unavoidable camera movements are often slight, so the rotation angles $(\tilde{\alpha}, \tilde{\beta}, \tilde{\gamma})$ can be taken as small values. Applying the small-angle approximation, i.e., $\sin\tilde{\alpha} \approx \tilde{\alpha}$, $\cos\tilde{\alpha} \approx 1$, $\sin\tilde{\beta} \approx \tilde{\beta}$, $\cos\tilde{\beta} \approx 1$, $\sin\tilde{\gamma} \approx \tilde{\gamma}$, $\cos\tilde{\gamma} \approx 1$, we have
$$\left\{\begin{aligned} \Delta X_w &\approx -\tilde{\beta}\tilde{\alpha}\tilde{\gamma}\, X_w - \tilde{\gamma}\, Y_w + (\tilde{\beta} + \tilde{\alpha}\tilde{\gamma})\, Z_w \\ \Delta Y_w &\approx (\tilde{\gamma} + \tilde{\beta}\tilde{\alpha})\, X_w + (\tilde{\beta}\tilde{\gamma} - \tilde{\alpha})\, Z_w \\ \Delta Z_w &\approx -\tilde{\beta}\, X_w + \tilde{\alpha}\, Y_w \end{aligned}\right. \tag{10}$$
From Equation (10), the 3D coordinate error $\Delta X_w$ has a linear relationship with the identical rotation angles of the two cameras.
According to Equation (8), the distance between the two measured points after camera movement can be described as
$$L_w' = \left\| X_{w1}' - X_{w2}' \right\| = \left\| \Delta R_w \left( X_{w1} - X_{w2} \right) \right\| \tag{11}$$
Since $\Delta R_w$ is orthogonal, $\|\Delta R_w (X_{w1} - X_{w2})\| = \|X_{w1} - X_{w2}\|$, and hence $L_w' = L_w$. This indicates that the distance between any two points is unaffected by the identical rotations of the two cameras.
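The small-angle model can be checked against the exact rotation in a few lines. The sketch below compares Equation (8) with the approximation of Equation (10) for a point at roughly the paper's mean working distance of 1174 mm; the 1° rotation value is illustrative, and the sign convention of Equation (9) is assumed.

```python
import numpy as np

def dRw(a, b, g):
    """Equation (9): Rz(g) @ Rx(a) @ Ry(b), angles in radians."""
    Rz = np.array([[np.cos(g), -np.sin(g), 0], [np.sin(g), np.cos(g), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    return Rz @ Rx @ Ry

Xw = np.array([0.0, 0.0, 1174.0])     # point at the mean working distance (mm)
a = np.deg2rad(1.0)                   # 1 degree of identical rotation about X

exact = dRw(a, 0.0, 0.0) @ Xw - Xw                  # Equation (8)
approx = np.array([0.0, -a * Xw[2], a * Xw[1]])     # Equation (10), b = g = 0
print(exact)    # ~[0, -20.49, -0.18] mm
print(approx)   # [0, -20.49, 0] mm
```

The −20.49 mm per degree produced here is the value the linearized model predicts in Section 4.2, close to the −22.1 mm/degree fitted experimentally.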

3.3. Relative Translations

As shown in Figure 4, the relative translation can be regarded as the case in which the left camera is fixed and only the translation of the right camera changes the relative position between the two cameras. The translation vector between the right camera coordinate systems before and after the camera movement is expressed by $\Delta T_{rr'} = [\Delta T_x, \Delta T_y, \Delta T_z]^T$. According to Equation (3), the 3D coordinate error $\Delta X_w$ can be described as
$$\left\{\begin{aligned} \Delta X_w &= x_{nl}\,\Delta Z_w \\ \Delta Y_w &= y_{nl}\,\Delta Z_w \\ \Delta Z_w &= \frac{\Delta T_x - x_{nr}\,\Delta T_z}{x_{nr}\,(r_{31} x_{nl} + r_{32} y_{nl} + r_{33}) - (r_{11} x_{nl} + r_{12} y_{nl} + r_{13})} = \frac{\Delta T_y - y_{nr}\,\Delta T_z}{y_{nr}\,(r_{31} x_{nl} + r_{32} y_{nl} + r_{33}) - (r_{21} x_{nl} + r_{22} y_{nl} + r_{23})} \end{aligned}\right. \tag{12}$$
According to Equation (12), $\Delta X_w$ is linear in the change of the relative translation vector, with coefficients that depend on the normalized image coordinates $(x_{nl}, y_{nl})$ and $(x_{nr}, y_{nr})$ and on the relative rotation matrix $R_{lr}$. Since each point in the common field of view of the two cameras has specific image coordinates, $\Delta X_w$ varies among the measured points. Therefore, the 3D coordinate errors of any two points owing to the relative translation are described as
$$\left\{\begin{aligned} \Delta X_{w1} &= X_{w1}' - X_{w1} \\ \Delta Y_{w1} &= Y_{w1}' - Y_{w1} \\ \Delta Z_{w1} &= Z_{w1}' - Z_{w1} \end{aligned}\right. \qquad \left\{\begin{aligned} \Delta X_{w2} &= X_{w2}' - X_{w2} \\ \Delta Y_{w2} &= Y_{w2}' - Y_{w2} \\ \Delta Z_{w2} &= Z_{w2}' - Z_{w2} \end{aligned}\right. \tag{13}$$
Based on Equation (13), the distance between the two selected points after the camera movement can be expanded with Taylor's formula [39]. Ignoring the second-order terms, the linearized distance error is expressed as follows.
$$\Delta L_w = L_w' - L_w \approx \frac{X_{w1}^0 - X_{w2}^0}{L_w}\,(\Delta X_{w1} - \Delta X_{w2}) + \frac{Y_{w1}^0 - Y_{w2}^0}{L_w}\,(\Delta Y_{w1} - \Delta Y_{w2}) + \frac{Z_{w1}^0 - Z_{w2}^0}{L_w}\,(\Delta Z_{w1} - \Delta Z_{w2}) \tag{14}$$

where the superscript 0 denotes the coordinates before the camera movement.
Combining Equations (12)–(14), we find that the distance measurement error varies linearly with the change in the relative translation vector.
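The linear dependence can also be verified numerically by simulating a drifted right camera whose images follow the true (moved) geometry while the reconstruction keeps the stale calibrated extrinsics. All rig values below are assumed for illustration.

```python
import numpy as np

def reconstruct(xnl, ynl, xnr, R, T):
    """Equation (3): world point from normalized image coordinates."""
    h = np.array([xnl, ynl, 1.0])
    Zw = (T[0] - xnr * T[2]) / (xnr * (R[2] @ h) - R[0] @ h)
    return np.array([xnl * Zw, ynl * Zw, Zw])

# Assumed rig and two object points (mm).
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0, np.sin(theta)], [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
T0 = np.array([-100.0, 0.0, 0.0])
pts = [np.array([0.0, 0.0, 1000.0]), np.array([25.0, 0.0, 1000.0])]

def observe(P, T):
    """Normalized coordinates of P in both views for extrinsics (R, T)."""
    Pr = R @ P + T
    return P[0] / P[2], P[1] / P[2], Pr[0] / Pr[2]

# The system is calibrated with T0, but the right camera drifts by dT:
# the images follow T0 + dT while reconstruction keeps using T0.
for step in range(5):
    dT = np.array([1.0, 0.0, 0.0]) * step            # drift along X (mm)
    rec = [reconstruct(*observe(P, T0 + dT), R, T0) for P in pts]
    dL = np.linalg.norm(rec[0] - rec[1]) - 25.0      # distance error, Eq. (14)
    print(step, rec[0] - pts[0], round(dL, 4))       # errors grow ~linearly
```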

3.4. Relative Rotation

As shown in Figure 5, the relative rotation between the two cameras can be treated as a process in which the left camera coordinate system is fixed while the right camera coordinate system is rotated. The rotation matrix $R_{lr}'$ between the two camera coordinate systems after the camera movement can be expressed as
$$R_{lr}' = R_{lr}\,\Delta R_{rr'}, \quad \text{with } \Delta R_{rr'} = \begin{bmatrix} \Delta r_{11} & \Delta r_{12} & \Delta r_{13} \\ \Delta r_{21} & \Delta r_{22} & \Delta r_{23} \\ \Delta r_{31} & \Delta r_{32} & \Delta r_{33} \end{bmatrix} \tag{15}$$
where $R_{lr}$ denotes the initial rotation matrix and $\Delta R_{rr'}$ represents the transformation matrix between the right camera coordinate systems before and after the camera movement.
According to Equation (3), the 3D coordinate error $\Delta X_w$ introduced by the relative rotation can be expressed as
$$\left\{\begin{aligned} \Delta X_w &= x_{nl}\,\Delta Z_w \\ \Delta Y_w &= y_{nl}\,\Delta Z_w \\ \Delta Z_w &= \frac{T_x - x_{nr} T_z}{x_{nr}\left( x_{nl}\sum_{i=1}^{3} r_{3i}\Delta r_{i1} + y_{nl}\sum_{i=1}^{3} r_{3i}\Delta r_{i2} + \sum_{i=1}^{3} r_{3i}\Delta r_{i3} \right) - \left( x_{nl}\sum_{i=1}^{3} r_{1i}\Delta r_{i1} + y_{nl}\sum_{i=1}^{3} r_{1i}\Delta r_{i2} + \sum_{i=1}^{3} r_{1i}\Delta r_{i3} \right)} \\ &\quad - \frac{T_x - x_{nr} T_z}{x_{nr}\,(r_{31} x_{nl} + r_{32} y_{nl} + r_{33}) - (r_{11} x_{nl} + r_{12} y_{nl} + r_{13})} \end{aligned}\right. \tag{16}$$
Similar to Equation (9), $\Delta R_{rr'}$ can be expressed by three rotation angles $\Delta\alpha$, $\Delta\beta$, and $\Delta\gamma$. If the right camera is subjected to a rotation about only one axis, for instance, $\Delta\alpha \neq 0$, $\Delta\beta = 0$ and $\Delta\gamma = 0$, Equation (16) becomes
$$\left\{\begin{aligned} \Delta X_w &= x_{nl}\,\Delta Z_w \\ \Delta Y_w &= y_{nl}\,\Delta Z_w \\ \Delta Z_w &= \frac{T_x - x_{nr} T_z}{(x_{nr} y_{nl} r_{32} + x_{nr} r_{33} - y_{nl} r_{12} - r_{13})\cos\Delta\alpha + (x_{nr} y_{nl} r_{33} - x_{nr} r_{32} - y_{nl} r_{13} + r_{12})\sin\Delta\alpha + x_{nr} x_{nl} r_{31} - x_{nl} r_{11}} \\ &\quad - \frac{T_x - x_{nr} T_z}{x_{nr}\,(r_{31} x_{nl} + r_{32} y_{nl} + r_{33}) - (r_{11} x_{nl} + r_{12} y_{nl} + r_{13})} \end{aligned}\right. \tag{17}$$
Based on the auxiliary angle formula of trigonometric functions, the denominator of $\Delta Z_w$ in Equation (17) varies with $\Delta\alpha$ along a sine-cosine curve; hence, the 3D coordinate measurement error should follow a sine-cosine trend. Furthermore, the measured distance will vary non-linearly when a relative rotation occurs between the stereo cameras.
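The same machinery visualizes this behavior: the sketch below rotates the right camera about its X-axis by a growing Δα and prints the depth error obtained when reconstructing with the stale calibration. The rig values are again assumed; the error magnitudes depend strongly on the configuration and are far smaller here than in the experiments of Section 4, but the nonlinear trend is visible.

```python
import numpy as np

def Rx(a):
    """Rotation about the X-axis, matching the convention of Equation (9)."""
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def reconstruct(xnl, ynl, xnr, R, T):
    """Equation (3): world point from normalized image coordinates."""
    h = np.array([xnl, ynl, 1.0])
    Zw = (T[0] - xnr * T[2]) / (xnr * (R[2] @ h) - R[0] @ h)
    return np.array([xnl * Zw, ynl * Zw, Zw])

theta = np.deg2rad(5.0)
R0 = np.array([[np.cos(theta), 0, np.sin(theta)], [0, 1, 0],
               [-np.sin(theta), 0, np.cos(theta)]])
T = np.array([-100.0, 0.0, 0.0])
P = np.array([20.0, 15.0, 1000.0])          # object point (mm)

for deg in range(0, 11, 2):
    Rtrue = R0 @ Rx(np.deg2rad(deg))        # Equation (15): R'_lr = R_lr dR
    Pr = Rtrue @ P + T                      # actual right-camera observation
    rec = reconstruct(P[0]/P[2], P[1]/P[2], Pr[0]/Pr[2], R0, T)
    print(deg, round(rec[2] - P[2], 2))     # depth error grows nonlinearly
```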

4. Experiments and Results

To verify the analysis and model described in the previous section, four groups of experiments have been performed.

4.1. Identical Translation Experiment

As shown in Figure 6a, the stereo-vision system in this experiment consisted of two MER-1070-14U3x cameras with image resolutions of 3840 × 2748 pixels and lens focal lengths of 8 mm. The left and right cameras are both mounted directly onto an electronic control platform, which consists of a two-dimensional translation stage and a two-axis rotation stage. The accuracy of the translational motion is 0.03 mm, and the rotation accuracy is 0.05°. A flat checkerboard placed on the opposite side of the stereo-vision system is the measured object; the size of each square is 25 mm × 25 mm. Figure 6b shows the high-precision motion controller that achieves independent translational and rotational movements via electrical step motors. Figure 6c shows the definition of the coordinate system of the rotational stage. The Z-axis of the rotational stage is parallel to the optical axes of the two cameras (i.e., the $C_l Z_{cl}$ and $C_r Z_{cr}$ axes in Figure 1), and the X and Y-axes are parallel to the cameras' CCD imaging planes.
Before performing the translational movement of the two cameras, we employ Zhang's method [39] to calibrate the initial camera parameters ($A_l$, $A_r$, $R_{lr}$ and $T_{lr}$) and the lens distortion parameters (κ1, κ2, κ3, p1, p2). Thereafter, a pair of reference images of the checkerboard is acquired by the left and right cameras simultaneously, as shown in Figure 7a. Subsequently, both cameras are translated together with the movement of the electronic control platform along the X and Z-axes, respectively. The two cameras are moved by 44 mm in increments of 4 mm, and 11 image pairs are taken at 11 different positions by the stereo-vision system. Figure 7a also shows the stereo image pair captured in the state of identical translations of 44 mm along the X-axis. The five distortion parameters are used to compute ideal distortion-free images. After removing the distortions, we adopt a corner detection algorithm to extract the image coordinates of the corner points in each pair of undistorted images; the world coordinates of the corner points in all the image pairs are then calculated through the triangulation algorithm. The difference between the 3D world coordinate $X_w'$ obtained from the image pair at each translational position and $X_w$ obtained from the initial reference image pair is the 3D coordinate error.
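For readers reproducing this pipeline, the steps map naturally onto OpenCV primitives. The sketch below is a hypothetical outline, not the authors' code: the file names, placeholder intrinsics and extrinsics, and the 11 × 8 inner-corner grid are all assumptions.

```python
import cv2
import numpy as np

# Calibrated parameters (placeholders; obtained beforehand with Zhang's
# method, e.g., via cv2.stereoCalibrate on checkerboard views).
Al = Ar = np.array([[2900.0, 0, 1920.0], [0, 2900.0, 1374.0], [0, 0, 1]])
dist_l = dist_r = np.zeros(5)           # OpenCV order: (k1, k2, p1, p2, k3)
R = np.eye(3)                           # relative rotation R_lr (placeholder)
T = np.array([[-100.0], [0.0], [0.0]])  # relative translation T_lr (mm)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed filenames
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect checkerboard corners in both views (assumed 11x8 inner grid).
ok_l, cl = cv2.findChessboardCorners(left, (11, 8))
ok_r, cr = cv2.findChessboardCorners(right, (11, 8))
assert ok_l and ok_r

# 2. Remove lens distortion and normalize: yields (x_nl, y_nl), (x_nr, y_nr).
nl = cv2.undistortPoints(cl, Al, dist_l)
nr = cv2.undistortPoints(cr, Ar, dist_r)

# 3. Triangulate with the calibrated extrinsics (world frame = left camera).
Pl = np.hstack([np.eye(3), np.zeros((3, 1))])   # [I | 0]
Pr = np.hstack([R, T])                          # [R_lr | T_lr]
Xh = cv2.triangulatePoints(Pl, Pr,
                           nl.reshape(-1, 2).T.astype(np.float64),
                           nr.reshape(-1, 2).T.astype(np.float64))
Xw = (Xh[:3] / Xh[3]).T                         # world coordinates of corners
print(Xw[:3])
```

Running this once on the reference pair and once on each moved pair, then differencing the recovered corner coordinates, reproduces the error measure used in the experiments.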
Figure 8a,b shows the average $\Delta X_w = X_w' - X_w$ of all corner points at each translational position while the two cameras are translated along the X and Z-axes, respectively. It can be seen that the 3D coordinate measurement errors in the three directions increase linearly with the increasing translation. In particular, the error in the $X_w$ direction is higher than that in the other two directions when translating the two cameras along the X-axis; in contrast, the error in the $Z_w$ direction is higher than that in the other two directions when translating the two cameras along the Z-axis. In this experiment, the X and Z-axes of the translation stage are parallel to the $X_w$ and $Z_w$ directions of the world coordinate system, respectively. Therefore, the component of the 3D coordinates is most strongly affected in the direction along which the two cameras are translated.
Figure 8c,d shows the comparison between the measured and real distance between the adjacent corner points at each translational position. The results show that the impact of identical translation on the distance measurement can be neglected, which is consistent with the theoretical analysis in Section 3.1.

4.2. Identical Rotation Experiment

The experimental setup for identical rotation of the two cameras is the same as that shown in Figure 6. First, a pair of images of the checkerboard are acquired without camera rotation, as shown in Figure 7b. Then both cameras are rotated together from 0° to 10° in an angular step of 1° around the X-axis and Y-axis, respectively. In each rotation pose, a pair of checkerboard images are taken by the left and right cameras, simultaneously. Figure 7b also shows the stereo image pair captured in the state of identical rotations of 10° around the Y-axis.
Figure 9a shows the mean value of the 3D coordinate errors of all the corner points in each rotation pose while the two cameras are rotated around the X-axis. The 3D coordinate error in the $Y_w$ direction is clearly proportional to the rotation angle, with an error coefficient of −22.1 mm/degree. The error in the $Z_w$ direction experiences a minuscule change, whereas the error in the $X_w$ direction is always zero. This result can be explained based on the analysis given in Section 3.2. The rotation of both cameras around the X-axis implies that $\tilde{\beta} = 0$ and $\tilde{\gamma} = 0$. Using Equation (10), we obtain $\Delta X_w \approx 0$, $\Delta Y_w \approx -\tilde{\alpha} Z_w$, and $\Delta Z_w \approx \tilde{\alpha} Y_w$. Initially, the mean $\bar{Z}_{w0}$ of all the corner points is 1174 mm; expressing $\tilde{\alpha}$ in degrees (1174 × π/180 ≈ 20.49 mm/degree), the mean error of all corner points in the $Y_w$ direction can be approximated as $\Delta \bar{Y}_w \approx -20.49\,\tilde{\alpha}$. Considering the small deviation between the Z-axis of the rotational stage and the $Z_w$ direction of the world coordinate system, the present experimental results verify the theoretical model. When the two cameras are rotated around the Y-axis, similar trends are observed in Figure 9b. We can conclude that the identical rotation of the two cameras in a stereo-vision system produces coordinate errors only in the rotating plane.
Figure 9c,d shows the measured distance between adjacent corner points compared to the real distance of 25 mm. The maximum difference is less than 0.08 mm. Therefore, the effect of identical rotation on the distance measurement is negligible. This result agrees with the theoretical prediction from Section 3.2 that identical rotation produces zero distance error.

4.3. Relative Translation Experiment

Figure 10 shows the experimental setup for the relative movement between the two cameras. The left camera is fixed directly onto the optical isolation table while the right camera is mounted onto the electronic control platform. The position or orientation of the right camera is changed by controlling the translational or rotational stage.
In the experiment for relative translation, the right camera is translated in 4 mm increments along the X and Z-axes, respectively. Eleven image pairs are captured at 11 different positions by the two cameras. Figure 7c shows the stereo image pairs captured in the initial state and the state of relative translation (44 mm) along X-axis.
Figure 11a,b shows the 3D coordinate errors introduced by the relative translation. Two points on the checkerboard (Corners A and B in Figure 9) are selected to analyze the error variation trend. It can be seen that the 3D coordinate error of any single point and the mean error of all corner points in the three directions increase linearly with the increasing translation. Notably, the 3D coordinate errors differ among the measured points on the object. These results effectively verify the theoretical analysis given in Section 3.3.
Figure 11c,d shows the comparison between the measured distance between the two adjacent points and the real value of 25 mm. It can be seen that the distance error varies linearly with the increasing translation, in good agreement with the prediction based on Taylor's formula. As shown in Figure 11, the 3D coordinate and distance errors caused by translation along the X-axis are larger than those caused by translation along the Z-axis (the optical axis).

4.4. Relative Rotation Experiment

The experimental setup for the relative rotation is the same as that shown in Figure 10. First, a pair of images of the checkerboard is acquired before rotating the right camera, as shown in Figure 7d. Thereafter, the left and right cameras simultaneously take a pair of checkerboard images every time the rotation stage is rotated in an angular step of 1° around the X and Y-axes, respectively. Figure 7d shows the stereo image pair captured in the state of relative rotation (21°) around the X-axis.
Figure 12a,b shows the 3D coordinate errors introduced by the relative rotation around the X and Y-axes, respectively. It can be observed that the coordinate error of any single point (A or B) and the mean error of all corner points in the three directions follow the nonlinear sine-cosine trend as the rotation angle increases. These results effectively verify the theoretical analysis given in Section 3.4. Figure 12c,d shows the comparative results of the measured distance between the two adjacent points and the real value of 25 mm. The distance error does not increase monotonically with the rotation angle.
Furthermore, it is evident from Figure 12 that the 3D coordinate and distance errors caused by the relative rotation are much higher than those caused by the identical translation, the identical rotation, and the relative translation. Even for a rotation angle of 10° during image acquisition, the resultant maximum coordinate error is up to 2000 mm, and the distance error reaches 220%. Therefore, the stereo-vision system is highly sensitive to the relative rotation between the two cameras.

5. Conclusions

Under harsh working conditions, perfect stability of the two cameras in a stereo-vision system cannot be achieved, especially in an outdoor environment. Any slight movements of the cameras can significantly influence the 3D measurement accuracy. In this paper, we have investigated the impact of camera movement on 3D measurements in detail. The camera movements are divided into four categories, viz., identical translation, identical rotation, relative translation, and relative rotation. For each case, the calculation model of the 3D coordinate and distance measurement errors was derived. Experimental verification was performed with high-precision translation and rotation stages. The theoretical analysis and experiments yield the following conclusions.
For identical translations, the 3D coordinate measurement error increases linearly with the increasing change in both cameras’ positions while the measurement of the distance between two points remains unaffected.
For identical rotations, the 3D coordinate error is introduced only in the rotating plane, and it is proportional to the rotation angle within 10°. The identical rotation angle of both cameras has no impact on the distance measurement.
For the relative translation, the coordinate error of any measured point in the three directions increases linearly with the increasing translation. In particular, the coordinate error varies among different points on the object. The distance measurement error is proportional to the change in the relative position between the two cameras. Furthermore, the errors of the 3D coordinate and distance caused by the right camera translating along its own optical axis are smaller than those caused by translations along other directions.
For the relative rotation, the relationship between the 3D coordinate error of a point and the rotation angle can be described as a nonlinear, sine-cosine-like trend. The distance measurement error does not increase monotonically with the relative rotation angle. Furthermore, the errors of the 3D coordinate and distance measurements caused by the relative rotation are significantly larger than those caused by the other cases of camera movement. Even for a rotation angle of 10° during image acquisition, the resultant maximum coordinate error is up to 2000 mm, and the distance error reaches 220%. The stereo-vision system is highly sensitive to the relative rotation between the two cameras.
In brief, we have meticulously analyzed the effect of the unavoidable slight movements of the two cameras on the 3D measurement of an object. The ensuing results are recommended as a guideline to correct the effect of the camera movement, while using a stereo-vision system in a noisy or outdoor environment.

Author Contributions

Methodology, Y.L.; validation, Y.L., Z.G., X.S. and Y.Y.; investigation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L., Z.G. and X.G.; visualization, Y.L.; supervision, X.G. and T.S.; funding acquisition, X.G., T.S. and Q.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 11772268, 11602202, 12072279, 12002215); National Key Research and Development Program of China (No. 2019YFC1511102); Natural Science Basic Research Plan in Shaanxi province of China (No. 2018JQ1060).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yu, C.; Chen, X.; Xi, J. Determination of optimal measurement configurations for self-calibrating a robotic visual inspection system with multiple point constraints. Int. J. Adv. Manuf. Technol. 2018, 96, 3365–3375.
2. Luo, Z.; Zhang, K.; Wang, Z.; Zheng, J.; Chen, Y. 3D pose estimation of large and complicated workpieces based on binocular stereo vision. Appl. Opt. 2017, 56, 6822–6836.
3. Wang, F.; Lü, E.; Wang, Y.; Qiu, G.; Lu, H. Efficient Stereo Visual Simultaneous Localization and Mapping for an Autonomous Unmanned Forklift in an Unstructured Warehouse. Appl. Sci. 2020, 10, 698.
4. Zhang, X.; Su, Y.; Gao, Z.; Xu, T.; Ding, X.; Yu, Q.; Zhang, Q. High-accuracy three-dimensional shape measurement of micro solder paste and printed circuits based on digital image correlation. Opt. Eng. 2018, 57, 054101.
5. Shao, X.; Dai, X.; Chen, Z.; Dai, Y.; Dong, S.; He, X. Calibration of stereo-digital image correlation for deformation measurement of large engineering components. Meas. Sci. Technol. 2016, 27, 125010.
6. Zhong, F.; Shao, X.; Quan, C. A comparative study of 3D reconstruction methods in stereo digital image correlation. Opt. Lasers Eng. 2019, 122, 142–150.
7. Guo, X.; Yuan, Y.; Suo, T.; Su, X.; Liu, Y.; Ge, Z. A novel deformation measurement method for ablation materials in combustion and ablation process. Opt. Lasers Eng. 2020, 134, 106255.
8. Su, Z.; Pan, J.; Zhang, S.; Wu, S.; Yu, Q.; Zhang, D. Characterizing dynamic deformation of marine propeller blades with stroboscopic stereo digital image correlation. Mech. Syst. Signal Process. 2022, 162, 108072.
9. Feng, D.; Feng, M.Q. Computer vision for SHM of civil infrastructure: From dynamic response measurement to damage detection—A review. Eng. Struct. 2018, 156, 105–117.
10. Dworakowski, Z.; Kohut, P.; Gallina, A.; Holak, K.; Uhl, T. Vision-based algorithms for damage detection and localization in structural health monitoring. Struct. Control Health Monit. 2016, 23, 35–50.
11. Luo, L.; Feng, M.Q.; Wu, Z.Y. Robust vision sensor for multi-point displacement monitoring of bridges in the field. Eng. Struct. 2018, 163, 255–266.
12. Srivastava, B.; Anvikar, A.R.; Ghosh, S.K.; Mishra, N.; Kumar, N.; Houri-Yafin, A.; Pollak, J.J.; Salpeter, S.J.; Valecha, N. Computer-vision-based technology for fast, accurate and cost effective diagnosis of malaria. Malar. J. 2015, 14, 1–6.
13. Liu, T.; Burner, A.W.; Jones, T.W.; Barrows, D.A. Photogrammetric techniques for aerospace applications. Prog. Aerosp. Sci. 2012, 54, 1–58.
14. Liu, J.; Guo, P.; Sun, X. An Automatic 3D Point Cloud Registration Method Based on Biological Vision. Appl. Sci. 2021, 11, 4538.
15. Chen, L.; Huang, P.; Cai, J.; Meng, Z.; Liu, Z. A non-cooperative target grasping position prediction model for tethered space robot. Aerosp. Sci. Technol. 2016, 58, 571–581.
16. Li, W.; Shan, S.; Liu, H. High-precision method of binocular camera calibration with a distortion model. Appl. Opt. 2017, 56, 2368–2377.
17. Guan, B.; Yu, Y.; Su, A.; Shang, Y.; Yu, Q. Self-calibration approach to stereo cameras with radial distortion based on epipolar constraint. Appl. Opt. 2019, 58, 8511–8521.
18. Li, X.; Wu, Q.; Wang, Y. Binocular vision calibration method for a long-wavelength infrared camera and a visible spectrum camera with different resolutions. Opt. Express 2021, 29, 3855–3872.
19. Satoru, Y.; Hiroki, U. Bridge deflection measurement using digital image correlation with camera movement correction. Mater. Trans. 2012, 53, 285–290.
20. Sohn, W.; Kehtarnavaz, N. Analysis of camera movement errors in vision-based vehicle tracking. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 57–61.
21. Chen, R.; Li, Z.; Zhong, K.; Liu, X.; Wu, Y.; Wang, C.; Shi, Y. A Stereo-Vision System for Measuring the Ram Speed of Steam Hammers in an Environment with a Large Field of View and Strong Vibrations. Sensors 2019, 19, 996.
22. Boden, F.; Lawson, N.; Jentink, H.W.; Kompenhans, J. Advanced In-Flight Measurement Techniques; Springer: Berlin, Germany, 2013; pp. 19–20.
23. Lee, J.J.; Shinozuka, M. Real-Time Displacement Measurement of a Flexible Bridge Using Digital Image Processing Techniques. Exp. Mech. 2006, 46, 105–114.
24. Chang, C.-C.; Xiao, X.H. Three-Dimensional Structural Translation and Rotation Measurement Using Monocular Videogrammetry. J. Eng. Mech. 2010, 136, 840–848.
25. Won, J.; Park, J.-W.; Park, K.; Yoon, H.; Moon, D.-S. Non-Target Structural Displacement Measurement Using Reference Frame-Based Deepflow. Sensors 2019, 19, 2992.
26. Chen, J.G.; Davis, A.; Wadhwa, N.; Durand, F.; Freeman, W.T.; Büyüköztürk, O. Video camera-based vibration measurement for civil infrastructure applications. J. Infrastruct. Syst. 2016, 23, B4016013.
27. Zeinali, Y.; Li, Y.; Rajan, D.; Story, B. Accurate Structural Dynamic Response Monitoring of Multiple Structures using One CCD Camera and a Novel Targets Configuration. In Proceedings of the International Workshop on Structural Health Monitoring, Palo Alto, CA, USA, 12–14 September 2017; pp. 12–14.
28. Yoon, H.; Shin, J.; Spencer, B.F. Structural Displacement Measurement Using an Unmanned Aerial System. Comput. Civ. Infrastruct. Eng. 2018, 33, 183–192.
29. Zhang, X.; Zeinali, Y.; Story, B.A.; Rajan, D. Measurement of Three-Dimensional Structural Displacement Using a Hybrid Inertial Vision-Based System. Sensors 2019, 19, 4083.
30. Kirmse, T. Recalibration of a stereoscopic camera system for in-flight wing deformation measurements. Meas. Sci. Technol. 2016, 27, 054001.
31. Xu, Y.; Zhao, Y.; Wu, F.; Yang, K. Error analysis of calibration parameters estimation for binocular stereo vision system. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 22–23 October 2013; pp. 317–320.
32. Yang, L.; Wang, B.; Zhang, R.; Zhou, H.; Wang, R. Analysis on Location Accuracy for the Binocular Stereo Vision System. IEEE Photon. J. 2018, 10, 1–16.
33. Kim, J.; Jeong, Y.; Lee, H.; Yun, H. Marker-Based Structural Displacement Measurement Models with Camera Movement Error Correction Using Image Matching and Anomaly Detection. Sensors 2020, 20, 5676.
34. Bier, A.; Luchowski, L. Error analysis of stereo calibration and reconstruction. In Proceedings of the International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications, Rocquencourt, France, 4–6 May 2009; pp. 230–241.
35. Reu, P. A Study of the Influence of Calibration Uncertainty on the Global Uncertainty for Digital Image Correlation Using a Monte Carlo Approach. Exp. Mech. 2013, 53, 1661–1680.
36. Wang, Q.; Yin, Y.; Zou, W.; Xu, D. Measurement error analysis of binocular stereo vision: Effective guidelines for bionic eyes. IET Sci. Meas. Technol. 2017, 11, 829–838.
37. Su, Z.; Lu, L.; Dong, S.; Yang, F.; He, X. Auto-calibration and real-time external parameter correction for stereo digital image correlation. Opt. Lasers Eng. 2019, 121, 46–53.
38. Liu, X.; Liu, Z.; Duan, G.; Cheng, J.; Jiang, X.; Tan, J. Precise and robust binocular camera calibration based on multiple constraints. Appl. Opt. 2018, 57, 5130–5140.
39. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
Figure 1. Geometric model of stereo-vision 3D measurement.
Figure 2. The relationship between a pair of camera coordinate systems before and after identical translations in a stereo-vision arrangement.
Figure 3. The identical rotations of two cameras in a stereo-vision system.
Figure 4. Relative translation between two cameras in a stereo-vision system.
Figure 5. Relative rotation between two cameras in a stereo-vision system.
Figure 6. Setup of the identical translation and rotation experiments: (a) stereo-vision system mounted onto the electronic control platform; (b) high-precision motion controller for the electronic control platform; (c) the definition of the rotating axis.
Figure 7. (a) The stereo image pairs captured in the initial state and in the state of identical translations (44 mm) along the X-axis; (b) the stereo image pairs captured in the initial state and in the state of identical rotations (10°) around the Y-axis; (c) the stereo image pairs captured in the initial state and in the state of relative translation (44 mm) along the X-axis; (d) the stereo image pairs captured in the initial state and in the state of relative rotation (21°) around the X-axis.
Figure 8. The 3D coordinate and distance measurement errors introduced by identical translation: (a,b) the average 3D coordinate errors of all corner points at each translational position when the two cameras are translated along the X-axis and Z-axis, respectively; (c,d) the comparison between the measured and real distances between adjacent corner points.
Figure 9. The 3D coordinate and distance measurement errors introduced by identical rotation: (a,b) the average 3D coordinate errors of all corner points in each rotational pose while the two cameras are rotated around the X and Y-axes, respectively; (c,d) the comparison between the measured and real distances between adjacent corner points for different rotational poses.
Figure 10. Experimental setup for relative movement between the two cameras.
Figure 11. The 3D coordinate and distance measurement errors introduced by relative translation: (a,b) impact of relative translation on the 3D coordinate measurement errors when the right camera is translated along the X-axis and Z-axis, respectively; (c,d) impact of relative translation on the distance measurement.
Figure 12. The 3D coordinate and distance measurement errors introduced by relative rotation: (a,b) impact of the relative rotation on the 3D coordinate measurement errors when the right camera is rotated around the X-axis and Y-axis, respectively; (c,d) impact of the relative rotation on the distance measurement.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

