Article

Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras

1 Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China
2 Department of Automation, Tsinghua University, Beijing 100084, China
3 Shenzhen Graduate School, Tsinghua University, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(1), 92; https://doi.org/10.3390/s17010092
Submission received: 2 September 2016 / Revised: 7 December 2016 / Accepted: 9 December 2016 / Published: 5 January 2017
(This article belongs to the Special Issue Imaging: Sensors and Technologies)

Abstract

Time-of-Flight (ToF) cameras, a technology that has developed rapidly in recent years, are 3D imaging sensors that provide a depth image as well as an amplitude image at a high frame rate. Because a ToF camera is limited by its imaging conditions and by the external environment, the captured data always contain certain errors. This paper analyzes the influence of typical external distractions, including material, color, distance and lighting, on the depth error of ToF cameras. Our experiments indicate that factors such as lighting, color, material and distance affect the depth error of ToF cameras in different ways; however, since the forms of these errors are uncertain, it is difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on a Particle Filter-Support Vector Machine (PF-SVM). The experimental results show that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within the full measurement range (0.5–5 m).

1. Introduction

ToF cameras, which have developed rapidly in recent years, are a kind of 3D imaging sensor that provides a depth image as well as an amplitude image at a high frame rate. With their advantages of small size, light weight, compact structure and low power consumption, these sensors have shown great application potential in fields such as navigation of ground robots [1], pose estimation [2], 3D object reconstruction [3], and identification and tracking of human body parts [4]. However, limited by its imaging conditions and influenced by interference from the external environment, the data acquired by a ToF camera contain certain errors, and there is no unified correction method for the non-systematic errors caused by the external environment. Therefore, the different depth errors must be analyzed, modeled and corrected case by case according to their causes.
ToF camera errors can be divided into two categories: systematic errors and non-systematic errors. A systematic error is caused not only by the intrinsic properties of the camera, but also by the imaging conditions of the camera system. The main characteristic of this kind of error is that its form is relatively fixed: such errors can be evaluated in advance and the correction process is relatively convenient. Systematic errors can usually be reduced by calibration [5] and can be divided into five categories.
A non-systematic error is an error caused by the external environment and noise. The characteristic of this kind of error is that its form is random rather than fixed, so it is difficult to establish a unified model to describe and correct such errors. Non-systematic errors are mainly divided into four categories: signal-to-noise ratio errors, multiple light reception, light scattering and motion blurring [5].
Signal-to-noise ratio errors can be removed by low-amplitude filtering [6], or an optimized integration time can be determined for the region of interest by a more complex algorithm [7]. Other approaches generally reduce the impact of noise by averaging the data and checking whether it exceeds a fixed threshold [8,9,10].
Multiple light reception errors mainly exist at surface edges and depressions of the target object. Usually, the errors in surface edges of the target object can be removed by comparing the incidence angle of the adjacent pixels [7,11,12], but there is no efficient solution to remove the errors of depressions in the target object.
Light scattering errors are related only to the position of the target object in the scene; the closer the target object is, the stronger the interference will be [13]. In [14], a filter approach based on amplitude and intensity, built on the choice of an optimum integration time, was proposed. Measurements based on multiple frequencies [15,16] and the ToF encoding method [17] both belong to the modeling category, which can handle the impact of sparse scattering. A direct light and global separation method [18] can resolve mutual scattering and sub-surface scattering among the target objects.
In [19], the authors proposed detecting transverse moving objects by the combination of a color camera and a ToF camera. In [20], transverse and axial motion blurring were solved by an optical flow method and axial motion estimation. In [21], the authors proposed a fuzzy detection method by using a charge quantity relation so as to eliminate motion blurring.
In addition, some error correction methods do not distinguish among error types and correct the depth errors of ToF cameras uniformly. To correct the depth errors of ToF cameras, fusion methods combining a ToF camera with a color camera were proposed in [22,23]. In [24], a 3D depth-frame interpolation and interpolative temporal filtering method was proposed to increase the accuracy of ToF cameras.
Focusing on the non-systematic errors of ToF cameras, this paper starts with an analysis of the impact of varying external distractions, such as material, color, distance and lighting, on the depth errors of ToF cameras. An error modeling method based on PF-SVM, in which a particle filter selects the parameters of an SVM error model, is then proposed, and depth error correction of ToF cameras is realized.
The remainder of the paper is organized as follows: Section 2 introduces the principle and development of ToF cameras. Section 3 analyzes the influence of lighting, material properties, color and distance on the depth errors of ToF cameras through four groups of experiments. In Section 4, a PF-SVM method is adopted to model and correct the depth errors. In Section 5, we present our conclusions and discuss possible future work.

2. Development and Principle of ToF Cameras

In a broad sense, ToF technology is a general term for determining distance by measuring the flight time of light between a sensor and the target object surface. According to the measurement method of the flight time, ToF technology can be classified into pulse/flash, continuous-wave, pseudo-random number and compressed-sensing approaches [25]. Continuous-wave flight-time systems are commonly called ToF cameras.
ToF cameras were first developed at the Stanford Research Institute (SRI) in 1977 [26]. Limited by the detector technology of that time, the technique was not widely used. Fast sampling of the received light did not become practical until the lock-in CCD technique was invented in the 1990s [27]. Then, in 1997, Schwarte at the University of Siegen (Germany) put forward a method of measuring the phases and/or magnitudes of electromagnetic waves based on the lock-in CCD technique [28]. With this technique, his team built the first CCD-based ToF camera prototype [29]. Afterwards, ToF cameras began to develop rapidly. A brief development history is shown in Figure 1.
In Figure 2, the working principle of ToF cameras is illustrated. The signal is modulated on the light source (usually LED) and emitted to the surface of the target object. Then, the phase shift between the emitted and received signals is calculated by measuring the accumulated charge numbers of each pixel on the sensor. Thereby, we can obtain the distance from the ToF camera to the target object.
The received signal is sampled four times per modulation period, at intervals of one quarter of the period. From the four samples φ0, φ1, φ2 and φ3, the phase φ, the offset B and the amplitude A can be calculated as follows:
\varphi = \arctan\left( \frac{\varphi_0 - \varphi_2}{\varphi_1 - \varphi_3} \right), (1)

B = \frac{\varphi_0 + \varphi_1 + \varphi_2 + \varphi_3}{4}, (2)

A = \frac{\sqrt{(\varphi_0 - \varphi_2)^2 + (\varphi_1 - \varphi_3)^2}}{2}, (3)
Distance D can be derived:
D = \frac{1}{2}\left( \frac{c \, \Delta\varphi}{2 \pi f} \right), (4)
where D is the distance from the ToF camera to the target object, c is the speed of light, f is the modulation frequency of the signal and Δφ is the phase difference. More details on the principle of ToF cameras can be found in [5].
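As a brief illustration of this four-sample demodulation, the sketch below evaluates Equations (1)–(4) in Python/NumPy for a single pixel. It is not camera firmware code; the 30 MHz modulation frequency and the sample values are assumptions chosen for the example (30 MHz gives a 5 m unambiguous range, typical for this class of camera).

```python
import numpy as np

def tof_demodulate(phi0, phi1, phi2, phi3, f_mod=30e6, c=3.0e8):
    """Recover phase, offset, amplitude and distance from four samples
    taken at quarter-period intervals (Equations (1)-(4))."""
    phase = np.arctan2(phi0 - phi2, phi1 - phi3)           # phase shift, Eq. (1)
    phase = np.mod(phase, 2.0 * np.pi)                     # keep phase in [0, 2*pi)
    offset = (phi0 + phi1 + phi2 + phi3) / 4.0             # offset B, Eq. (2)
    amplitude = np.hypot(phi0 - phi2, phi1 - phi3) / 2.0   # amplitude A, Eq. (3)
    distance = 0.5 * c * phase / (2.0 * np.pi * f_mod)     # distance D, Eq. (4)
    return phase, offset, amplitude, distance

# Hypothetical sample values for one pixel:
phase, B, A, D = tof_demodulate(1.0, 0.2, 0.2, 1.0)
print(D)  # ~1.875 m; the unambiguous range is c / (2 * f_mod) = 5 m at 30 MHz
```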
The appearance and parameters of several typical commercial ToF cameras on the market are listed in Table 1.

3. Analysis on Depth Errors of ToF Cameras

The external environment usually has a random and uncertain influence on ToF cameras; therefore, it is difficult to establish a unified model to describe and correct such errors. In this section, we take the MESA SR4000 camera (Zurich, Switzerland), a camera with good performance [30] that has been used in error analysis [31,32,33] and position estimation [34,35,36], as an example to analyze the influence of changes in the external environment on the depth error of ToF cameras. The data obtained from these experiments provide references for the correction of depth errors in the next step.

3.1. Influence of Lighting, Color and Distance on Depth Errors

During the measurement process of ToF cameras, the measured objects may have different colors, be at different distances and be under different lighting conditions. The following question then arises: do differences in lighting, distance and color affect the measurement results? To answer this question, we conducted the following experiments.
Indoors, there are several typical lighting conditions: natural light (sunlight), indoor light (lamp light) and no light. This experiment mainly considers the influence of these three lighting conditions on the depth errors of the SR4000. Red, green and blue are the three primary colors from which any color can be composed, white is a common reference color for error measurement [32,37,38], and reflective paper (tin foil) reflects almost all incident light. Therefore, this experiment mainly considers the influence of these five surface conditions on the depth errors of the SR4000.
As the measurement target, a white wall is covered with red, blue, green, white and reflective papers, respectively, as examples of backgrounds with different colors. Since the wall is not completely flat, a laser scanner is used to build a reference model of the wall: we used a Surphaser 25HSX laser scanner (Redmond, WA, USA) to provide the reference values, because its accuracy is relatively high (0.3 mm). The SR4000 camera is set on the right side of the bracket, while the 3D laser scanner is on the left. The bracket is mounted between two tripods and the tripods are placed parallel to the white wall. The distances between the tripods and the wall are measured with two parallel tapes. The experimental scene is arranged as shown in Figure 3.
The distances from the tripods to the wall are set to 5, 4, 3, 2.5, 2, 1.5, 1 and 0.5 m, respectively. At each position, we change the lighting conditions and obtain one frame with the laser scanner and 30 frames with the SR4000 camera. To exclude the influence of the integration time, the integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".
In order to analyze the depth error, the acquired data are processed in MATLAB. Since the target object cannot fill the whole image, we select the central region of 90 × 90 pixels of the SR4000 image for depth error analysis. The distance error is defined as:
h_{i,j} = \frac{\sum_{f=1}^{n} m_{i,j,f}}{n} - r_{i,j}, (5)

g = \frac{\sum_{i=1}^{a} \sum_{j=1}^{b} h_{i,j}}{s}, (6)
where h_{i,j} is the mean error of pixel (i,j), f is the frame index, m_{i,j,f} is the distance measured at pixel (i,j) in frame f, n = 30, r_{i,j} is the real distance, a and b are the numbers of rows and columns of the selected region, respectively, and s is the total number of pixels. The real distance r_{i,j} is provided by the laser scanner.
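The paper states that this evaluation was carried out in MATLAB; as an illustrative sketch only, the Python/NumPy snippet below computes the per-pixel mean error h and the region-averaged error g of Equations (5) and (6) from hypothetical arrays.

```python
import numpy as np

def region_depth_error(measured, reference):
    """measured: (n_frames, a, b) ToF distances for the selected region (m).
    reference: (a, b) ground-truth distances from the laser scanner (m).
    Returns the per-pixel mean error h (Eq. (5)) and the region mean g (Eq. (6))."""
    h = measured.mean(axis=0) - reference   # average over the n frames, minus r_ij
    g = h.mean()                            # average over all s = a * b pixels
    return h, g

# Hypothetical usage with 30 frames of a 90 x 90 central crop:
measured = np.random.normal(2.0, 0.005, size=(30, 90, 90))  # simulated ToF data
reference = np.full((90, 90), 2.0)                          # laser-scanner reference
h, g = region_depth_error(measured, reference)
print(g)  # mean depth error of the region, in metres
```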
Figure 4 shows the effects of different lighting conditions on the depth error of the SR4000. As shown in Figure 4, the depth error of the SR4000 is only slightly affected by the lighting conditions (the maximum effect is 2 mm). The depth error increases approximately linearly with distance, and the measured error values are consistent with the error tests of other Swiss Ranger cameras in [37,38,39,40]. As seen in the figure, the SR4000 is very robust against lighting changes and can adapt to various indoor lighting conditions in applications with lower accuracy requirements.
Figure 5 shows the effects of various colors on the depth error of the SR4000 camera. As shown in Figure 5, the depth error of the SR4000 is affected by the color of the target object and increases roughly linearly with distance. The depth error curve under the reflective condition is quite different from the others: at distances of 1.5–2 m the depth error is much larger, while at 3–5 m it is smaller. At a distance of 5 m, the depth error is 15 mm less than that for the blue background. At a distance of 1.5 m, the depth error for the white background is 5 mm higher than that for the green background.

3.2. Influence of Material on Depth Errors

During the measurement process of ToF cameras, the measured objects may be made of different materials. Does this affect the measurement results? To answer this question, we conducted the following experiments. To analyze the effects of different materials on the depth errors of the SR4000, we chose four common materials: ABS plastic, stainless steel, wood and glass. The tripods are arranged as shown in Figure 3 of Section 3.1, and the targets are four 5-cm-thick boards of the different materials, as shown in Figure 6. The tripods are placed parallel to the targets at a distance of about 1 m, and the experiment is conducted under natural light. To distinguish the boards in the depth image, we leave a certain distance between them. We then acquire one frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".
For the SR4000 and the laser scanner, we select central regions of 120 × 100 pixels and 750 × 750 pixels, respectively. To calculate the mean thickness of the four boards, we also need to measure the distance between the wall and the tripods. The data processing method is described in Section 3.1, and Figure 7 shows the mean errors of the four boards.
As shown in Figure 7, the material affects the depth errors of both the SR4000 and the laser scanner. For the wooden board, the absolute error of the ToF camera is smallest, only 1.5 mm. For the stainless steel board, the absolute error reaches its maximum and the depth error is 13.4 mm, because as the reflectivity of the target surface increases, the number of photons received by the light receiver decreases, which leads to a higher measurement error.

3.3. Influence of a Single Scene on Depth Errors

The following experiments were conducted to determine the influence of a single scene on depth errors. The tripods are placed as shown in Figure 3 of Section 3.1, and the measurement target is a cone, 10 cm in diameter and 15 cm in height, as shown in Figure 8. The tripods are placed parallel to the axis of the cone at a distance of 1 m, and the experiment is conducted under natural light. We acquire one frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".
As shown in Figure 9, we choose one of the 30 consecutive frames for error analysis, extract the point cloud data from the selected frame and compare it with the ideal cone to calculate the error. The color bar on the right side of Figure 9 shows the error distribution, in meters. As shown in Figure 9, the measurement accuracy of the SR4000 is relatively high, with a maximal depth error of 0.06 m. The depth errors of the SR4000 are mainly located at the rear profile of the cone. The deformation of the measured object is small but, compared with the laser scanner, the point cloud data are sparser.

3.4. Influence of a Complex Scene on Depth Errors

The following experiments were conducted to determine the influence of a complex scene on depth errors. The tripods are placed as shown in Figure 3 of Section 3.1 and the measurement target is a complex scene, as shown in Figure 10. The tripods are placed parallel to the wall at a distance of about 1 m, and the experiment is conducted under natural light. We acquire one frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".
We then choose one of the 30 consecutive frames for analysis and, as shown in Figure 11, obtain the point cloud data of the SR4000 and the laser scanner. As shown in Figure 11, the shape of the target object measured by the SR4000 shows a small amount of deformation compared to the laser scanner, especially at the edges of the sensor field of view, where the measured object is clearly curved. In addition, distortion exists at the border of the point cloud data and artifacts appear on the plant.

3.5. Analysis of Depth Errors

From the above four groups of experiments, the depth errors of the SR4000 are weakly affected by lighting conditions (2 mm maximum under otherwise identical conditions). The second factor is the color of the target object, which affects the depth error by at most 5 mm under the same conditions. In contrast, the material has a great influence on the depth errors of ToF cameras: the greater the reflectivity of the measured object's material, the greater the depth error, and the error increases approximately linearly with the distance between the measured object and the ToF camera. In a more complex scene, the depth error of a ToF camera is larger. In summary, lighting, object color, material, distance and complex backgrounds influence the depth errors of ToF cameras in different ways, but it is difficult to summarize this in a unified error law because the forms of these errors are uncertain.

4. Depth Error Correction for ToF Cameras

In the last section, four groups of experiments were conducted to analyze the influence of several external factors on the depth errors of ToF cameras. The results indicate that different factors have different effects on the measurement results, and that it is difficult to establish a unified model to describe and correct such errors. For a complex process that is difficult to model mechanistically, an inevitable choice is to build a model from actual measurable input and output data. Machine learning has proved to be an effective method for establishing non-linear process models: it maps the input space to the output space through a connection model, and the model can approximate a non-linear function with arbitrary precision. The Support Vector Machine (SVM) is a generic learning method developed within the framework of statistical learning theory. It can seek the best compromise between model complexity and learning ability based on limited sample information, so as to obtain the best generalization performance [41,42]. In the remainder of this section, we learn and model the depth errors of ToF cameras using the LS-SVM [43] algorithm.
The recognition performance of the LS-SVM model depends on its parameters: we need to determine the penalty parameter C and the Gaussian kernel parameter γ. Cross-validation [44] is a common selection method, but it suffers from a large computational load and long running times. A particle filter [45] can approximate the probability distribution of the parameters in the parameter state space by spreading a large number of weighted discrete random particles. Based on this, this paper puts forward a parameter selection algorithm that can fit the depth errors of ToF cameras quickly and meet the requirements of error correction. The process of the PF-SVM algorithm is shown in Figure 12.

4.1. PF-SVM Algorithm

4.1.1. LS-SVM Algorithm

According to statistical learning theory, during black-box modeling of a non-linear system, a training set {x_i, y_i}, i = 1,2,…,n is given and a non-linear function f is established to minimize the objective in Equation (8):
f(x) = w^T \varphi(x) + b, (7)

\min_{w,b,\delta} J(w,\delta) = \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} \delta_i^2, (8)
where φ(x) is a non-linear mapping and w is the weight vector. The minimization in Equation (8) is subject to the constraint:
y_i = w^T \varphi(x_i) + b + \delta_i, \quad i = 1, 2, \ldots, n, (9)
where δi ≥ 0 is the relaxation factor, and C > 0 is the penalty parameter.
The Lagrange function L is introduced to solve the optimization problem in Equation (8):
L = \frac{1}{2}\|w\|^2 + \frac{1}{2} C \sum_{i=1}^{n} \delta_i^2 - \sum_{i=1}^{n} \alpha_i \left( \varphi(x_i) \cdot w + b + \delta_i - y_i \right), (10)
where αi is a Lagrange multiplier.
By eliminating w and δ for i = 1,2,…,n, a linear system can be obtained:
\begin{bmatrix} 0 & e^T \\ e & G G^T + C^{-1} I \end{bmatrix}_{(n+1)\times(n+1)} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, (11)
where e is an n-dimensional column vector of ones and I is the n × n identity matrix:
G = \begin{bmatrix} \varphi(x_1)^T & \varphi(x_2)^T & \cdots & \varphi(x_n)^T \end{bmatrix}^T, (12)
According to the Mercer conditions, the kernel function is defined as follows:
K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j), (13)
Substituting Equations (12) and (13) into Equation (11) gives a linear system from which α and b can be determined by the least squares method. We then obtain the non-linear function approximation of the training data set:
y(x) = \sum_{i=1}^{n} \alpha_i K(x, x_i) + b, (14)
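A compact sketch of this LS-SVM regression is given below, assuming a Gaussian kernel (as used later in Section 4.1.2) and solving the linear system of Equation (11) directly; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma):
    """Gaussian kernel k(x, y) = exp(-|x - y|^2 / (2 * gamma^2)), cf. Eq. (15)."""
    d = np.asarray(x1)[:, None] - np.asarray(x2)[None, :]
    return np.exp(-(d ** 2) / (2.0 * gamma ** 2))

def lssvm_fit(x, y, C, gamma):
    """Solve the LS-SVM linear system of Equation (11) for alpha and b."""
    n = len(x)
    K = rbf_kernel(x, x, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                   # e^T
    A[1:, 0] = 1.0                   # e
    A[1:, 1:] = K + np.eye(n) / C    # GG^T + C^{-1} I
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]           # alpha, b

def lssvm_predict(x_new, x_train, alpha, b, gamma):
    """y(x) = sum_i alpha_i K(x, x_i) + b, Equation (14)."""
    return rbf_kernel(np.atleast_1d(x_new), x_train, gamma) @ alpha + b
```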

4.1.2. PF-SVM Algorithm

The depth errors of ToF cameras obtained above are used as the training sample set {x_i, y_i}, i = 1,2,…,n, where x_i is the distance measured by the camera and y_i is the corresponding measurement error. Error correction then becomes a black-box modeling problem for a non-linear system: our goal is to determine the non-linear model f and use it to correct the measurement error.
The error model of ToF cameras obtained via the LS-SVM method is expressed in Equation (14). In order to find a group of optimal parameters for the SVM model that approximates the depth errors over the training sample space, we embed this model in a particle filter algorithm.
In this paper, the kernel function is:
k(x, y) = \exp\left( -\frac{\|x - y\|^2}{2 \gamma^2} \right), (15)
(1) Estimation state.
The estimated parameter state x at time k is represented as:
x_0^j = \left[ C_0^j \;\; \gamma_0^j \right]^T, (16)
where x_0^j is the j-th particle at k = 0, C is the penalty parameter and γ is the Gaussian kernel parameter.
(2) Estimation Model.
The relationship between the parameter state x and the parameters α, b of the non-linear model y(x) can be expressed by the state equation z(α, b):
z(\alpha, b) = F(\gamma, C), (17)
\begin{bmatrix} b \\ \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} = \begin{bmatrix} 0 & 1 & \cdots & 1 \\ 1 & K(x_1, x_1) + \frac{1}{C} & \cdots & K(x_1, x_n) \\ \vdots & \vdots & \ddots & \vdots \\ 1 & K(x_n, x_1) & \cdots & K(x_n, x_n) + \frac{1}{C} \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ y_1 \\ \vdots \\ y_n \end{bmatrix}, (18)
where Equation (18) is a rearrangement of Equation (11).
The relationship between the parameters α, b and the ToF camera error y(x) can be expressed by the observation equation f:
y(x) = f(\alpha, b), (19)

y(x) = \sum_{i=1}^{n} \alpha_i K(x, x_i) + b, (20)
where Equation (20) is the non-linear model derived from LS-SVM algorithm.
(3) Description of observation target.
In this paper, we use yi of training set {xi,yi} as the real description of the observation target, namely the real value of the observation:
z = \{ y_i \}, (21)
(4) Calculation of the observation features and weights of the particles.
For each particle, a characteristic observation is made: the ToF camera error values are calculated from the particle's sample of the parameter state x:
\bar{z}^j(\bar{\alpha}^j, \bar{b}^j) = F(\gamma^j, C^j), (22)

\bar{y}^j(x) = f(\bar{\alpha}^j, \bar{b}^j), (23)
We then compute the similarity between the ToF camera error values predicted by each particle and the observed target values. The similarity measure RMS is defined as follows:
RMS = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( \bar{y}^j(x_i) - y_i \right)^2 }, (24)
where ȳ^j(x_i) is the observation value of particle j at x_i and y_i is the real error value. The weight of each particle is then calculated according to Equation (25):
w^{(j)} = \frac{1}{\sqrt{2 \pi \sigma}} \, e^{-\frac{RMS^2}{2 \sigma}}, (25)
Then the weight values are normalized:
w^j = \frac{w^j}{\sum_{j=1}^{m} w^j}, (26)
(5) Resampling
Resampling of the particles is conducted according to the normalized weights. In this process, the particles with large weights are retained, together with a small fraction of the particles with small weights.
(6) Output of the particle set x_0^j = [C_0^j γ_0^j]^T. This particle set gives the optimal LS-SVM parameters.
(7) The measurement error model of the ToF camera is obtained by substituting these parameters into the LS-SVM model.
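Steps (1)–(7) can be summarized in the following sketch, which reuses the hypothetical lssvm_fit and lssvm_predict helpers from Section 4.1.1. The particle count, iteration count, perturbation scale and σ are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

def pf_svm_select(x, y, n_particles=100, n_iters=20, sigma=0.01, rng=None):
    """Particle-filter search for the LS-SVM parameters (C, gamma).
    Each particle is a state [C, gamma] (Eq. (16)); its weight follows from the
    RMS misfit of the corresponding LS-SVM error model (Eqs. (24)-(26))."""
    rng = np.random.default_rng() if rng is None else rng
    # Step (1): initial particles, log-uniform samples of C and gamma (assumed ranges)
    particles = np.column_stack((10.0 ** rng.uniform(0, 4, n_particles),    # C
                                 10.0 ** rng.uniform(-3, 0, n_particles)))  # gamma
    best_params, best_rms = None, np.inf
    for _ in range(n_iters):
        rms = np.empty(n_particles)
        for j, (C, gamma) in enumerate(particles):
            alpha, b = lssvm_fit(x, y, C, gamma)            # step (2): estimation model
            y_hat = lssvm_predict(x, x, alpha, b, gamma)    # predicted error values
            rms[j] = np.sqrt(np.mean((y_hat - y) ** 2))     # step (4): RMS, Eq. (24)
        if rms.min() < best_rms:                            # remember the best particle
            best_rms, best_params = rms.min(), particles[np.argmin(rms)].copy()
        w = np.exp(-rms ** 2 / (2.0 * sigma)) / np.sqrt(2.0 * np.pi * sigma)  # Eq. (25)
        w /= w.sum()                                        # Eq. (26)
        idx = rng.choice(n_particles, size=n_particles, p=w)  # step (5): resampling
        particles = particles[idx]
        particles *= np.exp(rng.normal(0.0, 0.05, particles.shape))  # keep diversity
    return best_params[0], best_params[1]                   # steps (6)-(7): C, gamma
```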

4.2. Experimental Results

We performed three groups of experiments to verify the effectiveness of the algorithm. In Experiment 1, the depth error model of the ToF camera was built from the experimental data in [32], and the results were compared with the error correction results in the original text. In Experiment 2, the depth error model was built from the data in Section 3.1, and the error correction results under different test conditions were compared. In Experiment 3, the error correction results under different reflectivity and different texture conditions were compared.

4.2.1. Experiment 1

In this experiment, we used the depth error data of the ToF camera obtained from Section 3.2 of [32] as the training sample set. The training set consists of 81 sets of data, where x is the distance measured by the ToF camera and y is its depth error, shown as blue dots in Figure 13. In the figure, the solid green line represents the error model obtained with the polynomial given in [32]. The fit is good when the distance is 1.5–4 m, with a maximum absolute error of 8 mm; however, when the distance is less than 1.5 m or more than 4 m, the error model deviates from the true error values. Using our algorithm, we obtain C = 736 and γ = 0.003. Substituting these two parameters into the above algorithm, we obtain the depth error model of the ToF camera shown by the red solid line in the figure. It can be seen that this error model matches the real errors well.
In order to verify the validity of the error model, we use the ToF camera depth error data obtained from Section 3.3 of [32] as the test sample set (the measurement conditions are the same as in Section 3.2 of [32]). The test sample set consists of 16 sets of data, shown by the blue line in Figure 14. In the figure, the solid green line represents the error model obtained with the polynomial in [32]. The fit is good when the distance is 1.5–4 m, with a maximum absolute error of 8.6 mm; however, when the distance is less than 1.5 m or more than 4 m, the error model deviates from the true error values, which agrees with the fitting behaviour of the error model described above. The correction results obtained with our algorithm are shown by the red solid line in the figure. The error correction is good when the distance is in the range 0.5–4.5 m, and the maximum absolute error is 4.6 mm. Table 2 gives a detailed performance comparison of these two error corrections. From Table 2, we can see that, while expanding the range of error correction, our method also improves the accuracy of the error correction.
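Once the parameters and the error model are available, correction amounts to subtracting the predicted depth error from each raw measurement. The usage sketch below relies on the hypothetical pf_svm_select, lssvm_fit and lssvm_predict helpers above, and the training data are stand-in values, not the measurements reported in this experiment.

```python
import numpy as np

# Stand-in training data: measured distances (m) and corresponding depth errors (m)
x_train = np.linspace(0.5, 5.0, 81)
y_train = 0.002 * x_train + 0.001 * np.sin(3.0 * x_train)

C, gamma = pf_svm_select(x_train, y_train)        # PF-SVM parameter selection
alpha, b = lssvm_fit(x_train, y_train, C, gamma)  # final LS-SVM error model

d_raw = np.array([0.8, 2.3, 4.7])                 # raw ToF distance readings (m)
d_corrected = d_raw - lssvm_predict(d_raw, x_train, alpha, b, gamma)
print(d_corrected)
```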

4.2.2. Experiment 2

The ToF depth error data of Section 3.1 for the blue background are selected as the training sample set. As shown by the blue asterisks in Figure 15, the training set consists of eight sets of data. The error model established by our algorithm is shown by the blue line in Figure 15. The model fits the error data well, but the training sample set should be as rich as possible in order to ensure the accuracy of the model. To verify the applicability of the error model, we use the white, green and red background ToF depth error data as test samples; the corrected data are shown by the black, green and red lines in the figure. It can be seen that, after applying the error model trained on the blue background data, the absolute values of the three groups of residual errors are smaller than the uncorrected errors. The figure also illustrates that this error model is applicable to the error correction of ToF cameras for different colored backgrounds.

4.2.3. Experiment 3

An experimental process similar to that of Section 3.1 was adopted to verify the validity of the error modeling method under different reflectivity and different texture conditions. The sample set, including 91 groups of data, consists of the depth errors obtained by photographing a white wall surface with the ToF camera at different distances, shown by the blue solid line in Figure 16. The error model established by our algorithm is shown by the red solid line in Figure 16; the figure indicates that this model fits the error data well. With a newspaper fixed on the wall as the test target, the depth errors obtained with the ToF camera at different distances are used as test data, shown by the black solid line in Figure 16, while the data corrected by the error model are shown by the green solid line in the same figure. It can be seen that, after applying the distance error model, the absolute values of the residual errors are smaller than the uncorrected errors. The figure also illustrates that this error model is applicable over the full measurement range of ToF cameras.

5. Conclusions

In this paper, we analyzed the influence of typical external distractions, such as the material and color of the target object, distance and lighting, on the depth errors of ToF cameras. Our experiments indicate that lighting, color, material and distance influence the depth errors of ToF cameras in different ways. As the distance increases, the depth errors of ToF cameras grow roughly linearly. To further improve the measurement accuracy of ToF cameras, this paper puts forward an error correction method based on a Particle Filter-Support Vector Machine (PF-SVM), in which the best model parameters are selected with a particle filter algorithm on the basis of learning the depth errors of ToF cameras. The experimental results indicate that this method can reduce the depth error from 8.6 mm to 4.6 mm within the full measurement range (0.5–5 m).

Acknowledgments

This research was supported by National Natural Science Foundation of China (No. 61305112).

Author Contributions

Ying He proposed the idea; Ying He, Bin Liang and Yu Zou conceived and designed the experiments; Ying He, Yu Zou, Jin He and Jun Yang performed the experiments; Ying He, Yu Zou, Jin He and Jun Yang analyzed the data; Ying He wrote the manuscript; and Bin Liang provided the guidance for data analysis and paper writing.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using kinect-style depth cameras for dense 3D modeling of indoor environments. Int. J. Robot. Res. 2012, 31, 647–663. [Google Scholar] [CrossRef]
  2. Brachmann, E.; Krull, A.; Michel, F.; Gumhold, S.; Shotton, J.; Rother, C. Learning 6D Object Pose Estimation Using 3D Object Coordinates; Springer: Heidelberg, Germany, 2014; Volume 53, pp. 151–173. [Google Scholar]
  3. Tong, J.; Zhou, J.; Liu, L.; Pan, Z.; Yan, H. Scanning 3D full human bodies using kinects. IEEE Trans. Vis. Comput. Graph. 2012, 18, 643–650. [Google Scholar] [CrossRef] [PubMed]
  4. Liu, X.; Fujimura, K. Hand gesture recognition using depth data. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, 17–19 May 2004; pp. 529–534.
  5. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef] [Green Version]
  6. Wiedemann, M.; Sauer, M.; Driewer, F.; Schilling, K. Analysis and characterization of the PMD camera for application in mobile robotics. IFAC Proc. Vol. 2008, 41, 13689–13694. [Google Scholar] [CrossRef]
  7. Fuchs, S.; May, S. Calibration and registration for precise surface reconstruction with time of flight cameras. Int. J. Int. Syst. Technol. App. 2008, 5, 274–284. [Google Scholar] [CrossRef]
  8. Guomundsson, S.A.; Aanæs, H.; Larsen, R. Environmental effects on measurement uncertainties of time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 12–13 July 2007; Volumes 1–2, pp. 113–116.
  9. Rapp, H. Experimental and Theoretical Investigation of Correlating ToF-Camera Systems. Master’s Thesis, University of Heidelberg, Heidelberg, Germany, September 2007. [Google Scholar]
  10. Falie, D.; Buzuloiu, V. Noise characteristics of 3D time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 12–13 July 2007; Volumes 1–2, pp. 229–232.
  11. Karel, W.; Dorninger, P.; Pfeifer, N. In situ determination of range camera quality parameters by segmentation. In Proceedings of the VIII International Conference on Optical 3-D Measurement Techniques, Zurich, Switzerland, 9–12 July 2007; pp. 109–116.
  12. Kahlmann, T.; Ingensand, H. Calibration and development for increased accuracy of 3D range imaging cameras. J. Appl. Geodesy 2008, 2, 1–11. [Google Scholar] [CrossRef]
  13. Karel, W. Integrated range camera calibration using image sequences from hand-held operation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 945–952. [Google Scholar]
  14. May, S.; Werner, B.; Surmann, H.; Pervolz, K. 3D time-of-flight cameras for mobile robotics. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; Volumes 1–12, pp. 790–795.
  15. Kirmani, A.; Benedetti, A.; Chou, P.A. Spumic: Simultaneous phase unwrapping and multipath interference cancellation in time-of-flight cameras using spectral methods. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; pp. 1–6.
  16. Freedman, D.; Krupka, E.; Smolin, Y.; Leichter, I.; Schmidt, M. Sra: Fast removal of general multipath for ToF sensors. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014.
  17. Kadambi, A.; Whyte, R.; Bhandari, A.; Streeter, L.; Barsi, C.; Dorrington, A.; Raskar, R. Coded time of flight cameras: Sparse deconvolution to address multipath interference and recover time profiles. ACM Trans. Graph. 2013, 32, 167. [Google Scholar] [CrossRef]
  18. Whyte, R.; Streeter, L.; Gree, M.J.; Dorrington, A.A. Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods. Opt. Eng. 2015, 54, 113109. [Google Scholar] [CrossRef] [Green Version]
  19. Lottner, O.; Sluiter, A.; Hartmann, K.; Weihs, W. Movement artefacts in range images of time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 13–14 July 2007; Volumes 1–2, pp. 117–120.
  20. Lindner, M.; Kolb, A. Compensation of motion artifacts for time-of flight cameras. In Dynamic 3D Imaging; Springer: Heidelberg, Germany, 2009; Volume 5742, pp. 16–27. [Google Scholar]
  21. Lee, S.; Kang, B.; Kim, J.D.K.; Kim, C.Y. Motion Blur-free time-of-flight range sensor. Proc. SPIE 2012, 8298, 105–118. [Google Scholar]
  22. Lee, C.; Kim, S.Y.; Kwon, Y.M. Depth error compensation for camera fusion system. Opt. Eng. 2013, 52, 55–68. [Google Scholar] [CrossRef]
  23. Kuznetsova, A.; Rosenhahn, B. On calibration of a low-cost time-of-flight camera. In Proceedings of the Workshop at the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Lecture Notes in Computer Science. Volume 8925, pp. 415–427.
  24. Lee, S. Time-of-flight depth camera accuracy enhancement. Opt. Eng. 2012, 51, 527–529. [Google Scholar] [CrossRef]
  25. Christian, J.A.; Cryan, S. A survey of LIDAR technology and its use in spacecraft relative navigation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Boston, MA, USA, 19–22 August 2013; pp. 1–7.
  26. Nitzan, D.; Brain, A.E.; Duda, R.O. Measurement and use of registered reflectance and range data in scene analysis. Proc. IEEE 1977, 65, 206–220. [Google Scholar] [CrossRef]
  27. Spirig, T.; Seitz, P.; Vietze, O. The lock-in CCD 2-dimensional synchronous detection of light. IEEE J. Quantum Electron. 1995, 31, 1705–1708. [Google Scholar] [CrossRef]
  28. Schwarte, R. Verfahren und vorrichtung zur bestimmung der phasen-und/oder amplitude information einer elektromagnetischen Welle. DE Patent 19,704,496, 12 March 1998. [Google Scholar]
  29. Lange, R.; Seitz, P.; Biber, A.; Schwarte, R. Time-of-flight range imaging with a custom solid-state image sensor. Laser Metrol. Inspect. 1999, 3823, 180–191. [Google Scholar]
  30. Piatti, D.; Rinaudo, F. SR-4000 and CamCube3.0 time of flight (ToF) cameras: Tests and comparison. Remote Sens. 2012, 4, 1069–1089. [Google Scholar] [CrossRef]
  31. Chiabrando, F.; Piatti, D.; Rinaudo, F. SR-4000 ToF camera: Further experimental tests and first applications to metric surveys. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 149–154. [Google Scholar]
  32. Chiabrando, F.; Chiabrando, R.; Piatti, D. Sensors for 3D imaging: Metric evaluation and calibration of a CCD/CMOS time-of-flight camera. Sensors 2009, 9, 10080–10096. [Google Scholar] [CrossRef] [PubMed]
  33. Charleston, S.A.; Dorrington, A.A.; Streeter, L.; Cree, M.J. Extracting the MESA SR4000 calibrations. In Proceedings of the Videometrics, Range Imaging, and Applications XIII, Munich, Germany, 22–25 June 2015; Volume 9528.
  34. Ye, C.; Bruch, M. A visual odometry method based on the SwissRanger SR4000. Proc. SPIE 2010, 7692, 76921I. [Google Scholar]
  35. Hong, S.; Ye, C.; Bruch, M.; Halterman, R. Performance evaluation of a pose estimation method based on the SwissRanger SR4000. In Proceedings of the IEEE International Conference on Mechatronics and Automation, Chengdu, China, 5–8 August 2012; pp. 499–504.
  36. Lahamy, H.; Lichti, D.; Ahmed, T.; Ferber, R.; Hettinga, B.; Chan, T. Marker-less human motion analysis using multiple Sr4000 range cameras. In Proceedings of the 13th International Symposium on 3D Analysis of Human Movement, Lausanne, Switzerland, 14–17 July 2014.
  37. Kahlmann, T.; Remondino, F.; Ingensand, H. Calibration for increased accuracy of the range imaging camera SwissrangerTM. In Proceedings of the ISPRS Commission V Symposium Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006; pp. 136–141.
  38. Weyer, C.A.; Bae, K.; Lim, K.; Lichti, D. Extensive metric performance evaluation of a 3D range camera. Int. Soc. Photogramm. Remote Sens. 2008, 37, 939–944. [Google Scholar]
  39. Mure-Dubois, J.; Hugli, H. Real-Time scattering compensation for time-of-flight camera. In Proceedings of the ICVS Workshop on Camera Calibration Methods for Computer Vision Systems, Bielefeld, Germany, 21–24 March 2007.
  40. Kavli, T.; Kirkhus, T.; Thielmann, J.; Jagielski, B. Modeling and compensating measurement errors caused by scattering time-of-flight cameras. In Proceedings of the SPIE, Two-and Three-Dimensional Methods for Inspection and Metrology VI, San Diego, CA, USA, 10 August 2008.
  41. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995. [Google Scholar]
  42. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar]
  43. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  44. Zhang, J.; Wang, S. A fast leave-one-out cross-validation for SVM-like family. Neural Comput. Appl. 2016, 27, 1717–1730. [Google Scholar] [CrossRef]
  45. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aerosp. Electron. Syst. Mag. 2010, 25, 53–82. [Google Scholar] [CrossRef]
Figure 1. Development history of ToF cameras.
Figure 2. Principle of ToF cameras.
Figure 3. Experimental scene. (a) Experimental scene; (b) Camera bracket.
Figure 4. Influence of lighting on depth errors.
Figure 5. Influence of color on depth errors.
Figure 6. Four boards made of different materials.
Figure 7. Depth data of two sensors.
Figure 8. The measured cone.
Figure 9. Measurement errors of the cone.
Figure 10. Complex scene.
Figure 11. Depth images based on the point cloud of depth sensors.
Figure 12. Process of PF-SVM algorithm.
Figure 13. Depth error and error model.
Figure 14. Depth error correction results.
Figure 15. Depth error correction results of various colors and error model.
Figure 16. Depth error correction results and error model.
Table 1. Parameters of typical commercial ToF cameras.

| ToF Camera | Maximum Resolution of Depth Images | Maximum Frame Rate/fps | Measurement Range/m | Field of View/° | Accuracy | Weight/g | Power/W (Typical/Maximum) |
|---|---|---|---|---|---|---|---|
| MESA-SR4000 | 176 × 144 | 50 | 0.1–5 | 69 × 55 | ±1 cm | 470 | 9.6/24 |
| Microsoft-Kinect II | 512 × 424 | 30 | 0.5–4.5 | 70 × 60 | ±3 cm@2 m | 550 | 16/32 |
| PMD-Camcube 3.0 | 200 × 200 | 15 | 0.3–7.5 | 40 × 40 | ±3 mm@4 m | 1438 | - |
Table 2. Analysis of depth error correction results.

| Comparison Items | Maximal Error/mm (1.5–4 m) | Maximal Error/mm (0.5–4.5 m) | Average Error/mm (1.5–4 m) | Average Error/mm (0.5–4.5 m) | Variance/mm (1.5–4 m) | Variance/mm (0.5–4.5 m) | Optimal Range/m | Running Time/s |
|---|---|---|---|---|---|---|---|---|
| This paper's algorithm | 4.6 | 4.6 | 1.99 | 2.19 | 2.92 | 2.4518 | 0.5–4.5 | 2 |
| Reference [32] algorithm | 4.6 | 8.6 | 2.14 | 4.375 | 5.342 | 9.414 | 1.5–4 | - |
