Article

Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

Jong Kwang Lee, Kiho Kim, Yongseok Lee and Taikyeong Jeong

1 Fuel Cycle System Engineering Technology Development Division, Korea Atomic Energy Research Institute, Daejeon, 305-353, Korea
2 Department of Electronic Engineering, Myongji University, Yongin-city, 449-728, Korea
* Author to whom correspondence should be addressed.
Sensors 2011, 11(9), 8751-8768; https://doi.org/10.3390/s110908751
Submission received: 18 August 2011 / Revised: 5 September 2011 / Accepted: 6 September 2011 / Published: 9 September 2011
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, we propose the simultaneous identification of the intrinsic and extrinsic parameters of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system; it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are obtained simultaneously based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method converges to a stable solution without requiring good initial guesses of the system parameters, and it can be applied to a kinematically dissimilar robot system without loss of generality.

1. Introduction

To measure the range data of objects in an unknown working environment, range sensing devices have been widely applied to various robotic applications [1–6]. In these studies, the range sensing device is usually installed on the robot hand and is equipped with various sensors such as cameras, laser-vision sensors, and/or sonars. Since parameter identification, also referred to as calibration, is crucial to system accuracy, it is an important step before performing any measurement task. For a robotic measurement system that integrates a robot with range sensing devices, four different calibration procedures should generally be performed: sensor calibration, hand-to-sensor calibration, robot calibration, and base calibration [7]. In this work, we concentrate on the first two by assuming that the positions of the calibration points are known exactly and that the geometric link parameter errors of the robot manipulator are negligible.

A laser-vision sensor (LVS), consisting of a CCD camera and a laser stripe projector, has been frequently used as an active ranging device [1–9] and as a feature detection sensor [10]. It is mathematically modeled based on the optical triangulation principle [2] or on a conversion matrix [3] that defines the geometric relationship between the slit beam coordinates and their corresponding image coordinates. The LVS model has two parameter sets to be identified: the intrinsic parameters and the extrinsic parameters. For a camera, the intrinsic parameters model the internal geometry and optical characteristics of the image sensor, which determine how light is projected through the lens onto the image plane; they consist of the focal length, the lens distortion coefficients, the optical center, and the magnification coefficient of the CCD cell [11]. As an extension, the intrinsic parameters of the LVS comprise the intrinsic camera parameters as well as the laser stripe generator parameters, namely a baseline distance and a projection angle with respect to the camera coordinate frame. The extrinsic parameters of the LVS describe the position and orientation of the camera with respect to the robot hand coordinate system.

Most approaches to the calibration of a hand-mounted LVS have used a multi-stage technique [1–6]; that is, the camera and laser parameter calibration stages are performed separately. The extrinsic parameters of the omnidirectional laser-vision sensor used in a free-ranging robot were identified after solving for the intrinsic parameters with an existing camera calibration method [3]. Conversely, the extrinsic parameters were identified first to estimate the orientation and position of a camera with respect to a laser range finder, and the camera intrinsic calibration was performed afterwards [5]. The multi-stage technique, however, is known to suffer from drawbacks such as error propagation.

Recently, online re-calibration of an LVS mounted on a Cartesian carriage has been proposed to achieve good resolution and to avoid occlusions when the sensor geometry is modified online [8,9]. In this method, the sensor parameters are determined by using a Bézier network without any calibration reference. In a robotic measurement system, the position and orientation of the sensor can be changed by manipulating the robot arm to avoid occlusion problems. Another approach is self-calibration, which aims to ease practical implementation by introducing physical constraints such as a fixed point, a straight line, a circle, or a sphere [1,4]. These works used a commercial LVS, so the intrinsic parameters of the LVS were assumed to be known a priori. However, as analyzed in this paper, even a small inexactness in the model parameters can have a considerable effect on the measurement errors.

This paper is organized as follows. In Section 2, we describe the hand-mounted laser-vision sensor model and analyze its measurement range and resolution. The parameter identification based on particle swarm optimization is proposed in Section 3, and its performance is compared with that of a conventional nonlinear least-squares optimizer in Section 4. Section 5 presents the experimental results, and conclusions are given in the final section.

2. Hand-Mounted Laser-Vision Sensor Model

Figure 1 shows a schematic model for the hand-mounted laser-vision sensor (HMLVS). A 3D point P(x, y, z) in the camera coordinate system is transformed into an undistorted image coordinate (Xu, Yu) by using a perspective projection with a pinhole camera geometry. Since the pinhole model is only an approximation of the real camera projection, a nonlinear lens distortion is considered to improve the measurement accuracy [2,11,12]. The distorted or true image coordinate (Xd, Yd) is corrected by using the following equation:

$$X_u = f\,\frac{x}{z} = \frac{X_d}{1 + \sum_{i=1} k_i r^{2i}}, \qquad Y_u = f\,\frac{y}{z} = \frac{Y_d}{1 + \sum_{i=1} k_i r^{2i}} \qquad (1)$$

where $r^2 = X_d^2 + Y_d^2$, $f$ is the effective focal length of the camera, and $k_i$ are the coefficients of the radial lens distortion series. Since sufficient accuracy can be achieved with a first-order distortion, we neglect the higher-order coefficients and use $k = k_1$. The coordinate (Xu, Yu) on the image plane is transformed to a 2D image pixel (Xf, Yf) in the computer frame memory by using the magnification coefficients (Su, Sv) and the center of the computer frame memory (Cu, Cv) as:
$$X_f = \frac{X_u - C_u}{S_u}, \qquad Y_f = \frac{Y_u - C_v}{S_v} \qquad (2)$$
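For concreteness, the first-order form of Eq. (1) can be applied directly to a distorted image point. A minimal sketch follows (Python is used here and in the later sketches purely for illustration; the authors' own implementation was in C++):

```python
def undistort(Xd, Yd, k):
    # First-order truncation of Eq. (1): (Xd, Yd) -> (Xu, Yu) with k = k1.
    r2 = Xd**2 + Yd**2
    return Xd / (1.0 + k * r2), Yd / (1.0 + k * r2)
```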

Next, the 3D position of a point is computed through the optical triangulation principle. As shown in Figure 1, the laser stripe generator emits a plane of light at an angle θ relative to the Z_C axis. The point P(x, y, z) on the object surface is projected onto the digitized image at the pixel (X_f, Y_f), as governed by the effective focal length f of the lens and the baseline distance H. Accordingly, the LVS can obtain 3D information in the camera coordinate system by measuring the image pixel coordinate (X_f, Y_f) corresponding to the 3D coordinates P(x, y, z) of the illuminated laser point as:

$${}^{C}P = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \rho \begin{bmatrix} X_f \\ Y_f \\ f \end{bmatrix} = \rho U \qquad (3)$$

where $\rho = \dfrac{H}{Y_f + f\tan\theta}$.
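As a quick illustration of Eq. (3), the sketch below maps an image point to a camera-frame 3D point. The numeric values in the example call are illustrative only: H = 100 mm and θ = 25° come from the design discussion later in this section, and f = 8 mm is the nominal value in Table 2.

```python
import numpy as np

def triangulate(Xf, Yf, f, H, theta):
    # Eq. (3): map an image point (Xf, Yf) to a 3D point in the camera frame.
    rho = H / (Yf + f * np.tan(theta))   # scale factor rho = H / (Yf + f tan(theta))
    return rho * np.array([Xf, Yf, f])   # cP = rho * U

# Illustrative call with assumed values:
cP = triangulate(Xf=12.0, Yf=3.5, f=8.0, H=100.0, theta=np.radians(25.0))
```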

The intrinsic parameters of the LVS model include the intrinsic camera parameters, {f, Su, Sv, Cu, Cv, k}, as well as the mounting parameters of the laser stripe generator with respect to the camera coordinate frame, {H, θ}. Because the LVS is installed on the last link of the robot manipulator, additional extrinsic parameters of the LVS, which define the position and the orientation of the camera frame with respect to the robot hand frame, should be considered.

The kinematics of a robot manipulator can be modeled by using the Denavit-Hartenberg convention. Let ${}^{B}_{H}T$ be the 4 × 4 homogeneous transformation matrix between the base frame and the hand frame of a robot manipulator with n degrees of freedom, that is:

$${}^{B}_{H}T = {}^{0}_{1}T\,{}^{1}_{2}T \cdots {}^{n-1}_{n}T = \begin{bmatrix} R_N & P_N \\ 0 & 1 \end{bmatrix} \qquad (4)$$
where ${}^{i}_{i+1}T$ is the homogeneous transformation matrix between two consecutive coordinate frames i and i + 1. If we denote the homogeneous transformation matrix between the hand frame and the camera frame as ${}^{H}_{C}T$:

$${}^{H}_{C}T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} R_C & P_C \\ 0 & 1 \end{bmatrix} \qquad (5)$$
where $t_x$, $t_y$, and $t_z$ denote the position of the camera frame relative to the hand frame, and the elements $r_{ij}$ of the rotation matrix $R_C$ can be represented as a function of the Euler angle rotations $R_x(\omega)$, $R_y(\phi)$, $R_z(\varphi)$ as:
$$R_C = \begin{bmatrix}
\cos\omega\cos\phi\cos\varphi - \sin\omega\sin\varphi & -\cos\omega\cos\phi\sin\varphi - \sin\omega\cos\varphi & \cos\omega\sin\phi \\
\sin\omega\cos\phi\cos\varphi + \cos\omega\sin\varphi & -\sin\omega\cos\phi\sin\varphi + \cos\omega\cos\varphi & \sin\omega\sin\phi \\
-\sin\phi\cos\varphi & \sin\phi\sin\varphi & \cos\phi
\end{bmatrix} \qquad (6)$$

Since the transformation matrix between the robot base frame and the camera frame, ${}^{B}_{C}T$, can be represented as:

$${}^{B}_{C}T = {}^{B}_{H}T\,{}^{H}_{C}T \qquad (7)$$
the position vector of the laser beam reflected by the object surface, BP, is represented relative to the robot base coordinate frame by using Equations (3) and (7) as:
$$\begin{bmatrix} {}^{B}P \\ 1 \end{bmatrix} = {}^{B}_{C}T \begin{bmatrix} {}^{C}P \\ 1 \end{bmatrix} = \begin{bmatrix} R_N & P_N \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_C & P_C \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \rho U \\ 1 \end{bmatrix} \qquad (8)$$
where CP is the position vector of the reflected laser beam in the camera coordinate system. Therefore, the position of the reflected light with respect to the robot base frame (as shown in Figure 2) is calculated by using the following system model:
$${}^{B}P = \rho R_N R_C U + R_N P_C + P_N \qquad (9)$$
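Equations (6) and (9) translate directly into code. A minimal sketch follows; the sign pattern in euler_to_R matches the reconstruction of Eq. (6) above, which corresponds to a ZYZ Euler-angle composition (an assumption based on that sign pattern):

```python
import numpy as np

def euler_to_R(w, phi, psi):
    # Rotation matrix of Eq. (6); sign pattern assumes ZYZ Euler angles.
    cw, sw = np.cos(w), np.sin(w)
    cp, sp = np.cos(phi), np.sin(phi)
    cs, ss = np.cos(psi), np.sin(psi)
    return np.array([
        [cw*cp*cs - sw*ss, -cw*cp*ss - sw*cs, cw*sp],
        [sw*cp*cs + cw*ss, -sw*cp*ss + cw*cs, sw*sp],
        [-sp*cs,            sp*ss,            cp  ],
    ])

def base_frame_point(rho, U, R_N, P_N, R_C, P_C):
    # Eq. (9): reflected laser point expressed in the robot base frame.
    return rho * R_N @ R_C @ U + R_N @ P_C + P_N
```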

The baseline distance (H) and the projection angle (θ) affect the measurement range and resolution of the laser-vision sensor. Here, the resolution is defined as the displacement in 3D real space per pixel in the image plane. To investigate these characteristics, we consider the geometric relation shown in Figure 3, where a reference plane moves parallel to the image plane along the Z_c axis. A laser line illuminated horizontally on the reference plane shifts vertically as the distance between the image plane and the reference plane varies. According to Equation (3), the baseline distance acts as a scale factor for the 3D real coordinates of the illuminated laser line. Therefore, as H increases, the measurement range increases at the cost of resolution, as shown in Figure 4. For a horizontally illuminated laser line on the reference plane, the resolution along the x_c axis, Δx_c/ΔX_f, is constant since ΔY_f/ΔX_f is zero. As the reference plane approaches the image plane, the illuminated laser line moves vertically upward along the V axis of the image plane. In this case, the measurement range decreases while the sensor resolution increases, as shown in Figures 4 and 5.

The baseline distance and the projection angle are design parameters of the laser-vision sensor and should be selected based on the requirements of the target application. For a sensor with a resolution of less than 1 mm/pixel, we could choose a baseline distance of 100 mm and a projection angle of 25 degrees. In this case, the measurement ranges along x_c, y_c, and z_c are 155.0 mm, 122.3 mm, and 262.3 mm, respectively. To further increase the sensor resolution with the same configuration, the measurement range must be decreased. This can be achieved by moving the sensor closer to the object with the robot arm; the permissible distance can be monitored by checking the position of the laser line along the V axis of the image plane.
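The depth resolution can be read off Eq. (3) directly: since $z = \rho f = Hf/(Y_f + f\tan\theta)$, differentiating with respect to $Y_f$ gives the depth change per unit image displacement. A small sketch of this back-of-the-envelope calculation, using the H = 100 mm, θ = 25° design point above (f = 8 mm is assumed from the nominal value in Table 2):

```python
import numpy as np

def z_resolution(Yf, f, H, theta):
    # |dz/dYf| from z = H*f / (Yf + f*tan(theta)), per Eq. (3):
    # the depth change per unit image displacement along Yf.
    return H * f / (Yf + f * np.tan(theta)) ** 2

print(z_resolution(Yf=2.0, f=8.0, H=100.0, theta=np.radians(25.0)))
```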

To investigate the effect of inexactness in the baseline distance and the projection angle on the measurement errors, we carried out simulations. We assumed 1% inexactness of the baseline distance and the projection angle: the baseline distance is set to 100 mm while its real value is 99 mm, and the projection angle is set to 25 degrees while its real value is 22.5 degrees. The measurement errors are shown in Figure 6. It is important to note that the errors arising from this inexactness are larger than the sensor resolution shown in Figures 4 and 5. Moreover, since the baseline distance and the projection angle are measured with respect to the camera coordinate frame, they can only be determined after the origin of the camera coordinates has been obtained. In addition, mechanical errors arising from manufacturing and assembly should be considered for better accuracy. They should therefore be treated as unknown parameters.

3. Parameter Identification

3.1. Objective Function

As a first step to identify the model parameters, we should define an objective function to be optimized. Let q be a vector consisting of the unknown intrinsic and extrinsic parameters of the HMLVS, that is:

$$q = [f, S_u, S_v, C_u, C_v, k, H, \theta, \omega, \phi, \varphi, t_x, t_y, t_z]^T \qquad (10)$$

For a notational convenience, we rewrite q as:

$$q = [q_1, q_2, \ldots, q_n]^T \qquad (11)$$
where n is the number of unknown parameters (in this case, 14).

The search bounds on the parameters are set as:

$$q_i \in [q_i^L, q_i^U] \qquad (12)$$

where $q_i^L$ and $q_i^U$ denote the lower and upper bounds of $q_i$, respectively. Any reasonable interval that covers the possible parameter values may be chosen as the bound of $q_i$.

In this work, we estimated a best-fit parameter vector q* by minimizing the summed squared error of m nonlinear functions:

$$q^* = \arg\min_q F(q) = \frac{1}{2}\sum_{i=1}^{m} \left(f_i(q)\right)^2 \qquad (13)$$

where $f_i(q) = \|E_i\|_2$ is the Euclidean norm of the error vector $E$, which is given by:

$$E = P - \rho R_N R_C U - R_N P_C - P_N \qquad (14)$$
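To make Eqs. (13)–(14) concrete, a minimal sketch follows; the layout of `data` is a hypothetical choice, and euler_to_R is reused from the sketch after Eq. (9). The pixel corrections of Eqs. (1)–(2) are omitted for brevity, so $S_u$, $S_v$, $C_u$, $C_v$, and $k$ are not exercised here:

```python
import numpy as np

def objective(q, data):
    # Summed squared error F(q) of Eq. (13). `data` is a list of
    # (P, Xf, Yf, R_N, P_N) tuples: reference 3D point, measured image
    # point, and robot hand pose for each calibration measurement.
    f_cam, H, theta = q[0], q[6], q[7]
    R_C = euler_to_R(q[8], q[9], q[10])    # from the sketch after Eq. (9)
    P_C = np.asarray(q[11:14])
    total = 0.0
    for P, Xf, Yf, R_N, P_N in data:
        rho = H / (Yf + f_cam * np.tan(theta))          # Eq. (3)
        U = np.array([Xf, Yf, f_cam])
        E = P - rho * R_N @ R_C @ U - R_N @ P_C - P_N   # Eq. (14)
        total += 0.5 * float(E @ E)
    return total
```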

3.2. Optimization Techniques

We applied two optimization techniques: nonlinear least-squares optimization (NLSO) and particle swarm optimization (PSO); we introduce them briefly in the following paragraphs. NLSO is a popular approach for finding the minimum of a multivariate function expressed as a sum of squares of nonlinear functions. Among the various NLSO algorithms, we used the Levenberg-Marquardt algorithm (LMA), also referred to as the damped Gauss-Newton method [13]. The LMA starts with an initial candidate solution. Given a current solution vector $q_k$, the LMA generates the next solution vector $q_{k+1}$ by using the following equation:

$$q_{k+1} = q_k + \Delta q_k \qquad (15)$$
where the vector of adjustments for the unknowns, $\Delta q_k$, is computed as:

$$\Delta q_k = -\left[\nabla^2 f(q_k)\right]^{-1} \nabla f(q_k) \qquad (16)$$

This process is repeated until $F(q_k)$ or $\Delta q_k$ becomes sufficiently small, or until a maximum number of iterations is completed. In the LMA, the Hessian matrix is approximated as:

$$\nabla^2 f(q_k) = J_k^T J_k + \lambda_k I \qquad (17)$$
and the gradient is computed as:
$$\nabla f(q_k) = J_k^T f(q_k) \qquad (18)$$
where $J_k$ is the Jacobian matrix containing the first derivatives of the error vectors. The damping parameter $\lambda_k$ is a positive coefficient with several effects: when the current solution is far from the correct one, a large damping parameter is chosen and the procedure tends toward the slowly convergent steepest-descent method; when the current solution is close to the correct one, the damping parameter decreases and the LMA behaves like a Newton method. In this work, we used the public-domain MINPACK package [14], slightly modified to calculate the Jacobian matrix by a forward-difference approximation.
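The setup above uses a modified C MINPACK; purely as an illustration, SciPy's least_squares exposes the same MINPACK Levenberg-Marquardt core (method='lm', which likewise defaults to a forward-difference Jacobian). The sketch below reuses euler_to_R from the sketch after Eq. (9) and fabricates a small synthetic data set in the spirit of Section 4; the fixed identity hand pose and the omission of the pixel corrections of Eqs. (1)–(2) are simplifications, so only part of q is actually exercised:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(q, data):
    # Stacked error vectors E of Eq. (14), one 3-vector per measurement.
    f_cam, H, theta = q[0], q[6], q[7]
    R_C = euler_to_R(q[8], q[9], q[10])
    P_C = np.asarray(q[11:14])
    errs = []
    for P, Xf, Yf, R_N, P_N in data:
        rho = H / (Yf + f_cam * np.tan(theta))
        U = np.array([Xf, Yf, f_cam])
        errs.append(P - rho * R_N @ R_C @ U - R_N @ P_C - P_N)
    return np.concatenate(errs)

# Nominal parameters taken from Table 2.
q0 = np.array([8, 0.0074, 0.0074, 320, 240, 0, 140, 5 * np.pi / 36,
               0, 0, 0, 60, 0, -45], dtype=float)

# Synthetic measurements generated from the nominal model (dummy hand pose;
# in practice the robot pose varies between measurements).
rng = np.random.default_rng(0)
data = []
for _ in range(50):
    Xf, Yf = rng.uniform(-100, 100), rng.uniform(5, 100)
    R_N, P_N = np.eye(3), np.zeros(3)
    rho = q0[6] / (Yf + q0[0] * np.tan(q0[7]))
    P = (rho * R_N @ euler_to_R(*q0[8:11]) @ np.array([Xf, Yf, q0[0]])
         + R_N @ q0[11:14] + P_N)
    data.append((P, Xf, Yf, R_N, P_N))

q_init = q0 * (1 + (rng.random(q0.size) - 0.5) * 0.01)  # perturbed start, cf. Eq. (22)
result = least_squares(residuals, q_init, method='lm', args=(data,))
```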

The second identification technique considered is particle swarm optimization (PSO) [15,16]. PSO is a population-based evolutionary algorithm inspired by the social behavior of birds flocking for food. The position of a bird, also referred to as a particle, represents a candidate solution to the optimization problem. The PSO utilizes swarm intelligence to find the best place in the search space. During each epoch, all particles are accelerated toward their own best position and the global best position found so far by the swarm. This is achieved by calculating a new velocity for each particle, $v_i^{t+1}$, from three terms: its current velocity $v_i^t$, the distance between the particle's current position $q_i^t$ and its previous best position $p_i^t$, and the distance from the global best position $p_g^t$ in the swarm:

$$v_i^{t+1} = \omega v_i^t + c_1 r_1 (p_i^t - q_i^t) + c_2 r_2 (p_g^t - q_i^t) \qquad (19)$$
where i is the particle index; ω is the inertia weight; $c_1$ and $c_2$ are the weighting factors that pull each particle toward its previous best position and the global best position, respectively; $r_1$ and $r_2$ are random numbers uniformly distributed on the interval [0, 1].

The inertia weight plays an important role in balancing the global and local search abilities: a large inertia weight facilitates global exploration, while a small one tends to facilitate local exploration. A commonly preferred schedule, in which the inertia weight decreases linearly as the iteration proceeds, is:

$$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{T}\, t \qquad (20)$$
where ωmax is the initial weight; ωmin is the final weight; T is the maximum iteration number; t is the current iteration number.

Once the new velocity of each particle is determined by using Equation (19), the particles update their positions using the following equation:

$$q_i^{t+1} = q_i^t + v_i^{t+1} \qquad (21)$$

In this way, the algorithm converges toward a global solution of the given problem. The evolution continues until the fitness value reaches a preset value or the maximum number of iterations is reached. Figure 7 shows how the PSO-based parameter identification works, and the evolutionary optimization steps of the PSO are given below, followed by a minimal code sketch:

  • Step 1: Generate swarm and initialize particles in the swarm with random positions and velocities.

  • Step 2: For each particle, evaluate the fitness function.

  • Step 3: Memorize best solutions and a global best solution in the swarm.

  • Step 4: For each particle, update position and velocity by using Equations (19) and (21).

  • Step 5: Repeat Steps 2–4 until the predefined stopping conditions are satisfied.
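A minimal sketch of Steps 1–5, wiring together Eqs. (19)–(21) with the linearly decreasing inertia weight of Eq. (20). The swarm size, inertia weights, velocity limit, and iteration count follow the values used later in the paper, while $c_1 = c_2 = 2.0$ and the interpretation of the velocity limit as an absolute clamp are assumptions:

```python
import numpy as np

def pso(F, lb, ub, n_particles=50, T=5000, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, v_max=0.1, seed=0):
    # Minimal PSO: minimize F over the box [lb, ub] (Eqs. (19)-(21)).
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    q = lb + rng.random((n_particles, lb.size)) * (ub - lb)  # Step 1: positions
    v = np.zeros_like(q)                                     # Step 1: velocities
    p_best = q.copy()                                        # personal bests
    f_best = np.array([F(x) for x in q])                     # Step 2: fitness
    g = p_best[f_best.argmin()].copy()                       # Step 3: global best
    for t in range(T):
        w = w_max - (w_max - w_min) * t / T                  # Eq. (20)
        r1, r2 = rng.random((2, *q.shape))
        v = w * v + c1 * r1 * (p_best - q) + c2 * r2 * (g - q)  # Eq. (19)
        v = np.clip(v, -v_max, v_max)       # velocity limit (assumed absolute)
        q = q + v                                            # Step 4 / Eq. (21)
        f = np.array([F(x) for x in q])
        better = f < f_best
        p_best[better], f_best[better] = q[better], f[better]
        g = p_best[f_best.argmin()].copy()
    return g, f_best.min()

# Usage with the objective sketch of Section 3.1 and hypothetical bounds lb, ub:
#   q_best, F_best = pso(lambda q: objective(q, data), lb, ub)
```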

4. Simulations

In this section, we perform simulations to compare the performance of the two optimization techniques. Since the exact solution is known in a simulation experiment, we can compare how close the solutions found by each algorithm are to the true optimum. First, synthetic data were generated as follows: given pre-specified laser-vision sensor model parameters and a certain robot pose, we computed 100 data sets using Equation (9), in which the 3D coordinates of the laser points with respect to the robot base frame correspond to randomly generated 2D image coordinates in the pixel frame. The first 50 data sets are used to identify the model parameters, while the other 50 are used to evaluate the fitness of the found solutions.

To investigate the influence of the initial candidate solutions on the convergence performance, we used a control parameter s, which determines the size of the initial parameter bounds. The initial solutions are randomly generated within the resulting bounds as:

$$q_i = q_i^0 \times \left[1 + (\mathrm{rand}() - 0.5) \times s\right] \qquad (22)$$

where rand() returns a uniformly distributed random number in the range [0, 1], and $q_i^0$ is the ith nominal parameter used to generate the simulation data. Since some nominal model parameters, such as ω, ϕ, φ, and $t_y$, are zero, we select their bounds manually to avoid a null range.
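A direct transcription of Eq. (22) (numpy assumed):

```python
import numpy as np
rng = np.random.default_rng()

def initial_candidate(q0, s):
    # Eq. (22): multiplicative perturbation of the nominal parameters q0.
    return q0 * (1.0 + (rng.random(q0.size) - 0.5) * s)
```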

Next, we set the control parameters of the two optimization techniques. For the PSO, we used a swarm size of 50, a maximum inertia weight of 0.9, a minimum inertia weight of 0.4, a maximum velocity of 0.1, and a maximum of 5,000 iterations. For the LMA, the maximum number of iterations is set to 300 times the number of model parameters.

Since the applied optimization techniques set the initial guesses of the parameters randomly, they may find different minima depending on the initial conditions. We therefore performed several runs of each algorithm: 1,000 runs for the NLSO and 10 runs for the PSO. In addition, a random search (RS) was used purely as a baseline for comparison. In the RS, the evaluation starts with parameters selected by using Equation (22), and whenever a better solution is found, it replaces the current solution. We performed 1,000,000 evaluations per run.

The parameter identification results are listed in Table 1. The NLSO finds sufficiently good solutions only when the initial estimates are close to the exact solution. As the initial selection space increases, however, the technique shows a considerably increased tendency to become trapped in local minima or not to converge at all. Furthermore, even with a small range of initial parameters, the average root-mean-square (rms) error of the NLSO is considerably higher than that of the PSO. This indicates that the optimization problem has a highly nonconvex, multimodal landscape even near the global optimum. In contrast, the PSO consistently found a solution close to the true optimum, although the best fitness value increased slightly as the parameter bounds widened.

The average rms error of the PSO with s = 0.2 (meaning the initial selection bounds are enlarged by up to ±20% of the nominal parameters) is similar to that of the NLSO with s = 0.0001. This shows that the PSO can identify the HMLVS parameters with small errors without good initial estimates, and that enlarging the search space has only a marginal impact on the estimation accuracy.

To validate the reliability of the obtained model parameters, we calculated the sum of squared errors over the 50 randomly generated data sets not used in the optimization, using the model parameters with both the best and worst fitness from the PSO. As shown in Figure 8, there is no significant difference between the two results, even though the sum of squared errors increases slightly as the parameter search bounds enlarge. This shows that the selection of input data does not affect the reliability of the estimation accuracy.

5. Experimental Results

Figure 9 illustrates the 5-DOF robot manipulator (SCORBOT ER-VII) and a reference object. A laser-vision sensor was installed on the last link of the robot manipulator. The laser reflection image is captured and digitized by a frame grabber (Meteor-II, Matrox) connected to a monochrome CCD camera (XC-55, Sony). A checkerboard pattern with a grid size of 30 mm × 30 mm is employed to provide reference positions; its left-top corner is placed at the coordinate (700, 90, 0) mm with respect to the origin of the robot base frame. A robot controller controls the position and velocity of the robot manipulator and sends the encoder readings of each joint to the PC over an RS-232C link.

As input data for the optimization techniques, we need three kinds of information: joint encoder readings, the real coordinates of the reference points, and their corresponding image coordinates. The reference points are corner points of the checkerboard pattern, given as a priori information. The data acquisition procedure is as follows:

  1. Adjust the robot so that a horizontal line of the laser stripe overlaps the corner points of the square pattern.

  2. Capture an image, extract the pixel coordinate of the reference point, and record it in the memory of the PC.

  3. Record all the joint angles of the manipulator sent from the joint controller.

  4. Repeat (1)–(3) until the preset number of data sets is obtained.

To effectively extract the laser stripe, we used a difference image D(x, y), obtained by subtracting an image frame without the laser beam, F′(x, y), from one with the laser beam, F(x, y). This is achieved by toggling the on/off relay circuit that powers the laser stripe projector:

$$D(x, y) = \begin{cases} F(x, y) - F'(x, y) & \text{if } F(x, y) > F'(x, y) \\ 0 & \text{otherwise} \end{cases} \qquad (23)$$
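A direct transcription of Eq. (23) (numpy assumed; the frames are taken to be 8-bit grayscale arrays):

```python
import numpy as np

def difference_image(F_on, F_off):
    # Eq. (23): keep only pixels that brighten when the laser is switched on.
    d = F_on.astype(np.int16) - F_off.astype(np.int16)
    return np.clip(d, 0, 255).astype(np.uint8)
```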

Calibration points are obtained through a matching process between the laser stripe and the cross-points of the contour lines, as shown in Figure 10(d). The developed algorithms, including the image processing, the robot control procedure, and the two optimization techniques, were implemented in C++.

As described in the previous section, the parameter identification problem has highly nonconvex and multimodal characteristics, such that the LMA often failed to find a good solution even when starting from an initial candidate solution near the true solution. In contrast, the PSO consistently found a solution close to the true optimum regardless of the search space. Therefore, we considered only the PSO for parameter identification in the following experiments. As control parameters of the PSO, we used a swarm size of 50, a maximum inertia weight of 0.9, a minimum inertia weight of 0.4, a maximum velocity of 0.1, and a maximum of 5,000 iterations. The nominal values of the parameters are listed in Table 2 together with their search bounds, which were selected based on the camera specifications and the design parameters of the LVS. We take as the final parameters those with the lowest fitness value over 20 different runs.

Figure 11 shows the convergence performance of the objective function, where the solid line shows the best fitness values from 20 different experiments, and the dotted line shows the worst fitness values.

Even though the initial particles (candidate solutions) are randomly generated within the predefined range, the two curves converge to a similar fitness value after approximately 3,000 iterations. This shows that the PSO estimates the parameters with a small error. Executing the 5,000 generations takes about 30 seconds.

Figure 12 shows the average values and standard deviations of the distance errors between the reference points and the calculated points over the 20 experimental runs. The accuracy is computed as a root-mean-square (rms) error, and its average over the 20 experiments is 0.355 mm. Using the constructed HMLVS with the parameters identified by the PSO, we measured the 3D range data of a cylindrical object with holes, as shown in Figure 13.

To confirm the applicability of the proposed scheme, we carried out experiments measuring the four corner points of the top surface of a gauge block of size 30 × 60 × 55 mm. The left-bottom corner of the block is placed at (0, −500, 0) mm with respect to the robot base frame, a position different from the first experiment, where the left-top corner of the pattern was placed at (700, 90, 0) mm. For each trial, the robot starts from its home position, and we adjust the robot pose so that the laser stripe overlaps the corner points of the top side of the object. The measurement results are listed in Table 3, where x, y, z represent the Cartesian coordinates of the four corner points, and x̄, ȳ, z̄ are the measured mean range data; ē and σ represent the mean measurement error and its standard deviation, respectively. Even though the calibration region and the measurement region differ, the maximum residual error over 10 trials was about 0.7 mm. This shows that the suggested algorithm is robust to the measurement location.

6. Conclusions

In this paper, we have proposed a new approach to the calibration of a hand-mounted laser-vision sensor system based on particle swarm optimization. The laser-vision sensor, consisting of a camera with nonlinear radial lens distortion and a laser stripe generator, was used as the sensor module of a robotic measurement system to measure the range data of an object in the robot base coordinate system; it was modeled based on the forward kinematics and the optical triangulation principle. The intrinsic and extrinsic parameters of the hand-mounted laser-vision sensor were identified simultaneously by minimizing the overall residual errors between the known reference range data and the estimated data. Simulation and experimental results show that the parameter identification problem has highly nonconvex and multimodal characteristics; consequently, the nonlinear least-squares optimizer often failed to find an optimal solution, even when the initial guesses of the model parameters were selected close to the true optimum. In contrast, the proposed scheme based on the particle swarm optimizer consistently found a stable solution without the need for good initial guesses of the model parameters; thus, it can be a promising tool for identifying the model parameters of a hand-mounted laser-vision sensor.

Acknowledgments

This work was supported by the Nuclear Research & Development Program of the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science & Technology (MEST), Grant Code 2011-0002247. This work was also partially supported by the Korean government (MKE) under Grant No. I-2010-1-012 of the ETEP. A preliminary version of this manuscript was presented at the International Conference on Convergence and Hybrid Information Technology (ICHIT 2010), 26 August 2010, Daejeon, Korea; this work extends that paper, which was recommended by a conference committee member for journal publication, with new results.

References

  1. Gong, C; Yuan, J; Ni, J. A Self-calibration method for robotic measurement system. ASME J. Manufact. Sci. Eng 2004, 122, 174–181. [Google Scholar]
  2. Char, YY; Gweon, DG. A calibration and range-data extraction algorithm for an omnidirectional laser range finder of a free-ranging mobile robot. Mechatronics 1996, 6, 665–689. [Google Scholar]
  3. Chen, CH; Kak, AC. Modeling and calibration of a structured light scanner for 3-D robot vision. Proceedings of IEEE International Conference on Robotics and Automation, Raleigh, NC, USA, 31 March–3 April 1987; pp. 807–815.
  4. Wei, GQ; Hirzinger, G. Active self-calibration of hand-mounted laser range finders. IEEE Trans. Rob. Automat 1998, 14, 493–497. [Google Scholar]
  5. Zhang, Q; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). Proceedings of IEEE/RSJ International Conference on Intelligent Robotics and Systems, Sendai, Japan, 28 September–2 October 2004; pp. 2301–2306.
  6. Park, JB; Lee, SH; Lee, IJ. Precise 3D lug detection sensor for automatic robot welding using a structured-light vision system. Sensors 2009, 9, 7550–7565. [Google Scholar]
  7. Tsai, RY; Lenz, RK. A new technique for fully autonomous and efficient 3-D robotics hand/eye calibration. IEEE J. Rob. Automat 1989, 5, 345–358. [Google Scholar]
  8. Muñoz-Rodríguez, JA. Calibration modeling for mobile vision based laser imaging and approximation networks. J. Modern Opt 2010, 57, 1583–1597. [Google Scholar]
  9. Muñoz-Rodríguez, JA. Mobile calibration based on laser metrology and approximation networks. Sensors 2010, 10, 7681–7704. [Google Scholar]
  10. Musa, E. Line-laser-based yarn shadow sensing break sensor. Opt. Lasers Eng 2011, 49, 313–317. [Google Scholar]
  11. Salvi, J; Armangue, X; Batlle, J. A comparative review of camera calibrating methods with accuracy evaluation. Patt. Recog 2002, 35, 1617–1635. [Google Scholar]
  12. Tsai, RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Rob. Automat 1987, 3, 323–344. [Google Scholar]
  13. Nocedal, J; Wright, SJ. Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
  14. Moré, JJ; Garbow, BS; Hillstrom, KE. User Guide for MINPACK-1; Report ANL-80-74p; Argonne National Laboratory: Argonne, IL, USA, 1980. [Google Scholar]
  15. Kennedy, J; Eberhart, RC. Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
  16. Shi, Y; Eberhart, RC. Empirical study of particle swarm optimization. Proceedings of the Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999; pp. 1945–1950.
Figure 1. The laser-vision sensor model. The laser-vision sensor consists of a camera and a laser stripe generator. 3D range data is obtained by using both the camera projection model and the optical triangulation principle. (a) Laser-vision sensor. (b) Laser-vision sensor geometry.
Figure 2. Kinematic model of the hand-mounted laser-vision sensor (HMLVS).
Figure 3. The geometric relation between a 3D illuminated laser point and its 2D image projection.
Figure 4. Measurement range and resolution for different baseline distances, where the projection angle is 25 degrees.
Figure 5. Measurement range and resolution for different projection angles, where the baseline H is 100 mm.
Figure 6. Measurement errors of the LVS with 1% inexact value of (a) the baseline and (b) the projection angle.
Figure 7. Schematic diagram of the PSO-based parameter identification.
Figure 8. Fitness evaluation results of 50 sets of data not used in the parameter identification.
Figure 9. Hand-mounted laser-vision sensor and the planar calibration pattern.
Figure 10. Detection result of the calibration points.
Figure 11. Convergence of the fitness value.
Figure 12. Calibration results.
Figure 13. 3D range measurement of a cylindrical object with holes.
Table 1. Estimation results with 50 sets of simulation data and with a different number of evaluations: 1,000,000 evaluations for RS, 1,000 runs for NLSO, and 10 runs for PSO. Entries are root-mean-square errors; columns correspond to the initial-bound parameter s.

Method | Metric     | s = 0.0001 | s = 0.001 | s = 0.01 | s = 0.1 | s = 0.2
RS     | E(q) best  | 1.25E-2    | 7.48E-2   | 1.79     | 12.8    | 19.1
NLSO   | E(q) best  | 2.01E-5    | 1.70E-4   | 7.50E-4  | 5.81E-3 | 2.49E-2
       | E(q) worst | 0.138      | 8.5       | 1.26E3   | 7.24E2  | 1.11E3
       | F(q) mean  | 1.01E-1    | 6.75      | 4.67E2   | 6.20E2  | 6.00E2
       | F(q) stdev | 3.13E-2    | 20.5      | 48.6     | 4.43E2  | 4.60E2
PSO    | F(q) best  | 1.69E-5    | 7.78E-5   | 3.79E-4  | 1.41E-3 | 2.55E-3
       | F(q) worst | 1.50E-4    | 2.72E-4   | 1.10E-3  | 1.04E-2 | 1.96E-2
       | F(q) mean  | 9.34E-5    | 1.89E-4   | 8.65E-4  | 5.55E-3 | 1.16E-2
       | F(q) stdev | 9.11E-5    | 1.58E-4   | 6.05E-4  | 5.76E-3 | 1.13E-3
Table 2. Parameter identification results (lower/upper are the parameter bounds of the PSO).

Parameter | Nominal value | Lower bound | Upper bound | Best-fit
f (mm)    | 8      | 6      | 10    | 7.70
Su (mm)   | 0.0074 | 0.005  | 0.01  | 0.0074
Sv (mm)   | 0.0074 | 0.005  | 0.01  | 0.0075
Cx        | 320    | 300    | 340   | 323.8
Cy        | 240    | 220    | 260   | 238.3
k         | 0      | 0      | 0.1   | 4.8E-4
H (mm)    | 140    | 120    | 150   | 137.3
θ (rad)   | 5π/36  | 0      | π/4   | 0.47
ω (rad)   | 0      | −π/16  | π/16  | −1.8E-3
ϕ (rad)   | 0      | −π/16  | π/16  | −0.05
φ (rad)   | 0      | −π/16  | π/16  | −6.8E-3
tx (mm)   | 60     | 40     | 80    | 63.2
ty (mm)   | 0      | −10    | 10    | −2.3
tz (mm)   | −45    | −60    | −30   | −46.2
Table 3. Measurement results for verification (mm).

Corner point | x  | y    | z  | x̄     | ȳ       | z̄     | ē    | σ
left-top     | 60 | −500 | 55 | 59.95 | −499.56 | 55.34 | 0.58 | 0.15
right-top    | 60 | −530 | 55 | 60.17 | −530.01 | 54.65 | 0.39 | 0.14
right-bottom | 0  | −530 | 55 | 0.17  | −529.48 | 55.14 | 0.61 | 0.21
left-bottom  | 0  | −500 | 55 | −0.51 | −499.97 | 55.05 | 0.71 | 0.18
