Article

Calibration Method for Line-Structured Light Three-Dimensional Measurements Based on a Single Circular Target

School of Mechanical Technology, Wuxi Institute of Technology, Wuxi 214121, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(2), 588; https://doi.org/10.3390/app12020588
Submission received: 6 December 2021 / Revised: 6 January 2022 / Accepted: 6 January 2022 / Published: 7 January 2022

Abstract

Single circular targets are widely used as calibration objects during line-structured light three-dimensional (3D) measurements because they are versatile and easy to manufacture. This paper proposes a new calibration method for line-structured light 3D measurements based on a single circular target. First, the target is placed in several positions and illuminated by a light beam emitted from a laser projector. A camera captures the resulting images, from which an elliptic fitting profile of the target and the laser stripe are extracted. Second, an elliptical cone equation defined by the elliptic fitting profile and the optical center of the camera is established based on projective geometry. By combining the obtained elliptical cone with the known diameter of the circular target, two possible positions and orientations of the circular target are determined, and two groups of 3D intersection points between the light plane and the circular target are identified. Finally, the correct group of 3D intersection points is selected, and the light plane is progressively fitted. The accuracy and effectiveness of the proposed method are verified both theoretically and experimentally. The obtained results indicate that a calibration accuracy of 0.05 mm can be achieved for an 80 mm × 80 mm planar target.

1. Introduction

Measurement is the process of obtaining quantifiable information, and it often demands very high accuracy [1]. Currently, three-dimensional (3D) measurements are widely performed in the design and construction of augmented reality, automated quality verification, restoration of lost computer-aided design data, cultural relic protection, surface deformation tracking, and 3D reconstruction for object recognition [2,3,4,5]. Common 3D measurement approaches can be divided into two categories: contact and non-contact measurements. The applications of contact measurements are normally limited when the measured object cannot or should not be touched; non-contact measurements, in contrast, offer high flexibility [6]. Among the various non-contact methods, line-structured light 3D measurements are frequently conducted because of their simplicity, high precision, and high measurement speed [7].
A general line-structured light 3D measurement process can be described as follows. (1) A line-structured light beam is emitted by a laser generator. (2) A light plane is projected onto the surface of the measured object. (3) The two-dimensional (2D) image coordinates of the intersection points between the light plane and the object surface are captured by a camera. (4) The corresponding 3D coordinates of the intersection points are calculated by a dedicated algorithm. In this technique, proper calibration is of utmost importance because it strongly affects the measurement accuracy and simplicity. Usually, the calibration procedure includes camera calibration and light plane calibration. Many studies on camera calibration have been conducted in the past; Zhang's [8] calibration method based on a planar target is the most widely used approach. To increase its accuracy, various advanced camera calibration techniques have been developed [9,10]. For example, the calibration board was changed from a checkerboard to a circular calibration board because the edge of a characteristic circle contains more information and exhibits a higher positioning accuracy. Multiple methods for calibrating the light plane have also been proposed [11,12,13,14,15,16,17,18]. Huynh [11], Xu [12], and Dewar [13] introduced a 3D target to define the light plane calibration points using the cross-ratio invariability principle (CRIP); however, all three methods were complex and suffered from low calibration accuracy. Zhou [14] proposed an on-site calibration method using a planar target in which light plane calibration points were obtained through repeated target movements. Liu [15] replaced the points with a Plücker matrix; however, the resulting method was complex and required a planar calibration target. Xu [16] utilized a flat board with four balls as a calibration target, while Wei [17] reported a calibration technique based on a one-dimensional (1D) target: the 3D coordinates of a series of intersection points between the light plane and the 1D target were determined from the known distances between select points of the 1D target, and the obtained coordinates were fitted to solve the light plane equation. Liu [18] developed a line-structured light vision sensor based on a single ball target; its calibration process was relatively simple, but a high-precision ball target is difficult to manufacture.
In this study, a new method for the on-site calibration of a line-structured light 3D measurement process based on a single circular target is proposed. The main reason for selecting a circular target as the calibration object is its ease of manufacture; in fact, almost any single circle of known size can serve as a calibration target with the proposed method. First, a line-structured light 3D measurement model is established. Second, the light plane is calibrated with a single circular target by performing the following steps. (1) An elliptic fitting profile of the circular target is obtained by the revisited arc-support line segment (RALS) method, and the sub-pixel points of the light stripe are extracted by a technique based on the Hesse matrix. (2) An elliptical cone equation defined by the elliptic fitting profile and the camera optical center is determined; because the diameter of the circular target is known, the position and orientation of the circular target can assume only two values. (3) A camera model is established to obtain the two possible light stripe lines, and the light plane equation is progressively solved. Finally, the effectiveness and accuracy of the proposed method are verified both theoretically and experimentally.
This paper is organized as follows. Section 2 outlines the principles of the line-structured light 3D measurement model and light plane calibration based on a single circular target. Section 3 and Section 4 describe the obtained simulation and experimental data, respectively. Finally, Section 5 summarizes the main conclusions from the findings of this work.

2. Principles

2.1. Line-Structured Light 3D Measurement Model

Figure 1 shows a schematic diagram of the line-structured light 3D measurement process. The utilized setup includes a camera and a laser projector. A laser line is cast onto an object by the laser projector, after which the camera captures the light stripe images distorted by the object surface geometry. Finally, the 3D coordinates of the points on the light stripe are determined. By translating the object precisely along a line, a full 3D profile of the object can be obtained.
In Figure 1, the notations (Ow; Xw, Yw, Zw) and (Oc; Xc, Yc, Zc) represent the world coordinate system (WCS) and camera coordinate system (CCS), respectively. For simplicity of analysis, the WCS and CCS are set to be identical. (Ouv; u, v) is the camera pixel coordinate system (CPCS), while (On; xn, yn) represents the normalized image coordinate system (NICS). Suppose P is an arbitrary intersection point between the light plane and the measured object, and let Pc = (Xpc, Ypc, Zpc)T be its coordinates in the CCS. Puv = (up, vp)T denotes the coordinates of the corresponding undistorted image point, while Pn = (xpn, ypn)T and Pd = (xpd, ypd)T represent the NICS coordinates of the undistorted and distorted physical image points, respectively. Generally, the camera lens distorts Pn, especially in the radial direction. To account for this, the distortion model of Ref. [19] is adopted, which expresses the distorted projection Pd on the normalized image plane through the mapping fd(·):
$$P_d = f_d(P_n, K, G) = P_n \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + \begin{bmatrix} 2 g_1 x_n y_n + g_2 (r^2 + 2 x_n^2) \\ g_1 (r^2 + 2 y_n^2) + 2 g_2 x_n y_n \end{bmatrix}, \tag{1}$$
where $r^2 = x_n^2 + y_n^2$, and K = [k1, k2, k3] and G = [g1, g2] represent the radial and tangential distortion coefficient vectors, respectively. Owing to the advantages of homogeneous coordinates, the utilized coordinates are converted into homogeneous form by appending 1 as the last element. Taking the projection and coordinate transformation into account, the actual projection Puv on the image plane can be expressed as follows:
$$\begin{bmatrix} P_{uv} \\ 1 \end{bmatrix} = A \begin{bmatrix} P_d \\ 1 \end{bmatrix} = A \begin{bmatrix} f_d(P_n, K, G) \\ 1 \end{bmatrix}, \tag{2}$$
where A denotes the camera intrinsic matrix:
$$A = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{3}$$
Typically, A consists of five parameters: fu, fv, u0, v0, and the skew factor γ, which is usually set to zero (as in Equation (3)). fu and fv are the horizontal and vertical focal lengths, respectively, and u0 and v0 are the coordinates of the camera principal point.
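For concreteness, the following minimal Python sketch (not part of the original paper) transcribes the distortion model of Equation (1); any coefficient values passed to it would come from camera calibration.

```python
import numpy as np

def distort(pn, K, G):
    """Apply Eq. (1): radial (K = [k1, k2, k3]) and tangential (G = [g1, g2])
    distortion to a normalized image point pn = (xn, yn)."""
    xn, yn = pn
    k1, k2, k3 = K
    g1, g2 = G
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3      # radial factor
    dx = 2.0 * g1 * xn * yn + g2 * (r2 + 2.0 * xn * xn)   # tangential, x-component
    dy = g1 * (r2 + 2.0 * yn * yn) + 2.0 * g2 * xn * yn   # tangential, y-component
    return np.array([xn * radial + dx, yn * radial + dy])
```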
As shown in Equation (2), the nonlinear camera model describes the imaging process from CCS to the CPCS. From the geometric viewpoint, Equation (2) characterizes a ray passing through Puv and Pc. To uniquely determine Pc in the CCS, the light plane Π serves as another constraint—i.e., Pc should satisfy the equation of Π:
$$\begin{bmatrix} P_c^T & 1 \end{bmatrix} \Pi_c = 0, \tag{4}$$
where Πc = (a, b, c, d)T includes the four coefficients of Equation (4).
Generally, Equations (2) and (4) constitute the line-structured light 3D measurement model, which is used to calculate Pc in the CCS from Puv in the CPCS. This model contains several undetermined parameters: the parameters in A, the radial distortion coefficients in K, the tangential distortion coefficients in G, and the coefficients in Πc. Among these, A, K, and G are the intrinsic parameters of the camera; once calibrated, they remain fixed regardless of how the sensor is positioned. Several advanced calibration approaches have been developed to obtain accurate camera parameters; herein, the intrinsic parameters are estimated by Zhang's method [8], and the MATLAB toolbox developed by Bouguet [20] can be directly utilized for their determination. In contrast, Πc changes easily because it depends on the relative position and orientation of the laser projector and camera. Therefore, the proper calibration of the light plane Π (i.e., the acquisition of Πc) is the focus of the present study and is described in detail in the following sections.
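Combining Equations (2) and (4), the measurement model can be sketched as below. This is a hedged illustration rather than the authors' code: the intrinsics and distortion coefficients are placeholders, the plane coefficients reuse the simulated light plane of Section 3, and OpenCV's undistortPoints inverts the distortion model of Equation (1) numerically.

```python
import numpy as np
import cv2

# Illustrative placeholders (not calibration results from the paper).
A = np.array([[4000.0, 0.0, 640.0],
              [0.0, 4000.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.1, 0.05, 0.0005, -0.0003, 0.0])   # OpenCV order: k1, k2, g1, g2, k3
plane = np.array([0.0, 0.9336, -0.3584, 140.04])      # (a, b, c, d) of Pi_c (Section 3)

def stripe_pixel_to_3d(p_uv, A, dist, plane):
    """Map a light stripe pixel (u, v) to its 3D point on the light plane (CCS)."""
    # Invert Eq. (2): undistort and normalize (u, v) -> (xn, yn).
    pn = cv2.undistortPoints(np.array([[p_uv]], dtype=np.float64), A, dist)[0, 0]
    ray = np.array([pn[0], pn[1], 1.0])       # ray through Oc and the pixel
    a, b, c, d = plane
    t = -d / (a * ray[0] + b * ray[1] + c)    # impose the plane constraint, Eq. (4)
    return t * ray                            # Pc = t * (xn, yn, 1)

print(stripe_pixel_to_3d((652.3, 480.1), A, dist, plane))
```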

2.2. Light Plane Calibration Based on a Single Circle Target

2.2.1. Image Processing Algorithm

Figure 2a displays the captured image of the circle target during calibration. The structure of the 2D image data is important for the subsequent processing [21]: the light stripe must be extracted, and the elliptic image profile of the circle target edge must be fitted. Hence, the first objective of this process is to detect (extract and fit) the ellipse, because the region it bounds contains the light stripe; the ellipse can then be used as a mask to reduce the search area during light stripe extraction. The complete image processing algorithm is outlined in Figure 3, and the ellipse detection and light stripe extraction procedures are described in detail below.
Existing ellipse detection methods can be grouped into Hough transform methods [22] and edge-following methods [23]. The former are relatively simple but not robust in complex scenarios; the latter are very precise but require long computation times. Therefore, we developed an efficient, high-quality ellipse detection method [24], called the RALS method. Its algorithm consists of four steps: (1) arc-support line segment (LS) extraction, (2) arc-support group forming, (3) initial ellipse set generation, and (4) ellipse clustering and candidate verification.
Step 1. Arc-support LS extraction: Arc-support LS extraction prunes straight LSs while retaining those that carry arc geometric cues. Each arc-support LS has two critical parameters: direction and polarity. A detailed description of the corresponding procedure can be found in Ref. [25].
Step 2. Arc-support group forming: An elliptic curve may consist of several arc-support LSs, which must be linked to form a group. Two consecutively linkable arc-support LSs should satisfy continuity and convexity conditions. The continuity condition states that the head of one arc-support LS and the tail of the next should be sufficiently close; the convexity condition requires the linked LSs to turn in the same direction (clockwise or anticlockwise). The iteratively linked arc-support LSs that share similar geometric properties are called an "arc-support group".
Step 3. Initial ellipse set generation: An arc-support group may contain all the arc-support LSs of a curve or merely a single arc-support LS. Thus, two complementary methods are used to generate the initial ellipse set. (1) From the local perspective, an arc-support group with a relatively high saliency score likely represents the dominant component of the polygonal approximation of an ellipse. (2) From the global perspective, the arc-support groups belonging to a common ellipse are gathered by applying a polarity constraint, a region restriction, and an adaptive inlier criterion.
Step 4. Ellipse clustering and candidate verification: Owing to the presence of duplicates in the initial ellipse set, a hierarchical clustering method based on mean shift [26] is applied. For convenient clustering, this method decomposes the five-dimensional ellipse parameter space into three cascaded low-dimensional spaces (centers, orientations, and semiaxes). Moreover, to ensure the high quality of the detected ellipses, candidate verification is conducted, which incorporates stringent goodness-of-fit measurements and elliptic geometric properties for refinement. The resulting elliptic fitting profile E is indicated in Figure 2b.
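The RALS detector itself is specified in Ref. [24]. As a simplified stand-in (not the authors' implementation), the following sketch conveys the interface of this stage with plain OpenCV contour-based ellipse fitting: a grayscale image in, the elliptic fitting profile E out.

```python
import cv2
import numpy as np

def detect_target_ellipse(gray):
    """Return the ellipse ((cx, cy), (d1, d2), angle) best supported by an edge contour."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    best, best_err = None, float("inf")
    for c in contours:
        if len(c) < 50:                          # too few edge points for a stable fit
            continue
        ellipse = cv2.fitEllipse(c)              # direct least-squares ellipse fit
        (cx, cy), (d1, d2), ang = ellipse
        fit_area = np.pi * d1 * d2 / 4.0
        if fit_area < 1.0:
            continue
        # Crude verification: contour area should match the fitted ellipse area.
        err = abs(cv2.contourArea(c) - fit_area) / fit_area
        if err < best_err:
            best, best_err = ellipse, err
    return best
```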
After the ellipse detection, the light stripe within the ellipse is extracted. The automatic sub-pixel extraction process of the light stripe consists of three steps: (1) Gaussian filtering, (2) solving the Hesse matrix, and (3) extracting sub-pixel points.
Step 1. Gaussian filtering: A Gaussian filter is applied to the undistorted image. According to Ref. [27], the standard deviation of the Gaussian filter must satisfy the condition $\sigma \geq w/\sqrt{3}$, where w is the width of the laser stripe.
Step 2. Solving the Hesse matrix: The Hesse matrix of each pixel (u, v) is defined as:
$$H = \begin{bmatrix} r_{uu} & r_{uv} \\ r_{uv} & r_{vv} \end{bmatrix}, \tag{5}$$
where ruu, ruv, and rvv are the second-order partial derivatives of the image.
Step 3. Extracting sub-pixel points: The eigenvector of H corresponding to the eigenvalue of largest absolute value gives the normal direction (nu, nv) of the light stripe. The sub-pixel coordinates of the light stripe center point (pu, pv) can then be expressed as:
$$(p_u, p_v) = (u_0 + t n_u,\ v_0 + t n_v), \tag{6}$$
where
$$t = -\frac{n_u r_u + n_v r_v}{n_u^2 r_{uu} + 2 n_u n_v r_{uv} + n_v^2 r_{vv}},$$
ru and rv are the first-order partial derivatives of the image, and (u0, v0) is the reference (current pixel) point. If tnu ∈ [−0.5, 0.5] and tnv ∈ [−0.5, 0.5], the first-order derivative along (nu, nv) vanishes within the current pixel; if, in addition, the magnitude of the second-order directional derivative exceeds a threshold, (pu, pv) is accepted as the sub-pixel coordinates of the light stripe center. The extracted light stripe is shown in Figure 2c.
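A condensed sketch of Steps 1 to 3 is given below, assuming SciPy is available; the threshold value and the per-pixel loop are deliberately simple, and Gaussian derivative filters merge Step 1 with the derivative computation of Step 2.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stripe_subpixel_points(img, sigma, mag_thresh=1.0):
    """Gaussian-derivative filtering, Hesse matrix (Eq. (5)), sub-pixel centers (Eq. (6))."""
    img = img.astype(np.float64)
    ru  = gaussian_filter(img, sigma, order=(0, 1))   # d/du (u = column axis)
    rv  = gaussian_filter(img, sigma, order=(1, 0))   # d/dv (v = row axis)
    ruu = gaussian_filter(img, sigma, order=(0, 2))
    rvv = gaussian_filter(img, sigma, order=(2, 0))
    ruv = gaussian_filter(img, sigma, order=(1, 1))
    points = []
    for v in range(img.shape[0]):                     # plain loops: clarity over speed
        for u in range(img.shape[1]):
            H = np.array([[ruu[v, u], ruv[v, u]],
                          [ruv[v, u], rvv[v, u]]])    # Eq. (5)
            w, V = np.linalg.eigh(H)
            k = int(np.argmax(np.abs(w)))             # direction of strongest curvature
            if abs(w[k]) < mag_thresh:                # second-derivative threshold
                continue                              # (strongly negative for a bright stripe)
            nu, nv = V[0, k], V[1, k]
            t = -(nu * ru[v, u] + nv * rv[v, u]) / (
                nu * nu * ruu[v, u] + 2 * nu * nv * ruv[v, u] + nv * nv * rvv[v, u])
            if abs(t * nu) <= 0.5 and abs(t * nv) <= 0.5:   # extremum inside this pixel
                points.append((u + t * nu, v + t * nv))     # Eq. (6)
    return points
```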

2.2.2. Single Circle Position and Posture Measurement

The position and orientation of a circle are determined by the circle center and normal vector of the plane containing the circle, respectively. In this section, it is assumed that the intrinsic parameters of the camera are known. In Figure 4, Oc is the optical center of the camera, (Oc; Xc, Yc, Zc) represents the CCS matching the WCS, (Ouv; u, v) is the CPCS, and R denotes the radius of circle C. Using the image processing algorithm described in Section 2.2.1, the elliptic fitting profile E of the circle target edge can be obtained. The main objectives of the process are (1) solving the elliptic cone equation of Q, (2) converting Q from the CCS to the standard coordinate system (SCS), and (3) determining the parameters of two possible circles.
1. Solving the elliptic cone equation of Q in the CCS
As illustrated in Section 2.2.1, the profile curve of circle C in the CPCS is determined through ellipse fitting; the fitted ellipse satisfies the following equation:
$$\begin{bmatrix} P_{Euv} & 1 \end{bmatrix} E \begin{bmatrix} P_{Euv} & 1 \end{bmatrix}^T = 0, \tag{7}$$
where E is the coefficient matrix of the equation of the ellipse E, and PEuv = [uE vE] denotes the pixel coordinates of points on the ellipse. The elliptic cone Q is defined by E and the optical center Oc of the camera. Assuming that AI = A[I 0] is the auxiliary camera matrix, where A denotes the camera intrinsic matrix, the elliptic cone equation of Q is obtained from the back perspective projection model of the camera:
$$\begin{bmatrix} P_{Qc} & 1 \end{bmatrix} Q \begin{bmatrix} P_{Qc} & 1 \end{bmatrix}^T = 0, \tag{8}$$
where PQc = [xQ yQ zQ] contains the coordinates of spatial points on Q in the CCS, and $Q = A_I^T E A_I$ represents the coefficient matrix of the elliptic cone equation. Q can be expressed as:
$$Q = \begin{bmatrix} W & \mathbf{0} \\ \mathbf{0}^T & 0 \end{bmatrix}, \tag{9}$$
where W is a 3 × 3 symmetric matrix. Thus, Equation (8) may also be written as:
$$P_{Qc} W P_{Qc}^T = 0. \tag{10}$$
2. Converting Q from the CCS to the SCS
The form of Equation (10) is complex, which complicates the entire computational process. Therefore, the coordinate system of Equation (10) is converted to the standard coordinate system (SCS). The SCS takes the origin Oc of the CCS as its own origin O′c, and the ray pointing from O′c to the center of C defines the z′c axis; the x′c and y′c axes complete a right-handed coordinate system. Here, R denotes the rotation matrix from the SCS to the CCS, which serves as the conversion matrix:
$$P_{Qc}^T = R \, {P'_{Qc}}^T, \tag{11}$$
where P′Qc contains the coordinates in the SCS. By substituting Equation (11) into Equation (10), the following expression is obtained:
$$P'_{Qc} \, R^{-1} W R \, {P'_{Qc}}^T = 0. \tag{12}$$
R−1WR is established through diagonalization, i.e., R−1WR = Diag(λ1, λ2, λ3). In other words, after the eigenvalue decomposition of W, R and the corresponding eigenvalues can be determined. Hence, the elliptic cone equation of Q in the SCS is written as:
$$\lambda_1 {x'_Q}^2 + \lambda_2 {y'_Q}^2 + \lambda_3 {z'_Q}^2 = 0. \tag{13}$$
For this diagonalization to yield a valid rotation, two operations must be performed. (1) The column vectors of R must be unit-orthogonalized. (2) The order of the column vectors of R must be adjusted so that the eigenvalues satisfy (λ1 < λ2 < 0, λ3 > 0) or (λ1 > λ2 > 0, λ3 < 0).
3. Determining the parameters of the two possible circles
As mentioned above, the elliptic cone Q is uniquely identified, and its equation in the SCS is given by Equation (13). Hence, determining the circle position and orientation is equivalent to locating a plane that intersects the elliptic cone Q in a circular ring of radius R: the center of that ring is the positional parameter of circle C, and the normal vector of the plane is its orientational parameter. Nonetheless, according to the geometrical relations, two planes satisfy this condition, as shown by the red and blue dotted circles in Figure 4. In Ref. [28], the following expressions for the centers and normal vectors of the two circle rings formed by these planes are reported:
$$\begin{cases} (x'_o, y'_o, z'_o) = \left( \pm R \sqrt{\dfrac{|\lambda_3|\,(|\lambda_1| - |\lambda_2|)}{|\lambda_1|\,(|\lambda_1| + |\lambda_3|)}},\ 0,\ R \sqrt{\dfrac{|\lambda_1|\,(|\lambda_2| + |\lambda_3|)}{|\lambda_3|\,(|\lambda_1| + |\lambda_3|)}} \right) \\[2ex] (n'_{ox}, n'_{oy}, n'_{oz}) = \left( \pm \sqrt{\dfrac{|\lambda_1| - |\lambda_2|}{|\lambda_1| + |\lambda_3|}},\ 0,\ \sqrt{\dfrac{|\lambda_2| + |\lambda_3|}{|\lambda_1| + |\lambda_3|}} \right) \end{cases} \tag{14}$$
where (x′o, y′o, z′o) and (n′ox, n′oy, n′oz) represent the center and normal vector of the two circle rings in the SCS, respectively. To bring them into the same coordinate system as the light plane, they are transformed to the CCS according to the following relationships:
$$\begin{cases} (x_o, y_o, z_o)^T = R \, (x'_o, y'_o, z'_o)^T \\ (n_{ox}, n_{oy}, n_{oz})^T = R \, (n'_{ox}, n'_{oy}, n'_{oz})^T \end{cases} \tag{15}$$
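The computation of this subsection can be sketched as follows: build W from the fitted conic and the intrinsics, diagonalize it as in Equations (12) and (13), and evaluate Equations (14) and (15). The eigenvalue reordering and handedness handling below are one plausible reading of the rules stated above, not the authors' exact implementation.

```python
import numpy as np

def circle_pose_candidates(E, A, R):
    """Two candidate (center, normal) pairs of circle C in the CCS.
    E: 3x3 conic coefficient matrix of the fitted ellipse (Eq. (7)),
    A: camera intrinsic matrix, R: known circle radius."""
    W = A.T @ E @ A                          # top-left 3x3 block of Q = A_I^T E A_I
    lam, V = np.linalg.eigh(W)
    # lambda_3 is the eigenvalue whose sign differs from the other two.
    pos = lam > 0
    i3 = int(np.where(pos)[0][0]) if pos.sum() == 1 else int(np.where(~pos)[0][0])
    i1, i2 = sorted((i for i in range(3) if i != i3), key=lambda i: -abs(lam[i]))
    Rot = np.column_stack([V[:, i1], V[:, i2], V[:, i3]])
    if np.linalg.det(Rot) < 0:
        Rot[:, 1] *= -1.0                    # right-handed frame; y-terms below are zero
    l1, l2, l3 = abs(lam[i1]), abs(lam[i2]), abs(lam[i3])
    poses = []
    for s in (1.0, -1.0):                    # the +/- in Eq. (14): two candidate circles
        c = np.array([s * R * np.sqrt(l3 * (l1 - l2) / (l1 * (l1 + l3))),
                      0.0,
                      R * np.sqrt(l1 * (l2 + l3) / (l3 * (l1 + l3)))])
        n = np.array([s * np.sqrt((l1 - l2) / (l1 + l3)),
                      0.0,
                      np.sqrt((l2 + l3) / (l1 + l3))])
        poses.append((Rot @ c, Rot @ n))     # Eq. (15): rotate back to the CCS
    return poses
```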

2.2.3. Progressive Solution of the Light Plane Equation

As shown in Figure 5, the light plane intersects the circle target to form a laser stripe at the ith position of the target. Meanwhile, the image plane captures the laser stripe as the line segment Li. In Figure 5, plane Π1 can be obtained through the camera model, while plane Π2 of the circular target is determined as specified in Section 2.2.2. However, Section 2.2.2 yields two sets of possible parameters for plane Π2, corresponding to the light stripes LsA and LsB, which causes ambiguity. In general, LsA and LsB have two properties: (1) they may be non-coplanar lines, and (2) exactly one of them lies on the light plane. Therefore, the following procedure has been proposed:
1. Initial light plane determination
Step 1: Two positions of the target are introduced, which correspond to four possible light stripes (LsA,1, LsB,1, LsA,2, and LsB,2, where LsA,1 and LsB,1 correspond to the first target position and LsA,2 and LsB,2 to the second).
Step 2: The four straight lines are combined pairwise across the two positions to form N (N ≤ 4) planes ΠLS,i (i = 1, 2, …, N). N may be smaller than four because two non-coplanar lines do not define a plane.
Step 3: A new position of the target is introduced, and the two corresponding possible light stripes are checked against the planes formed in Step 2. If neither of the two new light stripes lies on a given plane, that plane is not the light plane and is discarded.
Step 4: If only one plane is left after Step 3, it is the light plane. Otherwise, Step 3 is repeated until only one plane remains.
2. Progressive refining of the light plane
After the initial light plane is determined, its parameters must be further optimized. Each new position of the target again yields two possible light stripes (e.g., LsA and LsB). If one of these stripes (LsA) lies on the light plane and the other (LsB) lies away from it, the points of LsA are added to the fit, thereby refining the light plane. The iteration terminates when either of the following conditions is met: (1) introducing a new target position changes the light plane parameters by less than 0.1%; (2) the calibration images are exhausted. A minimal sketch of this logic follows.
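In the sketch below, each target placement contributes a pair of candidate stripe point sets; the set consistent with the current plane estimate is kept, and the plane is refit by least squares. The helper names and the 0.1 mm distance tolerance are illustrative, not from the paper.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points; returns (a, b, c, d) with unit normal."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                                    # normal = smallest singular direction
    return np.append(n, -n @ centroid)

def mean_dist(plane, pts):
    """Mean unsigned point-to-plane distance (plane normal is unit length)."""
    return float(np.mean(np.abs(np.asarray(pts) @ plane[:3] + plane[3])))

def refine_light_plane(plane, candidate_pairs, tol=0.1):
    """candidate_pairs: per placement, the two candidate stripe point sets
    (Ls_A, Ls_B) from Section 2.2.2. Keep the stripe consistent with the
    current plane estimate and progressively refit the plane."""
    inliers = []
    for ls_a, ls_b in candidate_pairs:
        d_a, d_b = mean_dist(plane, ls_a), mean_dist(plane, ls_b)
        if min(d_a, d_b) > tol:                   # neither candidate fits: skip image
            continue
        correct = ls_a if d_a <= d_b else ls_b    # filter the true stripe
        inliers.extend(np.asarray(correct))
        plane = fit_plane(inliers)                # progressive refit
    return plane
```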

3. Simulations

This section describes a simulation procedure performed to verify the proposed method. We simulate the influence of three factors (the number of target placements, image noise, and circular target size) on the calibration accuracy. In these simulations, the lens focal length is set to 25 mm, and the geometrical layout and dimensions are shown in Figure 6. (1) The image plane of the camera is described in the NICS (in millimeters) rather than in the CPCS. (2) Both the light plane and the image plane are perpendicular to the YcOcZc plane, so the light plane equation used in the geometric calculations takes the form $C_1 y + C_2 z + D = 0$. Specifically, the light plane equation applied in Figure 6 is $0.9336 y - 0.3584 z + 140.04 = 0$. The calibration accuracy is determined by calculating the relative errors of the light plane equation parameters.

3.1. Influence of the Number of Target Placements

The diameter of the circular target is 50 mm, and Gaussian noise with a level of σ = 0.4 mm is added to the circular target profile and the light stripe used for calibration. The number of target placements is varied from 2 to 7. The relative errors of the calibration data obtained for different placement numbers are depicted in Figure 7. The calibration accuracy increases with the number of target placements: the relative error drops markedly up to about five placements and changes little thereafter, so the calibration accuracy tends to stabilize once the number of placements reaches a certain threshold. Satisfactory results are obtained with more than five target placements.

3.2. Influence of Image Noise

In this case, the diameter of the circle is 50 mm and the target is placed twice. Gaussian noise is added to the circular target profile and the light stripe used for calibration, with the noise level varying from σ = 0 mm to σ = 1 mm in intervals of 0.1 mm. The relative errors of the calibration data obtained at different noise levels are depicted in Figure 8, which shows that the calibration accuracy decreases as the noise increases. In the actual experiment, the pixel size of the camera sensor is 3.5 µm. With the image processing methods described above, the extraction process reaches a precision of 0.2 pixels, or about 0.7 µm on the sensor, which corresponds to a relatively small error.

3.3. Influence of Circular Target Size

Gaussian noise with a level of σ = 0.6 mm is added to the circular target profile and the light stripe used for calibration. The target is placed twice, and its diameter is varied from 10 to 70 mm in intervals of 10 mm. The relative errors of the calibration data obtained at different target diameters are presented in Figure 9. The calibration precision increases as the circular target diameter increases from 10 to 50 mm, and the relative error decreases only slightly further at diameters above 50 mm. Thus, a circular target with a diameter of 50 mm is manufactured and utilized in the actual experiment.

4. Experiments

4.1. Experimental Setup

The utilized experimental system consists of a digital charge-coupled device (CCD) camera (MV-CE013-50 GM) and a laser projector (650 nm, 5 mW, 5 V) fixed on optical brackets. The camera is equipped with a megapixel lens with a focal length of 16 mm (Computar M1614-MP2). The CCD camera resolution is 1280 × 960 pixels with a maximum frame rate of 30 frames/s. The laser projector casts a single laser line with a minimum linewidth of 0.2 mm. The working distance of the system is approximately 450 mm. A photograph of the experimental setup is shown in Figure 10.

4.2. Experimental Procedure

Step 1: The camera model parameters are calibrated by Zhang’s method [8]. In practice, the MATLAB toolbox developed by Bouguet [20] is directly utilized to obtain the intrinsic matrix A and the distortion coefficients [K; G] of the camera.
Step 2: The circular target is placed at an appropriate position, and the camera captures a calibration image containing the circular target and the light stripe. The captured image is undistorted using A and [K; G]. The coordinates of the light stripe center and the elliptic fitting profile of the circular target edge are then obtained with the image processing method described in Section 2.2.1.
Step 3: The center coordinates of the circular target and two possible normal vectors of the plane containing the circle are calculated as described in Section 2.2.2.
Step 4: Two calibration images are captured at two positions of the circle target by performing Steps 2 and 3, yielding four possible light stripes. The initial light plane is determined using the method described in Section 2.2.3 and is further refined by adding new calibration images.

4.3. Accuracy Evaluation

To estimate the accuracy of the proposed method, the obtained results are compared with those of the method developed in Ref. [14]. The evaluation strategy is based on the CRIP, as illustrated in Figure 11. First, a chessboard is placed inside the measured volume. The grid line l of the chessboard in the horizontal direction intersects the laser stripe at point D, while points A, B, and C are corner points on l. The coordinates of A, B, and C are known in the CCS, and the pixel coordinates of a, b, c, and d, the projections of A, B, C, and D in the CPCS, are extracted. According to the CRIP, the following relations hold:
$$\begin{cases} CR = \dfrac{(u_a - u_c)(u_b - u_d)}{(u_b - u_c)(u_a - u_d)} = \dfrac{(v_a - v_c)(v_b - v_d)}{(v_b - v_c)(v_a - v_d)} \\[2ex] CR = \dfrac{(X_A - X_C)(X_B - X_D)}{(X_B - X_C)(X_A - X_D)} = \dfrac{(Y_A - Y_C)(Y_B - Y_D)}{(Y_B - Y_C)(Y_A - Y_D)} = \dfrac{(Z_A - Z_C)(Z_B - Z_D)}{(Z_B - Z_C)(Z_A - Z_D)} \end{cases} \tag{16}$$
where CR is the cross-ratio; (ua, va), (ub, vb), (uc, vc), and (ud, vd) represent the pixel coordinates of a, b, c, and d, respectively, and (XA, YA, ZA), (XB, YB, ZB), (XC, YC, ZC), and (XD, YD, ZD) denote the coordinates of A, B, C, and D in the CCS, respectively. Using this strategy, the coordinates (XD, YD, ZD) of D can be determined. Therefore, the distances dt,AD (from A to D), dt,BD (from B to D), and dt,CD (from C to D) are considered the ideal evaluation distances. The coordinates of testing point D in the CCS are then calculated using the proposed method and the method developed in Ref. [14], and the distances from the computed testing point D to A, B, and C in the CCS are taken as the measured distances: dm1 denotes the distances obtained with the method of Ref. [14], while dm2 denotes those determined with the proposed method. The 3D coordinates of the ideal points obtained by the principle of cross-ratio invariability and of the testing points computed by the different calibration techniques are listed in Table 1, and the calibration accuracy analysis data are presented in Table 2.
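For illustration, the following small sketch computes the cross-ratio from pixel coordinates and inverts Equation (16) for one coordinate of D; the numeric values are illustrative, not measurements from the paper.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear values, as in Equation (16)."""
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def solve_fourth(cr, a, b, c):
    """Invert the cross-ratio for the unknown fourth value d.
    From (a - c)(b - d) = cr (b - c)(a - d), which is linear in d."""
    return (cr * (b - c) * a - (a - c) * b) / (cr * (b - c) - (a - c))

# Illustrative pixel u-coordinates of a, b, c, d along the grid line l,
# and hypothetical known corner X-coordinates of A, B, C (mm).
cr = cross_ratio(238.0, 334.0, 430.5, 400.0)
X_D = solve_fourth(cr, 0.0, 10.0, 20.0)
print(X_D)
```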

5. Conclusions

In this study, a novel calibration method for line-structured light 3D measurements based on a single circular target is proposed. The circular target is easy to manufacture, and suitable circular features can even be found on existing objects. The RALS method is used to extract the elliptic fitting profile because of its high accuracy and robustness, while the light stripe is extracted with the sub-pixel method based on the Hesse matrix. According to projective geometry, the elliptical cone equation defined by the elliptic fitting profile and the optical center of the camera can be determined. By combining the obtained elliptical cone with the known diameter of the circular target, two possible positions and orientations of the circular target are distinguished, and two groups of 3D intersection points between the light plane and the circular target are identified. The correct group of these points is then selected, and the light plane is progressively fitted. The effectiveness of the proposed method is verified both theoretically and experimentally, and its measurement accuracy amounts to 0.05 mm.

Author Contributions

J.W. drafted the work or substantively revised it. In addition, X.L. configured the experiments and wrote the codes. J.W. and X.L. calculated the data, wrote the manuscript and plotted the figures. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China, grant number 21KJB460026, the Industry-university-research Cooperation Project of Jiangsu Province, grant number BY2021245, and the Young and Middle-aged Academic Leader of “Qinglan Project” of Universities in Jiangsu Province, grant number [Jiangsu Teacher letter (2020) No. 10].

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maul, A.; Mari, L.; Torres Irribarra, D.; Wilson, M. The quality of measurement results in terms of the structural features of the measurement process. Measurement 2018, 116, 611–620. [Google Scholar]
  2. Rambach, J.; Pagani, A.; Schneider, M.; Artemenko, O.; Stricker, D. 6DoF Object Tracking based on 3D Scans for Augmented Reality Remote Live Support. Computers 2018, 7, 7010006. [Google Scholar]
  3. Piotr, S.; Kraysztof, M.; Jan, R. On-Line Laser Triangulation Scanner for Wood Logs Surface Geometry Measurement. Sensors 2019, 19, 1074. [Google Scholar]
  4. Tang, Y.; Li, L.; Wang, C.; Chen, M.; Feng, W.; Zou, X.; Huang, K. Real-time detection of surface deformation and strain in recycled aggregate concrete-filled steel tubular columns via four-ocular vision. Robot. Comput.-Integr. Manuf. 2019, 59, 36–46. [Google Scholar] [CrossRef]
  5. Tang, Y.; Chen, M.; Wang, C.; Luo, L.; Li, J.; Lian, G.; Zou, X. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. Front. Plant Sci. 2020, 11, 510. [Google Scholar] [CrossRef]
  6. Wang, Z. Review of Real-time Three-dimensional Shape Measurement Techniques. Measurement 2020, 156, 107624. [Google Scholar] [CrossRef]
  7. Xu, X.B.; Fei, Z.W.; Yang, J.; Tan, Z.; Luo, M. Line structured light calibration method and centerline extraction: A review. Results Phys. 2020, 19, 103637. [Google Scholar] [CrossRef]
  8. Zhang, Z.Y. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  9. Huang, L.; Da, F.; Gai, S. Research on multi-camera calibration and point cloud correction method based on three-dimensional calibration object. Opt. Lasers Eng. 2019, 115, 32–41. [Google Scholar] [CrossRef]
  10. Wang, Y.; Yuan, F.; Jiang, H.; Hu, Y. Novel camera calibration based on cooperative target in attitude measurement. Opt. -Int. J. Light Electron Opt. 2016, 127, 10457–10466. [Google Scholar] [CrossRef]
  11. Huynh, D.Q.; Owens, R.A.; Hartmann, P.E. Calibration a Structured Light Stripe System: A Novel Approach. Int. J. Comput. Vis. 1999, 33, 73–86. [Google Scholar] [CrossRef]
  12. Xu, G.Y.; Li, L.F.; Zeng, J.C. A new method of calibration in 3D vision system based on structure-light. Chin. J. Comput. 1995, 18, 450–456. [Google Scholar]
  13. Dewar, R. Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system. Robot. Vis. Conf. Proc. 1988, 1, 5–13. [Google Scholar]
  14. Zhou, F.; Zhang, G. Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations. Image Vis. Comput. 2005, 23, 59–67. [Google Scholar] [CrossRef]
  15. Zhang, G.; Zhen, L.; Sun, J.; Wei, Z. Novel calibration method for multi-sensor visual measurement system based on structured light. Opt. Eng. 2010, 49, 258. [Google Scholar] [CrossRef]
  16. Jing, X.; Douet, J.; Zhao, J.; Song, L.; Chen, K. A simple calibration method for structured light-based 3D profile measurement. Opt. Laser Technol. 2013, 48, 187–193. [Google Scholar]
  17. Wei, Z.; Cao, L.; Zhang, G. A novel 1D target-based calibration method with unknown orientation for structured light vision sensor. Opt. Laser Technol. 2010, 42, 570–574. [Google Scholar] [CrossRef]
  18. Liu, Z.; Li, X.; Li, F.; Zhang, G. Calibration method for line-structured light vision sensor based on a single ball target. Opt. Lasers Eng. 2015, 69, 20–28. [Google Scholar] [CrossRef]
  19. Li, X.; Zhang, Z.; Yang, C. Reconstruction method for fringe projection profilometry based on light beams. Appl. Opt. 2016, 55, 9895. [Google Scholar] [CrossRef]
  20. Bouguet, J.Y. Camera Calibration Toolbox for MATLAB. 2015. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 20 October 2021).
  21. Maratea, A.; Petrosino, A.; Manzo, M. Extended Graph Backbone for Motif Analysis. In Proceedings of the 18th International Conference on Computer Systems and Technologies, Ruse, Bulgaria, 23–24 June 2017; pp. 36–43. [Google Scholar]
  22. Tsuji, S.; Matsumoto, F. Detection of ellipses by a modified Hough transformation. IEEE Trans. Comput. 1978, 27, 777–781. [Google Scholar] [CrossRef]
  23. Chia, A.Y.S.; Rahardja, S.; Rajan, D.; Leung, M.K. A split and merge based ellipse detector with self-correcting capability. IEEE Trans. Image Process. 2011, 20, 1991–2006. [Google Scholar] [CrossRef] [PubMed]
  24. Gioi, R.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  25. Rosin, P.L. A note on the least squares fitting of ellipses. Pattern Recognit. Lett. 1993, 14, 799–808. [Google Scholar]
  26. Kulpa, K. On the properties of discrete circles, rings, and disks. Comput. Graph. Image Process. 1979, 10, 348–365. [Google Scholar] [CrossRef]
  27. Ellenberger, S.L. Influence of defocus on measurements in microscope images. Appl. Sci. 2000, 15, 43–50. [Google Scholar]
  28. Zhang, L.J.; Huang, X.X.; Feng, W.C.; Liang, S.; Hu, T. Solution of duality in circular feature with three line configuration. Acta Opt. Sin. 2016, 36, 51. [Google Scholar]
Figure 1. Schematic diagram of the line-structured light 3D measurement process.
Figure 2. Calibration images. (a) Captured image of the circle target. (b) Elliptic fitting profile. (c) Extracted light stripe.
Figure 3. Flowchart of the complete image processing algorithm.
Figure 4. Schematic diagram of the circle position and orientation measurement process.
Figure 5. Schematic diagram of the light plane determination process.
Figure 6. Geometrical layout and dimensions of the simulation case (unit: mm).
Figure 7. Influence of the number of circular target placements on the calibration accuracy.
Figure 8. Influence of image noise on the calibration accuracy.
Figure 9. Influence of the circular target diameter on the calibration accuracy.
Figure 10. Experimental setup.
Figure 11. Schematic diagram of the evaluation strategy based on the cross-ratio invariability principle.
Table 1. Three-dimensional coordinates of the ideal points and testing points computed by Zhou's method and the method proposed in this study.

No. | Pixel Coordinates (pixels) | Cross-Ratio Invariability in the Target Coordinate System (mm) | 3D Coordinates of Testing Points by Zhou's Method (mm) | 3D Coordinates of Testing Points by Our Proposed Method (mm)
1 | (424.0, 631.5) | (45.715, 20, 0) | (3.262, 12.453, 447.108) | (3.271, 12.486, 448.308)
2 | (519.6, 641.6) | (44.284, 30, 0) | (13.011, 13.432, 445.060) | (13.048, 13.470, 446.297)
3 | (616.5, 653.5) | (43.920, 40, 0) | (22.790, 14.577, 442.758) | (22.856, 14.619, 444.029)
4 | (713.5, 665.0) | (43.133, 50, 0) | (32.483, 15.672, 440.537) | (32.579, 15.719, 441.842)
5 | (910.0, 690.0) | (41.093, 70, 0) | (51.794, 18.018, 435.861) | (51.957, 18.075, 437.232)
6 | (238.0, 607.0) | (44.514, 0, 0) | (−15.993, 10.034, 451.893) | (−16.033, 10.059, 453.023)
7 | (334.0, 607.0) | (45.290, 10, 0) | (−6.030, 10.022, 451.347) | (−6.046, 10.048, 452.525)
8 | (430.5, 607.0) | (42.974, 20, 0) | (3.961, 10.010, 450.800) | (3.972, 10.037, 452.025)
9 | (526.8, 607.0) | (45.806, 30, 0) | (13.907, 9.998, 450.255) | (13.947, 10.026, 451.527)
10 | (720.5, 607.0) | (45.211, 50, 0) | (33.841, 9.973, 449.164) | (33.944, 10.004, 450.529)
Table 2. Statistical results computed by Zhou's method and the method proposed in this study (unit: mm).

No. | dt | dm1 | dm2 | Δ(dt, dm1) | Δ(dt, dm2)
1 | 10.102 | 10.010 | 10.030 | 0.091 | 0.072
2 | 20.081 | 20.120 | 20.160 | −0.039 | −0.080
3 | 30.111 | 30.124 | 30.187 | −0.013 | −0.076
4 | 50.213 | 50.128 | 50.242 | 0.085 | −0.029
5 | 10.030 | 9.978 | 10.000 | 0.052 | 0.030
6 | 20.059 | 19.984 | 20.030 | 0.075 | 0.029
7 | 30.028 | 29.945 | 30.017 | 0.082 | 0.011
8 | 50.005 | 49.909 | 50.039 | 0.096 | −0.034
RMS error | | | | 0.072 | 0.051
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
