Article

A Multi-Camera Rig with Non-Overlapping Views for Dynamic Six-Degree-of-Freedom Measurement

State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(2), 250; https://doi.org/10.3390/s19020250
Submission received: 4 December 2018 / Revised: 28 December 2018 / Accepted: 7 January 2019 / Published: 10 January 2019

Abstract

Large-scale measurement plays an increasingly important role in intelligent manufacturing. However, existing instruments provide operators with little interactive or immersive experience. In this paper, an immersive positioning and measuring method based on augmented reality is introduced. An inside-out vision measurement approach using a multi-camera rig with non-overlapping views is presented for dynamic six-degree-of-freedom measurement. By using active LED markers, a flexible and robust solution is delivered for complex manufacturing sites. The space resection adjustment principle is described and measurement errors are simulated. An improved Nearest Neighbor method is employed for feature correspondence. The proposed tracking method is verified by experiments, and the results show good performance.

1. Introduction

In recent years, there has been growing interest in the intelligent manufacturing [1] of large-scale equipment, such as airplane assembly [2], shipbuilding [3], and spacecraft inspection [4]. As one of the key technologies in intelligent manufacturing, large-scale measurement [5] plays a crucial part in improving product quality and working efficiency. Large-scale measuring instruments are expected to provide adaptive and flexible services to end-users and to enable a highly integrated human-machine manufacturing system. However, popular measuring instruments such as the laser tracker [6], the total station [7], and the indoor Global Positioning System (iGPS) [8] have problems with portability and flexibility, especially in narrow spaces. In practical operation, it is quite challenging to measure complex components with high efficiency and accuracy in a narrow ship or spacecraft cabin using any of the above instruments. In addition, operating personnel have no access to real-time visual measuring results due to the lack of interaction with the measuring instruments, which makes it more difficult for them to become involved in the measurement environment. Augmented Reality (AR) [9] is a novel human-machine interaction tool that combines virtual objects with the real environment in a seamless way, thus offering an effective solution for large-scale measurement.
Against this background, an immersive human-machine-environment interactive positioning and measuring method is proposed. By integrating global positioning and local measuring, the three-dimensional coordinates of the measured objects can be obtained in the global coordinate system. Then, based on AR, the measuring results and auxiliary information are accurately overlaid onto the measured object in real time using a projector, which enhances the user's interactive and immersive experience. With the immersive positioning and measuring helmet (see Figure 1), operating personnel are able to keep their hands free to carry out assembly and inspection work. The whole system offers high integration, excellent portability, and powerful functionality. Therefore, the immersive positioning and measuring method gives a substantial boost to working efficiency through AR-assisted guidance, and it also represents the development trend of large-scale measurement in intelligent manufacturing. In order to obtain accurate measuring results and merge virtual information with the real object seamlessly, a high-accuracy global positioning and tracking method is required [10]. The major task of tracking is to determine the positions and orientations of the helmet in real time, that is, dynamic six-degree-of-freedom (6-DOF) measurement [11].
A number of alternative technologies have been proposed for indoor positioning [12], such as magnetic [13], inertial [14], ultrasound [15], and vision [16]. However, complex working environments, portability, and accuracy requirements pose challenges to these methods. With its small operating range, magnetic measurement is prone to distortion. Inertial measurement suffers from poor accuracy due to error accumulation over time. Ultrasound is severely affected by obstacles, so it is unsuitable for manufacturing sites. Compared with the above methods, vision measurement can achieve pixel-level accuracy and large-scale multi-target tracking with excellent flexibility and convenience, which gives it great advantages in industrial manufacturing.
Vision-based 6-DOF measurement methods can be classified into two categories: outside-in measurement [17] and inside-out measurement [18]. In outside-in measurement, cameras are installed in the working environment and markers are fixed on the moving target. Images of the markers are taken by the cameras to calculate the positions and orientations of the target. The OptiTrack system [19] developed by NaturalPoint is one of the representative outside-in systems; it achieves a positional error of less than 0.3 mm and a rotational error of less than 0.05°. However, it is costly to install multiple cameras in large-scale environments, and exact synchronization is also difficult to realize. By contrast, inside-out measurement uses cameras mounted on the tracked object to take images of markers in the working environment, which makes it more flexible and easier to extend. As the research focus of robot autonomous navigation, Simultaneous Localization and Mapping (SLAM) [20] relies on sequences of images to recognize the robot's location and surrounding environment. However, the computational load for image correspondence is particularly high, and this view-based approach can hardly meet the accuracy requirements. In order to improve the accuracy of the reference points, retro-reflective targets [21] have been used for indoor positioning. Nonetheless, such systems lack robustness, especially under varying illumination. Thus, active markers have also been utilized; the HiBall tracking system, which uses LED panels, is one of the most successful examples. The HiBall tracking system [22] achieves 0.5 mm and 0.03° of absolute error in a 4.5 m × 8.5 m room. However, it is quite difficult to install the LED panels at industrial sites, hindering the application of this system.
In order to better meet the needs of dynamic 6-DOF measurement for immersive positioning and measuring at manufacturing sites, this paper presents an inside-out measuring method using a multi-camera rig with non-overlapping views. The multi-camera rig is mounted on the integrated helmet for global tracking, which effectively increases the field of view and reduces the impact of vision occlusion. By imaging the cooperative LED markers deployed in the surrounding environment, the positions and orientations of the helmet are determined through a collinearity-equation-based space resection adjustment method. As the LED markers are interchangeable with 38.1 mm spherical targets, their three-dimensional coordinates can be accurately obtained with a laser tracker or an industrial photogrammetry system. Furthermore, a Nearest Neighbor (NN) method [23] combined with motion information is adopted to implement the matching of image points and LED markers under dynamic conditions.
The remainder of this paper is structured as follows: Section 2 describes the configuration of the multi-camera rig and the design of the cooperative object; Section 3 presents the dynamic 6-DOF measurement principle, including the space resection adjustment method and the feature correspondence method; the measurement error is simulated in Section 4, while experiments are carried out in Section 5; finally, conclusions are provided in Section 6.

2. System Hardware

As depicted in Figure 2a, the multi-camera rig consists of one control circuit board and three compact CMOS cameras, which are mounted on a 3D-printed connector with good rigidity. Each camera provides a resolution of 1280 pixel × 960 pixel at a frame rate of up to 54 Hz, and the pixel size is 3.75 µm × 3.75 µm. As the field of view of each camera with a 6 mm lens is 34.5°, the angle between two neighboring cameras is set to 35°. With this design, there are no overlapping views between neighboring cameras, so the multi-camera rig covers a larger visible range and is less affected by vision occlusion. The control circuit board is programmed to synchronize the clocks and gather the images, and it also transmits the data to the computer via Ethernet. Consequently, the multi-camera rig is well suited to global tracking, with light weight and high reliability.
The active LED markers are designed as control points to cope with complicated industrial environments. A red LED with a wavelength of 660 nm is installed at the center of a spherical target (see Figure 2b). The LED marker is aligned accurately using a TESA-VISIO 300 video measuring machine. Compared with passive markers, active LED markers are less sensitive to illumination, and they also provide optimal contrast and sharp edges. Moreover, this active LED marker is interchangeable with the 38.1 mm spherically mounted retroreflector (SMR) of a laser tracker.

3. Dynamic 6-DOF Measurement Principle

3.1. Camera Model

On the basis of the pinhole model, a more complex camera model is introduced for high-accuracy vision measurement, including the principal point offset and lens distortion. For convenience, the symmetrical plane of the image plane is analyzed. As shown in Figure 3, a spatial object point $P(X_p, Y_p, Z_p)$ is projected to $p(x_p, y_p)$ on the image plane through the perspective center $O_c$.
In the camera coordinate system ($O_c\text{-}X_cY_cZ_c$), the $X_c$ axis and $Y_c$ axis are parallel to the $x$ axis and $y$ axis of the image coordinate system respectively, and the $Z_c$ axis is along the optical axis. Owing to lens installation errors, there is an offset between the principal point $(x_0, y_0)$ and the center of the image $O$. Hence, the image point coordinates after principal point correction are expressed as:
$$x_c = x_p - x_0 = (u_p - u_0)\,d_x, \qquad y_c = y_p - y_0 = (v_p - v_0)\,d_y. \tag{1}$$
Here, $(u_p, v_p)$ and $(u_0, v_0)$ stand for the pixel coordinates of point $p$ and of the principal point respectively, and $(d_x, d_y)$ are the pixel separations. Besides radial and tangential lens distortion, affine and non-orthogonality deformations also cause image point offsets. The distortion is generalized into Equation (2):
$$\begin{aligned}\Delta x &= x_c r^2 k_1 + x_c r^4 k_2 + x_c r^6 k_3 + (r^2 + 2x_c^2)\,p_1 + 2 x_c y_c\, p_2 + x_c b_1 + y_c b_2,\\ \Delta y &= y_c r^2 k_1 + y_c r^4 k_2 + y_c r^6 k_3 + 2 x_c y_c\, p_1 + (r^2 + 2y_c^2)\,p_2 + y_c b_1,\end{aligned} \tag{2}$$
where $(\Delta x, \Delta y)$ denote the correction values for errors in the image plane, $x_c = x_p - x_0$ and $y_c = y_p - y_0$ stand for the image point coordinates after principal point correction, $r = \sqrt{x_c^2 + y_c^2}$ is the radial distance from the image point to the optical axis, $(k_1, k_2, k_3)$ are the radial distortion coefficients (generally only the first three are considered), $(p_1, p_2)$ are the tangential distortion coefficients (the third tangential term is generally ignored), and $(b_1, b_2)$ are the affine and non-orthogonality coefficients.
$$\begin{aligned} x_p &= x_0 + \Delta x + c\,\frac{r_1(X_p - X_0) + r_2(Y_p - Y_0) + r_3(Z_p - Z_0)}{r_7(X_p - X_0) + r_8(Y_p - Y_0) + r_9(Z_p - Z_0)},\\ y_p &= y_0 + \Delta y + c\,\frac{r_4(X_p - X_0) + r_5(Y_p - Y_0) + r_6(Z_p - Z_0)}{r_7(X_p - X_0) + r_8(Y_p - Y_0) + r_9(Z_p - Z_0)}, \end{aligned} \tag{3}$$
The collinearity equations are given by Equation (3) based on the camera model above, where $c$ represents the principal distance, $\mathbf{X}_0 = [X_0, Y_0, Z_0]^T$ are the coordinates of the perspective center in the object coordinate system, and $R$ defines the rotation of object coordinates into image coordinates by three independent rotation angles $\theta, \varphi, \kappa$ about the axes $X_c, Y_c, Z_c$, as given in Equation (4). Thus, the collinearity equations are functions of the six degrees of freedom $(X_0, Y_0, Z_0, \theta, \varphi, \kappa)$ of the camera.
$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} = \begin{bmatrix} \cos\varphi\cos\kappa & -\cos\varphi\sin\kappa & \sin\varphi \\ \cos\theta\sin\kappa + \sin\theta\sin\varphi\cos\kappa & \cos\theta\cos\kappa - \sin\theta\sin\varphi\sin\kappa & -\sin\theta\cos\varphi \\ \sin\theta\sin\kappa - \cos\theta\sin\varphi\cos\kappa & \sin\theta\cos\kappa + \cos\theta\sin\varphi\sin\kappa & \cos\theta\cos\varphi \end{bmatrix}. \tag{4}$$
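To make the forward imaging model concrete, the following Python sketch (not part of the original paper) projects an object point through Equations (1)-(4). The function names are illustrative, and the distortion terms of Equation (2) are evaluated at the ideal (distortion-free) image coordinates, a common approximation when the model is used as a forward projector.

```python
import numpy as np

def rotation_matrix(theta, phi, kappa):
    """Eq. (4): R = R_x(theta) @ R_y(phi) @ R_z(kappa), rotating object into camera coordinates."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project_point(P, X0, angles, c, principal_point, dist_coeffs):
    """Project object point P via the collinearity equations (3) and the distortion model (2).

    P               : object point (Xp, Yp, Zp)
    X0              : perspective centre (X0, Y0, Z0)
    angles          : (theta, phi, kappa)
    c               : principal distance
    principal_point : (x0, y0)
    dist_coeffs     : (k1, k2, k3, p1, p2, b1, b2)
    """
    R = rotation_matrix(*angles)
    d = R @ (np.asarray(P, float) - np.asarray(X0, float))   # point in camera coordinates
    xc, yc = c * d[0] / d[2], c * d[1] / d[2]                 # ideal image coordinates
    k1, k2, k3, p1, p2, b1, b2 = dist_coeffs
    r2 = xc * xc + yc * yc
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = xc * radial + p1 * (r2 + 2 * xc**2) + 2 * p2 * xc * yc + b1 * xc + b2 * yc
    dy = yc * radial + 2 * p1 * xc * yc + p2 * (r2 + 2 * yc**2) + b1 * yc
    x0, y0 = principal_point
    return x0 + dx + xc, y0 + dy + yc                         # Eq. (3)
```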

3.2. Feature Points Extraction Method

Using active LED markers as feature points, images with high contrast are acquired. The facula of an LED marker is shown in Figure 4. Under this condition, the squared centroid method is adopted for sub-pixel image processing, which achieves high extraction accuracy. The squared centroid method uses the squared gray value as the weight within the processing window:
$$x_m = \frac{\sum_i\sum_j x\,f^2(x, y)}{\sum_i\sum_j f^2(x, y)}, \qquad y_m = \frac{\sum_i\sum_j y\,f^2(x, y)}{\sum_i\sum_j f^2(x, y)}. \tag{5}$$
Here $(x_m, y_m)$ are the coordinates of the centroid and $f(x, y)$ is the gray value at the pixel position $(x, y)$. The squared centroid method is computationally fast and easy to implement.
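A minimal sketch of Equation (5) follows, assuming the facula has already been segmented into a rectangular processing window; the function name and the window handling are illustrative.

```python
import numpy as np

def squared_centroid(window, top_left=(0, 0)):
    """Sub-pixel centroid of Eq. (5): the squared gray values act as weights.

    window   : 2D array of gray values covering one LED facula
    top_left : pixel coordinates (row, col) of window[0, 0] in the full image
    """
    w = window.astype(np.float64) ** 2                     # squared gray values
    rows, cols = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    total = w.sum()
    xm = (cols * w).sum() / total + top_left[1]            # sub-pixel x coordinate
    ym = (rows * w).sum() / total + top_left[0]            # sub-pixel y coordinate
    return xm, ym
```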

3.3. Space Resection Adjustment Method

In practical measurement, a set of LED markers is deployed in the environment, as shown in Figure 5. The three-dimensional coordinates of each LED marker in the global object coordinate system $O\text{-}XYZ$ are obtained. The interior orientation parameters and the spatial relationships between the three cameras are also calibrated in advance:
$$X_1 = R_{21} X_2 + T_{21}, \qquad X_3 = R_{23} X_2 + T_{23}. \tag{6}$$
In Equation (6), $(R_{21}, T_{21})$ and $(R_{23}, T_{23})$ are the rotation matrices and translation vectors from coordinates $X_2$ in $O_2\text{-}X_2Y_2Z_2$ to coordinates $X_1$ in $O_1\text{-}X_1Y_1Z_1$ and $X_3$ in $O_3\text{-}X_3Y_3Z_3$ respectively. Meanwhile, the coordinate system of the multi-camera rig, $O_s\text{-}X_sY_sZ_s$, is assumed to be identical to the coordinate system $O_2\text{-}X_2Y_2Z_2$ of camera-2. Therefore, we can establish the reprojection error equations for each visible LED marker on the basis of the collinearity equations:
$$\begin{aligned} e_{x_j} &= x_j - x_{i0} - \Delta x_j - c_i\,\frac{r_{i1}(X_j - X_{i0}) + r_{i2}(Y_j - Y_{i0}) + r_{i3}(Z_j - Z_{i0})}{r_{i7}(X_j - X_{i0}) + r_{i8}(Y_j - Y_{i0}) + r_{i9}(Z_j - Z_{i0})},\\ e_{y_j} &= y_j - y_{i0} - \Delta y_j - c_i\,\frac{r_{i4}(X_j - X_{i0}) + r_{i5}(Y_j - Y_{i0}) + r_{i6}(Z_j - Z_{i0})}{r_{i7}(X_j - X_{i0}) + r_{i8}(Y_j - Y_{i0}) + r_{i9}(Z_j - Z_{i0})}. \end{aligned} \tag{7}$$
Here $j$ is the serial number of a visible LED marker and $i$ ($i = 1, 2, 3$) is the index of the camera that observes the $j$-th marker. The six degrees of freedom of camera-2, $(R_2, X_{20})$, with respect to the global object coordinate system can be expressed as follows:
$$X_2 = R_2\,(X - X_{20}). \tag{8}$$
Substituting Equation (8) into Equation (6), the following relations are obtained,
$$\begin{aligned} X_1 &= R_{21} R_2 (X - X_{20}) + T_{21} = R_{21} R_2 \bigl(X - (X_{20} - R_2^{-1} R_{21}^{-1} T_{21})\bigr),\\ X_3 &= R_{23} R_2 (X - X_{20}) + T_{23} = R_{23} R_2 \bigl(X - (X_{20} - R_2^{-1} R_{23}^{-1} T_{23})\bigr). \end{aligned} \tag{9}$$
Therefore, the six degrees of freedom of camera-1, $(R_1, X_{10})$, and camera-3, $(R_3, X_{30})$, can be expressed through $(R_2, X_{20})$:
$$R_1 = R_{21} R_2, \quad X_{10} = X_{20} - R_2^{-1} R_{21}^{-1} T_{21}, \qquad R_3 = R_{23} R_2, \quad X_{30} = X_{20} - R_2^{-1} R_{23}^{-1} T_{23}. \tag{10}$$
In addition, as an orthonormal matrix, the rotation matrix $R_2$ satisfies the following constraint equations:
$$\begin{aligned} f_1 &= r_{21}^2 + r_{22}^2 + r_{23}^2 - 1 = 0, & f_2 &= r_{24}^2 + r_{25}^2 + r_{26}^2 - 1 = 0, & f_3 &= r_{27}^2 + r_{28}^2 + r_{29}^2 - 1 = 0,\\ f_4 &= r_{21} r_{24} + r_{22} r_{25} + r_{23} r_{26} = 0, & f_5 &= r_{21} r_{27} + r_{22} r_{28} + r_{23} r_{29} = 0, & f_6 &= r_{24} r_{27} + r_{25} r_{28} + r_{26} r_{29} = 0. \end{aligned} \tag{11}$$
Consequently, there are only six unknown parameters, and the solution requires at least three LED markers that do not lie on a common straight line. A non-linear optimization algorithm is proposed to calculate $(R_2, X_{20})$, and a reprojection-error-based objective function is established as Equation (12) using the Lagrange multiplier method [24]:
$$F = \sum_{j=1}^{n} \bigl(e_{x_j}^2 + e_{y_j}^2\bigr) + \lambda \cdot \sum_{k=1}^{6} f_k^2 = \min, \tag{12}$$
where $n$ stands for the number of visible LED markers and $\lambda$ is the Lagrange multiplier. With its fast convergence rate and strong robustness, the Levenberg-Marquardt (LM) algorithm [25] is employed for this optimization problem. In order to obtain the global optimal solution, the initial value of the optimization is calculated using the EPnP algorithm [26]. Eventually, the six degrees of freedom $(X_s, Y_s, Z_s, \theta_s, \varphi_s, \kappa_s)$ of the multi-camera rig are derived from $R_2$ and $X_{20}$.
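As an illustration, the following sketch sets up this optimization with SciPy's Levenberg-Marquardt solver and OpenCV's EPnP for the initial value. The data layout, the fixed quadratic penalty used in place of an explicitly solved Lagrange multiplier, and the assumption that the image points have already been reduced to metric image-plane coordinates by Equations (1) and (2) are illustrative choices rather than the authors' implementation.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def resect_rig(markers, points, c, rig, lam=1e3):
    """Estimate camera-2's pose (R2, X20) from all three cameras, after Eqs. (6)-(12).

    markers : {i: (Ni, 3) array} global coordinates of the LED markers seen by camera i
    points  : {i: (Ni, 2) array} matched image points of camera i (metric image-plane coords)
    c       : {i: principal distance of camera i}, same units as `points`
    rig     : {i: (R2i, T2i)} calibrated rig transforms of Eq. (6); identity/zero for i = 2
    lam     : weight of the orthonormality penalty, standing in for the Lagrange multiplier
    """
    # EPnP on camera-2's own observations supplies the initial value (Eq. (12) is non-convex).
    K2 = np.array([[c[2], 0, 0], [0, c[2], 0], [0, 0, 1]], float)
    _, rvec, tvec = cv2.solvePnP(np.asarray(markers[2], np.float64),
                                 np.asarray(points[2], np.float64),
                                 K2, None, flags=cv2.SOLVEPNP_EPNP)
    R0, _ = cv2.Rodrigues(rvec)
    x0 = np.hstack([R0.ravel(), (-R0.T @ tvec).ravel()])    # nine entries of R2, then X20

    def residuals(x):
        R2, X20 = x[:9].reshape(3, 3), x[9:]
        res = []
        for i, (R2i, T2i) in rig.items():
            for P, xy in zip(markers[i], points[i]):
                Xi = R2i @ (R2 @ (P - X20)) + T2i           # marker in camera-i coordinates
                res.extend(xy - c[i] * Xi[:2] / Xi[2])      # reprojection errors, Eq. (7)
        ortho = R2 @ R2.T - np.eye(3)                       # constraints f1..f6 of Eq. (11)
        res.extend(np.sqrt(lam) * ortho[np.triu_indices(3)])
        return np.asarray(res)

    sol = least_squares(residuals, x0, method="lm")         # Levenberg-Marquardt, Eq. (12)
    return sol.x[:9].reshape(3, 3), sol.x[9:]
```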

3.4. Feature Points Correspondence Method

Because an image contains only a few feature points that look almost identical, it is a considerable challenge to match the corresponding LED markers under dynamic conditions. The NN method is adopted for feature matching by searching for the nearest point couples in two images, where each point couple represents the same LED marker. In order to improve robustness, the motion information of the multi-camera rig is incorporated. The motion state vector of the multi-camera rig at time $t_k$ is $S_k = [X_k, Y_k, Z_k, \nu_{xk}, \nu_{yk}, \nu_{zk}, \theta_k, \varphi_k, \kappa_k, \omega_{xk}, \omega_{yk}, \omega_{zk}]$, where $(\nu_{xk}, \nu_{yk}, \nu_{zk})$ and $(\omega_{xk}, \omega_{yk}, \omega_{zk})$ denote the velocities and angular velocities respectively. Considering that users' movements are normally slow, the multi-camera rig is assumed to move with constant velocity during the time $\Delta t$ between two adjacent frames. Hence, the state of the multi-camera rig $S_k$ can be predicted from $S_{k-1}$ as follows:
$$\begin{aligned} X_k &= X_{k-1} + \nu_{x(k-1)}\,\Delta t, & Y_k &= Y_{k-1} + \nu_{y(k-1)}\,\Delta t, & Z_k &= Z_{k-1} + \nu_{z(k-1)}\,\Delta t,\\ \theta_k &= \theta_{k-1} + \omega_{x(k-1)}\,\Delta t, & \varphi_k &= \varphi_{k-1} + \omega_{y(k-1)}\,\Delta t, & \kappa_k &= \kappa_{k-1} + \omega_{z(k-1)}\,\Delta t. \end{aligned} \tag{13}$$
Then the positions and orientations of the three cameras can also be predicted. Based on Equation (3), the LED markers are projected onto the image plane and the image coordinates of these predicted feature points are calculated. Next, the nearest point couples between the predicted image and the real image are found using the NN method, where the distance between two image points is defined as:
$$d = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}. \tag{14}$$
In order to avoid mismatching caused by occlusion and image noise, the ratio of the shortest distance to the second-shortest distance is validated. Furthermore, a reciprocity check is employed to remove outliers. The following steps are therefore performed for feature point matching (a minimal code sketch is given after the list).
  • For a point $P_{r,i}$ on the real image, we calculate the distances from $P_{r,i}$ to all the points on the predicted image and select its nearest neighbor $P_{p,j}$, which has the shortest distance. If the ratio of the shortest distance to the second-shortest distance is less than the threshold $\lambda$, we continue to the next step. If not, we discard the point $P_{r,i}$ as an outlier.
  • We calculate the distances from $P_{p,j}$ to all the points on the real image. Then we check whether $P_{r,i}$ has the shortest distance, and whether the ratio of the shortest distance to the second-shortest distance is less than $\lambda$. When both criteria are fulfilled, the nearest point couple $(P_{r,i}, P_{p,j})$ is accepted as a correct match.
  • By repeating the above process, we complete the feature point matching (see Figure 6). The value of $\lambda$ is set according to the deployment of the LED markers.
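The sketch below illustrates the constant-velocity prediction of Equation (13) together with the ratio-tested, mutual nearest-neighbor matching described above; the array layout and function names are assumptions made for illustration.

```python
import numpy as np

def predict_state(prev_state, dt):
    """Constant-velocity prediction of Eq. (13) for S = [X, Y, Z, vx, vy, vz, theta, phi, kappa, wx, wy, wz]."""
    X, Y, Z, vx, vy, vz, th, ph, ka, wx, wy, wz = prev_state
    return np.array([X + vx * dt, Y + vy * dt, Z + vz * dt, vx, vy, vz,
                     th + wx * dt, ph + wy * dt, ka + wz * dt, wx, wy, wz])

def match_nn(real_pts, pred_pts, ratio=0.3):
    """Mutual nearest-neighbor matching with the distance-ratio test (threshold lambda).

    real_pts : (N, 2) image points extracted from the current frame
    pred_pts : (M, 2) reprojections of the LED markers predicted from Eqs. (13) and (3)
    Returns a list of (real_index, predicted_index) couples.
    """
    D = np.linalg.norm(real_pts[:, None, :] - pred_pts[None, :, :], axis=2)  # Eq. (14)
    matches = []
    for i in range(D.shape[0]):
        order = np.argsort(D[i])
        if len(order) > 1 and D[i, order[0]] > ratio * D[i, order[1]]:
            continue                                  # ambiguous: discard P_r,i as an outlier
        j = order[0]
        back = np.argsort(D[:, j])                    # reciprocity check from P_p,j
        if back[0] == i and (len(back) < 2 or D[back[0], j] <= ratio * D[back[1], j]):
            matches.append((i, j))
    return matches
```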

3.5. Dynamic Measurement Process

In order to accomplish continuous tracking, a system initialization has to be performed. During initialization, the multi-camera rig remains stationary and the feature point correspondence is completed manually. Once the initial state is determined, the 6-DOF of the rig can be calculated in real time in the measurement field. The complete measurement process is shown in Figure 7.

4. Measurement Error Simulation

On the basis of the space resection adjustment method given in the previous section, measurement errors mainly arise from the calibration errors of the interior orientation parameters, the calibration errors of the spatial relationship parameters, and the position errors of the LED markers, which include machining errors and measuring errors. Although measurement accuracy also depends on the focal length and on the number and distribution of LED markers [27], these factors are not discussed in this paper.
Using the Monte Carlo simulation technique, the 6-DOF measurement errors are analyzed. The deployment of the multi-camera rig and 15 markers is shown in Figure 8; in this setup each camera observes 5 non-planar markers. The parameters of the multi-camera rig are set according to the system hardware design described in Section 2. After adding normally distributed noise, the root mean square (RMS) errors between the simulated 6-DOF values and the true values are calculated. The following simulations are conducted to study the impacts of the above factors on the measurement errors. The sample size is set to $10^4$ in each simulation.
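A sketch of one such Monte Carlo experiment is given below; it assumes the hypothetical resect_rig() sketch from Section 3.3 and noise-free observations generated from a known true pose, and it reports only the position RMS (the angular RMS could be accumulated from the recovered rotation in the same way).

```python
import numpy as np

def monte_carlo_position_rms(X20_true, markers, points, c, rig,
                             sigma_marker=0.2, sigma_point=0.0,
                             n_trials=10_000, seed=0):
    """Perturb the inputs with normally distributed noise, re-solve the space resection,
    and return the RMS position error of the rig along each global axis.

    X20_true     : true position of the rig used to generate the noise-free observations
    sigma_marker : standard deviation of the marker position noise per axis
    sigma_point  : standard deviation of the image point noise
    """
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_trials):
        noisy_markers = {i: M + rng.normal(0.0, sigma_marker, M.shape)
                         for i, M in markers.items()}
        noisy_points = {i: p + rng.normal(0.0, sigma_point, p.shape)
                        for i, p in points.items()}
        # resect_rig() is the space-resection sketch from Section 3.3 (assumed available here).
        _, X20 = resect_rig(noisy_markers, noisy_points, c, rig)
        errors.append(X20 - X20_true)
    return np.sqrt(np.mean(np.square(errors), axis=0))      # RMS error per axis
```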
Firstly, the impact of the marker position error is studied. Normally distributed position noise is added to each marker along the three axes, with the standard deviation varying from 0 mm to 0.5 mm. As shown in Figure 9, the RMS error of the 6-DOF increases linearly with the marker position error. When a realistic error of 0.2 mm is assumed for the markers, the three-dimensional position of the multi-camera rig is computed to an accuracy of about 0.5 mm.
Since the calibration errors of the interior parameters are directly reflected in the errors of the image points, image point noise following a normal distribution is added. For this simulation, the noise is varied within 0.2 pixel, corresponding to 0.75 µm. Figure 10 illustrates a linear relationship between the 6-DOF measurement error and the image point error. With an image point error of 0.1 pixel, or 0.375 µm, the angular error is less than 0.01°.
As part of the spatial relationship error, the rotation error between cameras is added to evaluate its influence on the 6-DOF measurement. The noise level is varied from 0° to 0.01° and the corresponding measurement error is depicted in Figure 11. There is a clear linear trend for all six degrees of freedom. It can also be observed that the rotation angle about the X axis is computed with better robustness than the other two rotation angles.
Finally, the translation error between cameras is investigated. The noise of the relative position is varied within 0.5 mm. As seen from Figure 12, the 6-DOF measurement error again shows a linear relationship with the translation calibration error [28].
Furthermore, another simulation is carried out to compare the measurement accuracy of a single camera and the multi-camera rig in the same setup as above (see Figure 8). With a focal length of 2.4 mm, the single camera covers almost as wide a view as the multi-camera rig and observes all 15 markers. Normally distributed noise, including marker position noise (0.1 mm), image point noise (0.1 pixel), and spatial relationship noise (only for the multi-camera rig), is then added to simulate the 6-DOF measurement error. The spatial relationship noise is composed of the rotation noise (0.005°) and the translation noise (0.1 mm) between cameras, which are typical values for the calibration of non-overlapping cameras. The results in Table 1 show that the multi-camera rig gives higher accuracy than the single camera.

5. Experiment

Before the experiments, the multi-camera rig is fixed to a helmet. The interior orientation parameters and spatial relationship parameters of the three cameras are calibrated in a large-scale spatial photogrammetric test field. The following experiments are then conducted to evaluate the performance of the proposed method.

5.1. Static Measurement Experiment

The static measurement experiment is conducted in a 5 m × 5 m × 3 m measurement field (see Figure 13). Ten LED markers are deployed, and their spatial coordinates are measured using a Leica AT901 laser tracker.

5.1.1. Measurement Repeatability

The helmet is randomly placed at ten different positions in the measurement field, and five images are captured at each position. All feature points are extracted, and the pixel coordinate repeatability of each point is shown in Figure 14. From the results, the extraction precision of the feature points in either axis is better than 0.01 pixel.
Then the 6-DOF $(X_0, Y_0, Z_0, \theta, \varphi, \kappa)$ of the multi-camera rig are calculated, and the 6-DOF measurement repeatability at each position is analyzed. As shown in Figure 15, the standard deviations of the positions along the global coordinate axes are less than 0.5 mm and the standard deviations of the three rotations are better than 0.01°.

5.1.2. Distance Measurement

The helmet is mounted on a motorized translation stage with a long travel of 1000 mm (see Figure 13); the straightness error of the translation stage is less than 0.02 mm. The translation stage is placed in the measurement field and set to travel 900 mm each time. The positions of the multi-camera rig are obtained before and after the translation, so the travel distance $D$ can be calculated by $D = \sqrt{(X_a - X_b)^2 + (Y_a - Y_b)^2 + (Z_a - Z_b)^2}$. In addition, an SMR is fixed on the helmet to obtain accurate travel distances with the laser tracker as reference values. Nine sets of results are acquired with the translation stage placed at nine different positions and orientations, and the measurement errors of the travel distances are shown in Table 2. Using the multi-camera rig, the RMS error of the distance measurement is 0.383 mm.
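For completeness, a short sketch of the travel distance $D$ computed from two measured rig positions is shown below; the positions in the example are illustrative values, not the experimental data.

```python
import numpy as np

def travel_distance(pos_a, pos_b):
    """Travel distance D between two measured rig positions (Xa, Ya, Za) and (Xb, Yb, Zb)."""
    return float(np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float)))

# Hypothetical example: rig positions in mm before and after the stage moves.
print(travel_distance((120.0, 35.0, 10.0), (1020.0, 48.0, 12.5)))   # ~900.1 mm
```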

5.2. Dynamic Measurement Experiment

5.2.1. Operating Speed

In order to evaluate the performance of dynamic measurement, the operating speed is tested using the C++ language in Visual Studio 2013 on a laptop with an Intel(R) Core(TM) i7-6700HQ CPU at 2.60 GHz and 8 GB RAM. The test is conducted with ten LED markers and the maximum time consumption is shown in Figure 16. A single measurement takes approximately 33.9 ms, of which feature extraction accounts for 85%.

5.2.2. 6-DOF Measurement

As shown in Figure 17, the helmet and the Leica T-Mac are both mounted on a three-axis turntable to assess the dynamic 6-DOF measurement accuracy. In this experimental setup, the T-Mac has a rotational accuracy of 0.01° and a positional accuracy of about 30 µm. The spatial relationship between the two devices remains constant, no matter how the turntable rotates. Ten LED markers are deployed about five meters in front of the turntable, and their three-dimensional coordinates are measured using the Leica AT901 laser tracker. Consequently, the positions and orientations of the helmet and the T-Mac are unified in the laser tracker coordinate system $O\text{-}XYZ$.
The turntable is set to rotate 20°, 15°, and 10° about its outer, middle, and inner axes respectively at an angular velocity of 5°/s, and then it returns to the starting position. In the feature matching process, the NN method is applied with the threshold $\lambda = 0.3$. The multi-camera rig and the T-Mac are triggered synchronously at 20 Hz, and their motion trajectories are shown in Figure 18.
A further test is carried out to validate the proposed feature correspondence method. Here, the sampling interval is altered to simulate different angular velocities of the turntable. For an angular velocity of 10°/s, for example, half of the obtained images are selected at equal intervals. Feature correspondence between adjacent frames is then performed with and without motion prediction. The numbers of mismatches at different angular velocities are listed in Table 3. The results indicate that using motion prediction helps to reduce mismatches.
Moreover, the six degrees of freedom of the helmet in the T-Mac coordinate system $O_T\text{-}X_TY_TZ_T$ at each triggering moment are also acquired (see Figure 19). Based on the 600 sets of data obtained, the standard deviations of the six degrees of freedom are listed in Table 4. The standard deviations of the dynamic measurement are slightly larger than those of the static measurement. This is probably caused by the time synchronization error in triggering the multi-camera rig and the T-Mac, which needs further verification.

6. Conclusions

In this paper, a multi-camera rig with excellent portability and high reliability is presented for dynamic 6-DOF measurement. The multi-camera rig significantly increases the overall field of view while guaranteeing measurement accuracy through space resection adjustment. The LED markers offer a flexible and robust solution in complex manufacturing sites. The improved Nearest Neighbor method is employed for feature correspondence under dynamic conditions. The proposed global tracking method is validated by simulations and experiments, which demonstrate good static and dynamic measurement performance.
Considering that the proposed feature matching method is suited to slow-moving conditions, an inertial measurement unit (IMU) will be utilized in future research. With its high measuring frequency, an IMU can provide accurate short-term compensation of positions and orientations for the vision measurement. Meanwhile, the vision measurement can effectively correct the drift error of the IMU. Therefore, visual-inertial tracking is a promising approach for dealing with fast and intricate movements.

Author Contributions

Y.R. and Z.N. proposed the idea; L.Y. and Y.R. prepared the materials and tools; Z.N. and J.L. performed the experiments and analyzed the data; Z.N. wrote the paper; J.Z. revised the paper.

Funding

This research was funded by National Key Research and Development Program of China (Grant No. 2017YFF0204802), National Natural Science Foundation of China (51775380, 51835007, 51721003) and Natural Science Foundation of Tianjin (Grant No. 18JCYBJC19400).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent Manufacturing in the Context of Industry 4.0: A Review. Engineering 2017, 3, 616–630.
2. Jamshidi, J.; Kayani, A.; Iravani, P.; Maropoulos, P.G.; Summers, M.D. Manufacturing and assembly automation by integrated metrology systems for aircraft wing fabrication. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2010, 224, 25–36.
3. Lee, D.; Ku, N.; Kim, T.W.; Kim, J.; Lee, K.Y.; Son, Y.S. Development and application of an intelligent welding robot system for shipbuilding. Robot. Comput. Integr. Manuf. 2011, 27, 377–388.
4. Liu, Y.; Li, S.; Wang, J. Assembly auxiliary system for narrow cabins of spacecraft. Chin. J. Mech. Eng. 2015, 28, 1080–1088.
5. Franceschini, F.; Galetto, M.; Maisano, D.; Mastrogiacomo, L. Large-scale dimensional metrology (LSDM): From tapes and theodolites to multi-sensor systems. Int. J. Precis. Eng. Manuf. 2014, 15, 1739–1758.
6. Liu, Z.; Xie, Y.; Xu, J.; Chen, K. Laser tracker based robotic assembly system for large scale peg-hole parts. In Proceedings of the 4th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent, Hong Kong, China, 4–7 June 2014; pp. 574–578.
7. Keller, F.; Sternberg, H. Multi-Sensor Platform for Indoor Mobile Mapping: System Calibration and Using a Total Station for Indoor Applications. Remote Sens. 2013, 5, 5805–5824.
8. Schmitt, R.; Nisch, S.; Schönberg, A.; Demeester, F.; Renders, S. Performance evaluation of iGPS for industrial applications. In Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation, Zurich, Switzerland, 15–17 September 2010; pp. 1–8.
9. Fernández-Caramés, T.M.; Fraga-Lamas, P.; Suárez-Albela, M.; Vilar-Montesinos, M. A Fog Computing and Cloudlet Based Augmented Reality System for the Industry 4.0 Shipyard. Sensors 2018, 18, 1798.
10. Fang, W.; Zheng, L.; Deng, H.; Zhang, H. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion. Sensors 2017, 17, 1037.
11. Shi, S.; You, Z.; Zhao, K.; Wang, Z.; Ouyang, C.; Cao, Y. A 6-DOF Navigation Method based on Iterative Closest Imaging Point Algorithm. Sci. Rep. 2017, 7, 17414.
12. Hassan, N.U.; Naeem, A.; Pasha, M.A.; Jadoon, T.; Yuen, C. Indoor Positioning Using Visible LED Lights: A Survey. ACM Comput. Surv. 2015, 48, 1–32.
13. Carmigniani, J.; Furht, B.; Anisetti, M.; Ceravolo, P.; Damiani, E.; Ivkovic, M. Augmented reality technologies, systems and applications. Multimed. Tools Appl. 2011, 51, 341–377.
14. Zhou, F.; Duh, H.B.; Billinghurst, M. Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR. In Proceedings of the 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, Cambridge, UK, 15–18 September 2008; pp. 193–202.
15. Nee, A.; Ong, S.; Chryssolouris, G.; Mourtzis, D. Augmented reality applications in design and manufacturing. CIRP Ann. 2012, 61, 657–679.
16. Bae, H.; Golparvar-Fard, M.; White, J. High-precision vision-based mobile augmented reality system for context-aware architectural, engineering, construction and facility management (AEC/FM) applications. Visual. Eng. 2013, 1, 3.
17. Pustka, D.; Hülß, J.; Willneff, J.; Pankratz, F.; Huber, M.; Klinker, G. Optical outside-in tracking using unmodified mobile phones. In Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, GA, USA, 5–8 November 2012; pp. 81–89.
18. Krum, D.M.; Suma, E.A.; Bolas, M. Augmented reality using personal projection and retroreflection. Pers. Ubiquitous Comput. 2012, 16, 17–26.
19. OptiTrack—Motion Capture Systems. Available online: www.optitrack.com (accessed on 3 December 2018).
20. Khairuddin, A.R.; Talib, M.S.; Haron, H. Review on simultaneous localization and mapping (SLAM). In Proceedings of the 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), George Town, Malaysia, 27–29 November 2015; pp. 85–90.
21. Mautz, R.; Tilch, S. Survey of optical indoor positioning systems. In Proceedings of the 2011 International Conference on Indoor Positioning and Indoor Navigation, Guimaraes, Portugal, 21–23 September 2011; pp. 1–7.
22. Welch, G.; Bishop, G.; Vicci, L.; Brumback, S.; Keller, K.; Colucci, D. HiBall tracker: High-performance wide-area tracking for virtual and augmented environments. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, London, UK, 20–22 December 1999; pp. 1–10.
23. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
24. Evtushenko, Y. Generalized Lagrange multiplier technique for nonlinear programming. J. Optim. Theory Appl. 1977, 21, 121–135.
25. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Watson, G.A., Ed.; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116.
26. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An Accurate O(n) Solution to the PnP Problem. Int. J. Comput. Vis. 2008, 81, 155.
27. Luhmann, T. Precision potential of photogrammetric 6DOF pose estimation with a single camera. ISPRS J. Photogramm. Remote Sens. 2009, 64, 275–284.
28. Liu, Z.; Zhang, G.; Wei, Z.; Sun, J. Novel calibration method for non-overlapping multiple vision sensors based on 1D target. Opt. Lasers Eng. 2011, 49, 570–577.
Figure 1. Immersive positioning and measuring helmet.
Figure 2. System hardware: (a) multi-camera rig and (b) active LED marker.
Figure 3. Accurate vision measurement camera model.
Figure 4. Facula of LED marker: (a) at a distance of 2 m and (b) at a distance of 5 m.
Figure 5. Measurement layout.
Figure 6. Feature points matching: (a) real image with real pose, (b) predicted image with estimated pose, and (c) nearest neighbor matching method.
Figure 7. Dynamic six-degree-of-freedom (6-DOF) measurement process.
Figure 8. Setup of measurement error simulation.
Figure 9. The root mean square (RMS) error of 6-DOF with respect to marker position error: (a) the error of rotation angle and (b) the error of system position.
Figure 10. The RMS error of 6-DOF with respect to image point error: (a) the error of rotation angle and (b) the error of system position.
Figure 11. The RMS error of 6-DOF with respect to angle calibration error: (a) the error of rotation angle and (b) the error of system position.
Figure 12. The RMS error of 6-DOF with respect to position calibration error: (a) the error of rotation angle and (b) the error of system position.
Figure 13. Static measurement experiment scene.
Figure 14. Standard deviations of image points pixel coordinates.
Figure 15. 6-DOF measurement repeatability: (a) standard deviations of rotations and (b) standard deviations of positions.
Figure 16. Consuming time of single measurement.
Figure 17. Dynamic measurement experiment scene.
Figure 18. Motion trajectories of the helmet and the T-Mac in the laser tracker coordinate system.
Figure 19. Six degrees of freedom of the helmet in the T-Mac coordinate system: (a) angle about $X_T$ axis, (b) angle about $Y_T$ axis, (c) angle about $Z_T$ axis, (d) position along $X_T$ axis, (e) position along $Y_T$ axis and (f) position along $Z_T$ axis.
Table 1. Measurement accuracy comparison between the single camera and the multi-camera rig.

Parameter | The Single Camera | The Multi-Camera Rig
$\theta_s$ (°) | 0.0147 | 0.0087
$\varphi_s$ (°) | 0.0067 | 0.0084
$\kappa_s$ (°) | 0.0038 | 0.0065
$X_s$ (mm) | 0.715 | 0.629
$Y_s$ (mm) | 1.191 | 0.754
$Z_s$ (mm) | 0.292 | 0.528
Table 2. Measurement error of travel distance (mm).

Position | Multi-Camera Rig | Laser Tracker | Measurement Error
1 | 900.567 | 900.330 | 0.237
2 | 900.436 | 900.242 | 0.194
3 | 900.064 | 900.247 | −0.183
4 | 899.970 | 900.381 | −0.411
5 | 899.848 | 900.318 | −0.470
6 | 899.594 | 900.207 | −0.613
7 | 899.968 | 900.347 | −0.379
8 | 900.588 | 900.185 | 0.403
9 | 900.863 | 900.342 | 0.521
Table 3. Numbers of mismatching with and without motion prediction.

Angular Velocity | Without Motion Prediction | With Motion Prediction
5°/s | 0 | 0
10°/s | 0 | 0
20°/s | 1 | 1
30°/s | 6 | 0
40°/s | 18 | 3
50°/s | 20 | 7
Table 4. Standard deviations of dynamic six-degree-of-freedom (6-DOF) measurement.

Parameter | Standard Deviation
Angle about $X_T$ axis (°) | 0.0137
Angle about $Y_T$ axis (°) | 0.0140
Angle about $Z_T$ axis (°) | 0.0147
Position along $X_T$ axis (mm) | 0.829
Position along $Y_T$ axis (mm) | 0.640
Position along $Z_T$ axis (mm) | 0.834
