Article

Calibration Procedure of a Multi-Camera System: Process Uncertainty Budget

IDEKO, Basque Research and Technology Alliance (BRTA), 20870 Elgoibar, Spain
* Author to whom correspondence should be addressed.
Sensors 2023, 23(2), 589; https://doi.org/10.3390/s23020589
Submission received: 30 November 2022 / Revised: 23 December 2022 / Accepted: 28 December 2022 / Published: 4 January 2023

Abstract

The automated six Degrees of Freedom (DoF) positioning of industrial components has become an added value in production processes, as long as the required accuracy is guaranteed. This is where multi-camera systems are finding their niche in the market. Among other things, these systems make it easy to automate tracking processes without human intervention or prior knowledge of vision and/or metrology. In addition, the cost of integrating a new sensor into the complete system is negligible compared to other multi-tracker systems. The increase in information from different points of view in multi-camera systems raises the accuracy, on the premise that the more points of view, the lower the level of uncertainty. This work is devoted to the calibration procedures of multi-camera systems, which are decisive for achieving high performance, with a particular focus on the uncertainty budget. Moreover, an evaluation methodology has been developed, which is key to determining the level of accuracy of the measurement system.

1. Introduction

Vision systems in which six Degrees of Freedom (DoF) positioning is performed by image processing have experienced significant growth in the industrial sector in recent years. Although high-precision systems such as laser trackers are already integrated—through norms and standards—in production lines for large-scale measurements, the high economic cost of these devices clearly stands out as a drawback, among others. A lower-cost alternative to laser trackers are the optical CMMs (Coordinate Measuring Machines), also called vision trackers, optical measurement sensors, or even portable CMMs. These portable measuring devices have revolutionized the field of vision metrology. They have been included in the initial stages of production lines, in different industrial environments, to support high-precision inspection, tracking and positioning applications, allowing measurements to be taken more quickly and easily. These systems are composed of two or three pre-calibrated cameras, which provide the position of multiple markers.
Moreover, this technology is also increasingly integrating dynamic tracking functionality to better tackle vibrating or non-static environments [1]. Vibrations in production factories result from a variety of sources, such as production machinery, forklifts or bridge cranes, and they are a common problem for this type of portable device. The degradation in measurement results is caused by the imprecise positioning of the mechanical structure. Through a self-referencing alternative, which does not depend on the mounting structure, it is possible to determine the six degrees of freedom of the sensor. In this way, self-referencing is becoming the preferred alternative, especially in automation tasks with robotic arms [2,3]. The Canadian firm Creaform demonstrated the capabilities of its C-Track device in vibration environments, compared to a poly-articulated arm [1]. The result obtained in a non-vibration scenario was a 0.011 mm mean square error and a maximum error of 0.031 mm. In a vibration scenario, a mean square error of 0.013 mm and a maximum error of 0.037 mm were obtained.
Regarding working volumes, most vision trackers are designed for measuring ranges between 1 and 8 m, and in the case of a laser tracker the range is even larger. However, as these systems have a single point of view, their working scenarios are limited by the possibility of parts of the scene being unevaluable due to lack of visibility. Although manufacturers offer multi-tracking solutions, this alternative drives up production costs, not only in the acquisition of more devices but also in adapting facilities. Additionally, it considerably lowers the reliability of the solution, considering, for example, the need to reference (calibrate) the devices with respect to each other. In addition, even in small work volumes both have limitations, making it rather difficult to adapt these systems; worse still, it amounts to using 'a sledgehammer to crack a nut'.
The dependency on the human factor, as well as the level of complexity in automation, is another challenge. The use of a laser tracker implies the need for highly qualified personnel. Vision trackers, in turn, require certain knowledge, since in most cases they are accompanied by a tracking probe (so-called optical probe systems). Human intervention—based on experience and knowledge—is linked to the results; that is, decision-making through the subjective criteria of highly trained personnel is one of the key factors in the final accuracy of these systems.
From this perspective, and in view of the difficulty of adapting the measurement scenarios, multi-camera systems are currently gaining ground. These systems consist of a set of cameras strategically located around the working volume. They stand out mainly for the high flexibility and customization provided by having an indefinite number of cameras located in the way that best suits each application. One of the main advantages of multi-camera systems is the capability to achieve higher levels of accuracy through ad hoc system designs for each application. The design of these systems is based on determining the number of cameras and the optimal position of each one to maximize overall precision. This adaptability makes it possible to achieve high levels of precision for 6DoF measurement and/or tracking. The main purpose is to avoid general solutions such as the commercial ones cited above, where the idea is to cover as many applications as possible. In addition, the price of including a new camera is negligible compared to adding any of the previous tracker devices. A multi-camera system is an automated solution where the human factor is minimized. It does not require specialized personnel with machine vision or even metrology knowledge. Furthermore, it is a pre-calibrated solution: in the same way that no knowledge is necessary for its use, no knowledge of calibration procedures is necessary either. Automated processes also reduce the time spent on calibration and/or measurement tasks. CMM programs are always executed following the same instructions and report the same results regardless of the user, avoiding user-induced measurement uncertainties.
Commercial companies such as Zeiss or Hexagon also have their niche here (Zeiss AICell and Aicon 3D Arena, respectively). Quality Gate from the Finnish company MapVision, or TubeInspect from Aicon, are multi-camera photogrammetric systems closely linked to inspection processes in the automotive sector and in static scenes. OptiTrack, Qualisys or Vicon, among others, are consolidated Motion Capture systems in the market. The company Tecnatom developed the WiiPA system [4] with this technology.
However, existing multi-camera commercial solutions have a scalability limitation, which results in a loss of precision, mainly due to the calibration processes. As the work area increases, large and high-precision calibration artefacts have to be designed, which implies non-cost-effective implementation techniques. The most common calibration procedure consists of using known geometric information (e.g., scale bars or patterns) to estimate the transformation—in terms of position and orientation (extrinsic parameters)—between the cameras. These algorithms are well known as photogrammetric adjustment or bundle adjustment [5,6,7]. The self-calibration process in [8] is supported by a pattern of 13 markers moved to 200 positions and a scale bar of two markers moved to 27 positions. A photogrammetric adjustment is performed in [9]: by positioning a laser pointer in different positions, the projection of these points forms a virtual object (3D point cloud). However, the 3D evaluation is not reliable, as there is no ground truth to verify against. Perez-Cortes et al. [10] follow a similar strategy, but instead of a pointer, a sphere is used in 16 positions, and the problem is solved through a set of camera projections of the epipolar lines. Robson et al. [11,12] carried out a photogrammetric calibration procedure (bundle adjustment) through the Manhattan Vision Metrology System (VMS) pattern to solve intrinsic parameters, extrinsic parameters, and 3D coordinates in one go. As future lines of that work, the evaluation of the multi-camera system using one or more calibrated scale bars in various orientations within the common intersection volume of all the cameras, and the evaluation of the performance specifications through VDI/VDE 2634 [13], stand out. Usamentiaga et al. [14] present a calibration method for a multi-camera system using a 3D object and laser planes detected by the multi-camera system. Perez et al. [15] calibrate using two spheres, and Zhang [16] follows planar pattern methodologies to calibrate both intrinsic and extrinsic parameters. Planar pattern calibration techniques using chessboards [17,18,19,20,21,22,23] or other types of 3D patterns [10] have limitations in high-range scenarios, as very large patterns would be required and all cameras must see the same work areas. In this sense, contributions such as Xing et al.'s [24] present multi-camera system calibrations with a reduced shared field of view. The intrinsic parameters of these cameras follow the lens model proposed by Luhmann et al. [25].
The widely accepted ISO 10360-10:2016 standard in advanced manufacturing processes makes the laser tracker the measurement tool for high-volume industrial metrology applications. The verification of most vision trackers, in turn, is given by the ASME B89.4.22-2004 or DIN EN ISO 10360-2:2009 standards. These standards are closely linked to robotic and CMM calibrations, always with probing operations, and do not report the accuracy of the measuring device itself. Optical tracking probes entail introducing a new variable—totally dependent on their geometry—into the measurement chain, causing greater uncertainty. For example, the Norwegian company Metronor designed long probes to measure interior areas while allowing the tracker to keep tracking them [26]. This solution requires designing new external elements to adapt to different circumstances, making it inefficient and imprecise. In multi-camera systems, the cameras can measure everything that is visible without the need to design artefacts for it. Few studies have, however, reported precision data or even a vision-system evaluation or verification procedure according to guidelines like VDI/VDE 2634-part 1 for optical 3D measurement systems. Geodetic Systems, Inc. (GSI) reports precision results for V-STARS/D, offering an accuracy of 14 µm + 14 µm/m for V-STARS/D5, 10 µm + 10 µm/m for V-STARS/D12, or 9 µm + 9 µm/m for INCA4. Möller et al. [27] proposed a stereo system consisting of two AICON MoveInspect HR cameras to increase the precision of the absolute position of an industrial machining robot. The location of the robot's spindle is measured through a specific adapter mounted on the robot's tool with retro-reflective markers. They report absolute precision up to 50 µm per m³ in a range between 1 and 2 m³ (conditioned by the markers). It is also concluded that the stereo system can reduce the robot's absolute positioning error by approximately 0.1 mm compared to a laser-tracker measurement. Since it is a photogrammetry-based system, it depends on several factors, such as camera calibration, marker-detection quality, image-processing techniques, and resolution. In [8], a study of the uncertainty variables of the tracking of an object in a robotic system is carried out. The number of cameras, positions, angles, the size of the object and the type of camera (in terms of sensors) are evaluated in a 4 m³ working area. This is compared against a tracked object with a precision of 0.1 mm (2σ) and 0.2 mrad in angular position. In addition, the photogrammetric system is compared with respect to a laser tracker: a multi-camera system of four cameras is evaluated in a volume of 2 m × 2 m × 1 m. Using a cross-shaped object, a standard deviation error of 0.07 mm is calculated, with a maximum error of 0.14 mm. However, a follow-up of the VDI standard is not considered there either. De Cecco et al. [28] present an uncertainty analysis for the reconstruction of a 3D object, defining three stages: multi-stereo, multi-camera, and individual stereo. In [19], a theoretical evaluation of the uncertainty analysis during a stereo-system calibration is also carried out.
Our study proposes a quantitative evaluation of a multi-camera system based on its calibration procedure, through the identification of the potential error sources that influence the measurement chain. In this sense, the calibration process is one of the determining factors for achieving high levels of accuracy. Specifically, this work focuses on the influence of the intrinsic and extrinsic parameters and their propagation into the measurement. It follows the idea of applying different calibration strategies in the two-step calibration procedure. Likewise, a measurement procedure that follows the VDI 2634-part 1 standard is applied to verify the measurement uncertainty.
The presented approach is divided into two main phases. The first identifies and diagnoses the calibration processes involved in the multi-camera system, whereas the second handles the error budgeting, indicating which factors are relatively more important. The paper is organized into six sections. In Section 2, the materials and methods used in this work are presented. Section 3 handles the calibration experimentation of the multi-camera system: it provides an overview of all the steps carried out for the calibration, as well as the results obtained in this case study, together with the variables identified in each phase. Then, Section 4 illustrates the performance of the measurement system through the verification procedure. This analysis is discussed in Section 5, and Section 6 draws the relevant conclusions.

2. Materials and Methods

The novelty of this paper lies in the error budgeting to establish the relative weight of each determining source in the different calibration processes. A set of verification experiments is carried out according to the VDI 2634-part 1 standard, the guideline that today ensures a correct evaluation of photogrammetric systems.
This work presents the measurement evaluation of a set of calibration methodologies. The process is divided into two main scenarios: calibration and measurement. The calibration scenario provides both the camera calibration itself (Figure 1 (left))—considering the camera as an individual measuring instrument—and the definition of a common reference system (Figure 1 (middle)) that represents the multi-camera system, which is basically the determination of the extrinsic camera parameters ([R|t]). In the measurement scenario in turn (Figure 1 (right)), the 3D positioning of a set of markers that follows the geometry suggested by the VDI standard guideline is solved.
This approach analyses two methodologies for each of the intrinsic and extrinsic calibration processes (Figure 2). The intrinsic calibration follows, on the one hand, the methodology implemented in [29], where a virtual geometry pattern is optimized to achieve the highest accuracy (Section 3.1.1). On the other hand, a flat pattern composed of retro-reflective targets is photographed in a set of unknown fixed positions (Section 3.1.2). The extrinsic camera calibration, in turn, also follows the virtual grid pattern methodology, but with a different geometry (a cube) from the previous one (Section 3.2.1). Moreover, the extrinsic calibration tests are completed with a second strategy using a 3D pattern set out in the working volume, which is previously measured by a portable photogrammetry system (Section 3.2.2). The output of this system is given by the verification process, where a set of spatial coordinates is measured, again as a virtual grid. These results are estimated through the length measuring errors, according to the LME evaluation guideline of VDI 2634-part 1 [13]. This includes a comparison, in terms of length error, between the lengths measured by the photogrammetric system and pre-calibrated scale bars, as sketched below.
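As an illustration of the LME metric, the following is a minimal computational sketch; the array layout and function name are illustrative, not part of the published procedure:

```python
import numpy as np

def lme_stats(endpoints, ref_lengths):
    """Length Measuring Error (LME) statistics for a set of test lengths.

    endpoints   : (N, 2, 3) array with the measured 3D coordinates of both
                  ends of each reference length (e.g., a calibrated scale bar).
    ref_lengths : (N,) calibrated lengths acting as the ground truth.
    Returns the maximum, mean and standard deviation of the length errors,
    the quantities reported in the verification tables of this work.
    """
    measured = np.linalg.norm(endpoints[:, 0] - endpoints[:, 1], axis=1)
    err = measured - np.asarray(ref_lengths)
    return np.abs(err).max(), err.mean(), err.std(ddof=1)
```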
The vision system under study in this paper is a multi-camera system. Specifically, it is a stereo-photogrammetric solution (Figure 3 (bottom)). The layout is composed of two industrial cameras (Teledyne DALSA Genie Nano 4020, 12.4 MP, Schneider Optics APO Xenoplan 2.8/16 mm), individually calibrated in the camera calibration scenario. Afterwards, the results are carried into the measurement scenario. In both scenarios, images are taken of reflective non-coded targets. The material properties of these elements allow the image detection quality to be the same in both laboratory and industrial scenarios. This is also enabled by the active LED illumination (DCM ALB0810A) integrated in each measuring camera (Figure 3 (top)). In addition, each camera is encapsulated in a housing manufactured for industrial scenarios, which mitigates the effects of temperature, humidity, or vibrations on the device. Even so, the tests carried out in this work were conducted in a controlled laboratory where the above noise factors are mitigated as much as possible.

3. Calibration Process

3.1. Camera Calibration: Intrinsic Parameters

The first calibration stage focuses primarily on calibrating the internal camera parameters. Through the optimization of the calibration pattern design, this methodology also allows the camera to be handled as an individual measuring instrument, so that it can easily be replaced in the measuring system. In this way, it is possible to achieve the maximum level of precision and avoid scalability limitations. Camera calibration consists of calculating the camera focal length and the lens distortion parameters (the so-called intrinsic parameters in machine vision). The 3D coordinates of the pattern, the geometric distribution (position of each marker), and the 2D coordinates of the optical targets are decisive for the calculation of these parameters. As mentioned before, the correct setting of these input variables makes the output intrinsic parameters well determined (Figure 4). A minimal error in these parameters can strongly affect the measurement results.
The widely adopted Brown’s model [30] is used for correcting lens distortions (see Equation (1)).
$$\begin{aligned} \hat{x} &= x + (x - c_{0x})(k_1 r^2 + k_2 r^4) + p_1\left(r^2 + 2(x - c_{0x})^2\right) + 2 p_2 (x - c_{0x})(y - c_{0y}) \\ \hat{y} &= y + (y - c_{0y})(k_1 r^2 + k_2 r^4) + p_2\left(r^2 + 2(y - c_{0y})^2\right) + 2 p_1 (x - c_{0x})(y - c_{0y}) \end{aligned}$$
where:
- $(\hat{x}, \hat{y})$ are the corrected point coordinates in the image plane,
- $(x, y)$ are the detected (distorted) point coordinates,
- $(c_{0x}, c_{0y})$ is the distortion centre,
- $(k_1, k_2)$ are the radial distortion coefficients,
- $(p_1, p_2)$ are the tangential distortion coefficients,
- $r = \sqrt{(x - c_{0x})^2 + (y - c_{0y})^2}$.
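Equation (1) can be transcribed directly into code; the following sketch vectorizes it over all detected points (variable names mirror the symbols above):

```python
import numpy as np

def brown_correction(pts, c0, k1, k2, p1, p2):
    """Apply Brown's distortion correction (Equation (1)) to detected points.

    pts : (N, 2) detected (distorted) image coordinates (x, y).
    c0  : (2,) distortion centre (c0x, c0y).
    """
    d = pts - c0                        # (x - c0x, y - c0y)
    r2 = (d ** 2).sum(axis=1)           # r^2
    radial = k1 * r2 + k2 * r2 ** 2     # k1 r^2 + k2 r^4
    x, y = pts[:, 0], pts[:, 1]
    dx, dy = d[:, 0], d[:, 1]
    x_hat = x + dx * radial + p1 * (r2 + 2 * dx ** 2) + 2 * p2 * dx * dy
    y_hat = y + dy * radial + p2 * (r2 + 2 * dy ** 2) + 2 * p1 * dx * dy
    return np.column_stack([x_hat, y_hat])
```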
A pre-calibrated 3D pattern and Mendikute et al.'s [29] approach are the strategies chosen among the different alternatives to calibrate the camera parameters. The first consists of a flat pattern that is easy to use and allows the instrument to be calibrated in situ. In the second, a virtual grid is adapted to achieve, among other things, well-conditioned extrinsic parameters and hence lower uncertainty. The main drawback of the pattern strategy is the number of extrinsic parameters to be solved, which propagates errors. The CMM virtual grid method, in turn, is a high-cost procedure that does not allow the calibration to be performed in the measurement scenario itself.

3.1.1. CMM Virtual Grid: Pyramid

As previously mentioned, the idea is to define a virtual grid structure following the process explained in [29], where a single target is captured in different images from different 3D positions (Figure 5). For this work to be self-contained, below is a brief description of this technique.
A retroreflective target (10 mm diameter) is placed on a previously calibrated probe (in a Zeiss O-Inspect CMM). The uncertainty of the movement process is 0.8 µm (1-sigma). The offset obtained here makes it possible to know the 3D position of the target in the CMM coordinate system (Figure 6). This target is placed in certain predefined 3D positions $\{X_i\}_{CMM}$, generating a virtual calibration pyramid, where the corresponding image is taken. This pyramid is defined by 10 planes with a 10 × 10 grid of marker positions in each, a total of 1000 positions (a generation sketch follows).
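A virtual grid of this kind can be generated as follows; the depths and frustum slope are illustrative assumptions, since the text only specifies 10 planes of 10 × 10 targets:

```python
import numpy as np

def pyramid_grid(n_planes=10, n_per_side=10, z_near=500.0, z_far=1500.0, slope=0.4):
    """Generate a frustum-shaped ('pyramid') virtual grid of CMM target positions.

    Each plane holds an n_per_side x n_per_side grid whose extent grows with
    depth, mimicking the camera's field of view. Units: mm, in the CMM frame.
    """
    planes = []
    for z in np.linspace(z_near, z_far, n_planes):
        half = slope * z                          # half-width of the plane at depth z
        s = np.linspace(-half, half, n_per_side)
        xx, yy = np.meshgrid(s, s)
        planes.append(np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, z)]))
    return np.vstack(planes)                      # (1000, 3) target positions

print(pyramid_grid().shape)                       # -> (1000, 3)
```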
With all this, it is possible to determine the position and orientation $\{[R|t]_C\}_{CMM}$ of each camera $C$ with respect to $O_{CMM}$, as well as the internal parameters of the camera ($K_{CMM}$).
The resolution is posed as a non-linear optimization problem, solved by the Gauss–Newton method [31], which minimizes the norm of the residual vector, $\|r\|^2$. The defined calibration geometry is key to obtaining well-conditioned output variables; hence the need to generate virtual geometries with full freedom. Examples of this are the focal length and the extrinsic parameters. The latter are a significant factor due to their propagation into the following calibration stages.
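For illustration, the following minimal sketch uses SciPy's Levenberg–Marquardt solver (a damped Gauss–Newton) to recover two camera parameters from synthetic target observations; the geometry, values and parameter subset are simplifying assumptions, not the full model of [29]:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic targets in front of a camera looking down the z-axis.
rng = np.random.default_rng(0)
X = rng.uniform([-100.0, -100.0, 500.0], [100.0, 100.0, 1500.0], (200, 3))
f_true, tz_true = 16.0, 50.0                      # mm (illustrative values)
uv = f_true * X[:, :2] / (X[:, 2:] + tz_true)     # noiseless pin-hole projections

def residuals(p):
    """Reprojection residual vector; the full problem would add the pose,
    principal point and the distortion terms of Equation (1)."""
    f, tz = p
    return (f * X[:, :2] / (X[:, 2:] + tz) - uv).ravel()

sol = least_squares(residuals, x0=[10.0, 0.0], method="lm")
print(sol.x)                                      # -> approx. [16.0, 50.0]
```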

3.1.2. Test-Field Calibration

For this calibration process, a 64-marker (8 × 8 dots, 140 × 140 mm) ceramic pattern is used (see Figure 7). It should be underlined that detection problems were observed in the first pilot tests: tilt effects appeared in both the detection and projection errors when using a distortion pattern from Edmund Optics. To avoid this problem, the same type of marker as in Section 3.1.1 was selected, owing to the illumination conditions and to have the same detection uncertainty in both processes. These circular markers were pre-calibrated in an optical CMM (Zeiss O-Inspect), with a grid uncertainty below 1 micron.
A calibration test bench is used for calibrating each camera. A set of images of the calibration grid is taken from different points of view. The imaging configuration is principally designed as in [32,33], following the calibration configuration for plane test-fields. However, although eight positions are proposed by Wester-Ebbinghaus, up to 21 positions are included in this work to cover more areas of the image. The extrinsic parameters are calculated in each of the N images, $\{[R|t]_i\},\ i = 1 \dots N$. Subsequently, along with the processed images, the intrinsic camera parameters ($K_T$) are estimated and the extrinsic parameters of each image are refined (see the sketch below).
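A plane test-field calibration of this kind can be sketched with OpenCV; here, synthetic detections generated from a nominal camera stand in for the real marker measurements, and the 20 mm dot pitch and nominal camera values are assumptions consistent with the pattern size quoted above:

```python
import numpy as np
import cv2

# Nominal 8 x 8 grid of marker centres (20 mm pitch over 140 mm).
g = np.arange(8, dtype=np.float32) * 20.0
xx, yy = np.meshgrid(g, g)
obj = np.zeros((64, 3), np.float32)
obj[:, 0], obj[:, 1] = xx.ravel(), yy.ravel()

K_nom = np.array([[3000.0, 0, 2048], [0, 3000.0, 1536], [0, 0, 1]])
rng = np.random.default_rng(1)
obj_pts, img_pts = [], []
for _ in range(21):                               # 21 pattern poses, as in the text
    rvec = rng.normal(0.0, 0.2, 3)                # random pattern tilt (rad)
    tvec = np.array([-70.0, -70.0, 600.0]) + rng.normal(0.0, 30.0, 3)
    proj, _ = cv2.projectPoints(obj, rvec, tvec, K_nom, None)
    obj_pts.append(obj)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, (4096, 3072), None, None)
print(rms, K[0, 0])     # RMS reprojection error (px) and recovered focal length (px)
```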

3.1.3. Experimental Evaluation

A repeatability analysis has been performed for each camera. The objective is to evaluate the quality of the calibration and, correspondingly, to assess the accuracy of the integrated predictive models enabling calibration process control.
The experimentation procedure mainly consists of calibrating the two cameras that compose the stereo system. Specifically, the calibration of each camera is repeated 10 times for both calibration strategies. Table 1 depicts the repeatability of each calibration procedure in terms of intrinsic parameters.
These results not only indicate that a high degree of repeatability is achieved in both processes, but also that the results are very similar across strategies. Attention should be paid to the focal length (f) and distortion centre (cl0, rw0) variables, with standard deviations (1-sigma) of 1 µm and 0.2 pixels, respectively. It is usually difficult to achieve high levels of repeatability for these variables, but in this case high precision is achieved regardless of the methodology. The intrinsic calibration performance can also be observed in the resulting reprojection error vector after convergence: a standard deviation of 0.06 pixels on the x-axis and 0.09 pixels on the y-axis.

3.2. Layout Calibration: Extrinsic Parameters

Once the cameras are located in the measurement scenario $O_L$, the layout calibration phase is carried out; that is, the extrinsic parameters are solved. As in the intrinsic phase, two types of extrinsic resolution strategies are performed here as well. The first uses a known 3D pattern previously measured with a portable photogrammetric system; the second follows the same procedure as the previous section, but with a cube-based geometry.
It should be noted that, as in the intrinsic calibration scenario, the measurement geometry for the extrinsic calculation is different for each strategy and, moreover, the typology of markers is somewhat different between both of them.
In addition to the uncertainty in the resolution of the intrinsic parameters of each camera in the previous phase, the 3D position uncertainty of each marker, the 3D geometry that composes the pattern, and the 2D detection uncertainty of each marker should be included as input factors. The output of this computation is the extrinsic parameters, represented by alpha, beta and gamma for orientation and x, y and z for translation, for each camera that composes the layout (Figure 8); a sketch of how these six values assemble into $[R|t]$ follows.
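The six output values can be assembled into the $[R|t]$ matrix as in the following sketch; an x-y-z (roll, pitch, yaw) Euler convention is assumed here, since the text does not state which convention is used:

```python
import numpy as np

def assemble_extrinsics(alpha, beta, gamma, x, y, z):
    """Build the 3 x 4 extrinsic matrix [R|t] from six parameters (angles in rad)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # rotation about x by alpha
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # rotation about y by beta
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # rotation about z by gamma
    R = Rz @ Ry @ Rx
    return np.hstack([R, np.array([[x], [y], [z]])])

print(assemble_extrinsics(0.1, 0.2, 0.3, 10.0, 20.0, 30.0).shape)   # -> (3, 4)
```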

3.2.1. CMM Virtual Grid: Cube

This methodology follows the same steps as Section 3.1.1, with the only difference that instead of a pyramid, a virtual cube is created that covers the entire working area (see Figure 9, left), in such a way that certain markers are seen by one or both cameras. It is a 1000-point grid divided into 10 planes, of which one camera observes 775 points and the other 774, with a similar spatial distribution; the difference corresponds to the mechanical assembly error.
The output defines the extrinsic parameters $\{[R|t]_C\}_{CMM}$ of each camera in the CMM reference system, that is, the same reference as the measurement scenario.
In addition, since the same information is obtained, a calibration of the intrinsic parameters ($K_{CMM}$) is also carried out. This allows the correlation between both calibration processes to be studied, since the extrinsic parameters are discarded with the previous methods. Therefore, since two types of intrinsic virtual grid calibrations are available in the CMM, from now on the pyramid and cube calibrations will be distinguished as $K_{CMM}^P$ and $K_{CMM}^C$, respectively. In addition, repeatability values that complement those of Section 3.1.3 were obtained: the calibration of each camera was repeated 10 times, and Table 2 depicts the repeatability of this procedure in terms of intrinsic parameters.
It can be concluded that the repeatability results for $K_{CMM}^C$ are at the same level as those calculated for the $K_{CMM}^P$ case.

3.2.2. Photogrammetry

The calibration procedure of this methodology mainly consists of defining a pattern and resolving its 3D geometry by a photogrammetric measurement (Figure 9, right). The extrinsic resolution is then performed by taking an image of the resulting 3D scene with each camera. The grid is composed of 400 markers, of which 37 and 43 are observed by each camera, respectively.
Since an external device is used to solve the scene, the multi-camera system is referenced to the origin of the corresponding photogrammetry system, $\{[R|t]_C\}_P$.

3.2.3. Experimental Evaluation

As in the intrinsic scenario, the precision of the process is also studied here (Table 3). The experimentation consists of the calibration of the extrinsic parameters of both cameras using the virtual grid and the 3D pattern obtained by an external photogrammetry system. In particular, the experiment is repeated 10 times to evaluate the repeatability. Both methodologies are also tested with each of the two calibrations resulting from the intrinsic process; in addition, for the photogrammetric extrinsic resolution, the cube-shaped intrinsic resolution is also included.
The obtained results confirm that the repeatability (1-sigma) of the process does not differ significantly depending on the input. The repeatability of the rotation angles is 1 × 10−5 rad, and the translation precision, in turn, is 1 × 10−2 mm in both cases. The projection error ranges between 0.1 and 6 pixels for the grid and photogrammetry techniques, respectively.
It should be noted that with the photogrammetry strategy there is a slight difference in the rotation results—of 1 × 10−4 rad—for one of the cameras. This is mainly due to its orientation (fewer markers are observed) and to the fact that the error is amplified, since the 3D geometry is not homogeneous compared to the virtual grid.

4. Verification Process

This section explains the metrological assessment as well as the results of the outlined verification procedures. As previously described, this task is carried out on the Zeiss Prismo CMM, which can achieve up to 1 micron of accuracy. Moreover, since it is a small measurement vision system, it is possible to perform an intermediate verification to determine the level of accuracy of the multi-camera system.
In this sense, the verification methodology developed in this work for the vision system consists of resolving the quality parameter Length Measuring Error (LME). It is, therefore, the measurement of a point in three-dimensional space from the projections of the calibrated cameras with known extrinsic parameters; this problem is called triangulation. The detection of the target in both images is required to geometrically determine the target coordinates. In this case, there are three parameters to solve:
$$\theta_X = X = [x\ y\ z]^T$$
where $X$ denotes the 3D coordinates of the target, defined in the measuring frame in which the extrinsic parameters $R_k$ and $t_k$ of each camera, $k = 1 \dots K$, are known. Each target 3D coordinate can be expressed as $U_k = [u_k\ v_k\ w_k]^T$ in each camera frame, depending on its extrinsic parameters $R_k$ and $t_k$, as:
$$U_k = R_k X + t_k$$
For each camera, the 3D coordinate $U_k$ can be projected onto the corresponding camera 2D image plane as $p_k$ and $q_k$ coordinates, following the pin-hole conic projection model widely assumed in machine vision [34]. The problem is solved through a non-linear approximation, as previously cited. Thus, the partial derivative of an optical target projected on the image with respect to its spatial coordinates is formulated as follows:
$$J_{x_k} = \left(\frac{\partial x_k}{\partial X}\right)_{2 \times 3} = DP \cdot DU_X$$
where $DP$ is defined as
$$DP = \begin{bmatrix} \frac{1}{w_k} & 0 & -\frac{u_k}{w_k^2} \\ 0 & \frac{1}{w_k} & -\frac{v_k}{w_k^2} \end{bmatrix}$$
and $DU_X$ expresses the partial derivatives of the $U_k$ target coordinates in the $k$th camera frame with respect to its $X$ coordinates in the common measuring frame as
$$DU_X = R_k$$
where $R_k$ is the rotation matrix corresponding to the $k$th camera frame.
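Putting the above together, the following is a minimal triangulation sketch: the observations are assumed to be distortion-corrected, normalized pin-hole coordinates, and the update is the Gauss–Newton step built from the Jacobian $J_{x_k} = DP \cdot R_k$ defined above:

```python
import numpy as np

def triangulate(obs, Rs, ts, X0, iters=10):
    """Gauss-Newton triangulation of one target seen by K calibrated cameras.

    obs : (K, 2) detected (p_k, q_k) normalized image coordinates.
    Rs  : (K, 3, 3) rotations and ts : (K, 3) translations, with U_k = R_k X + t_k.
    X0  : (3,) initial guess for the target position X.
    """
    X = np.asarray(X0, dtype=float).copy()
    for _ in range(iters):
        J_rows, r_rows = [], []
        for (p, q), R, t in zip(obs, Rs, ts):
            u, v, w = R @ X + t                        # target in the camera frame
            r_rows += [p - u / w, q - v / w]           # reprojection residuals
            DP = np.array([[1 / w, 0.0, -u / w**2],
                           [0.0, 1 / w, -v / w**2]])   # d(projection)/dU_k
            J_rows.append(DP @ R)                      # chain rule: dU_k/dX = R_k
        J, r = np.vstack(J_rows), np.array(r_rows)
        X += np.linalg.solve(J.T @ J, J.T @ r)         # Gauss-Newton update
    return X
```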
The following is a more detailed explanation of both verification evaluations, based on the results obtained through the described mathematical formulation.
After reviewing the standards for the accuracy of metrological vision systems using multiple cameras, the VDI/VDE 2634 guideline can be considered the most relevant. This standard consists of three parts, from which the first, named 'Optical 3D measuring systems—Imaging systems with point-by-point probing', was selected, as it describes how multi-camera systems work. This part of the standard defines how to position the validation bars, as well as the characteristics of the bars themselves (Figure 10).
The positions of the bar are limited by a cube, which is defined by the range of the system. This cube, in turn, defines the length of the bars. The standard is adapted here to generate the bar virtually using a CMM (Figure 10). It is composed of 32 points for a working area of 215, 320 and 292 mm along the x, y and z CMM axes, respectively.

Experimental Evaluation

Following the above experimentation, the measurement is repeated 10 times for all the combinations of both intrinsic—$K_{CMM}^P$ (pyramid), $K_{CMM}^C$ (cube) and $K_T$ (test-field)—and extrinsic—$\{[R|t]\}_{CMM}$ (cube) and $\{[R|t]\}_P$ (photogrammetry)—calibrations.
The virtual cube coordinates are compared against the measurements of the multi-camera system, with the results shown in Table 4. These results are the maximum, average and standard deviation of the error distance between the ground truth (CMM) and the multi-camera system in Cartesian coordinates.
In view of the results achieved, we can conclude that if higher levels of accuracy are to be obtained, it is necessary to follow a CMM strategy for the extrinsic calibration. The test-field strategy, similarly, shows lower performance. Within the CMM strategies, in turn, there is no clear evidence that any factor (K, [R|t]) has a significant influence on the final measurement: all maximum LMEs are around 30 µm. These results also indicate that there is no clear correlation between the two calibration procedures, although the combination of both calibrations with the virtual cube grid offers slightly better performance, since they are better coupled through the propagation of the covariance along the calibration chain.
However, it is necessary to pay attention to the $K_T$ and $\{[R|t]\}_P$ combination. Specifically, the measurement data are analysed in detail considering the combination of all the calibrations of this strategy. As Table 5 depicts, it can be concluded that there is a considerable influence of the extrinsic parameters on the final measurement. This is largely due to the calibration procedure of the photogrammetric system: more precisely, the chosen geometry causes occlusions, which results in a different number of detected markers in each photogrammetric calibration. This effect does not occur, for example, in the case of the CMM, where all the markers are always detected (it is a virtual grid).
Thus, to evaluate the effect of the extrinsic and intrinsic parameters on the results, a swap of calibrations is performed, considering the best and worst results. In this case, the 3rd and the 10th measurements are selected. It is confirmed (see Table 6) that the extrinsic variability has the greater effect, making it possible to reach CMM-level precision or, conversely, clearly degraded results.

5. Discussion

In view of the results obtained in the previous section, we can conclude that, according to the obtained accuracy, the extrinsic parameters are key to the final measurement result. However, beyond that, it must be emphasized that, in order to affirm the above, the calibration geometry of both extrinsic calibration strategies must be identical. Otherwise, differences may arise between the extrinsic calibrations of the CMM and portable photogrammetry strategies, which is indeed the current situation. In short, geometry is another key factor to take into account in the extrinsic calibration process. Thus, to be impartial for both cases, a common geometry is defined. For this purpose, a set of Spherically Mounted Retroreflectors (SMRs), commonly known as nests (see Figure 11a), are distributed over the scene to establish common centres that can subsequently be measured by both the CMM and photogrammetry.
The measurement process consists of probing a 1.5″ (38.1 mm) stainless-steel sphere on the CMM, defining its centre (see Figure 11b), and then swapping it with a 1.5″ (38.1 mm) Split Bearing Retroreflector (SBR), detectable by the photogrammetry system (see Figure 11c). In this way, it is possible to define a common and comparable nominal centre between both metrological tools.
From Table 7 it is possible to conclude that, regardless of the results obtained in terms of repeatability, there are no meaningful differences between the two methodologies. Moreover, as the second photogrammetric calibration included shows, it is essential to perform an accurate calibration to achieve the same level as the CMM.
Following the analysis, and once it has been confirmed that geometry is a determining factor, it is necessary to focus the research on the intrinsic calibration process. From the first results together with the latter, it is clear that the $K_T$ calibration is the one that returns the worst result in the final measurement. This is mainly because the chosen intrinsic calibration methodology has limitations in large-scale scenarios: the number of images to be taken increases as the scenario becomes bigger and, in addition, covering the entire scenario requires managing the different depths with patterns of different sizes in the same process. All this introduces additional errors.
Similarly, the CMM methodology also has a drawback in large scenarios, mainly because it is unfeasible to calibrate the sensors in a CMM as the scenarios become larger: CMMs of such a size are not commonly available and the procedure is inefficient. Therefore, a new alternative is proposed to calibrate the intrinsic camera parameters in large scenarios through portable photogrammetry. With this technique, in addition to resolving the 3D coordinates, the intrinsic parameters are also found, as it is a self-calibrating system. These parameters are therefore subsequently applied as inputs to the multi-camera system.
It can be observed in Table 8 that the combination of both intrinsic and extrinsic parameters from photogrammetry achieves the best results. The conclusion is that both parameter sets are coupled within the same calibration process.

6. Conclusions/Future Work

Vision systems where 6DoF positioning is performed by image processing have become a real alternative to laser trackers in the industrial sector. As with Coordinate Measuring Machines, if the position and orientation of an object with respect to a reference had to be found with a laser tracker, e.g., to calibrate the TCP of an industrial robot, it would be necessary to carry out three consecutive measurements of a stable object with a single tracker, or to adopt a multi-tracker methodology, which implies high costs. This has enabled the design, manufacture, and development of ad hoc multi-camera systems for each application. At present, it is possible to implement plug-and-play systems that avoid the above issues. In addition, automatic calibration processes have been obtained, reducing manual intervention to a minimum and thus reducing working times and errors in the processes.
However, to date, most applications that have integrated machine-vision measurement systems such as multi-camera systems have had the main goal of ensuring that the errors accumulated along the entire measurement chain did not affect the final measurement. Consider, for instance, an application where a multi-camera system corrects the 6DoF positioning of a robotic arm, without going beyond what is necessary for that purpose: if the positioning was guaranteed to be sufficient and correct, the application was validated. For this reason, although the advantages of multi-camera systems have long been proven, their real potential has often been overlooked. This means that multi-camera systems have been constrained by the fear of not meeting requirements, and have not been used to their full accuracy potential.
Therefore, a characterisation of the error sources involved in vision systems, and particularly those related to multi-camera systems, has been presented in this work. The main goal has been to evaluate the factors that affect the final measurement, enabling what is known as the error budgeting of a measurement system. By means of experimental repeatability tests, it has been possible to carry out the corresponding analysis. Among the different points discussed, it is worth highlighting the identification of as many error sources influencing the measurement chain as possible, to determine the accuracy level that can be achieved, and, on the other hand, the degree of influence of each factor. Specifically, this work focuses on the calibration processes and the different techniques used to evaluate the accuracy of the system. The VDI 2634-part 1 guideline is followed as part of the final verification; in this sense, most competitors use probing or scanning probes to offer verification results, without evaluating the system as an individual measuring device.
From the presented data it can be concluded that the calibration of the extrinsic parameters is critical, provided that the geometry and its measurement are correctly determined. Geometry is key in determining the extrinsic parameters: a poorly defined geometry incurs severe errors even with high repeatability. If these conditions are fulfilled and the scene is correctly measured by the different technologies, no significant differences are observed. The next step is then to correctly define the intrinsic parameters. Moreover, in this case, it can be confirmed that if the strategies used to calculate the intrinsic and extrinsic parameters are the same, the accuracy is higher, mainly due to the coupling between both sets of variables within the calibration methodology.
The performed evaluation, thanks to the knowledge of the contribution of each calibration process to the measurement chain, can be enhanced in the future to estimate and model each of the identified factors. This information could be used to develop simulation processes for the preliminary design of calibration and measurement processes, and to predict the behaviour of the multi-camera system by designing simulations of the calibration and measurement processes.
It would also be possible to assess each calibration process in more detail, focusing on the intrinsic ones, for example through an intermediate characterisation of the intrinsic parameters, i.e., performing intermediate verifications by means of dimensional verification rules.
Finally, the motivation to implement dynamic referencing, as reported in other studies in the state of the art, should be emphasised.

Author Contributions

Writing—original draft preparation, I.L.; writing—review, P.P. and I.H.; software, I.H.; data curation, P.P.; methodology and calibration process, I.L.; verification process I.L. and P.P.; supervision I.H. and P.P.; investigation, I.L., P.P. and I.H.; project administration I.L.; funding acquisition I.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project grants FIBREMACH (FTI 971442) of the European Union's Horizon 2020 research and innovation programme (H2020-EU.3.—H2020-EU.2.1.) and DATA—Data Analytics, Tool efficiency and Automation for end-to-end aircraft life cycle transformation (Expediente: IDI-20211064) from the Spanish CDTI Programas Duales.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no competing interests.

Abbreviations

DoF: Degrees of Freedom
CMM: Coordinate Measuring Machine
K: Intrinsic/internal camera parameters
[R|t]: Extrinsic/external camera parameters

References

1. Larue, J.-F.; Viala, M.; Brown, D.; Mony, C. Dynamic referencing in 3D optical metrology for higher accuracy in shop floor conditions. In Proceedings of the Coordinate Metrology Systems Conference, New Orleans, LA, USA, 16–20 July 2012.
2. Filion, A.; Joubair, A.; Tahan, A.S.; Bonev, I.A. Robot calibration using a portable photogrammetry system. Robot. Comput.-Integr. Manuf. 2018, 49, 77–87.
3. Boby, R.A. Kinematic Identification of Industrial Robot Using End-Effector Mounted Monocular Camera Bypassing Measurement of 3-D Pose. IEEE/ASME Trans. Mechatron. 2021, 27, 383–394.
4. Lasagni, F.; Santamaría, M.L.; Alarcón, F.; Aldea-Arévalo, J.; Hernández-Ruiz, S. C-Scan Ultrasonic Generation using Wireless Encoder based on Passive Markers. In Proceedings of the 6th International Symposium on NDT in Aerospace, Madrid, Spain, 12–14 November 2014.
5. Ing, H.S. An analytical treatment of the problem of triangulation by stereophotogrammetry. Photogrammetria 1956, 13, 67–77.
6. Brown, D.C. A Solution to the General Problem of Multiple Station Analytical Stereotriangulation; D. Brown Associates: Green Bank, WV, USA, 1958.
7. Brown, D. The bundle adjustment—progress and prospectives. In Proceedings of the XIII Congress of the ISPRS, Helsinki, Finland, 1976.
8. Boochs, F.; Schütze, R.; Raab, C.; Wirth, H.; Meier, J. A flexible multi-camera system for precise tracking of moving effectors. In Proceedings of the 14th IASTED International Conference on Robotics and Applications, Cambridge, MA, USA, 2–4 November 2009.
9. Svoboda, T.; Hug, H.; Van Gool, L. ViRoom—Low Cost Synchronized Multicamera System and Its Self-Calibration. In Pattern Recognition; Zurich, Switzerland, 2002; pp. 515–522.
10. Perez-Cortes, J.-C.; Perez, A.J.; Saez-Barona, S.; Guardiola, J.-L.; Salvador, I. A System for In-Line 3D Inspection without Hidden Surfaces. Sensors 2018, 18, 2993.
11. Robson, S.; MacDonald, L.; Kyle, S.; Boehm, J.; Shortis, M. Optimised multi-camera systems for dimensional control in factory environments. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2018, 232, 1707–1718.
12. Kyle, S.; Robson, S.; Macdonald, L.; Shortis, M.; Boehm, J. Multi-Camera Systems for Dimensional Control in Factories. In European Portable Metrology Conference (EPMC15); Manchester, UK, 2015.
13. VDI/VDE. Optical 3D measuring systems—Imaging systems with point-by-point probing. In VDI/VDE 2634, Part 1; Beuth Verlag, Verein Deutscher Ingenieure: Berlin, Germany, 2002; pp. 1–10.
14. Usamentiaga, R.; Garcia, D.F. Multi-camera calibration for accurate geometric measurements in industrial environments. Measurement 2019, 134, 345–358.
15. Perez, A.J.; Perez-Cortes, J.-C.; Guardiola, J.-L. Simple and precise multi-view camera calibration for 3D reconstruction. Comput. Ind. 2020, 123, 103256.
16. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
17. Straube, G.; Zhang, C.; Yaroshchuk, A.; Lübbecke, S.; Notni, G. Modelling and calibration of multi-camera-systems for 3D industrial supervision applications. Photonics Educ. Meas. Sci. 2019, 11144, 68–76.
18. Frahm, J.-M.; Köser, K.; Koch, R. Pose Estimation for Multi-Camera Systems. In Pattern Recognition; Zurich, Switzerland, 2004; pp. 286–293.
19. Kraft, M.; Nowicki, M.; Schmidt, A.; Fularz, M.; Skrzypczyński, P. Toward evaluation of visual navigation algorithms on RGB-D data from the first- and second-generation Kinect. Mach. Vis. Appl. 2017, 28, 61–74.
20. Schmidt, A.; Kasiński, A.; Kraft, M.; Fularz, M.; Domaga, Z. Calibration of the multi-camera registration system for visual navigation benchmarking. Int. J. Adv. Robot. Syst. 2014, 11, 83.
21. Hayat, A.A.; Boby, R.A.; Saha, S.K. A geometric approach for kinematic identification of an industrial robot using a monocular camera. Robot. Comput.-Integr. Manuf. 2019, 57, 329–346.
22. Ayyad, A.; Halwani, M.; Swart, D.; Muthusamy, R.; Almaskari, F.; Zweiri, Y. Neuromorphic vision based control for the precise positioning of robotic drilling systems. Robot. Comput.-Integr. Manuf. 2023, 79, 102419.
23. Hagemann, A.; Knorr, M.; Janssen, H.; Stiller, C. Bias detection and prediction of mapping errors in camera calibration. In Proceedings of the DAGM German Conference on Pattern Recognition, Tübingen, Germany, 28 September–1 October 2020; pp. 30–43.
24. Xing, Z.; Yu, J.; Ma, Y. A new calibration technique for multi-camera systems of limited overlapping field-of-views. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5892–5899.
25. Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992, 8, 123–151.
26. Luhmann, T. Close range photogrammetry for industrial applications. ISPRS J. Photogramm. Remote Sens. 2010, 65, 558–569.
27. Möller, C.; Schmidt, H.C.; Shah, N.H.; Wollnack, J. Enhanced absolute accuracy of an industrial milling robot using stereo camera system. Procedia Technol. 2016, 26, 389–398.
28. De Cecco, M.; Pertile, M.; Baglivo, L.; Lunardelli, M.; Setti, F.; Tavernini, M. A unified framework for uncertainty, compatibility analysis, and data fusion for multi-stereo 3-D shape estimation. IEEE Trans. Instrum. Meas. 2010, 59, 2834–2842.
29. Mendikute, A.; Leizea, I.; Yagüe-Fabra, J.A.; Zatarain, M. Self-calibration technique for on-machine spindle-mounted vision systems. Measurement 2018, 113, 71–81.
30. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. Remote Sens. 1966, 32, 444–462.
31. Madsen, K.; Nielsen, H.B.; Tingleff, O. Methods for Non-Linear Least Squares Problems; Informatics and Mathematical Modelling (IMM), Technical University of Denmark: Copenhagen, Denmark, 2004.
32. Wester-Ebbinghaus, W. Ein photogrammetrisches System für Sonderanwendungen. In Mustererkennung; Zurich, Switzerland, 1983.
33. Wester-Ebbinghaus, W. Verfahren zur Feldkalibrierung von photogrammetrischen Aufnahmekammern im Nahbereich. In Mustererkennung; Zurich, Switzerland, 1985; pp. 106–114.
34. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
Figure 1. The calibration process of the multi-camera system is divided into two scenarios (camera calibration and measurement). The first scenario (1) is concerned with the camera calibration, while the second focuses on referencing the layout frame (2). Finally, the LME evaluation is carried out (3).
Figure 2. The calibration methodologies studied in this work are a CMM virtual grid (pyramid/cube) and test-field calibration for the camera calibration (intrinsic camera parameters), and a CMM virtual grid (cube) and photogrammetry for the layout calibration. Except for the flat intrinsic calibration pattern and the photogrammetry extrinsic calibration pattern, the rest of the experimental tests are executed in a CMM (ZEISS Prismo 0.9 + L/350 μm). The main goal is to obtain an accurate ground truth in the final verification to determine the error budgeting of the system. More specifically, they have been verified in two measurement scenarios to evaluate the different factors of each calibration process.
Figure 3. The multi-camera system under study is a stereo-photogrammetric device. Each measuring unit is composed of an industrial camera, a lens, and LED illumination, all encapsulated in a housing to withstand noisy environments.
Figure 4. The factors that affect the correct determination of the intrinsic parameters are the 3D position, the 3D geometry and the 2D image.
Figure 5. A retroreflective target is placed on a tip, which is pre-calibrated beforehand so that its 3D position is known.
Figure 6. The camera calibration process consists of defining a geometry of 3D coordinates $\{X_i\}_{CMM}$ in the $O_{CMM}$ reference system to compute the intrinsic ($K$) and extrinsic parameters $\{[R|t]_C\}_{CMM}$ of each camera $C$, also defined in $O_{CMM}$.
Figure 7. The pre-calibrated pattern (left) is located in different positions. In each position p the extrinsic camera parameters are defined and, finally, the intrinsic ones are deduced (right).
Figure 8. The factors that affect the correct determination of the extrinsic parameters are the 3D position, the 3D geometry, the 2D image and the K intrinsic parameters.
Figure 9. Virtual cube grid (left): a single retroreflective target is captured in different images from different points of view. Portable photogrammetry system (right): a 3D geometry pattern is resolved by a portable photogrammetry system.
Figure 10. Positions of the bar to verify the accuracy of optical 3D measuring systems following VDI/VDE 2634.
Figure 11. SMR (a) as an element providing a common reference geometry for calibration, with the CMM through a sphere (b) or with photogrammetry through a split bearing with a retro-reflective marker (c).
Table 1. The precision results (1-sigma) of the intrinsic parameters of each camera C for the pyramid virtual grid and test-field calibrations.

| Camera | Strategy | f (mm) | cl0 (pixel) | rw0 (pixel) | k1 (pixel−2) | k2 (pixel−4) | p1 (pixel−1) | p2 (pixel−1) | RMS x | RMS y |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 | CMM ($K_{CMM}^{c1}$) | 0.0013 | 0.2445 | 0.2053 | 1.53 × 10−12 | 1.95 × 10−19 | 1.45 × 10−9 | 4.89 × 10−10 | 0.066 | 0.058 |
| C1 | Test-field ($K_T^{c1}$) | 0.001 | 0.2434 | 0.1324 | 4.79 × 10−12 | 6.32 × 10−19 | 1.68 × 10−9 | 1.89 × 10−9 | 0.067 | 0.09 |
| C2 | CMM ($K_{CMM}^{c2}$) | 0.0015 | 0.2899 | 0.157 | 1.15 × 10−12 | 1.48 × 10−19 | 8.11 × 10−10 | 1.41 × 10−9 | 0.076 | 0.09 |
| C2 | Test-field ($K_T^{c2}$) | 0.0011 | 0.1244 | 0.1488 | 4.7 × 10−12 | 4.79 × 10−19 | 1.8 × 10−9 | 1.311 × 10−9 | 0.06 | 0.089 |
Table 2. The precision results (1-sigma) of the intrinsic parameters of each camera C for the cube virtual grid calibration.

| Camera | Strategy | f (mm) | cl0 (pixel) | rw0 (pixel) | k1 (pixel−2) | k2 (pixel−4) | p1 (pixel−1) | p2 (pixel−1) | RMS x | RMS y |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 | CMM ($K_{CMM}^{c1}$) | 0.0024 | 0.2267 | 0.1241 | 2.79 × 10−12 | 3.28 × 10−19 | 1.45 × 10−9 | 8.17 × 10−10 | 0.048 | 0.062 |
| C2 | CMM ($K_{CMM}^{c2}$) | 0.0012 | 0.2216 | 0.1093 | 2.22 × 10−12 | 3.03 × 10−19 | 9.62 × 10−10 | 5.35 × 10−10 | 0.073 | 0.095 |
Table 3. The precision results for the extrinsic calibration procedure, considering the combination of different strategies for the intrinsic and extrinsic parameters for both cameras.

| Strategy | K (input) | Camera | α (rad) | β (rad) | γ (rad) | dX (mm) | dY (mm) | dZ (mm) |
|---|---|---|---|---|---|---|---|---|
| $\{[R \vert t]\}_{CMM}$ | $K_{CMM}^P$ | C1 | 4.12 × 10−5 | 4.37 × 10−5 | 2.08 × 10−5 | 0.042 | 0.034 | 0.055 |
| $\{[R \vert t]\}_{CMM}$ | $K_{CMM}^P$ | C2 | 2.83 × 10−5 | 4.03 × 10−5 | 1.36 × 10−5 | 0.056 | 0.035 | 0.044 |
| $\{[R \vert t]\}_{CMM}$ | $K_T$ | C1 | 3.18 × 10−5 | 4.51 × 10−5 | 1.75 × 10−5 | 0.043 | 0.023 | 0.07 |
| $\{[R \vert t]\}_{CMM}$ | $K_T$ | C2 | 3.26 × 10−5 | 2.34 × 10−5 | 1.42 × 10−5 | 0.029 | 0.036 | 0.058 |
| $\{[R \vert t]\}_P$ | $K_{CMM}^P$ | C1 | 2.97 × 10−4 | 2.58 × 10−4 | 2.97 × 10−4 | 0.078 | 0.022 | 0.071 |
| $\{[R \vert t]\}_P$ | $K_{CMM}^P$ | C2 | 2.49 × 10−4 | 9.34 × 10−5 | 2.68 × 10−4 | 0.039 | 0.02 | 0.075 |
| $\{[R \vert t]\}_P$ | $K_{CMM}^C$ | C1 | 3.16 × 10−4 | 2.26 × 10−4 | 3.03 × 10−4 | 0.071 | 0.013 | 0.103 |
| $\{[R \vert t]\}_P$ | $K_{CMM}^C$ | C2 | 2.47 × 10−4 | 9.75 × 10−5 | 2.59 × 10−4 | 0.045 | 0.025 | 0.091 |
| $\{[R \vert t]\}_P$ | $K_T$ | C1 | 6.58 × 10−4 | 3.13 × 10−4 | 2.98 × 10−4 | 0.083 | 0.068 | 0.105 |
| $\{[R \vert t]\}_P$ | $K_T$ | C2 | 5.6 × 10−4 | 2.87 × 10−4 | 3.01 × 10−4 | 0.091 | 0.079 | 0.113 |
Table 4. LME maximum, average and standard deviation (k = 1) through the VDI guideline for each calibration combination.

| K | [R\|t] | LME Max (mm) | LME µ (mm) | LME σ (mm) |
|---|---|---|---|---|
| $K_{CMM}^P$ | $\{[R \vert t]\}_{CMM}$ | 0.039 | 0.018 | 0.018 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_{CMM}$ | 0.03 | 0.013 | 0.008 |
| $K_T$ | $\{[R \vert t]\}_{CMM}$ | 0.051 | 0.019 | 0.012 |
| $K_{CMM}^P$ | $\{[R \vert t]\}_P$ | 0.108 | 0.0328 | 0.022 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_P$ | 0.103 | 0.046 | 0.034 |
| $K_T$ | $\{[R \vert t]\}_P$ | 0.16 | 0.036 | 0.0244 |
Table 5. VDI results for the strategy $K_T$ and $\{[R \vert t]\}_P$ in a 10-repetition trial (all rows correspond to $K = K_T$ and $[R \vert t] = \{[R \vert t]\}_P$).

| Repeat | LME Max (mm) | LME µ (mm) | LME σ (mm) |
|---|---|---|---|
| 1 | 0.16 | 0.001 | 0.046 |
| 2 | 0.171 | 0.0008 | 0.057 |
| 3 | 0.315 | 0.037 | 0.139 |
| 4 | 0.172 | 0.006 | 0.067 |
| 5 | 0.249 | 0.027 | 0.112 |
| 6 | 0.143 | 0.001 | 0.061 |
| 7 | 0.199 | 0.013 | 0.084 |
| 8 | 0.196 | 0.01 | 0.082 |
| 9 | 0.141 | 0.0003 | 0.049 |
| 10 | 0.069 | 0.003 | 0.024 |
Table 6. The combination of different repeats of the intrinsic and extrinsic calibrations (superscripts denote the repeat index).

| K | [R\|t] | LME Max (mm) | LME µ (mm) | LME σ (mm) |
|---|---|---|---|---|
| $K_T^3$ | $\{[R \vert t]\}_P^3$ | 0.315 | 0.037 | 0.139 |
| $K_T^{10}$ | $\{[R \vert t]\}_P^{10}$ | 0.069 | 0.003 | 0.024 |
| $K_T^{10}$ | $\{[R \vert t]\}_P^3$ | 0.311 | 0.037 | 0.142 |
| $K_T^3$ | $\{[R \vert t]\}_P^{10}$ | 0.065 | 0.001 | 0.024 |
| $K_T^1$ | $\{[R \vert t]\}_P^3$ | 0.313 | 0.037 | 0.139 |
| $K_T^1$ | $\{[R \vert t]\}_P^{10}$ | 0.073 | 0.003 | 0.025 |
Table 7. LME for the experimental test where the extrinsic calibration geometry is the same for the different calibration processes. In addition, a second extrinsic calibration process using portable photogrammetry is included as a repeatability test.

| K | [R\|t] | LME Max (mm) | LME µ (mm) | LME σ (mm) |
|---|---|---|---|---|
| $K_{CMM}^P$ | $\{[R \vert t]\}_{CMM}$ | 0.132 | 0.013 | 0.049 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_{CMM}$ | 0.079 | 0.007 | 0.032 |
| $K_T$ | $\{[R \vert t]\}_{CMM}$ | 0.231 | 0.028 | 0.065 |
| $K_{CMM}^P$ | $\{[R \vert t]\}_P$ | 0.114 | 0.010 | 0.043 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_P$ | 0.080 | 0.007 | 0.030 |
| $K_T$ | $\{[R \vert t]\}_P$ | 0.237 | 0.033 | 0.077 |
| $K_{CMM}^P$ | $\{[R \vert t]\}_P$ (repetition) | 0.13 | 0.014 | 0.048 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_P$ (repetition) | 0.082 | 0.006 | 0.032 |
| $K_T$ | $\{[R \vert t]\}_P$ (repetition) | 0.239 | 0.029 | 0.064 |
Table 8. The photogrammetry intrinsic calibration ($K_{PH}$) is included among the different calibration techniques.

| K | [R\|t] | LME Max (mm) | LME µ (mm) | LME σ (mm) |
|---|---|---|---|---|
| $K_{CMM}^P$ | $\{[R \vert t]\}_{CMM}$ | 0.132 | 0.013 | 0.049 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_{CMM}$ | 0.079 | 0.007 | 0.032 |
| $K_T$ | $\{[R \vert t]\}_{CMM}$ | 0.231 | 0.028 | 0.065 |
| $K_{PH}$ | $\{[R \vert t]\}_{CMM}$ | 0.114 | 0.014 | 0.05 |
| $K_{CMM}^P$ | $\{[R \vert t]\}_P$ | 0.114 | 0.010 | 0.043 |
| $K_{CMM}^C$ | $\{[R \vert t]\}_P$ | 0.080 | 0.007 | 0.030 |
| $K_T$ | $\{[R \vert t]\}_P$ | 0.237 | 0.033 | 0.077 |
| $K_{PH}$ | $\{[R \vert t]\}_P$ | 0.063 | 0.006 | 0.031 |
