Article

Extrinsic Calibration for a Modular 3D Scanning Quality Validation Platform with a 3D Checkerboard

1 Biomedical Engineering Lab, Bern University of Applied Sciences, 2502 Biel, Switzerland
2 Laboratory for Movement Biomechanics, ETH Zurich, 8092 Zürich, Switzerland
3 Usability and Interaction Technology Lab, Heilbronn University, 74081 Heilbronn, Germany
* Author to whom correspondence should be addressed.
Sensors 2024, 24(5), 1575; https://doi.org/10.3390/s24051575
Submission received: 22 December 2023 / Revised: 16 February 2024 / Accepted: 26 February 2024 / Published: 29 February 2024
(This article belongs to the Section Optical Sensors)

Abstract

Optical 3D scanning applications are increasingly used in various medical fields. Setups involving multiple adjustable systems require repeated extrinsic calibration between patients. Existing calibration solutions are either not applicable to the medical field or require a time-consuming process with multiple captures and target poses. Here, we present an application with a 3D checkerboard (3Dcb) for extrinsic calibration with a single capture. The 3Dcb application can register captures with a reference to validate measurement quality. Furthermore, it can register captures from camera pairs for point-cloud stitching of static and dynamic scenes. Registering static captures from TIDA-00254 to its reference from a Photoneo MotionCam-3D resulted in an error (root mean square error ± standard deviation) of 0.02 mm ± 2.9 mm. Registering a pair of Photoneo MotionCam-3D cameras for dynamic captures resulted in an error of 2.2 mm ± 1.4 mm. These results show that our 3Dcb implementation provides registration for static and dynamic captures that is sufficiently accurate for clinical use. The implementation is also robust and can be used with cameras with comparatively low accuracy. In addition, we provide an extended overview of extrinsic calibration approaches and the application’s code for completeness and service to fellow researchers.

1. Introduction

Optical 3D scanning is widely used in medical applications [1,2,3,4,5,6,7,8,9], and scanning systems that track patient movement in 3D are becoming ubiquitous. Some of these can capture the human upper body, in particular the human back, in various postures and movements such as standing upright, bending forward, and bending sideways. There are two options for capturing the human back during these postures and movements: Follow the patient with a single capturing system or use multiple 3D scanning systems. Using a single capturing system requires a person or actuator to follow the patient’s movement. In addition, the movement of the system must also be tracked, which requires user intervention, is a complex task, and can result in a loss of accuracy [10,11,12,13,14]. Conversely, the use of multiple 3D scanning systems allows the patient’s movement to be captured with minimal user intervention, but this approach requires repeated extrinsic calibration between these systems, which can be time consuming.
Many authors have proposed solutions for extrinsic calibration. However, most extrinsic calibration methods are specific to LiDAR systems used in autonomous driving for long-range measurements. Other methods are specific to LiDAR and color (RGB) mono camera pairs or to color mono and depth camera pairs, require texture or a high-resolution camera, require a time-consuming calibration process involving multiple captures, or do not come with implementation details or publicly available code (Table A1 in Appendix A).
Beltran et al. [15] published a toolbox for automatic calibration of sensor pairs consisting of LiDAR and mono and stereo camera devices in any possible combination. Their calibration target contains four round holes and four ArUco markers. Yan et al. [16] published a calibration toolbox for intrinsic and extrinsic LiDAR-camera calibration for autonomous driving vehicles. Their toolbox contains a rich set of sensor calibration methods, including inertial measurement units (IMUs) and radar. Their calibration board contains four round holes and a 2D checkerboard pattern. Domhof et al. [17] published an extrinsic calibration tool to calibrate sensor setups consisting of LiDAR, camera, and radar sensors with a calibration board with four round holes.
Zhang et al. [18] proposed a two-step method for extrinsic calibration between a sparse 3D LiDAR and a thermal camera. The method involved two steps: Extrinsic calibration between LiDAR and a visual camera, followed by extrinsic calibration between the visual camera and the thermal camera. Their 3D checkerboard was derived from work by Rangel et al. [19] and Skala et al. [20].
None of these solutions simultaneously applies to all scanning methodologies, provides sufficient calibration accuracy with a single capture, includes implementation details, and offers publicly available code. In this paper, we present a simple, robust, and practical 3D checkerboard, including an algorithm and software, to calibrate different 3D systems with each other using a single capture. The work by Beltran et al., Yan et al., Domhof et al., Zhang et al., Rangel et al., and Skala et al. shows the potential of extrinsic calibration using a calibration target with holes and inspired our 3D checkerboard approach, which can be used to calibrate diverse methodologies such as structured light (SL) [21,22], active stereo (AS) [22,23], and time of flight (ToF) [24]. The validation results for the systems used here are evaluated and compared to classical approaches in the literature.

2. Materials and Methods

The Biomedical Engineering Lab has built a modular quality-validation platform (DMQV, Figure 1) for 3D scanning consisting of various 3D scanners, 3D cameras, and 3D scanning methodologies. A Photoneo (Photoneo s.r.o., Bratislava, Slovakia) MotionCam-3D camera (MotionCam 1; Figure 1A) [21] is used as a reference for static and dynamic captures because it has an accuracy of <0.3 mm. The platform also contains a DLP LightCrafter 4500 pattern projector from Texas Instruments (Texas Instruments, Dallas, Texas, USA; Figure 1B) [25] and three monochrome 2D cameras from HIKROBOT (HIKROBOT, Hangzhou, Zhejiang, China; MV-CA023-10UM; Figure 1C) [26]. The DLP LightCrafter 4500 and the 2D cameras from HIKROBOT, combined with TIDA-00254, a structured light machine vision application from Texas Instruments [22], can capture high-quality 3D images of static scenes. In combination with BoofCV (Peter Abeles, version 0.41) [23], the system can capture 3D images of dynamic scenes with stereo and trinocular vision. The platform also includes two consumer-grade 3D cameras, the Orbbec (Orbbec, Shenzhen, China) Astra Mini and the Intel (Intel, Santa Clara, California, USA) D415 (Figure 1D), which use single-shot structured light and active stereo, respectively.
The DMQV has been developed to investigate the minimum key parameters required to capture the human back shape [27], build models that allow spinal alignment to be estimated from back shape, and investigate correlations between back shape and spinal alignment.
The Functional Spinal Biomechanics group at ETH used the DMQV platform at the Spiraldynamik MedCenter Zurich and an extended version at the Balgrist University Hospital in Zurich to capture the human back in various postures and movements. The extended platform (Figure 2, left) includes an additional Photoneo MotionCam-3D (MotionCam 2) to capture the patient from above during static and dynamic forward bending. The distance between the patient’s back and MotionCam 1, DLP LightCrafter, and HIKROBOT cameras was 1.1 m, the distance between the patient’s back and the Orbbec Astra Mini and Intel D415 was 0.9 m, and the distance between MotionCam 2 and the floor was 2.2 m.
The DMQV platform was enhanced with a 3D checkerboard (Figure 2, right) to extrinsically calibrate the systems and to register the captures from the systems with each other. The 3Dcb consists of a plane with holes (Figure 2, right, A) arranged in six distinct rows and columns, giving 18 holes in total. Our 3D checkerboard was 21 cm × 29.7 cm (A4) in size, and each hole was 2 cm × 2 cm. The sizes of the checkerboard and holes can be scaled according to the intended application, the distance between the camera and the checkerboard, and the quality of the systems in use. Low-quality systems require a larger checkerboard and larger holes. Situated 10 cm behind the plane with holes is another completely solid plane (Figure 2, right, B). Some systems, such as the Intel D415, tend to smooth over the holes when the pattern projected by the inbuilt projector is not visible in the stereo camera pair; the second plane reflects the pattern, and thus the holes are detected more robustly.
The 3D captures of the 3Dcb from each camera are evaluated in pairs (Algorithm 1, Figure 3): First, the background is cropped using a rough estimate of the distance between the 3Dcb and the camera. Then a plane is fitted (pcfitplane in MATLAB; MathWorks, Natick, Massachusetts, USA, version R2021b), which keeps only the points from the plane of the 3Dcb with the holes; all points outside the fitted plane are removed, including the points from the solid plane behind the holes. A principal component analysis (pca in MATLAB) is then performed to align the larger dimension of the 3Dcb with the X-axis and project it into 2D. Next, the projection of the 3Dcb is rasterized with a regular grid (here with a resolution of 3 mm) and labeled 1 for a hole (no nearest neighbor found within the grid resolution; findNearestNeighbors in MATLAB) and 0 for the board. This binary image is then checked for connected components (bwconncomp in MATLAB), which detects all holes. The median (median in MATLAB) of the X and Y coordinates is then calculated for each component. The median points of all holes are sorted (sort in MATLAB) in the X and Y directions (Figure 4, left) to detect the arrangement and orientation of the 3Dcb. The medians are then transformed back into 3D space using the inverse PCA transform. By exploiting the known arrangement and orientation of the median points, a rigid transformation is estimated (estimateGeometricTransform3D in MATLAB) between pairwise system captures. The estimated rigid transformation can then be applied directly to the captures from each system to transform the point cloud into the coordinate system of any other system (Figure 4, right); a usage sketch of this step is given after Algorithm 1.
Algorithm 1. MATLAB pseudocode to calculate the rigid transformation between two 3D scanning systems from a pair of 3D checkerboard captures.
pc2     % point cloud from capture of 3Dcb from system 2
pc1_ref % point cloud from system 1 (reference coordinate system)
function tFormEst = checkerboard3d_estimateT(pc2, pc1_ref)
 % remove background using a rough estimate of the 3Dcb-to-camera distance
 pc2_noBackground = select(pc2, abs(pc2.Location(:,3) - distEstimation) <= distTol)
 pc1_noBackground = select(pc1_ref, abs(pc1_ref.Location(:,3) - distEstimation) <= distTol)
 for pc = [pc2_noBackground, pc1_noBackground] do
  % fit a plane to keep only the 3Dcb plane with the holes,
  % then use PCA to project the inlier points from 3D into 2D
  [planeModel, inlierIdx] = pcfitplane(pc, maxPlaneDist)
  pcPlane = select(pc, inlierIdx)
  [coeff, pcaPlane, ~, ~, ~, mu] = pca(pcPlane.Location)
  % rasterize the 2D projection into a regular binary grid (1 = hole, 0 = board)
  for [x, y] = min(pcaPlane):resolution:max(pcaPlane) do
   point_nn = findNearestNeighbors(pcaPlane, [x, y], 1)
   if norm(point_nn - [x, y]) > resolution
    zGrid(x, y) = 1
   end if
  end for
  % detect connected components (holes) in the binary grid
  CC = bwconncomp(zGrid)
  % calculate the median coordinates of each hole, sort them, and
  % use the inverse PCA transform to map them back into 3D
  medians_cc = median(CC)
  holeMedians = sort(medians_cc)
  holeMedians3D = holeMedians * transpose(coeff(:, 1:2)) + mu
 end for
 % estimate the rigid transformation between the hole medians from both checkerboards
 % (the loop yields holeMedians3D_2 and holeMedians3D_1 for the two systems)
 tFormEst = estimateGeometricTransform3D(holeMedians3D_2, holeMedians3D_1, 'rigid')
end function
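As a usage illustration of the final step in the pipeline above, the following minimal MATLAB sketch applies the estimated rigid transformation to a full capture from system 2 so that it lands in the coordinate system of system 1. This is an assumed example rather than verbatim study code; the variable names follow Algorithm 1, and pctransform, pcmerge, and pcshowpair are standard Computer Vision Toolbox functions.
 % Minimal usage sketch (assumption): register a full capture from system 2 to system 1
 pc2_registered = pctransform(pc2, tFormEst);        % apply the estimated rigid transform
 pcFused = pcmerge(pc1_ref, pc2_registered, 0.001);  % stitch the point clouds (1 mm grid step)
 pcshowpair(pc1_ref, pc2_registered)                 % visual check of the registration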
The DMQV platform, including the 3Dcb, was used to capture static standing upright and static and dynamic bending forward and sideways at the Spiraldynamik Med Center Zurich on 72 patients (mean age 54 ± 16 years). The patients attended the medical center with a diverse spectrum of spinal disorders, including back pain, limited range of mobility, or simply for a routine checkup. The extended platform version was used at the Balgrist University Hospital in Zurich on 22 patients who attended with idiopathic scoliosis (mean age 18 ± 4 years). The height of the DMQV platform was adjusted for each patient to optimize the field of view. Therefore, captures of the 3Dcb in the vertical position (0°) and at 45° were taken from all systems after each patient. These 3Dcb captures were used to register the systems to each other with all 94 patients. Three use cases were evaluated (Table 1): Use Case 1 registers captures from left and right camera pairs for static standing upright (Figure 5a), Use Case 2 registers static standing upright captures to its reference capture (Figure 5c), and Use Case 3 registers captures from above and behind for dynamic forward bending (Figure 5d).
The Photoneo MotionCam-3D uses structured light [21]; therefore, the captures from MotionCam 1 and MotionCam 2 for dynamic forward bending (Use Case 3) must be made sequentially. To reduce the delay between the captures, a hardware trigger was used to daisy-chain the two cameras. An iterative closest point (ICP) optimization was performed for the overlapping region after the 3Dcb registration to correct for the remaining delay between the captures.
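A minimal MATLAB sketch of this refinement step is shown below. It reflects our assumption of how the step could be implemented rather than the published study code: pcregistericp (Computer Vision Toolbox) estimates the residual transformation between the overlapping regions, and the variable names (pcAboveOverlap, pcBehindOverlap, pcAbove) are illustrative.
 % Minimal sketch (assumption): ICP refinement on the overlapping region to
 % compensate for the residual delay between the two sequential captures.
 tFormIcp = pcregistericp(pcAboveOverlap, pcBehindOverlap, 'Metric', 'pointToPlane');
 pcAboveRefined = pctransform(pcAbove, tFormIcp);    % apply the refinement to the full capture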
The metrics used to assess the quality of registration are the root mean square error (RMSE) and standard deviation (SD). The RMSE and SD are calculated using the nearest neighbors of all points in the overlapping region between the captures from the different systems. The overlapping region was defined as follows: a point p1 from system 1 overlaps if there is at least one point p2 from system 2 within a specified radius ρ (here 6 mm) around the surface normal n1 at that point, i.e., ‖n1 × (p2 − p1)‖ < ρ, where p2 − p1 is the vector pointing from p1 to p2, the cross product with n1 yields a vector orthogonal to n1 (definition of the cross product), and the norm of this resulting vector is the distance between p2 and p1 orthogonal to the normal vector n1 at point p1.
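To make the metric concrete, the sketch below approximates the overlap test and the RMSE/SD computation in MATLAB. It is a hedged example under our assumptions: normals are taken from pcnormals, the Euclidean nearest neighbor found with knnsearch stands in for the criterion above, and pc1 and pc2 denote the registered point clouds of the two systems.
 % Minimal sketch (assumption): overlap test and RMSE/SD between two registered captures.
 rho = 0.006;                                         % overlap radius (6 mm, in meters)
 n1 = pcnormals(pc1);                                 % unit surface normals of system 1
 [idx, d] = knnsearch(pc2.Location, pc1.Location);    % nearest neighbor in system 2 for each point of system 1
 v = pc2.Location(idx, :) - pc1.Location;             % vectors from p1 to its nearest p2
 distOrth = vecnorm(cross(n1, v, 2), 2, 2);           % distance of p2 from the normal line through p1
 overlap = distOrth < rho;                            % points of system 1 in the overlapping region
 rmseVal = sqrt(mean(d(overlap).^2));                 % RMSE over the overlapping region
 sdVal = std(d(overlap));                             % SD over the overlapping region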

3. Results

Use Case 1: Register captures from left and right camera pairs for static standing upright.
The median RMSE for the overlapping region at a distance of 0.9 m for the left and right Orbbec Astra Mini camera pair (Figure 5b) after registration was 3.1 mm (Figure 6, left). The median SD was 1.9 mm. The median RMSE for the overlapping region at a distance of 0.9 m for the left and right Intel D415 camera pair after registration was 3.6 mm (Figure 6, right). The median SD was 2.2 mm.
Use Case 2: Register static standing upright captures to its reference capture.
The median RMSE and SD for the overlapping region at distances between 0.9 m and 1.1 m for the captures from all systems and the MotionCam 1 as reference (Figure 5c) after registration (Figure 7) are shown in Table 2.
Use Case 3: Register captures from above and behind for dynamic forward bending.
The median RMSE for the overlapping regions at a distance of 1.1 m (Figure 5d) for the camera pair above (MotionCam 2) and behind (MotionCam 1) after registration was 6.9 mm (Figure 8, left). The median SD was 1.7 mm. An additional ICP registration for the overlapping region after the 3Dcb registration to correct for the delay between the captures from cameras 1 and 2 resulted in a median RMSE for the overlapping regions of 2.0 mm and a median SD of 1.4 mm (Figure 8, right).

4. Discussion

The median RMSE of 0 mm and the median SD of 0.2 mm for the overlapping region for the captures from the MotionCam 1 with itself as reference (Use Case 2) show that the 3D checkerboard leads to an estimation of a geometric transformation that is sufficiently accurate for clinical use. The higher RMSE and SD values of TIDA-00254, Orbbec Astra Mini, and Intel D415 reflect the accuracy of these systems. The RMSE and SD values are similar to the accuracy values stated by the manufacturers (Table 3), which is to be expected. The Intel D415 in particular tends to smooth out holes, but this is mitigated by the solid plane behind.
Furthermore, interference between these systems required that the captures be made sequentially, within a few seconds, and therefore involuntary movements such as swaying and breathing are included in the error values. Moreover, the voluntary movement between captures increases the error values, especially for dynamic bending (Use Case 3). The ICP registration reduced the median error value to 2.0 mm. Furthermore, the angle between MotionCam 2 and MotionCam 1 was 90°, so the 3Dcb was captured at 45°, and the overlapping area of the human back was also largely captured by both cameras with an incidence angle of 45°.
The literature on the registration of 3D captures and considerations for practical applications focuses mostly on camera–LiDAR calibration, covers only parts of our proposed solution, and typically presents only results. Beltran et al. [15] used their calibration toolbox with a 1.4 m wide calibration target with four round holes. The resulting mean error for monocular–LiDAR calibration using 30 frames of three calibration target poses was 8.2 mm. Unfortunately, they do not state error values for stereo–stereo calibration in their real test environment. Our practical application showed error values between 0 mm and 3.6 mm for a single target pose with a single frame. We only used distances of around 1.1 m, whereas they used distances up to 6 m. The question arises as to which calibration target and algorithm perform better for which systems and target distances. Since our application is limited to distances around 1.1 m, future work could investigate the performance of our proposed solution at larger distances.
Yan et al. [16] used their calibration toolbox with a 1.2 m wide calibration board with four round holes and a 2D checkerboard pattern. Their toolbox contains a rich set of various sensor calibration methods but is specific to autonomous driving, such as camera–LiDAR calibration. Furthermore, they do not state error values for real test environments.
Domhof et al. [17] used their extrinsic calibration tool with a 1.5 m wide calibration board with four round holes. The resulting mean error for stereo–LiDAR calibration using 29 target locations within approximately 5 m was 15 mm. Stereo–stereo calibration was not investigated.
Zhang et al. [18] used their method for extrinsic calibration with a 3D checkerboard approximately 56 cm wide and consisting of 44 round holes. Low-cost cameras such as the Intel D415 and Orbbec Astra would have failed to detect all holes with a single capture. Furthermore, Zhang et al. suggest collecting more than 40 image pairs with the checkerboard in various positions, angles, and distances. Stereo–stereo calibration was not investigated, no error values were reported, and no implementation details or public code were provided.
The work by Beltran et al. [15], Yan et al. [16], Domhof et al. [17], Zhang et al. [18], Rangel et al. [19], and Skala et al. [20] demonstrates the potential of extrinsic calibration using a calibration target with holes. Unfortunately, none of their work was directly applicable to our specific setup, but it inspired our 3D checkerboard approach. Repeated calibration was necessary because the platform was adjusted to each patient, and the low-cost systems required a very robust method. Therefore, we implemented a robust approach requiring only a single capture. The work by Zhang et al. [18], Rangel et al. [19], Skala et al. [20], and our own research show that our 3D checkerboard and registration approach is applicable to further modalities, such as time of flight, and can even be extended to thermal cameras (Figure 1). This will be the focus of future work.
The holes of the proposed 3Dcb are arranged in a regular pattern on a single plane. The question arises whether a unique but irregular cubic arrangement could improve the estimated 3D transformation. The holes of the proposed 3Dcb are square. We investigated whether round holes would lead to better results; this was not the case, but the shape of the holes could be further investigated.
We also used multiple 3Dcb captures with different positions and locations of the checkerboard, but we did not find significant improvement over a single capture. The comparison between multiple captures and single capture could be further investigated and quantified.

5. Conclusions

The Biomedical Engineering Lab has built a modular quality validation platform for 3D scanning (DMQV) consisting of various 3D scanners, 3D cameras, and 3D scanning methodologies, including a 3D checkerboard for extrinsic calibration. The 3D checkerboard extrinsic calibration can be used to register 3D captures between different systems, to fuse these captures, and to compare various quality validation parameters between all systems. The registration of 3D captures with the 3Dcb requires only a single additional capture. The registration approach does not require texture and is therefore applicable to further modalities such as time of flight. The DMQV has been tested and validated with two studies for the systems detailed in Table 3.
The results show that our 3Dcb implementation provides accurate registration and is robust, simple, fast to use, and generalizable. The median RMSE and median SD of the overlapping regions after the registration of the 3D captures from SL with TIDA (0.02 mm ± 2.9 mm), Astra Mini (1.5 mm ± 4.0 mm), and Intel D415 (1.7 mm ± 3.9 mm) deviate only a few millimeters from the reference capture with the Photoneo MotionCam-3D and reflect the accuracy values provided by the manufacturers (Table 3). In addition, the 3Dcb registrations for the fusion of static captures from low-cost camera pairs and dynamic captures from a Photoneo MotionCam-3D camera pair were achieved with comparable errors.
We provide a complete pipeline for the registration of any 3D scanning methodology, including the CAD model of the 3Dcb and publicly available code (see Supplementary Materials), to facilitate further research on this topic. We believe that the integration of our extrinsic 3D calibration pipeline will facilitate the use of setups involving multiple adjustable 3D scanning systems and thereby promote their dissemination in clinical research and practical applications.

Supplementary Materials

The MATLAB code presented in this study is openly available on GitHub at https://github.com/mkaisereth/Extrinsic3DCalibration, (accessed on 28 February 2024).

Author Contributions

M.K. developed the theory, performed the implementations and computations, and wrote the manuscript; T.B. supervised the findings of this work and reviewed and edited the manuscript; M.B. contributed to the provision of the various 3D systems, ethics application, and data capture and commented on the manuscript; M.W. contributed to the provision of the various 3D systems and commented on the manuscript; S.Ć. contributed to data procurement and ethics application and commented on the manuscript; G.M. contributed to data procurement and commented on the manuscript; V.M.K. contributed to supervision and financing and provided critical feedback on the manuscript. All authors discussed the results and contributed to the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Innosuisse, grant number 47195.1 IP-LS.

Institutional Review Board Statement

The studies were conducted in accordance with the Declaration of Helsinki. The study at the Spiraldynamik MedCenter Zurich was approved by the Ethics Committee of ETH Zurich (EK 2022-N-179, 11 October 2022), and the study at the Balgrist University Hospital was approved by the Cantonal Ethics Committee in Zurich (2022-01672, 22 November 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in ETH Research Collection at https://doi.org/10.3929/ethz-b-000640083.

Acknowledgments

The authors would like to thank Christian Larsen for his outstanding support during the study at Spiraldynamik Med Center Zurich. Furthermore, the authors would like to thank Christoph Laux and Sabrina Catanzaro for their support during the study at Balgrist University Hospital.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Literature with extrinsic calibration (see Table A2 and Table A3 for an extended overview).
Author & Reference | Sensor Type | Checkerboard | Multicapture or Single Capture | Texture Required | Code Available
J. Beltran [15] | LiDAR–camera (stereo, mono) | Calibration target with four round holes and ArUco markers | Multicapture | No | Velo2cam
G. Yan [16] | LiDAR–camera (mono) | Calibration target with four round holes and checkerboard pattern | Multicapture | Yes | OpenCalib
J. Domhof [17] | Radar–camera (stereo)–LiDAR | Calibration target with four round holes | Multicapture | No | Multi_sensor_calibration
J. Zhang [18] | LiDAR–camera–thermal | 2D checkerboards and 3D checkerboard | Multicapture | Yes | No
J. Rangel [19] | Thermal–RGB-D camera | 3D checkerboard | Multicapture | Yes | No
K. Skala [20] | Thermal–RGB-D camera | 3D checkerboard | Multicapture | Yes | No
Proposed | Depth cameras (Structured light–active stereo–ToF) | 3D checkerboard | Single capture | No | Extrinsic3DCalibration
Table A2. Extended overview of literature with extrinsic calibration with publicly available code. Columns are author and reference, sensor type, name of toolbox and operating system–platform.
Author and Reference | Sensor Type | Toolbox Name | Operating System–Platform
C. Guindel [28] | LiDAR–stereo | Velo2cam | ROS
J. Beltran [15] | LiDAR–camera (stereo, mono) | Velo2cam | ROS
R. Unnikrishnan [29] | Camera–LiDAR | LCCT | MATLAB
A. Geiger [30] | LiDAR–ToF–camera (stereo) | LIBCBDETECT | MATLAB
G. Yan [31] | LiDAR–camera (mono) | OpenCalib | C++
G. Yan [16] | LiDAR–camera (mono) | OpenCalib | C++
J. Domhof [17] | Radar–camera (stereo)–LiDAR | Multi_sensor_calibration | ROS
J. K. Huang [32] | LiDAR–camera (mono) | extrinsic_lidar_camera_calibration | MATLAB
A. Dhall [33] | LiDAR–camera (mono, stereo) | lidar_camera_calibration | ROS
M. Velas [34] | LiDAR–RGB camera (mono) | but_calibration_camera_velodyne | ROS
L. Yin [35] | LiDAR–camera (mono) | multimodal_data_studio | MATLAB
P. C. Su [36] | RGB-D cameras | RGBD_CameraNetwork_Calibration | C++
Proposed (Kaiser et al.) | Depth cameras (Structured light–active stereo–ToF) | Extrinsic3DCalibration | MATLAB
Table A3. Extended overview of literature with extrinsic calibration with author and reference, sensor type, checkerboard type or scene, features used for correspondence, whether the approach requires multiple captures or a single capture, publication type, whether the approach requires texture and whether the approach is specific for autonomous driving. Legend: Multicapture (MC), single capture (SC), multitarget (MT), journal (J), conference (C), online (O), no explicit information (X), and not applicable (n.a.).
Author and Reference | Sensor Type | Checkerboard or Scene | Features | Multicapture or Single Capture | Publication Type | Texture Required | Autonomous Driving
M. Lindner [37] | PMD ToF–RGB camera (mono) | 2D checkerboard | Plane | MC | C | Yes | X
S. Fuchs [38] | ToF | 2D checkerboard | Plane (dark–bright) | MC | C | Yes | X
J. Zhu [39] | ToF–passive stereo | 2D checkerboard | Plane (dark–bright) | MC | C | Yes | X
H. Zou [40] | Laser profilers | Spheres | Spheres | MC | J | No | No
J. Schmidt [41] | ToF | Scene | Point correspondence | MC | C | Intensity | X
H. Lee [42] | LiDAR | Planar objects from scene | Plane correspondence | MC | J | No | Yes
S. Chen [43] | LiDAR | Spheres | Sphere center and corresponding points | MC | J | No | Yes
C. Guindel [28] | LiDAR–stereo | Calibration target with four round holes | Plane, circles, and point correspondence | MC | C | No | Yes
J. Beltran [15] | LiDAR–camera (stereo–mono) | Calibration target with four round holes and ArUco markers | Plane, circles, point correspondence, and ArUco markers | MC | J | No–Yes | Yes
Y. M. Kim [44] | ToF–camera (stereo) | 2D checkerboard | Corners | MC | C | Yes | No
D. Scaramuzza [45] | LiDAR–camera (mono) | Scene | Natural features | MC | C | Yes | Yes
R. Unnikrishnan [29] | Camera–LiDAR | 2D checkerboard | Corners and plane | MC | O | Yes | Yes
Q. Zhang [46] | Camera–LiDAR | 2D checkerboard | Plane | MC | C | Yes | Yes
A. Geiger [30] | LiDAR–ToF–camera (stereo) | Multiple 2D checkerboards | Corners and planes | SC (Multitarget) | C | Yes | X
L. Zhou [47] | Stereo–LiDAR | 2D checkerboard | Plane and line correspondences | MC | C | Yes | Yes
Q. Wang [48] | LiDAR–camera | 3× 2D checkerboard | Planes and corners | MC | J | Yes | Yes
S. Verma [49] | Camera (mono)–LiDAR | 2D checkerboard | Planes and corners | MC | C | Yes | Yes
J. Ou [50] | LiDAR–camera (mono, stereo) | 2D checkerboard | Corners, intensity, and plane | MC | J | Yes | Yes
X. Gong [51] | LiDAR–camera | Trihedron | Planes | MC | J | Yes | Yes
G. Yan [31] | LiDAR–camera (mono) | Calibration target with four round holes and checkerboard pattern | Circles and corners | MC | O | Yes | Yes
Y. An [52] | LiDAR–camera (mono) | 2D checkerboard | Plane and corners | MC | J | Yes | Yes
S. A. Rodriguez F. [53] | LiDAR–camera (mono) | Circle | Circle | MC | C | Yes | Yes
G. Yan [16] | LiDAR–camera (mono) | Calibration target with four round holes and checkerboard pattern | Circles and corners | MC | J | Yes | Yes
J. Zhang [18] | LiDAR–camera–thermal | 2D checkerboards and 3D checkerboard | Corners and circles | MC | C | Yes | Yes
J. Domhof [17] | Radar–camera (stereo)–LiDAR | Calibration target with four round holes | Circles | MC | C | No | Yes
E. S. Kim [54] | LiDAR–camera (mono) | 2D checkerboard | Corners and plane | MC | J | Yes | Yes
Y. Wang [55] | LiDAR–camera (mono) | Review | n.a. | n.a. | C | n.a. | Yes
A. Khurana [56] | LiDAR–camera (mono, stereo) | Review | n.a. | n.a. | J | n.a. | Yes
J. Nie [57] | LiDAR–camera (mono) | Review | n.a. | n.a. | C | n.a. | Yes
D. J. Yeong [58] | LiDAR–camera (mono, stereo) | Review | n.a. | n.a. | J | n.a. | Yes
P. An [59] | LiDAR–camera (mono) | 2D checkerboards | Corners and plane | Multitarget | J | Yes | Yes
J. Persic [60] | LiDAR–Radar | Triangular trihedral corner reflector | Triangle and plane | MC | C | No | Yes
J. K. Huang [32] | LiDAR–camera (mono) | Planar square targets | Plane and corners | MC | J | Yes | Yes
A. Dhall [33] | LiDAR–camera (mono, stereo) | Planar boards with ArUco tags | Corners and edges | MC | O | Yes | Yes
M. Velas [34] | LiDAR–RGB camera (mono) | Calibration target with four round holes | Circles and edges | SC | C | Yes | Yes
L. Yin [35] | LiDAR–camera (mono) | 2D checkerboard | Corners and plane | MC | J | Yes | Yes
H. Liu [61] | RGB-D cameras | Spheres | Sphere center | MC | J | Yes | X
A. Perez-Yus [62] | RGB camera–depth camera | Line observations | Lines | MC | J | Yes | Yes
C. Daniel Herrera [63] | RGB camera–depth camera | 2D checkerboard | Corners and plane | MC | J | Yes | No
J. Chaochuan [64] | RGB-D cameras | Calibration tower | Circles | MC | J | Yes | No
Y. C. Kwon [65] | RGB-D cameras | Circles and spheres | Circles | MC | J | Yes | No
Z. Wu [66] | RGB camera–depth camera | 3D checkerboard | Corners | MC | C | Yes | No
R. Avetisyan [67] | RGB-D cameras | 2D checkerboard and markers | Corners and markers | MC | C | Yes | No
R. S. Pahwa [68] | PMD depth camera (ToF) | 2D checkerboard | Corners and plane | MC | C | Yes | No
D. S. Ly [69] | Mono cameras | Scene | Lines | MC | J | Yes | No
W. Li [70] | 3D scanner–optical tracker | 3D benchmark | Point set (ICP) | MC | J | No | No
M. Ruan [71] | Depth cameras | Spherical target | Sphere center | MC | C | No | No
N. Eichler [72] | Depth cameras | Human motion | Skeletal joints | MC | J | No | No
B. S. Park [73] | RGB-D cameras | 3D Charuco board | QR code and feature points | MC | J | Yes | No
J. Guan [74] | Mono cameras | Spheres | Sphere center | MC | J | Yes | No
P. C. Su [36] | RGB-D cameras | Spheres | Sphere center | MC | J | Yes | No
J. Rangel [19] | Thermal–RGB-D camera | 3D checkerboard | Circular holes | MC | C | Yes | No
K. Skala [20] | Thermal–RGB-D camera | 3D checkerboard | Rectangular holes | MC | J | Yes | No
Proposed (Kaiser et al.) | Depth cameras (Structured light–active stereo–ToF) | 3D checkerboard | Rectangular holes and plane | SC | J | No | No

References

  1. Bassani, T.; Stucovitz, E.; Galbusera, F.; Brayda-Bruno, M. Is rasterstereography a valid noninvasive method for the screening of juvenile and adolescent idiopathic scoliosis? Eur. Spine J. 2019, 28, 526–535. [Google Scholar] [CrossRef] [PubMed]
  2. Marin, L.; Lovecchio, N.; Pedrotti, L.; Manzoni, F.; Febbi, M.; Albanese, I.; Patanè, P.; Pellino, V.C.; Vandoni, M. Acute Effects of Self-Correction on Spine Deviation and Balance in Adolescent Girls with Idiopathic Scoliosis. Sensors 2022, 22, 1883. [Google Scholar] [CrossRef] [PubMed]
  3. Paśko, S.; Glinkowski, W. Combining 3D Structured Light Imaging and Spine X-ray Data Improves Visualization of the Spinous Lines in the Scoliotic Spine. Appl. Sci. 2021, 11, 301. [Google Scholar] [CrossRef]
  4. Ledwoń, D.; Danch-Wierzchowska, M.; Bugdol, M.; Bibrowicz, K.; Szurmik, T.; Myśliwiec, A.; Mitas, A.W. Real-Time Back Surface Landmark Determination Using a Time-of-Flight Camera. Sensors 2021, 21, 6425. [Google Scholar] [CrossRef] [PubMed]
  5. Albert, J.A.; Owolabi, V.; Gebel, A.; Brahms, C.M.; Granacher, U.; Arnrich, B. Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study. Sensors 2020, 20, 5104. [Google Scholar] [CrossRef] [PubMed]
  6. Xu, Z.; Zhang, Y.; Fu, C.; Liu, L.; Chen, C.; Xu, W.; Guo, S. Back Shape Measurement and Three-Dimensional Reconstruction of Spinal Shape Using One Kinect Sensor. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 745–749. [Google Scholar] [CrossRef]
  7. Rehouma, H.; Noumeir, R.; Essouri, S.; Jouvet, P. Advancements in Methods and Camera-Based Sensors for the Quantification of Respiration. Sensors 2020, 20, 7252. [Google Scholar] [CrossRef]
  8. Kokabu, T.; Kawakami, N.; Uno, K.; Kotani, T.; Suzuki, T.; Abe, Y.; Maeda, K.; Inage, F.; Ito, Y.M.; Iwasaki, N.; et al. Three-dimensional depth sensor imaging to identify adolescent idiopathic scoliosis: A prospective multicenter cohort study. Sci. Rep. 2019, 9, 9678. [Google Scholar] [CrossRef]
  9. Nam, K.W.; Park, J.; Kim, I.Y.; Kim, K.G. Application of Stereo-Imaging Technology to Medical Field. Healthc. Informatics Res. 2012, 18, 158–163. [Google Scholar] [CrossRef]
  10. Zhang, W.; Sun, X.; Yu, Q. Moving Object Detection under a Moving Camera via Background Orientation Reconstruction. Sensors 2020, 20, 3103. [Google Scholar] [CrossRef]
  11. Chapel, M.-N.; Bouwmans, T. Moving objects detection with a moving camera: A comprehensive review. Comput. Sci. Rev. 2020, 38, 100310. [Google Scholar] [CrossRef]
  12. Liu, T.; Liu, Y. Moving Camera-Based Object Tracking Using Adaptive Ground Plane Estimation and Constrained Multiple Kernels. J. Adv. Transp. 2021, 2021, 8153474. [Google Scholar] [CrossRef]
  13. Jung, S.; Cho, Y.; Lee, K.; Chang, M. Moving Object Detection with Single Moving Camera and IMU Sensor using Mask R-CNN Instance Image Segmentation. Int. J. Precis. Eng. Manuf. 2021, 22, 1049–1059. [Google Scholar] [CrossRef]
  14. Jung, S.; Cho, Y.; Kim, D.; Chang, M. Moving Object Detection from Moving Camera Image Sequences Using an Inertial Measurement Unit Sensor. Appl. Sci. 2020, 10, 268. [Google Scholar] [CrossRef]
  15. Beltran, J.; Guindel, C.; de la Escalera, A.; Garcia, F. Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups. IEEE Trans. Intell. Transp. Syst. 2022, 23, 17677–17689. [Google Scholar] [CrossRef]
  16. Yan, G.; Liu, Z.; Wang, C.; Shi, C.; Wei, P.; Cai, X.; Ma, T.; Liu, Z.; Zhong, Z.; Liu, Y.; et al. OpenCalib: A multi-sensor calibration toolbox for autonomous driving. Softw. Impacts 2022, 14, 100393. [Google Scholar] [CrossRef]
  17. Domhof, J.; Kooij, J.F.P.; Gavrila, D.M. A Joint Extrinsic Calibration Tool for Radar, Camera and Lidar. IEEE Trans. Intell. Veh. 2021, 6, 571–582. [Google Scholar] [CrossRef]
  18. Zhang, J.; Siritanawan, P.; Yue, Y.; Yang, C.; Wen, M.; Wang, D. A Two-step Method for Extrinsic Calibration between a Sparse 3D LiDAR and a Thermal Camera. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision, ICARCV 2018, Singapore, 18–21 November 2018; pp. 1039–1044. [Google Scholar] [CrossRef]
  19. Rangel, J.; Soldan, S.; Kroll, A. 3D Thermal Imaging: Fusion of Thermography and Depth Cameras. e-J. Nondestruct. Test. 2015, 20. Available online: https://www.ndt.net/search/docs.php3?id=17665 (accessed on 16 September 2023).
  20. Skala, K.; Lipić, T.; Sovic, I.; Gjenero, L.; Grubisic, I. 4D thermal imaging system for medical applications. Period. Biol. 2011, 113, 407–416. [Google Scholar]
  21. 3D Camera|MotionCam|Photoneo Focused on 3D. Available online: https://www.photoneo.com/motioncam-3d (accessed on 2 September 2023).
  22. TIDA-00254 Reference Design|TI.com. Available online: https://www.ti.com/tool/TIDA-00254 (accessed on 2 September 2023).
  23. BoofCV. Available online: https://boofcv.org/index.php?title=Main_Page (accessed on 2 September 2023).
  24. Marshall, G.F.; Stutz, G.E. Handbook of Optical and Laser Scanning; Taylor & Francis: Abingdon, UK, 2018. [Google Scholar]
  25. DLPLCR4500EVM Evaluation Board|TI.com. Available online: https://www.ti.com/tool/DLPLCR4500EVM (accessed on 2 September 2023).
  26. VGA Industriekamera USB 3.0 Vision MV-CA023-10UM. Available online: https://www.maxxvision.com/produkte/kameras/usb3-vision-kameras/220/mv-ca023-10um (accessed on 2 September 2023).
  27. Kaiser, M.; Brusa, T.; Wyss, M.; Ćuković, S.; Bertsch, M.; Taylor, W.R.; Koch, V.M. Minimal Required Resolution to Capture the 3D Shape of the Human Back—A Practical Approach. Sensors 2023, 23, 7808. [Google Scholar] [CrossRef]
  28. Guindel, C.; Beltran, J.; Martin, D.; Garcia, F. Automatic extrinsic calibration for lidar-stereo vehicle sensor setups. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, ITSC, Maui, HI, USA, 4–7 November 2018; pp. 1–6. [Google Scholar] [CrossRef]
  29. Unnikrishnan, R. Fast Extrinsic Calibration of a Laser Rangefinder to a Camera. 2005. Available online: www.cs.cmu.edu/ (accessed on 10 August 2023).
  30. Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943. [Google Scholar] [CrossRef]
  31. Yan, G.; He, F.; Shi, C.; Wei, P.; Cai, X.; Li, Y. Joint Camera Intrinsic and LiDAR-Camera Extrinsic Calibration. February 2022. Available online: https://arxiv.org/abs/2202.13708v3 (accessed on 10 August 2023).
  32. Huang, J.K.; Grizzle, J.W. Improvements to Target-Based 3D LiDAR to Camera Calibration. IEEE Access 2020, 8, 134101–134110. [Google Scholar] [CrossRef]
  33. Dhall, A.; Chelani, K.; Radhakrishnan, V.; Krishna, K.M. LiDAR-Camera Calibration Using 3D-3D Point Correspondences. May 2017. Available online: https://arxiv.org/abs/1705.09785v1 (accessed on 10 August 2023).
  34. Velas, M.; Spanel, M.; Materna, Z.; Herout, A. Calibration of RGB Camera With Velodyne LiDAR. Available online: http://hdl.handle.net/11025/26408 (accessed on 28 February 2024).
  35. Yin, L.; Luo, B.; Wang, W.; Yu, H.; Wang, C.; Li, C. CoMask: Corresponding Mask-Based End-to-End Extrinsic Calibration of the Camera and LiDAR. Remote Sens. 2020, 12, 1925. [Google Scholar] [CrossRef]
  36. Su, P.-C.; Shen, J.; Xu, W.; Cheung, S.-C.S.; Luo, Y. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks. Sensors 2018, 18, 235. [Google Scholar] [CrossRef]
  37. Lindner, M.; Kolb, A. Calibration of the intensity-related distance error of the PMD TOF-camera. In Proceedings of the Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision, Boston, MA, USA, 9–11 September 2007; Volume 6764, p. 67640W. [Google Scholar] [CrossRef]
  38. Fuchs, S.; Hirzinger, G. Extrinsic and depth calibration of ToF-cameras. In Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Anchorage, AK, USA, 24–26 June 2008. [Google Scholar] [CrossRef]
  39. Zhu, J.; Wang, L.; Yang, R.; Davis, J. Fusion of time-of-flight depth and stereo for high accuracy depth maps. In Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Anchorage, AK, USA, 24–26 June 2008. [Google Scholar] [CrossRef]
  40. Zou, H.; Xia, R.; Zhao, J.; Zhang, T.; Zhang, T.; Chen, Y.; Fu, S. Extrinsic calibration method for 3D scanning system with four coplanar laser profilers. Meas. Sci. Technol. 2022, 34, 015906. [Google Scholar] [CrossRef]
  41. Schmidt, J.; Brückner, M.; Denzler, J. Extrinsic self-calibration of time-of-flight cameras using a combination of 3D and intensity descriptors. In Proceedings of the VMV 2011—Vision, Modeling and Visualization, Berlin, Germany, 4–6 October 2011; pp. 269–276. [Google Scholar] [CrossRef]
  42. Lee, H.; Chung, W. Extrinsic Calibration of Multiple 3D LiDAR Sensors by the Use of Planar Objects. Sensors 2022, 22, 7234. [Google Scholar] [CrossRef] [PubMed]
  43. Chen, S.; Liu, J.; Wu, T.; Huang, W.; Liu, K.; Yin, D.; Liang, X.; Hyyppä, J.; Chen, R. Extrinsic Calibration of 2D Laser Rangefinders Based on a Mobile Sphere. Remote. Sens. 2018, 10, 1176. [Google Scholar] [CrossRef]
  44. Kim, Y.M.; Chan, D.; Theobalt, C.; Thrun, S. Design and calibration of a multi-view TOF sensor fusion system. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops, Anchorage, AK, USA, 24–26 June 2008. [Google Scholar] [CrossRef]
  45. Scaramuzza, D.; Harati, A.; Siegwart, R. Extrinsic self calibration of a camera and a 3D laser range finder from natural scenes. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 4164–4169. [Google Scholar] [CrossRef]
  46. Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306. [Google Scholar] [CrossRef]
  47. Zhou, L.; Li, Z.; Kaess, M. Automatic Extrinsic Calibration of a Camera and a 3D LiDAR Using Line and Plane Correspondences. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 5562–5569. [Google Scholar] [CrossRef]
  48. Wang, Q.; Yan, C.; Tan, R.; Feng, Y.; Sun, Y.; Liu, Y. 3d-Cali: Automatic Calibration for Camera and Lidar Using 3d Checkerboard. Measurement 2022, 203, 111971. [Google Scholar] [CrossRef]
  49. Verma, S.; Berrio, J.S.; Worrall, S.; Nebot, E. Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, Auckland, New Zealand, 27–30 October 2019; pp. 3906–3912. [Google Scholar] [CrossRef]
  50. Ou, J.; Huang, P.; Zhou, J.; Zhao, Y.; Lin, L. Automatic Extrinsic Calibration of 3D LIDAR and Multi-Cameras Based on Graph Optimization. Sensors 2022, 22, 2221. [Google Scholar] [CrossRef]
  51. Gong, X.; Lin, Y.; Liu, J. 3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron. Sensors 2013, 13, 1902–1918. [Google Scholar] [CrossRef]
  52. An, Y.; Li, B.; Wang, L.; Zhang, C.; Zhou, X. Calibration of a 3D laser rangefinder and a camera based on optimization solution. J. Ind. Manag. Optim. 2021, 17, 427–445. [Google Scholar] [CrossRef]
  53. Rodriguez, F.S.A.; Frémont, V.; Bonnifait, P. Extrinsic calibration between a multi-layer lidar and a camera. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Republic of Korea, 20–22 August 2008; pp. 214–219. [Google Scholar] [CrossRef]
  54. Kim, E.-S.; Park, S.-Y. Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes. Sensors 2019, 20, 52. [Google Scholar] [CrossRef]
  55. Wang, Y.; Li, J.; Sun, Y.; Shi, M. A Survey of Extrinsic Calibration of LiDAR and Camera. Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021), Changsha, China, 24–26 September 2021; Lecture Notes in Electrical Engineering. Springer: Singapore, 2022; Volume 861, pp. 933–944. [Google Scholar]
  56. Khurana, A.; Nagla, K.S. Extrinsic Calibration Methods for Laser Range Finder and Camera: A Systematic Review. Mapan-J. Metrol. Soc. India 2021, 36, 669–690. [Google Scholar] [CrossRef]
  57. Nie, J.; Pan, F.; Xue, D.; Luo, L. A Survey of Extrinsic Parameters Calibration Techniques for Autonomous Devices. In Proceedings of the 33rd Chinese Control and Decision Conference, CCDC 2021, Kunming, China, 22–24 May 2021; pp. 3543–3548. [Google Scholar] [CrossRef]
  58. Yeong, D.J.; Velasco-hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef] [PubMed]
  59. An, P.; Ma, T.; Yu, K.; Fang, B.; Zhang, J.; Fu, W.; Ma, J. Geometric calibration for LiDAR-camera system fusing 3D-2D and 3D-3D point correspondences. Opt. Express 2020, 28, 2122–2141. [Google Scholar] [CrossRef] [PubMed]
  60. Persic, J.; Markovic, I.; Petrovic, I. Extrinsic 6DoF calibration of 3D LiDAR and radar. In Proceedings of the 2017 European Conference on Mobile Robots, ECMR 2017, Paris, France, 6–8 September 2017. [Google Scholar] [CrossRef]
  61. Liu, H.; Qu, D.; Xu, F.; Zou, F.; Song, J.; Jia, K. Approach for accurate calibration of RGB-D cameras using spheres. Opt. Express 2020, 28, 19058–19073. [Google Scholar] [CrossRef] [PubMed]
  62. Perez-Yus, A.; Fernandez-Moral, E.; Lopez-Nicolas, G.; Guerrero, J.J.; Rives, P. Extrinsic Calibration of Multiple RGB-D Cameras From Line Observations. IEEE Robot. Autom. Lett. 2018, 3, 273–280. [Google Scholar] [CrossRef]
  63. Herrera, C.D.; Kannala, J.; Heikkilä, J. Joint depth and color camera calibration with distortion correction. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2058–2064. [Google Scholar] [CrossRef] [PubMed]
  64. Chaochuan, J.; Ting, Y.; Chuanjiang, W.; Binghui, F.; Fugui, H. An extrinsic calibration method for multiple RGB-D cameras in a limited field of view. Meas. Sci. Technol. 2020, 31, 045901. [Google Scholar] [CrossRef]
  65. Kwon, Y.C.; Jang, J.W.; Hwang, Y.; Choi, O. Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras. Sensors 2019, 19, 1539. [Google Scholar] [CrossRef]
  66. Wu, Z.; Zhu, W.; Zhu, Q. Semi-Transparent Checkerboard Calibration Method for KINECTrs Color and Depth Camera. In Proceedings of the 2018 International Conference on Network, Communication, Computer Engineering (NCCE 2018), Chongqing, China, 26–27 May 2018; pp. 141–148. [Google Scholar] [CrossRef]
  67. Avetisyan, R.; Willert, M.; Ohl, S.; Staadt, O. Calibration of Depth Camera Arrays. 2014. Available online: https://ep.liu.se/ecp/106/006/ecp14106006.pdf (accessed on 28 February 2024).
  68. Pahwa, R.S.; Do, M.N.; Ng, T.T.; Hua, B.S. Calibration of depth cameras using denoised depth images. In Proceedings of the 2014 IEEE International Conference on Image Processing, ICIP 2014, Paris, France, 27–30 October 2014; pp. 3459–3463. [Google Scholar] [CrossRef]
  69. Ly, D.S.; Demonceaux, C.; Vasseur, P.; Pégard, C. Extrinsic calibration of heterogeneous cameras by line images. Mach. Vis. Appl. 2014, 25, 1601–1614. [Google Scholar] [CrossRef]
  70. Li, W.; Fan, J.; Li, S.; Tian, Z.; Zheng, Z.; Ai, D.; Song, H.; Yang, J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front. Neurorobotics 2021, 15, 636772. [Google Scholar] [CrossRef]
  71. Ruan, M.; Huber, D. Extrinsic Calibration of 3D Sensors Using a Spherical Target. In Proceedings of the 2014 2nd International Conference on 3D Vision, Tokyo, Japan, 8–11 December 2014. [Google Scholar]
  72. Eichler, N.; Hel-Or, H.; Shimshoni, I. Spatio-Temporal Calibration of Multiple Kinect Cameras Using 3D Human Pose. Sensors 2022, 22, 8900. [Google Scholar] [CrossRef] [PubMed]
  73. Park, B.S.; Kim, W.; Kim, J.K.; Kim, D.W.; Seo, Y.H. Iterative extrinsic calibration using virtual viewpoint for 3D reconstruction. Signal Process. 2022, 197, 108535. [Google Scholar] [CrossRef]
  74. Guan, J.; Deboeverie, F.; Slembrouck, M.; Van Haerenborgh, D.; Van Cauwelaert, D.; Veelaert, P.; Philips, W. Extrinsic Calibration of Camera Networks Using a Sphere. Sensors 2015, 15, 18985–19005. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Modular quality validation platform (DMQV) for 3D scanning with (A) Photoneo MotionCam-3D M+ (MotionCam 1), (B) DLP LightCrafter 4500, (C) 2D HIKROBOT cameras, (D) Orbbec Astra Mini and Intel D415, and (E) InfiRay Micro III thermal camera.
Figure 2. (Left): Extended DMQV platform with (A) DMQV platform, (B) additional MotionCam-3D M+ from above (MotionCam 2), (C) 3D checkerboard in 45°, and (D) laptop with software. (Right): 3D checkerboard with (A) a plane with holes in the front and (B) a solid plane behind.
Figure 3. Flowchart for calculating the rigid transformation between two 3D scanning systems from a pair of 3D checkerboard captures.
Figure 4. (Left): Binary 2D projection (blue points) of a 3Dcb capture with sorted (colored arrows) hole medians (red points). (Right): Registered 3D checkerboards from two Photoneo MotionCam-3Ds (green and red) after applying the estimated geometric transformation.
Figure 5. Use Case 1: (a) Example of registration of left (red) and right (green) Orbbec Astra Mini camera pair. (b) Example of the same registration (colored, blue) with overlapping region (black border) of left (red) and right (green) camera pair. Use Case 2: (c) Example of registration of TIDA-00254 (colored) to its reference from Photoneo MotionCam-3D (textured). Use Case 3: (d) Example of registration of the human back surface captured from above (green) and behind (red) with the Photoneo MotionCam-3D.
Figure 6. RMSE and SD for overlapping region at a distance of 0.9 m after registration for left and right camera pair for standing upright (Use Case 1). (Left image): Orbbec Astra Mini; (Right image): Intel D415.
Figure 7. RMSE (left image) and SD (right image) for the overlapping region at distances between 0.9 m and 1.1 m after registration against its reference capture from MotionCam 1 for standing upright (Use Case 2); for MotionCam 1 (MC 1), TIDA-00254, BoofCV, Orbbec Astra Mini, and Intel D415.
Figure 8. RMSE and SD for overlapping regions at a distance of 1.1 m after registration of above (MotionCam 2) and behind (MotionCam 1) camera pair for dynamic forward bending (Use Case 3). (Left image): without ICP; (Right image): with ICP.
Table 1. Use cases evaluated in this paper.
Use Case | Posture–Movement | Systems
1: Register captures from left and right camera pairs | Static standing upright | 2× Orbbec Astra, 2× Intel D415
2: Register captures to its reference capture | Static standing upright | TIDA-00254, BoofCV, 2× Orbbec Astra, 2× Intel D415, Photoneo MotionCam-3D
3: Register captures from above and behind | Dynamic forward bending | 2× Photoneo MotionCam-3D
Table 2. Median RMSE (median SD) for the overlapping region at distances between 0.9 m and 1.1 m for all systems against its reference.
MotionCam 1 | TIDA-00254 | BoofCV | 2× Orbbec Astra Mini | 2× Intel D415
0 mm (0.2 mm) | 0.02 mm (2.9 mm) | 0.1 mm (2.1 mm) | 1.5 mm (4.0 mm) | 1.7 mm (3.9 mm)
Table 3. Systems evaluated in this paper with methodology (structured light—SL; active stereo—AS) used, and resolution and accuracy stated by the manufacturer.
System | Methodology | Resolution | Accuracy
Photoneo MotionCam-3D M+ | SL | 1680 × 1200 and 1120 × 800 | error <0.3 mm at 0.9 m
TIDA-00254 | SL | 912 × 1140 and 1920 × 1200 | error ~1 mm at 1 m
BoofCV | AS | 912 × 1140 and 1920 × 1200 | error ~1 mm at 1 m
Intel D415 | AS | 1280 × 720 | error <2% up to 2 m
Orbbec Astra Mini | SL | 640 × 480 | error <3 mm at 1 m
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

