Search Results (157)

Search Parameters:
Keywords = extrinsic calibration

16 pages, 4599 KiB  
Article
Investigation of the Effect of Gate Oxide Screening with Adjustment Pulse on Commercial SiC Power MOSFETs
by Michael Jin, Monikuntala Bhattacharya, Hengyu Yu, Jiashu Qian, Shiva Houshmand, Atsushi Shimbori, Marvin H. White and Anant K. Agarwal
Electronics 2025, 14(7), 1366; https://doi.org/10.3390/electronics14071366 - 28 Mar 2025
Abstract
This paper presents a method to recover the negative threshold voltage shift during high field gate oxide screening of 1.2 kV 4H-SiC MOSFETs with an additional adjustment gate voltage pulse. To reduce field failure rates of the MOSFETs in operation, manufacturers perform a screening treatment to remove devices with extrinsic defects in the oxide. Current gate oxide screening procedures are limited to oxide fields at or below ~9 MV/cm for short durations (<1 s), which is not enough to remove all the devices with extrinsic defects. The results show that by implementing a lower field gate pulse, the threshold voltage shift can be partially recovered, and therefore the maximum screening field and time can be increased. However, both the initial screening pulse and the adjustment pulse require careful calibration to prevent significant degradation of the device threshold voltage, on-resistance, interface state density, or intrinsic lifetime. With a well calibrated set of pulses, higher screening fields can be utilized without significantly damaging the devices. This leads to an improvement in the overall screening efficiency of the process, reducing the number of devices with extrinsic oxide defects entering the field, and improving the reliability of the SiC MOSFETs in operation. Full article
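The ~9 MV/cm screening limit above is an oxide field, related to the applied gate voltage by E = V/t_ox. A minimal sketch of that conversion; the 50 nm oxide thickness and voltages used here are illustrative, not values from the paper:

```python
def oxide_field_mv_per_cm(gate_voltage_v, t_ox_nm):
    """Oxide field E = V / t_ox, returned in MV/cm (1 nm = 1e-7 cm)."""
    return gate_voltage_v / (t_ox_nm * 1e-7) / 1e6

def exceeds_screening_limit(gate_voltage_v, t_ox_nm, limit_mv_per_cm=9.0):
    """True if the pulse would exceed the quoted ~9 MV/cm screening field."""
    return oxide_field_mv_per_cm(gate_voltage_v, t_ox_nm) > limit_mv_per_cm

# Example: 45 V across an (assumed) 50 nm oxide gives exactly 9 MV/cm.
field = oxide_field_mv_per_cm(45.0, 50.0)
```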

22 pages, 6484 KiB  
Article
A Perspective Distortion Correction Method for Planar Imaging Based on Homography Mapping
by Chen Wang, Yabin Ding, Kai Cui, Jianhui Li, Qingpo Xu and Jiangping Mei
Sensors 2025, 25(6), 1891; https://doi.org/10.3390/s25061891 - 18 Mar 2025
Abstract
In monocular vision measurement, a barrier to implementation is the perspective distortion caused by manufacturing errors in the imaging chip and non-parallelism between the measurement plane and its image, which seriously affects the accuracy of pixel equivalent and measurement results. This paper proposes a perspective distortion correction method for planar imaging based on homography mapping. Factors causing perspective distortion from the camera’s intrinsic and extrinsic parameters were analyzed, followed by constructing a perspective transformation model. Then, a corrected imaging plane was constructed, and the model was further calibrated by utilizing the homography between the measurement plane, the actual imaging plane, and the corrected imaging plane. The nonlinear and perspective distortions were simultaneously corrected by transforming the original image to the corrected imaging plane. The experiment measuring the radius, length, angle, and area of a designed pattern shows root mean square errors of 0.016 mm, 0.052 mm, 0.16°, and 0.68 mm², and standard deviations of 0.016 mm, 0.045 mm, 0.033°, and 0.65 mm², respectively. The proposed method can effectively solve the problem of high-precision planar measurement under perspective distortion. Full article
(This article belongs to the Section Optical Sensors)
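As a rough illustration of the homography mapping underlying the method above, the sketch below applies a 3x3 homography to a point in homogeneous coordinates. The matrices and function name are illustrative, not from the paper:

```python
def apply_homography(H, x, y):
    """Map (x, y) through 3x3 homography H using homogeneous coordinates."""
    u = H[0][0]*x + H[0][1]*y + H[0][2]
    v = H[1][0]*x + H[1][1]*y + H[1][2]
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return u / w, v / w

# Identity leaves points unchanged; a translation homography shifts them.
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H_shift = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]
```

In the paper's setting, H would be the calibrated mapping from the actual imaging plane to the corrected one, so distorted pixels are pushed through it to remove perspective distortion.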

19 pages, 30440 KiB  
Article
A Method for the Calibration of a LiDAR and Fisheye Camera System
by Álvaro Martínez, Antonio Santo, Monica Ballesta, Arturo Gil and Luis Payá
Appl. Sci. 2025, 15(4), 2044; https://doi.org/10.3390/app15042044 - 15 Feb 2025
Abstract
LiDAR and camera systems are frequently used together to gain a more complete understanding of the environment in different fields, such as mobile robotics, autonomous driving, or intelligent surveillance. Accurately calibrating the extrinsic parameters is crucial for the accurate fusion of the data captured by both systems, which is equivalent to finding the transformation between the reference systems of both sensors. Traditional calibration methods for LiDAR and camera systems are developed for pinhole cameras and are not directly applicable to fisheye cameras. This work proposes a target-based calibration method for LiDAR and fisheye camera systems that avoids the need to transform images to a pinhole camera model, reducing the computation time. Instead, the method uses the spherical projection of the image, obtained with the intrinsic calibration parameters, and the corresponding point cloud for LiDAR–fisheye calibration. Thus, unlike a pinhole-camera-based system, a wider field of view is provided, adding more information that leads to a better understanding of the environment itself and enables the use of fewer image sensors to cover a wider area. Full article
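A minimal sketch of the spherical (equirectangular) projection such a method relies on. The z-forward, x-right, y-down camera convention and the function name are assumptions, not details from the paper:

```python
import math

def spherical_project(x, y, z, width, height):
    """Project a 3D point to (u, v) on a width x height equirectangular image."""
    lon = math.atan2(x, z)                   # azimuth in (-pi, pi]
    lat = math.atan2(y, math.hypot(x, z))    # elevation in (-pi/2, pi/2)
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (lat / math.pi + 0.5) * height
    return u, v
```

A point straight ahead lands at the image center; points to the side slide along the horizontal axis, which is how the full fisheye field of view stays representable without a pinhole rectification.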

21 pages, 6413 KiB  
Article
Targetless Radar–Camera Extrinsic Parameter Calibration Using Track-to-Track Association
by Xinyu Liu, Zhenmiao Deng and Gui Zhang
Sensors 2025, 25(3), 949; https://doi.org/10.3390/s25030949 - 5 Feb 2025
Abstract
One of the challenges in calibrating millimeter-wave radar and camera lies in the sparse semantic information of the radar point cloud, making it hard to extract environment features corresponding to the images. To overcome this problem, we propose a track association algorithm for heterogeneous sensors, to achieve targetless calibration between the radar and camera. Our algorithm extracts corresponding points from millimeter-wave radar and image coordinate systems by considering the association of tracks from different sensors, without any explicit target or prior for the extrinsic parameter. Then, perspective-n-point (PnP) and nonlinear optimization algorithms are applied to obtain the extrinsic parameter. In an outdoor experiment, our algorithm achieved a track association accuracy of 96.43% and an average reprojection error of 2.6649 pixels. On the CARRADA dataset, our calibration method yielded a reprojection error of 3.1613 pixels, an average rotation error of 0.8141°, and an average translation error of 0.0754 m. Furthermore, robustness tests demonstrated the effectiveness of our calibration algorithm in the presence of noise. Full article
(This article belongs to the Section Remote Sensors)
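The reprojection error reported above can be illustrated with a minimal sketch: project a 3D point through intrinsics K (zero skew assumed) and extrinsics (R, t), then take the pixel distance to the observed point. All matrices and values here are invented for illustration:

```python
import math

def project(K, R, t, X):
    """Pinhole projection of 3D point X with extrinsics (R, t) and intrinsics K."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v

def reprojection_error(K, R, t, X, observed):
    """Pixel distance between the projection of X and the observed point."""
    u, v = project(K, R, t, X)
    return math.hypot(u - observed[0], v - observed[1])

# Illustrative values: f = 100 px, principal point (320, 240), identity pose.
K = [[100.0, 0.0, 320.0], [0.0, 100.0, 240.0], [0.0, 0.0, 1.0]]
R0 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t0 = [0.0, 0.0, 0.0]
```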

47 pages, 20555 KiB  
Article
Commissioning an All-Sky Infrared Camera Array for Detection of Airborne Objects
by Laura Domine, Ankit Biswas, Richard Cloete, Alex Delacroix, Andriy Fedorenko, Lucas Jacaruso, Ezra Kelderman, Eric Keto, Sarah Little, Abraham Loeb, Eric Masson, Mike Prior, Forrest Schultz, Matthew Szenher, Wesley Andrés Watters and Abigail White
Sensors 2025, 25(3), 783; https://doi.org/10.3390/s25030783 - 28 Jan 2025
Cited by 2
Abstract
To date, there is little publicly available scientific data on unidentified aerial phenomena (UAP) whose properties and kinematics purportedly reside outside the performance envelope of known phenomena. To address this deficiency, the Galileo Project is designing, building, and commissioning a multi-modal, multi-spectral ground-based observatory to continuously monitor the sky and collect data for UAP studies via a rigorous long-term aerial census of all aerial phenomena, including natural and human-made. One of the key instruments is an all-sky infrared camera array using eight uncooled long-wave-infrared FLIR Boson 640 cameras. In addition to performing intrinsic and thermal calibrations, we implement a novel extrinsic calibration method using airplane positions from Automatic Dependent Surveillance–Broadcast (ADS-B) data that we collect synchronously on site. Using a You Only Look Once (YOLO) machine learning model for object detection and the Simple Online and Realtime Tracking (SORT) algorithm for trajectory reconstruction, we establish a first baseline for the performance of the system over five months of field operation. Using an automatically generated real-world dataset derived from ADS-B data, a dataset of synthetic 3D trajectories, and a hand-labeled real-world dataset, we find an acceptance rate (fraction of in-range airplanes passing through the effective field of view of at least one camera that are recorded) of 41% for ADS-B-equipped aircraft, and a mean frame-by-frame aircraft detection efficiency (fraction of recorded airplanes in individual frames which are successfully detected) of 36%. The detection efficiency is heavily dependent on weather conditions, range, and aircraft size. Approximately 500,000 trajectories of various aerial objects are reconstructed from this five-month commissioning period. These trajectories are analyzed with a toy outlier search focused on the large sinuosity of apparent 2D reconstructed object trajectories. 
About 16% of the trajectories are flagged as outliers and manually examined in the IR images. Of these ∼80,000 outliers, 144 trajectories remain ambiguous; they are likely mundane objects but cannot be further elucidated at this stage of development without information about distance and kinematics or other sensor modalities. We demonstrate the application of a likelihood-based statistical test to evaluate the significance of this toy outlier analysis. Our observed count of ambiguous outliers combined with systematic uncertainties yields an upper limit of 18,271 outliers for the five-month interval at a 95% confidence level. This test is applicable to all of our future outlier searches. Full article
(This article belongs to the Section Sensors and Robotics)
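The sinuosity statistic used in the toy outlier search above is the ratio of a track's path length to the straight-line distance between its endpoints. A minimal sketch; the example tracks are illustrative:

```python
import math

def sinuosity(track):
    """Path length of a 2D track divided by its endpoint (chord) distance."""
    path = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    chord = math.dist(track[0], track[-1])
    return path / chord

straight = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # sinuosity exactly 1
zigzag = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]     # sinuosity sqrt(2)
```

An outlier search would flag tracks whose sinuosity exceeds some threshold; straight-flying aircraft stay near 1.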

20 pages, 6270 KiB  
Article
Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping
by Eun-Seok Park, Saba Arshad and Tae-Hyoung Park
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199 - 22 Dec 2024
Abstract
Handheld LiDAR scanners, which typically consist of a LiDAR sensor, Inertial Measurement Unit, and processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors’ extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The research contributions are twofold. Firstly, we present a robust plane detection method for LiDAR data. This plane detection method removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Secondly, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods. Full article
(This article belongs to the Section Sensors and Robotics)
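As a rough sketch of the plane-based residual such a detector thresholds: fit a plane through three LiDAR points and measure point-to-plane distance. A real detector uses many points with robust fitting; this three-point version is only illustrative:

```python
def plane_from_points(p1, p2, p3):
    """Plane through three 3D points as (n, d) with n·x + d = 0 (n unnormalized)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],          # cross product u x v
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def point_plane_distance(n, d, p):
    """Unsigned distance from point p to the plane (n, d)."""
    return abs(sum(n[i] * p[i] for i in range(3)) + d) / sum(c * c for c in n) ** 0.5
```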

23 pages, 17848 KiB  
Article
UAV-Based 3D-Calibration of Thermal Cameras for Bat Flight Monitoring in Large Outdoor Environments
by Christof Happ, Alexander Sutor and Klaus Hochradel
Remote Sens. 2024, 16(24), 4682; https://doi.org/10.3390/rs16244682 - 15 Dec 2024
Abstract
The calibration of 3D cameras is one of the key challenges to successfully measure the nightly 3D flight tracks of bats with thermal cameras. This is relevant around wind turbines to investigate the impact wind farms have on their species. Existing 3D-calibration methods solve the problem of unknown camera position and orientation by using a reference object of known coordinates. While these methods work well for small monitoring volumes, the size of the reference objects (e.g., checkerboard patterns) limits the distance between the two cameras and therefore leads to increased calibration errors when used in large outdoor environments. To address this limitation, we propose a calibration method for tracking flying animals with thermal cameras based on UAV GPS tracks. The tracks can be scaled to the required monitoring volume and accommodate large distances between cameras, which is essential for low-resolution thermal camera setups. We tested our method at two wind farms, conducting 19 manual calibration flights with a consumer UAV, distributing GPS points from 30 to 260 m from the camera system. Using two thermal cameras with a resolution of 640 × 480 pixels and an inter-axial distance of 15 m, we achieved median 3D errors between 0.9 and 3.8 m across different flights. Our method offers the advantage of directly providing GPS coordinates and requires only two UAV flights for cross-validation of the 3D errors. Full article
(This article belongs to the Section Ecological Remote Sensing)
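The median 3D errors reported above can be computed as below from reconstructed track points and their UAV GPS references; the coordinates are invented for illustration:

```python
import math
from statistics import median

def median_3d_error(estimated, reference):
    """Median Euclidean distance between paired 3D points."""
    return median(math.dist(e, r) for e, r in zip(estimated, reference))

# Illustrative reconstructed points vs. GPS ground truth (metres).
est = [(0.0, 0.0, 30.0), (10.0, 0.0, 60.0), (20.0, 1.0, 90.0)]
gps = [(0.0, 0.0, 31.0), (10.0, 2.0, 60.0), (20.0, 1.0, 93.0)]
```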

15 pages, 6968 KiB  
Article
Biomimetic Active Stereo Camera System with Variable FOV
by Yanmiao Zhou and Xin Wang
Biomimetics 2024, 9(12), 740; https://doi.org/10.3390/biomimetics9120740 - 4 Dec 2024
Abstract
Inspired by the biological eye movements of fish such as pipefish and sandlances, this paper presents a novel dynamic calibration method specifically for active stereo vision systems to address the challenges of active cameras with varying fields of view (FOVs). By integrating static calibration based on camera rotation angles with dynamic updates of extrinsic parameters, the method leverages relative pose adjustments between the rotation axis and cameras to update extrinsic parameters continuously in real-time. It facilitates epipolar rectification as the FOV changes, and enables precise disparity computation and accurate depth information acquisition. Based on the dynamic calibration method, we develop a two-DOF bionic active camera system including two cameras driven by motors to mimic the movement of biological eyes; this compact system has a large range of visual data. Experimental results show that the calibration method is effective, and achieves high accuracy in extrinsic parameter calculations during FOV adjustments. Full article
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot: 3rd Edition)
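A minimal sketch of the dynamic-extrinsics idea above: when a camera pans by a known motor angle, its rotation relative to the rig can be updated from that angle. The pan about the y-axis and the composition order are assumptions, not details from the paper:

```python
import math

def rot_y(theta):
    """3x3 rotation matrix for a pan of theta radians about the y-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul3(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Dynamic update (assumed order): R_new = R_pan(theta) @ R_static.
```

Two opposite pans cancel, so the updated extrinsic rotation returns to the static calibration, which is the consistency property a dynamic update needs.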

22 pages, 10421 KiB  
Article
Distributed High-Speed Videogrammetry for Real-Time 3D Displacement Monitoring of Large Structure on Shaking Table
by Haibo Shi, Peng Chen, Xianglei Liu, Zhonghua Hong, Zhen Ye, Yi Gao, Ziqi Liu and Xiaohua Tong
Remote Sens. 2024, 16(23), 4345; https://doi.org/10.3390/rs16234345 - 21 Nov 2024
Abstract
The accurate and timely acquisition of high-frequency three-dimensional (3D) displacement responses of large structures is crucial for evaluating their condition during seismic excitation on shaking tables. This paper presents a distributed high-speed videogrammetric method designed to rapidly measure the 3D displacement of large shaking table structures at high sampling frequencies. The method uses non-coded circular targets affixed to key points on the structure and an automatic correspondence approach to efficiently estimate the extrinsic parameters of multiple cameras with large fields of view. This process eliminates the need for large calibration boards or manual visual adjustments. A distributed computation and reconstruction strategy, employing the alternating direction method of multipliers, enables the global reconstruction of time-sequenced 3D coordinates for all points of interest across multiple devices simultaneously. The accuracy and efficiency of this method were validated through comparisons with total stations, contact sensors, and conventional approaches in shaking table tests involving large structures with RCBs. Additionally, the proposed method demonstrated a speed increase of at least six times compared to advanced commercial photogrammetric software. It can acquire 3D displacement responses of large structures at high sampling frequencies in real time without requiring a high-performance computing cluster. Full article
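A toy sketch of the consensus idea behind the ADMM strategy above: several devices hold local estimates of the same quantity and iterate toward a shared global value. The real method minimizes videogrammetric reprojection costs; here each local cost is simply (x_i − a_i)², so consensus should converge to the mean:

```python
def consensus_admm(local_targets, rho=1.0, iters=50):
    """Consensus ADMM for sum_i (x_i - a_i)^2 subject to x_i = z (toy problem)."""
    n = len(local_targets)
    u = [0.0] * n                       # scaled dual variables
    z = sum(local_targets) / n          # global consensus variable
    for _ in range(iters):
        # x-update: argmin (x - a)^2 + (rho/2)(x - z + u)^2, in closed form.
        x = [(2 * a + rho * (z - ui)) / (2 + rho)
             for a, ui in zip(local_targets, u)]
        # z-update: average of x_i + u_i; u-update: accumulate residuals.
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```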

20 pages, 6095 KiB  
Article
MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms
by Fengguang Xiong, Zhiqiang Zhang, Yu Kong, Chaofan Shen, Mingyue Hu, Liqun Kuang and Xie Han
Remote Sens. 2024, 16(22), 4233; https://doi.org/10.3390/rs16224233 - 14 Nov 2024
Abstract
Sensor data fusion is increasingly crucial in the field of autonomous driving. In sensor fusion research, LiDAR and camera have become prevalent topics. However, accurate data calibration from different modalities is essential for effective fusion. Current calibration methods often depend on specific targets or manual intervention, which are time-consuming and have limited generalization capabilities. To address these issues, we introduce MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms, an end-to-end deep-learning-based online calibration network for inferring 6-degree-of-freedom (DOF) rigid body transformations between 2D images and 3D point clouds. By fusing multi-scale features, we obtain feature representations that are rich in detail and semantic information. The attention module carries out feature correlation among the different modalities to complete feature matching. Rather than acquiring the precise parameters directly, MSANet corrects deviations online, aligning the initial calibration with the ground truth. We conducted extensive experiments on the KITTI datasets, demonstrating that our method performs well across various scenarios; in particular, the average translation prediction error improves on the best result among the comparison methods by 2.03 cm. Full article
(This article belongs to the Section Remote Sensing Image Processing)
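The online correction step above can be sketched as composing a predicted small rigid transform with the rough initial extrinsics; the 4x4 matrices, values, and composition order below are assumptions for illustration:

```python
def matmul4(A, B):
    """Product of two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous transform for a pure translation."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# corrected = correction @ initial (composition order is an assumption).
initial = translation(0.0, 0.0, 1.0)        # rough LiDAR-to-camera guess
correction = translation(0.02, 0.0, -0.01)  # predicted deviation (made up)
corrected = matmul4(correction, initial)
```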

21 pages, 14386 KiB  
Article
A High-Quality and Convenient Camera Calibration Method Using a Single Image
by Xufang Qin, Xiaohua Xia and Huatao Xiang
Electronics 2024, 13(22), 4361; https://doi.org/10.3390/electronics13224361 - 6 Nov 2024
Abstract
Existing camera calibration methods using a single image have exhibited some limitations. These limitations include relying on large datasets, using inconveniently prepared calibration objects instead of commonly used planar patterns such as checkerboards, and requiring further improvement in accuracy. To address these issues, a high-quality and convenient camera calibration method is proposed, which only requires a single image of the commonly used planar checkerboard pattern. In the proposed method, a nonlinear objective function is derived by leveraging the linear distribution characteristics exhibited among corners. An algorithm based on enumeration theory is designed to minimize this function, calibrating the first two radial distortion coefficients and the principal point. The focal length and extrinsic parameters are linearly calibrated from the constraints provided by the linear projection model and the unit orthogonality of the rotation matrix. Additionally, a guideline is explored through theoretical analysis and numerical simulation to ensure calibration quality. The quality of the proposed method is evaluated by both simulated and real experiments, demonstrating its comparability with the well-known multi-image-based method and its superiority over advanced single-image-based methods. Full article
(This article belongs to the Special Issue Robot-Vision-Based Control Systems)
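A minimal sketch of the two-coefficient radial distortion model (k1, k2) that the method above calibrates, applied to a normalized image point:

```python
def distort(xn, yn, k1, k2):
    """Apply two-coefficient radial distortion to normalized coords (xn, yn)."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * scale, yn * scale
```

Points at the principal point are unaffected (r² = 0), while points far from it are pushed outward or pulled inward depending on the signs of k1 and k2; calibration recovers those coefficients from the bending of the checkerboard's straight lines.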

20 pages, 11540 KiB  
Article
Autonomous Landing Strategy for Micro-UAV with Mirrored Field-of-View Expansion
by Xiaoqi Cheng, Xinfeng Liang, Xiaosong Li, Zhimin Liu and Haishu Tan
Sensors 2024, 24(21), 6889; https://doi.org/10.3390/s24216889 - 27 Oct 2024
Abstract
Positioning and autonomous landing are key technologies for implementing autonomous flight missions across various fields in unmanned aerial vehicle (UAV) systems. This research proposes a visual positioning method based on mirrored field-of-view expansion, providing a visual-based autonomous landing strategy for quadrotor micro-UAVs (MAVs). The forward-facing camera of the MAV obtains a top view through a view transformation lens while retaining the original forward view. Subsequently, the MAV camera captures the ground landing markers in real-time, and the pose of the MAV camera relative to the landing marker is obtained through a virtual-real image conversion technique and the R-PnP pose estimation algorithm. Then, using a camera-IMU external parameter calibration method, the pose transformation relationship between the UAV camera and the MAV body IMU is determined, thereby obtaining the position of the landing marker’s center point relative to the MAV’s body coordinate system. Finally, the ground station sends guidance commands to the UAV based on the position information to execute the autonomous landing task. The indoor and outdoor landing experiments with the DJI Tello MAV demonstrate that the proposed forward-facing camera mirrored field-of-view expansion method and landing marker detection and guidance algorithm successfully enable autonomous landing with an average accuracy of 0.06 m. The results show that this strategy meets the high-precision landing requirements of MAVs. Full article
(This article belongs to the Section Navigation and Positioning)
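The final geometric step above, expressing the marker center in the body frame via the camera-IMU extrinsics, can be sketched as a homogeneous point transform; the mounting offset below is invented for illustration:

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    return tuple(T[i][0]*x + T[i][1]*y + T[i][2]*z + T[i][3] for i in range(3))

# Assumed camera-IMU extrinsics: camera 0.03 m ahead of the IMU along body x,
# axes aligned. A real calibration would also include a rotation block.
T_body_cam = [[1, 0, 0, 0.03], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
marker_in_cam = (0.0, 0.0, 1.2)                  # marker center, camera frame
marker_in_body = transform_point(T_body_cam, marker_in_cam)
```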

24 pages, 11354 KiB  
Article
Stereo Bi-Telecentric Phase-Measuring Deflectometry
by Yingmo Wang and Fengzhou Fang
Sensors 2024, 24(19), 6321; https://doi.org/10.3390/s24196321 - 29 Sep 2024
Cited by 1
Abstract
Replacing the endocentric lenses in traditional Phase-Measuring Deflectometry (PMD) with bi-telecentric lenses can reduce the number of parameters to be optimized during the calibration process, which can effectively increase both measurement precision and efficiency. Consequently, the low distortion characteristics of bi-telecentric PMD contribute to improved measurement accuracy. However, calibrating the extrinsic parameters of bi-telecentric lenses requires a micro-positioning stage, which makes the calibration process cumbersome and time-consuming. Thus, this study proposes a holistic and flexible calibration solution for which only one flat mirror in three poses is needed. To obtain accurate measurement results, the calibration residuals are utilized to construct the inverse distortion map through bicubic Hermite interpolation, yielding accurate anchor positioning results. The calibrated stereo bi-telecentric PMD can achieve 3.5 μm (Peak-to-Valley value) accuracy within a 100 mm (Width) × 100 mm (Height) × 200 mm (Depth) domain for various surfaces. This allows reliable measurement results to be obtained without restricting the placement of the surface under test. Full article
(This article belongs to the Section Optical Sensors)
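A 1D sketch of the cubic Hermite interpolation underlying the bicubic inverse distortion map above (the paper's bicubic form interpolates a 2D residual grid; this scalar version only shows the basis functions):

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation on [0, 1]: values p0, p1; tangents m0, m1."""
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p0 + (t3 - 2*t2 + t) * m0
            + (-2*t3 + 3*t2) * p1 + (t3 - t2) * m1)
```

With zero tangents the curve blends smoothly between the endpoint values, which is what lets a coarse grid of calibration residuals be evaluated continuously at any pixel.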

22 pages, 2851 KiB  
Article
Enhanced Three-Axis Frame and Wand-Based Multi-Camera Calibration Method Using Adaptive Iteratively Reweighted Least Squares and Comprehensive Error Integration
by Oleksandr Yuhai, Yubin Cho, Ahnryul Choi and Joung Hwan Mun
Photonics 2024, 11(9), 867; https://doi.org/10.3390/photonics11090867 - 15 Sep 2024
Abstract
The accurate transformation of multi-camera 2D coordinates into 3D coordinates is critical for applications like animation, gaming, and medical rehabilitation. This study unveils an enhanced multi-camera calibration method that alleviates the shortcomings of existing approaches by incorporating a comprehensive cost function and Adaptive Iteratively Reweighted Least Squares (AIRLS) optimization. By integrating static error components (3D coordinate, distance, angle, and reprojection errors) with dynamic wand distance errors, the proposed comprehensive cost function facilitates precise multi-camera parameter calculations. The AIRLS optimization effectively balances the optimization of both static and dynamic error elements, enhancing the calibration’s robustness and efficiency. Comparative validation against advanced multi-camera calibration methods shows this method’s superior accuracy (average error 0.27 ± 0.22 mm) and robustness. Evaluation metrics including average distance error, standard deviation, and range (minimum and maximum) of errors, complemented by statistical analysis using ANOVA and post hoc tests, underscore its efficacy. The method markedly enhances the accuracy of calculating intrinsic, extrinsic, and distortion parameters, proving highly effective for precise 3D reconstruction in diverse applications. This study represents substantial progress in multi-camera calibration, offering a dependable and efficient solution for intricate calibration challenges. Full article
(This article belongs to the Special Issue Recent Advances in 3D Optical Measurement)
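A toy sketch of the iteratively reweighted least-squares idea behind AIRLS: each pass re-fits with weights shrunk for large residuals (Huber-style). The model here is just a scalar mean, not the paper's multi-camera cost:

```python
def irls_mean(values, delta=1.0, iters=20):
    """Robust mean via IRLS with Huber-style weights (w = delta/|r| for big r)."""
    mu = sum(values) / len(values)          # ordinary least-squares start
    for _ in range(iters):
        w = [1.0 if abs(v - mu) <= delta else delta / abs(v - mu)
             for v in values]
        mu = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mu
```

On clean data the weights stay at 1 and the result is the plain mean; with a gross outlier the reweighting pulls the estimate back toward the inliers, which is the robustness property the calibration relies on.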

14 pages, 12144 KiB  
Article
NMC3D: Non-Overlapping Multi-Camera Calibration Based on Sparse 3D Map
by Changshuai Dai, Ting Han, Yang Luo, Mengyi Wang, Guorong Cai, Jinhe Su, Zheng Gong and Niansheng Liu
Sensors 2024, 24(16), 5228; https://doi.org/10.3390/s24165228 - 13 Aug 2024
Cited by 1
Abstract
With the advancement of computer vision and sensor technologies, many multi-camera systems are being developed for the control, planning, and other functionalities of unmanned systems or robots. The calibration of multi-camera systems determines the accuracy of their operation. However, calibration of multi-camera systems without overlapping fields of view is inaccurate. Furthermore, the potential of feature matching points and their spatial extent in calculating the extrinsic parameters of multi-camera systems has not yet been fully realized. To this end, we propose a multi-camera calibration algorithm to solve the problem of the high-precision calibration of multi-camera systems without overlapping fields of view. The calibration of multi-camera systems is simplified to the problem of solving the transformation relationship of extrinsic parameters using a map constructed by multiple cameras. Firstly, the calibration environment map is constructed by running the SLAM algorithm separately for each camera in the multi-camera system in closed-loop motion. Secondly, uniformly distributed matching points are selected among the similar feature points between the maps. Then, these matching points are used to solve the transformation relationship between the cameras’ extrinsic parameters. Finally, the reprojection error is minimized to optimize the extrinsic parameter transformation relationship. We conduct comprehensive experiments in multiple scenarios and provide results of the extrinsic parameters for multiple cameras. The results demonstrate that the proposed method accurately calibrates the extrinsic parameters for multiple cameras, even under conditions where the main camera and auxiliary cameras rotate 180°. Full article
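A hedged 2D sketch of the map-alignment step above: matched points between two cameras' SLAM maps determine the rotation and translation relating their frames. This is a closed-form 2D Kabsch-style estimate; the real method works in 3D and then refines by minimizing reprojection error:

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rigid transform (theta, tx, ty) mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sum((a[0]-csx)*(b[0]-cdx) + (a[1]-csy)*(b[1]-cdy)
              for a, b in zip(src, dst))
    sxy = sum((a[0]-csx)*(b[1]-cdy) - (a[1]-csy)*(b[0]-cdx)
              for a, b in zip(src, dst))
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```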
