
Search Results (5)

Search Parameters:
Keywords = camera-IMU extrinsic calibration

20 pages, 11540 KB  
Article
Autonomous Landing Strategy for Micro-UAV with Mirrored Field-of-View Expansion
by Xiaoqi Cheng, Xinfeng Liang, Xiaosong Li, Zhimin Liu and Haishu Tan
Sensors 2024, 24(21), 6889; https://doi.org/10.3390/s24216889 - 27 Oct 2024
Viewed by 1774
Abstract
Positioning and autonomous landing are key technologies for implementing autonomous flight missions across various fields in unmanned aerial vehicle (UAV) systems. This research proposes a visual positioning method based on mirrored field-of-view expansion, providing a vision-based autonomous landing strategy for quadrotor micro-UAVs (MAVs). The forward-facing camera of the MAV obtains a top view through a view-transformation lens while retaining the original forward view. The camera then captures the ground landing marker in real time, and the pose of the camera relative to the marker is obtained through a virtual-real image conversion technique and the R-PnP pose estimation algorithm. Next, a camera-IMU extrinsic calibration method determines the pose transformation between the MAV camera and the body IMU, yielding the position of the landing marker's center point in the MAV's body coordinate system. Finally, the ground station sends guidance commands to the MAV based on this position information to execute the autonomous landing task. Indoor and outdoor landing experiments with the DJI Tello MAV demonstrate that the proposed forward-facing camera mirrored field-of-view expansion method and the landing-marker detection and guidance algorithm enable autonomous landing with an average accuracy of 0.06 m. The results show that this strategy meets the high-precision landing requirements of MAVs.
(This article belongs to the Section Navigation and Positioning)
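The frame chaining this abstract describes (a marker pose estimated in the camera frame, composed with the camera-IMU extrinsic to obtain the marker center in the body frame) can be sketched as follows; the numeric transforms are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical extrinsic: the camera frame expressed in the body (IMU) frame.
T_body_cam = pose_to_matrix(np.eye(3), np.array([0.05, 0.0, -0.02]))

# Hypothetical marker pose in the camera frame, e.g. as returned by a PnP solver.
T_cam_marker = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 1.2]))

# Chain the transforms to get the marker center relative to the body frame.
T_body_marker = T_body_cam @ T_cam_marker
marker_center_body = T_body_marker[:3, 3]
```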

13 pages, 5694 KB  
Article
A Robust Planar Marker-Based Visual SLAM
by Zhoubo Wang, Zhenhai Zhang, Wei Zhu, Xuehai Hu, Hongbin Deng, Guang He and Xiao Kang
Sensors 2023, 23(2), 917; https://doi.org/10.3390/s23020917 - 13 Jan 2023
Cited by 4 | Viewed by 4151
Abstract
Many visual SLAM systems are generally solved using natural landmarks or optical flow. However, due to textureless areas, illumination changes, or motion blur, they often acquire poor camera poses or even fail to track; moreover, they cannot obtain camera poses with a metric scale in the monocular case. In some cases (such as when calibrating the extrinsic parameters of a camera-IMU system), we prefer to sacrifice the flexibility of such methods to improve accuracy and robustness by using artificial landmarks. This paper proposes enhancements to the traditional SPM-SLAM, a system that builds a map of markers and simultaneously localizes the camera. By placing markers in the surrounding environment, the system runs stably and obtains accurate camera poses. To improve robustness and accuracy under rotational movements, we improve the initialization, keyframe insertion, and relocalization. Additionally, we propose a novel method to estimate marker poses from a set of images, solving the problem of planar-marker pose ambiguity. The experiments show that our system achieves better accuracy than the state of the art on most public sequences and is more robust than SPM-SLAM under rotational movements. Finally, the open-source code is publicly available on GitHub.
(This article belongs to the Section Sensors and Robotics)
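The planar-marker pose ambiguity mentioned above arises because a single view of a planar marker admits two plausible poses. The paper's multi-image estimator is not detailed in the abstract; a simple illustration of the underlying idea, picking one candidate rotation per image so the choices agree across views, is sketched below (the helper names are hypothetical):

```python
import numpy as np
from itertools import product

def rot_x(a):
    """Rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def geodesic(Ra, Rb):
    """Angle in radians between two rotation matrices."""
    c = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def resolve_ambiguity(candidates):
    """candidates: one (R1, R2) pair of ambiguous marker rotations per image.
    Choose one candidate per image so the picks agree maximally across views,
    i.e. minimize the summed pairwise geodesic distance between picks."""
    best, best_cost = None, np.inf
    for picks in product(*candidates):
        cost = sum(geodesic(picks[i], picks[j])
                   for i in range(len(picks))
                   for j in range(i + 1, len(picks)))
        if cost < best_cost:
            best, best_cost = picks, cost
    return best

# Illustrative use: the true pose recurs across views, the spurious one varies.
R_true = rot_x(0.3)
views = [(R_true, rot_x(1.0)), (rot_x(2.0), R_true), (R_true, rot_x(2.5))]
picks = resolve_ambiguity(views)
```

The exhaustive search over 2^N pick combinations is only viable for a handful of views; it is meant to show the consistency criterion, not a production solver.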

18 pages, 3947 KB  
Article
Extrinsic Parameter Calibration Method for a Visual/Inertial Integrated System with a Predefined Mechanical Interface
by Chenguang Ouyang, Shuai Shi, Zheng You and Kaichun Zhao
Sensors 2019, 19(14), 3086; https://doi.org/10.3390/s19143086 - 12 Jul 2019
Cited by 6 | Viewed by 3659
Abstract
For a visual/inertial integrated system, the calibration of extrinsic parameters plays a crucial role in ensuring accurate navigation and measurement. In this work, a novel extrinsic parameter calibration method is developed based on geometrical constraints in the object space and is implemented by a manual swing. The camera and IMU frames are aligned to the system body frame, which is predefined by the mechanical interface. During the swinging motion, a fixed checkerboard provides constraints for calibrating the camera's extrinsic parameters, whereas angular velocity and acceleration provide constraints for calibrating those of the IMU. We exploit the complementary nature of the camera and IMU: the IMU assists in checkerboard corner detection and correction, while the camera suppresses the effects of IMU drift. The calibration experiment shows that the extrinsic parameter accuracy reaches 0.04° for each Euler angle and 0.15 mm for each position vector component (1σ).
(This article belongs to the Section Physical Sensors)
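The per-Euler-angle accuracy metric (0.04° per angle) can be illustrated by converting between ZYX Euler angles and rotation matrices and reporting the per-angle difference between an estimated and a reference extrinsic rotation; the angle values below are made-up examples, not the paper's data:

```python
import numpy as np

def euler_zyx_to_R(yaw, pitch, roll):
    """ZYX (yaw, pitch, roll) Euler angles in radians to a rotation matrix."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def R_to_euler_zyx(R):
    """Recover (yaw, pitch, roll) in radians; no gimbal-lock handling."""
    return np.array([np.arctan2(R[1, 0], R[0, 0]),
                     -np.arcsin(R[2, 0]),
                     np.arctan2(R[2, 1], R[2, 2])])

# Per-Euler-angle error between a reference and an estimated extrinsic rotation,
# with a 0.04-degree perturbation injected into yaw only.
R_ref = euler_zyx_to_R(0.20, -0.10, 0.05)
R_est = euler_zyx_to_R(0.20 + np.deg2rad(0.04), -0.10, 0.05)
err_deg = np.rad2deg(np.abs(R_to_euler_zyx(R_est) - R_to_euler_zyx(R_ref)))
```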

12 pages, 3217 KB  
Article
Dense RGB-D SLAM with Multiple Cameras
by Xinrui Meng, Wei Gao and Zhanyi Hu
Sensors 2018, 18(7), 2118; https://doi.org/10.3390/s18072118 - 2 Jul 2018
Cited by 18 | Viewed by 5014
Abstract
A multi-camera dense RGB-D SLAM (simultaneous localization and mapping) system has the potential both to speed up scene reconstruction and to improve localization accuracy, thanks to its multiple mounted sensors and enlarged effective field of view. To tap this potential, two issues must be addressed: first, how to calibrate a system in which the sensors usually share a small or no common field of view, so as to maximize the effective field of view; second, how to fuse the location information from the different sensors. In this work, a three-Kinect system is reported. For system calibration, two methods are proposed: one, based on an improved hand-eye calibration, suits systems with an inertial measurement unit (IMU); the other suits pure visual SLAM without any auxiliary sensors. In the RGB-D SLAM stage, we extend and improve a state-of-the-art single-camera RGB-D SLAM method to the multi-camera setting. We track the cameras' poses independently and, at each moment, select the pose with the minimal error as the reference to correct the other cameras' poses. To optimize the initial estimated pose, we extend the deformation graph with a device-number attribute that distinguishes surfels built by different cameras and performs deformations accordingly. We verify the accuracy of our extrinsic calibration methods experimentally and show satisfactory models reconstructed by our multi-camera dense RGB-D SLAM. The RMSE (root-mean-square error) of lengths measured in our reconstructed model is 1.55 cm, comparable to state-of-the-art single-camera RGB-D SLAM systems.
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
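The improved hand-eye calibration is not spelled out in the abstract, but the classical AX = XB formulation it builds on can be sketched: the rotation axes of paired relative motions determine the extrinsic rotation, and a linear least-squares problem then yields the translation. This is a minimal illustration on synthetic noise-free data with two motion pairs, not the paper's method:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rot_axis(R):
    """Unit rotation axis of R (rotation angle assumed in (0, pi))."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def orthonormal_frame(u, v):
    """Right-handed frame with first axis along u, second in span(u, v)."""
    x = u / np.linalg.norm(u)
    y = v - np.dot(v, x) * x
    y = y / np.linalg.norm(y)
    return np.column_stack([x, y, np.cross(x, y)])

def hand_eye(As, Bs):
    """Solve AX = XB for X = (R, t) from motion pairs (R_A, t_A), (R_B, t_B).
    Since R_A R_X = R_X R_B, the axis of each R_A is R_X times the axis of the
    matching R_B; two pairs with non-parallel axes fix R_X. The translation then
    satisfies the stacked linear system (R_A - I) t_X = R_X t_B - t_A."""
    a1, a2 = rot_axis(As[0][0]), rot_axis(As[1][0])
    b1, b2 = rot_axis(Bs[0][0]), rot_axis(Bs[1][0])
    R = orthonormal_frame(a1, a2) @ orthonormal_frame(b1, b2).T
    M = np.vstack([RA - np.eye(3) for RA, _ in As])
    d = np.concatenate([R @ tB - tA for (_, tA), (_, tB) in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(M, d, rcond=None)
    return R, t
```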

28 pages, 15259 KB  
Article
Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping
by Chi Chen, Bisheng Yang, Shuang Song, Mao Tian, Jianping Li, Wenxia Dai and Lina Fang
Remote Sens. 2018, 10(2), 328; https://doi.org/10.3390/rs10020328 - 22 Feb 2018
Cited by 57 | Viewed by 9286
Abstract
Traditional indoor laser scanning trolleys/backpacks, with multiple laser scanners, panoramic cameras, and an inertial measurement unit (IMU) installed, are a popular solution to the 3D indoor mapping problem. However, such mapping suites are quite expensive and can hardly be replicated with consumer electronic components. The consumer RGB-Depth (RGB-D) camera (e.g., Kinect V2) is a low-cost option for gathering 3D point clouds, but because of its narrow field of view (FOV), its collection efficiency and data coverage are lower than those of laser scanners. The limited FOV also increases the scanning workload, the data processing burden, and the risk of visual odometry (VO)/simultaneous localization and mapping (SLAM) failure. To find an efficient and low-cost way to collect 3D point cloud data with auxiliary information (i.e., color) for indoor mapping, this paper presents a prototype indoor mapping solution built upon the calibration of multiple RGB-D sensors into an array with a large FOV. Three time-of-flight (ToF) Kinect V2 RGB-D cameras are mounted on a rig with different view directions to form a large combined field of view, and the three RGB-D data streams are synchronized and gathered by the OpenKinect driver. The intrinsic calibration of each RGB-D camera involves geometric and depth calibration: the former is solved by a homography-based method with ray correction, the latter by range-bias correction based on pixel-wise spline functions. The extrinsic calibration follows a coarse-to-fine scheme that solves initial exterior orientation parameters (EoPs) from sparse control markers and refines them with an iterative closest point (ICP) variant that minimizes the distance between the RGB-D point clouds and referenced laser point clouds.
The effectiveness and accuracy of the proposed prototype and calibration method are evaluated by comparing its point clouds with ground truth collected by a terrestrial laser scanner (TLS). The overall analysis shows that the proposed method achieves seamless integration of the point clouds from three Kinect V2 cameras collected at 30 frames per second, resulting in low-cost, efficient, and high-coverage 3D color point cloud collection for indoor mapping applications.
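The ICP-based refinement step can be illustrated by its core building block: the closed-form least-squares rigid alignment (Kabsch/Umeyama) that each ICP iteration applies once correspondences are fixed. This sketch assumes known correspondences and noise-free points, which real ICP must instead re-estimate every iteration:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    given corresponding Nx3 point sets (Kabsch/Umeyama)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Inside an ICP loop this solve alternates with nearest-neighbor correspondence search until the alignment error stops decreasing.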
