Article

Research on Automatic Recharging Technology for Automated Guided Vehicles Based on Multi-Sensor Fusion

School of Electrical Engineering, Naval University of Engineering, Wuhan 430033, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8606; https://doi.org/10.3390/app14198606
Submission received: 19 August 2024 / Revised: 12 September 2024 / Accepted: 16 September 2024 / Published: 24 September 2024
(This article belongs to the Collection Advances in Automation and Robotics)

Abstract

Automated guided vehicles (AGVs) play a critical role in indoor environments, where battery endurance and reliable recharging are essential. This study proposes a multi-sensor fusion approach that integrates LiDAR, depth cameras, and infrared sensors to address challenges in autonomous navigation and automatic recharging. The proposed system overcomes the limitations of LiDAR’s blind spots in near-field detection and the restricted range of vision-based navigation. By combining LiDAR for precise long-distance measurement, depth cameras for enhanced close-range visual positioning, and infrared sensors for accurate docking, the AGV’s ability to locate and autonomously connect to charging stations is significantly improved. Experimental results show a 25-percentage-point increase in docking success rate (from 70% with LiDAR alone to 95%) and a 70% reduction in docking error (from 10 cm to 3 cm). These improvements demonstrate the effectiveness of the proposed sensor fusion method, ensuring more reliable, efficient, and precise operation of AGVs in complex indoor environments.

1. Introduction

Automatic recharging [1] is an essential function for mobile robots such as AGVs [2]. To dock a robot with its charging station safely and efficiently, data from multiple sensors must be integrated [3,4,5,6]. Traditional AGVs rely on infrared sensors alone for automatic recharging, determining the robot’s position relative to the charging station from the position and number of infrared receivers that detect a signal [7], from the measured signal strength, or from the reflection of infrared signals off the charging station. Other approaches combine infrared sensors with a monocular camera, but monocular vision depends on camera image processing [8] and is strongly affected by ambient lighting: in environments with too much or too little light, image quality degrades, reducing target recognition and positioning accuracy. Monocular vision also has limited depth perception, making it difficult to obtain accurate 3D information and weakening the robot’s obstacle avoidance capability in complex environments. Infrared sensors, for their part, have a limited effective range and accuracy, particularly where obstacles or reflective surfaces interfere, which can make signal reception unstable or cause it to fail. Approaches that combine a laser with a camera suffer from a small detection range; if the target position cannot be determined at the outset, the robot must spend additional time wandering to find it. Another common method uses LiDAR [9,10] to scan the environment around the robot, identify the charging station’s position and pose from the scan data, and then approach and dock with the charging port. The key difficulty of this approach lies in recognizing the charging station in the LiDAR scan and estimating its pose. Recognition typically relies on enhancing the station’s features, for example with regular convex and concave structures or with surfaces covered by materials of different reflective intensities, so that the station can be detected more quickly; as a result, the charging station usually has to be relatively large.

In this paper, LiDAR and a camera are used for autonomous navigation [11,12,13,14] to find the charging station from a distance, and infrared sensors are used for close-range docking to complete automatic recharging. LiDAR provides high-precision distance and angle measurements, allowing obstacles to be detected and avoided effectively and giving the robot strong autonomous navigation capability in complex environments. The camera supplements the LiDAR with visual information, recognizing and locating feature objects in the environment and offering a more comprehensive environmental perception. Infrared sensors guide the robot during the final approach to the charging station, addressing the precise docking that is difficult to achieve with vision and LiDAR alone [15,16,17]. Moreover, infrared sensors are unaffected by lighting changes, enabling stable operation under different lighting conditions.
Given these issues, the work presented in this paper focuses on addressing the limitations of traditional LiDAR systems, particularly their blind spots in near-field detection and the restricted range of vision-based navigation. This study employs a multi-sensor fusion approach that combines LiDAR and cameras to enhance the robot’s ability to locate the charging station. The LiDAR generates a detailed map by scanning the environment, while the camera provides complementary visual information, enabling the robot to recognize feature objects and accurately determine the position and posture of the charging station.
The mapping and navigation algorithm leverages this integrated information to plan the optimal path, effectively guiding the robot to the charging station. Throughout the navigation process, the LiDAR continuously monitors the environment in real time, identifying and avoiding obstacles to ensure safe passage. As the robot approaches the charging station, it transitions to a close-range infrared docking mode. Infrared transmitters installed on the charging station create multiple signal zones that the robot’s infrared receivers can detect. Based on the received signal strength and direction, the robot adjusts its posture to achieve precise docking.
By integrating these advanced sensing capabilities, the robot can efficiently execute autonomous navigation and automatic recharging tasks, significantly improving the reliability and operational duration of the robotic system. This research demonstrates a novel approach to enhancing the functionality of AGVs in complex indoor environments, making them more adaptable and effective in real-world applications.
The remainder of this paper is organized as follows: Section 2 presents the hardware components of the multi-sensor fusion-based autonomous recharging system, detailing the roles of the LiDAR, cameras, and infrared sensors. Section 3 describes the software components of the automatic recharging system, including the algorithms used for navigation and docking. Section 4 discusses the experiments conducted to validate the proposed system, including environment mapping, autonomous navigation, and infrared docking. Finally, Section 5 concludes the paper and outlines potential future work.

2. Multi-Sensor Fusion-Based Autonomous Recharging System for AGVs

2.1. Hardware Components of the Multi-Sensor Fusion-Based Autonomous Recharging System

The multi-sensor fusion-based autonomous recharging system for unmanned vehicles consists of several key components: LiDAR, cameras, infrared sensors, the control system, and the charging station. The LiDAR is used for environmental scanning and mapping, generating a high-precision map by real-time perception of the surrounding environment. This map is then combined with the visual information captured by the camera to achieve long-range navigation and obstacle avoidance. The camera provides additional environmental perception data, supplementing the blind spots of the LiDAR. When the unmanned vehicle approaches the charging station, the infrared sensor becomes crucial. Infrared transmitters installed on the charging station emit multiple infrared signal areas, which are received by the infrared receivers on the vehicle. These signals help determine the vehicle’s relative position and orientation with respect to the charging station, enabling precise docking. The control system integrates data from the LiDAR, camera, and infrared sensors and, through multi-sensor fusion algorithms, continuously calculates the vehicle’s position and path planning to ensure that the vehicle can accurately locate the charging station and complete the autonomous recharging process. The system diagram of the autonomous recharging system is shown in Figure 1.
The infrared docking part is a crucial component of the autonomous charging system for the unmanned vehicle, with the hardware mainly comprising infrared transmitting tubes and infrared receiving tubes. Two or more infrared transmitting tubes are installed on the charging station, which create multiple infrared signal zones centered around the charging station. The unmanned vehicle is equipped with two or more infrared receiving tubes that detect these infrared signals to determine the vehicle’s position within the signal zones and its relative orientation to the charging station. By processing and analyzing the received signals, the vehicle can accurately adjust its position and orientation to achieve precise docking with the charging station.
On the charging station, two infrared transmitting tubes (A and B) are installed, and a partition is added between them, splitting the infrared signals into two zones centered on the charging station (A and B zones). As shown in Figure 2, the robot positioning relative to the charging station varies between side (Figure 2a) and front views (Figure 2b). The vehicle’s front side is equipped with two infrared receiving heads (L for Left and R for Right), with a partition added between them, dividing the vehicle’s receiving area into two zones as well. By determining which infrared signals are received by the L and R infrared receiving tubes, the vehicle can ascertain the orientation of the charging station relative to the vehicle and the directional deviation between the vehicle’s front and the front of the charging station.

2.2. Software Components of Multi-Sensor Fusion for Automatic Recharging Systems

To accomplish the autonomous recharging task over longer indoor distances, several steps are required. First, a map of the environment must be created. Next, the positions of both the robot and the charging station on the map need to be determined. Navigation algorithms then plan the optimal path for the robot to reach the charging station and automatically steer the robot around obstacles encountered along the way. Once the robot reaches the vicinity of the charging station and receives the infrared signal emitted by the station, it has effectively located the charging station and switches to close-range infrared docking.
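This sequence can be summarized as a simple supervisory state machine. The sketch below is illustrative only: the state names, the boolean status flags, and the transition rules are assumptions made for clarity, not the actual control software used in this work.

```python
from enum import Enum, auto

class RechargeState(Enum):
    MAPPING = auto()       # build or load the environment map
    NAVIGATING = auto()    # plan a path and drive toward the charging station
    IR_DOCKING = auto()    # close-range docking guided by the infrared signals
    CHARGING = auto()      # contacts engaged, charging in progress

def next_state(map_ready, ir_signal_detected, docked):
    """Return the next supervisory state from simple boolean status flags."""
    if not map_ready:
        return RechargeState.MAPPING
    if docked:
        return RechargeState.CHARGING
    if ir_signal_detected:
        return RechargeState.IR_DOCKING
    return RechargeState.NAVIGATING
```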

3. Multi-Sensor-Based Remote Autonomous Navigation and Close-Range Docking Localization

3.1. Remote Autonomous Navigation Based on LiDAR, Vision, and IMU Fusion

In robotic applications, visual SLAM relies on the quality of environmental features and is easily affected by lighting conditions, which can lead to poor performance in low-texture or dark environments. When a robot moves in a stable straight line, the average angular velocity and motion acceleration measured by the IMU are zero, and relying solely on vision can cause system scale drift. Different terrains can cause varying levels of vibration noise in the IMU, impacting algorithm performance and potentially leading to system divergence. Additionally, in large-scale environments, the ratio of the motion baseline to feature depth is very small, which makes feature depth estimation inaccurate and causes the visual constraints to degrade. While LiDAR SLAM offers better accuracy and robustness compared to visual SLAM, the laser point clouds are relatively sparse, with lower resolution at longer distances, and the data association is not as effective as in visual SLAM, particularly in long corridors and open environments. The IMU can provide short-term state estimation through pre-integration, but noise and zero bias cause the integral values to drift quickly. Encoders can measure one-dimensional motion distance but rely heavily on the environment for position estimation, and significant errors can occur during wheel slip.
To address these issues, this paper is based on the RTAB-Map [18,19] system framework and combines the strengths of vision, LiDAR, and IMU to enhance system performance and robustness. The RTAB-Map is an open-source library for real-time localization and mapping based on appearance features. It achieves loop closure detection through a unique memory management approach, making it suitable for long-term and large-scale real-time online SLAM requirements [10,20]. The generated maps can be used for complex tasks such as robot navigation and obstacle avoidance. This technology processes data in three parts: short-term memory (STM), working memory (WM), and long-term memory (LTM). The system framework is illustrated in Figure 3. The RTAB-Map uses a graph structure to organize the map, consisting of nodes and the edges connecting them. Sensor data, once synchronized, are stored in the short-term memory module. The short-term memory (STM) module creates a node to remember odometry poses, raw sensor data, and additional information useful for the next module.
When STM creates a new node, it can use depth images, laser scan data, and point cloud data to generate the corresponding local map. If a three-dimensional map is selected, it can be created directly from the three-dimensional point cloud or processed through a three-dimensional beam model to create a three-dimensional local map. An independent ROS node provides the odometry information required by the RTAB-Map. Due to the cumulative errors in odometry used for local mapping, loop detection and global optimization are necessary. In the RTAB-Map, loop detection includes visual loop closure detection and laser similarity detection, while global optimization uses pose graph optimization methods. Visual loop closure detection is based on the visual bag-of-words model and Bayesian filters. The visual bag-of-words model quickly calculates the similarity between the current pose node and candidate loop closure nodes. It achieves efficient image matching and similarity computation by extracting image features and constructing bag-of-words representations. The Bayesian filter maintains the probability distribution of candidate loop closure node similarities and updates the probability distribution to select the most likely loop closure node. Visual loop closure detection effectively identifies similar scenes at different times and locations, thus addressing the problem of cumulative odometry errors. The loop closure detection technology of the RTAB-Map is detailed as follows:
  • Localization point acquisition: The RTAB-Map uses SURF to extract visual word features. The feature set of the image at time t is referred to as the image signature z_t. The current localization point L_t is created from the signature z_t and the time t.
  • Localization point weight update: The similarity s between the current localization point L_t and the most recent localization point in short-term memory is computed from the number of matching word pairs between the two signatures and the total number of words in each signature. This similarity determines the new weight of L_t. The similarity S(z_t, z_c) is given by
    $$S(z_t, z_c) = \begin{cases} N_p / N_{z_t}, & \text{if } N_{z_t} \ge N_{z_c} \\ N_p / N_{z_c}, & \text{if } N_{z_t} < N_{z_c} \end{cases}$$
    where N_p is the number of matched word pairs and N_{z_t} and N_{z_c} denote the total number of words in signatures z_t and z_c, respectively.
  • Bayesian filter update: The Bayesian filter calculates the likelihood of forming a loop closure between the current localization point L_t and the points in the working memory (WM). S_t represents the loop closure hypotheses at time t, where S_t = i indicates that L_t forms a loop closure with the already visited localization point i. The posterior probability distribution p(S_t | L^t) is computed as
    $$p(S_t \mid L^t) = \rho \, p(L_t \mid S_t) \sum_{i=-1}^{t_n} p(S_t \mid S_{t-1} = i) \, p(S_{t-1} = i \mid L^{t-1})$$
    where ρ is a normalization factor, L^t is the sequence of localization points up to time t, and t_n is the time index of the most recent localization point stored in WM.
The observation function p and the likelihood function γ are used to quantify how similar L_t is to each hypothesis S_t. Comparing L_t with the hypothesis S_t = j yields the similarity score s_j, which is compared against the mean μ and standard deviation σ of the candidate scores:
$$p(L_t \mid S_t = j) = \gamma(S_t = j \mid L_t) = \begin{cases} \dfrac{s_j - \sigma}{\mu}, & \text{if } s_j \ge \mu + \sigma \\ 1, & \text{if } s_j < \mu + \sigma \end{cases}$$
The likelihood of L_t being a new localization point (the hypothesis S_t = −1) is calculated as
$$p(L_t \mid S_t = -1) = \gamma(S_t = -1 \mid L_t) = \frac{\mu}{\sigma} + 1$$
The larger γ(S_t = −1 | L_t) is, the more likely L_t is to be a new localization point.
  • Loop closure hypothesis selection: When p(S_t = −1 | L^t) falls below a set threshold, the loop closure hypothesis S_t = i with the highest posterior probability is accepted. A loop closure link is established between the new and old localization points, and the weights of the localization points are updated.
  • Localization point retrieval: To make full use of high-weight localization points, once a loop closure is detected, the localization points adjacent to the accepted loop closure point are moved from LTM back into WM, and the words in the dictionary are updated for subsequent loop closure detection.
  • Localization point transfer: To meet the real-time requirements of the RTAB-Map, if the localization matching time exceeds a fixed maximum T_L, the localization points that are least likely to form a loop closure are transferred from WM to LTM and no longer take part in loop closure detection, reducing unnecessary time spent on localization.
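The scoring steps above can be illustrated with a short, simplified sketch. It is not RTAB-Map’s implementation: the function names are invented, the transition model of the Bayesian filter is folded into the prior for brevity, and a guard for degenerate scores is added.

```python
import numpy as np

def signature_similarity(n_pairs, n_zt, n_zc):
    """Similarity S(z_t, z_c): matched word pairs divided by the larger of the
    two signature sizes, as in the case formula above."""
    return n_pairs / max(n_zt, n_zc)

def loop_closure_likelihoods(scores):
    """Likelihoods of the candidate locations and of the 'new place' hypothesis,
    using the mean/standard-deviation scheme described above."""
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std()
    if sigma == 0.0:                     # degenerate case: all scores identical
        return 1.0, np.ones_like(scores)
    candidates = np.where(scores >= mu + sigma, (scores - sigma) / mu, 1.0)
    new_place = mu / sigma + 1.0
    return new_place, candidates

def loop_closure_posterior(prior, likelihoods):
    """Posterior over hypotheses: likelihood times prior, normalized. The
    transition model is folded into `prior` here for brevity."""
    post = np.asarray(likelihoods, dtype=float) * np.asarray(prior, dtype=float)
    return post / post.sum()
```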

3.2. Automatic Recharging Path Planning Algorithm

3.2.1. Improved A* Algorithm

Path planning and localization are closely integrated in automatic recharging tasks. Path planning relies on localization to obtain the robot’s current position, while localization depends on the map and other information provided by path planning to estimate the position more accurately. By combining the two, the robot can navigate and avoid obstacles in a known environment, achieving autonomous navigation. This paper uses the A* algorithm to find the shortest path between the start node and the target node. The key idea behind A* is to use a heuristic function to estimate the cost from each node to the target. The traditional A* search often traverses many unnecessary nodes, which reduces search efficiency; this is mainly caused by the design of the heuristic function. If the heuristic estimate is less than the actual cost, many nodes are expanded, resulting in low computational efficiency but guaranteeing that the optimal path can be found. If the heuristic estimate is greater than the actual cost, fewer nodes are expanded, which is more efficient but may miss the optimal path. The best efficiency occurs when the heuristic estimate equals the actual cost. Since the heuristic function used here is the Euclidean distance, the heuristic value is always less than or equal to the actual distance from the current node to the target. When the current node is far from the target, the heuristic estimate is much smaller than the actual cost, causing the algorithm to expand many nodes and run inefficiently. To improve efficiency, the weight of the heuristic term should be increased when the node is far from the target; as the node approaches the target, the estimate approaches the actual cost, so the weight should be reduced accordingly to avoid overestimating and missing the optimal path. Therefore, this paper improves the cost function as follows:
$$f(n) = g(n) + \left(1 + \frac{r}{R}\right) h(n)$$
where f(n) is the total estimated cost, g(n) is the actual cost from the start point to the current node n, h(n) is the estimated cost from the current node to the target point, R is the distance from the start point to the target point, and r is the distance from the current node to the target point.
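A minimal sketch of this adaptive-weight cost is given below, assuming 2D grid coordinates and a Euclidean heuristic; the function and variable names are illustrative.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def improved_cost(node, start, goal, g_cost):
    """f(n) = g(n) + (1 + r/R) * h(n). With a Euclidean heuristic, h(n) and r
    coincide numerically; they are kept separate here to mirror the formula."""
    R = dist(start, goal)                 # start-to-goal distance (fixed per query)
    r = dist(node, goal)                  # current-node-to-goal distance
    h = dist(node, goal)                  # heuristic estimate h(n)
    weight = 1.0 + (r / R if R > 0 else 0.0)
    return g_cost + weight * h
```

The weight approaches 2 far from the goal, speeding up the search, and shrinks toward 1 near the goal, preserving the optimal path, exactly as argued above.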
Although the above improvement enhances search efficiency, the path still contains many unnecessary redundant points, which is not conducive to path following. This paper proposes a redundant point removal strategy to eliminate redundant nodes and retain only necessary turning points. The specific redundant point removal strategy is as follows:
  • Traverse all nodes in the path sequentially. If the current node and its two adjacent nodes are collinear, remove the current node.
  • After removing the redundant points lying on the same straight line, let the remaining nodes in the path be {P_k | k = 1, 2, …, n}. Connect P_1 and P_3; if the distance between the segment P_1P_3 and the obstacles is greater than the set safety distance, connect P_1 and P_4, and so on, until the distance between P_1P_k and the obstacles is less than the safety distance. Then connect P_1 and P_{k−1}, remove the intermediate nodes, and repeat the above operation starting from P_2 until all nodes in the path have been traversed.
After applying the redundant point removal strategy, the path planned by the A* algorithm only contains the start point, end point, and necessary nodes, effectively reducing the path length.
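The two-step pruning strategy can be sketched as follows. The collinearity test follows the first step directly; the shortcutting step is written here as a greedy farthest-visible variant of the second step, and the obstacle-clearance check `clearance_ok` is assumed to be supplied by the map layer rather than implemented here.

```python
def remove_collinear(path):
    """Step 1: drop every node that is collinear with its two neighbours."""
    if len(path) < 3:
        return list(path)
    pruned = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        cross = (cur[0] - prev[0]) * (nxt[1] - prev[1]) - \
                (cur[1] - prev[1]) * (nxt[0] - prev[0])
        if cross != 0:                      # direction changes at `cur`: keep it
            pruned.append(cur)
    pruned.append(path[-1])
    return pruned

def shortcut(path, clearance_ok):
    """Step 2: from each kept node, jump to the farthest later node whose
    connecting segment keeps the required safety distance from obstacles.
    `clearance_ok(p, q)` is assumed to be provided by the costmap layer."""
    result, i = [path[0]], 0
    while i < len(path) - 1:
        k = i + 1                           # fallback: the immediate successor
        for j in range(len(path) - 1, i + 1, -1):
            if clearance_ok(path[i], path[j]):
                k = j
                break
        result.append(path[k])
        i = k
    return result
```

A full planner would run `remove_collinear` first and then `shortcut` on its output, leaving only the start point, end point, and necessary turning points.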
In this study, we conducted three sets of simulations to evaluate the performance of the traditional A* algorithm and the improved A* algorithm in path planning. As shown in Figure 4, the improved A* algorithm consistently demonstrates better performance in terms of smoother trajectory and fewer turns compared to the traditional A* algorithm. The yellow lines in the figure represent the paths generated by the traditional A* algorithm, while the purple lines indicate the paths generated by the improved A* algorithm. Table 1 summarizes the key metrics for each experimental group, highlighting significant differences in performance. In the first experimental group, the traditional A* algorithm recorded a planning time of 0.020610 s, while the improved A* algorithm had a slightly longer planning time of 0.027963 s. Although the improved A* algorithm exhibited a longer planning time, it demonstrated substantial advantages in turn angles and turn counts. Notably, it achieved a turn angle of 0 degrees and a turn count of 0, indicating a smoother trajectory compared to the traditional A* algorithm, which had a turn angle of 45 degrees and a turn count of 1.
In terms of path length, the improved A* algorithm achieved a length of 13.0384, significantly shorter than the traditional A* algorithm’s length of 13.8995. This represents an approximate reduction of 6.2%, suggesting that the improved A* algorithm effectively optimizes the path. The node count further illustrates this enhancement; the traditional A* algorithm processed 81 nodes, whereas the improved A* algorithm required only 53 nodes, indicating a reduction of about 34.6% in node evaluations, thereby enhancing search efficiency.
Overall, the improved A* algorithm shows substantial relative improvements in both path smoothness and computational efficiency, making it a more effective choice for path planning tasks.

3.2.2. Improved Dynamic Window Algorithm

The traditional dynamic window approach (DWA) lacks global planning guidance and is therefore prone to local optima. The method first determines the range of feasible robot speeds and then simulates candidate trajectories over this speed range using the robot motion model; an evaluation function scores the simulated trajectories, the trajectory with the highest score is selected as the robot’s driving route, and its angular and linear velocities are taken as the commanded speeds. To alleviate the local-optimum problem, this paper improves the evaluation function so that it incorporates global path information, ensuring that the final local plan adheres to the globally optimal path. The improved evaluation function is given by:
$$G(\nu, \omega) = \sigma\big(\alpha \cdot P_{\mathrm{Head}}(\nu, \omega) + \beta \cdot \mathrm{dist}(\nu, \omega) + \gamma \cdot \mathrm{vel}(\nu, \omega)\big)$$
where P_Head(ν, ω) is the angular difference between the direction of the trajectory endpoint and the current target point, dist(ν, ω) is the distance between the trajectory and the nearest obstacle, vel(ν, ω) is the velocity evaluation term, σ is a smoothing function, and α, β, and γ are weighting coefficients. This evaluation function ensures that the planned route avoids random obstacles while adhering to the globally optimal path.
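A minimal sketch of the improved evaluation is shown below, assuming the heading, obstacle-distance, and velocity terms have already been computed for each simulated trajectory. The weights are illustrative, and the smoothing term σ (which in practice normalizes each term over the candidate set) is omitted for brevity.

```python
import math

def evaluate(heading_error, obstacle_dist, velocity,
             alpha=0.8, beta=0.1, gamma=0.1):
    """Score one simulated trajectory: alpha*P_Head + beta*dist + gamma*vel.
    `heading_error` is the angle (rad) between the trajectory endpoint heading
    and the current global-path waypoint, so better alignment scores higher."""
    p_head = math.pi - abs(heading_error)
    return alpha * p_head + beta * obstacle_dist + gamma * velocity

def best_velocity(candidates):
    """candidates: iterable of (v, w, heading_error, obstacle_dist) tuples for
    each simulated trajectory; returns the best (v, w) command."""
    v, w, _, _ = max(candidates, key=lambda c: evaluate(c[2], c[3], c[0]))
    return v, w
```

Because the heading term is measured against the next waypoint of the global path produced by the improved A* algorithm, the local planner is pulled back toward the global route instead of settling into a local optimum.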
This paper integrates the improved A* algorithm with the dynamic window method, achieving both global path optimality and random obstacle avoidance. The improved A* algorithm is used to plan the global path, and after obtaining the global optimal node sequence, the dynamic window method performs local path planning between each pair of adjacent nodes. The algorithm flow is shown in Figure 5.
The simulation environment is a 30 × 30 two-dimensional grid map with an obstacle coverage of P = 25%. The robot’s parameters are defined as follows: a maximum speed of 2 m/s, a maximum rotational speed of 20 rad/s, an acceleration of 0.2 m/s², a rotational acceleration of 50 rad/s², and a velocity resolution of 0.01 m/s. A mobile obstacle starts at coordinates (12, 14) and moves toward (8, 21) at a speed of 0.01 m/s. The integrated algorithm successfully plans paths for the AGV from the three starting points (12, 14), (8, 21), and (9, 28) to the target set at coordinates (5, 2). After incorporating the DWA into the improved A* framework, the robot not only follows a globally optimal path but also demonstrates enhanced obstacle avoidance capabilities. This synergy allows the robot to navigate around obstacles more effectively while maintaining a smooth trajectory, as illustrated in Figure 6. The blue lines in Figure 6 represent the paths planned by the improved A* algorithm, and the red lines represent the movement paths of the dynamic obstacle.
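For reference, the kinematic limits listed above can be collected into a small configuration object; only the numeric values come from the text, while the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class RobotParams:
    """Kinematic limits used in the simulation described above."""
    max_speed: float = 2.0         # m/s
    max_yaw_rate: float = 20.0     # rad/s
    max_accel: float = 0.2         # m/s^2
    max_yaw_accel: float = 50.0    # rad/s^2
    v_resolution: float = 0.01     # m/s
    obstacle_speed: float = 0.01   # m/s, speed of the moving obstacle
```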

3.3. Proximity Docking Localization Based on Infrared Sensors

After completing the remote autonomous navigation, proximity docking and charging are carried out using infrared sensors. The infrared emitter used has a wavelength of 940 nm and a low emission power. It starts emitting infrared rays as soon as the power is turned on, with a detection range of approximately 1.5–3 m. The infrared receiver is an HS0038 pulse-type receiver, which outputs a low level when receiving a modulated 38 kHz infrared carrier, and a high level otherwise. The charging station controls the left and right infrared emitters to emit 38 kHz infrared carriers for different durations, as shown in Figure 7. The carrier duration for emitter A is 39 ms, and for emitter B it is 52 ms. Emitters A and B alternate their infrared signals. The automatic recharging system identifies the different regions A or B based on the carrier duration of the emitted infrared signal.
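As an illustration of how the two emitters can be told apart, the sketch below measures the length of each active-low burst from the receiver output and classifies it by duration (about 39 ms for A and 52 ms for B, as stated above). The sampling interface, function names, and tolerance value are assumptions, not the firmware used on the robot.

```python
def burst_durations(samples, dt_ms):
    """Measure the lengths of active-low bursts in a sampled receiver output.
    `samples` is a sequence of 0/1 readings (0 = 38 kHz carrier present on the
    HS0038 output), `dt_ms` is the sampling period in milliseconds."""
    bursts, run = [], 0
    for s in samples:
        if s == 0:
            run += 1
        elif run:
            bursts.append(run * dt_ms)
            run = 0
    if run:
        bursts.append(run * dt_ms)
    return bursts

def classify_emitter(burst_ms, tolerance_ms=5.0):
    """Identify which emitter produced a burst by its duration:
    ~39 ms -> 'A', ~52 ms -> 'B', otherwise None."""
    if abs(burst_ms - 39.0) <= tolerance_ms:
        return "A"
    if abs(burst_ms - 52.0) <= tolerance_ms:
        return "B"
    return None
```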
When the robot is in the infrared signal region A, only receiver R receives the A signal while receiver L does not receive the infrared signal. This indicates that the charging station is located to the robot’s right front. The robot can then be controlled to turn right or move forward towards the right. With the correct procedural logic control, the robot will eventually align with the charging station and complete the docking.
The infrared docking control logic is shown in Table 2, where positive values are used for forward movement and counterclockwise rotation.
When the robot aligns with the charging station and the charging ports make contact, an electrical circuit is formed and charging begins. When the robot is directly facing the charging station, L will necessarily receive the infrared signal A and R will receive the infrared signal B. The schematic of the infrared docking process is shown in Figure 8.
As the robot autonomously navigates close to the charging station, the infrared reception status of L and R changes, as shown in Figure 8. If L receives both infrared signals A and B and R receives no infrared signal, then according to Table 2 the robot should turn counterclockwise, and it continues rotating counterclockwise until the reception status of L or R changes. If L receives both signals A and B and R receives only signal B, the robot should again turn counterclockwise, continuing until the reception status of both L and R changes so that L receives only signal A and R receives only signal B; according to Table 2, the robot should then move backward. The robot moves backward until the reception status of L or R changes. If L again receives both signals A and B and R receives signal B, the robot turns counterclockwise once more. The subsequent motion is thus a cycle of counterclockwise rotation and backward movement until the robot is perfectly aligned with the charging station, at which point it moves forward to complete the docking and start charging.
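For illustration, the reception-to-command behaviour described above can be written as a small decision function. This is a simplified interpretation rather than a literal transcription of Table 2: the speed values, the stop-and-wait fallback, and the choice to drive forward when aligned are assumptions.

```python
def docking_command(l_sees_a, l_sees_b, r_sees_a, r_sees_b,
                    v_dock=0.1, w_turn=0.1):
    """Map the four reception flags to a (linear, angular) velocity command.
    Positive linear = forward, positive angular = counterclockwise."""
    if l_sees_a and not l_sees_b and not r_sees_a and r_sees_b:
        return (v_dock, 0.0)       # aligned: drive onto the charging contacts
    if l_sees_a and l_sees_b and not (r_sees_a or r_sees_b):
        return (0.0, w_turn)       # skewed toward zone A/B boundary: rotate CCW
    if l_sees_a and l_sees_b and r_sees_b:
        return (0.0, w_turn)       # still skewed: keep rotating CCW
    if r_sees_a and not (l_sees_a or l_sees_b):
        return (0.0, -w_turn)      # station to the right front: rotate CW
    return (0.0, 0.0)              # no usable signal: stop and wait
```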

4. Experiments

4.1. Hardware

The robot is designed with a heavy-duty motor, specifically intended for high-load applications, ensuring enhanced torque and operational reliability. This motor is part of an overall design that prioritizes the robot’s capability to handle demanding tasks such as transporting heavy loads or navigating rugged terrain. Accompanying the motor system is an advanced computational setup, which includes the Jetson Nano as the core processing unit and high-quality sensors such as the Orbbec Dabai camera for vision-based tasks. Additionally, the robot is equipped with a microphone array module and an N100 9-axis gyroscope module, which provide essential sensing and stabilization functions during operation. The robot and the charging station are shown in Figure 9.
In this study, the selected experimental scene measures approximately 10 m in length and 8 m in width, featuring distinct open areas, obstacle regions, and clear boundaries, allowing for an intuitive comparison of the mapping results. The robot employed three different mapping methods for environmental scanning: gmapping for LiDAR-based mapping, the RTAB-Map using only visual point cloud data for purely visual mapping, and the RTAB-Map for visual and LiDAR fusion.
For the pure LiDAR mapping, the gmapping algorithm was employed. This algorithm allows for real-time construction of indoor maps with high accuracy and low computational demand, especially in small environments. Unlike Hector SLAM, which requires higher LiDAR frequency and is prone to errors during rapid robot movements, gmapping uses fewer particles and does not require loop closure detection, reducing computational load while maintaining a high degree of mapping accuracy. Additionally, gmapping effectively utilizes wheel odometry to provide prior pose information, reducing the dependency on high-frequency LiDAR data.
For pure visual mapping, the RTAB-Map was employed, using only the visual point cloud data from the RGB-D camera. While this visual system offers detailed environmental features, it is sensitive to lighting conditions, leading to less accurate results in low-light or feature-sparse areas compared to LiDAR-based methods. As the robot navigates through the environment, the map is generated using data from the RGB-D camera. Although the mapping results can be relatively accurate, the system’s sensitivity to lighting can cause it to miss key features in areas with insufficient or excessive light, resulting in gaps or inaccuracies in the map. Additionally, in narrow or confined spaces, the camera’s limited field of view often prevents the creation of a complete map, as crucial environmental details may not be adequately captured. These limitations highlight the challenges of relying solely on visual mapping, especially in complex or poorly lit environments.
In the case of LiDAR and visual fusion mapping, the RTAB-Map was used to integrate LiDAR data with visual information from cameras, resulting in a more comprehensive environmental map. LiDAR provides precise distance measurements and obstacle detection, while the camera offers detailed visual features, enhancing the overall resolution and accuracy of the map, particularly in environments with complex textures. The experimental results show that this fusion of sensors produces a more complete and reliable map. By accurately projecting 3D spatial points onto a 2D plane and processing the depth camera data, the map updates remain consistent with those generated by LiDAR. This fusion method combines the rich visual details captured by the camera with the precise distance measurements and obstacle detection from LiDAR, delivering a more comprehensive representation of the environment. Compared to maps created by single sensors, the fusion approach offers a higher level of environmental awareness, providing a more realistic and robust depiction of the experimental scene.
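The consistency between camera-based and LiDAR-based map updates mentioned above hinges on projecting the 3D points from the depth camera onto the 2D grid used by the LiDAR map. The snippet below is a generic sketch of such a projection, assuming the points have already been transformed into the map frame; the grid geometry, height band, and occupancy value are illustrative choices, not the parameters used in the experiments.

```python
import numpy as np

def project_to_grid(points_xyz, resolution=0.05, z_band=(0.05, 1.5),
                    origin=(-5.0, -5.0), size=(200, 200)):
    """Project 3D points onto a 2D occupancy grid, keeping only points inside a
    height band so that the floor and ceiling are ignored."""
    pts = np.asarray(points_xyz, dtype=float).reshape(-1, 3)
    band = pts[(pts[:, 2] >= z_band[0]) & (pts[:, 2] <= z_band[1])]
    ix = ((band[:, 0] - origin[0]) / resolution).astype(int)
    iy = ((band[:, 1] - origin[1]) / resolution).astype(int)
    grid = np.zeros(size, dtype=np.int8)
    valid = (ix >= 0) & (ix < size[0]) & (iy >= 0) & (iy < size[1])
    grid[ix[valid], iy[valid]] = 100   # mark cells containing points as occupied
    return grid
```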
During the mapping process, the robot systematically explored the environment. Initially, it focused on smaller loops to achieve quick loop closure, before gradually expanding its exploration to cover the entire environment. This strategy minimized the accumulation of localization errors and ensured the robot could effectively close larger loops without encountering significant drift. After completing the loop closure, the robot revisited certain areas to refine the map’s details, ensuring that both the global structure and local features were captured accurately. By following this approach, the robot was able to fully explore the environment and generate high-quality maps that facilitated its navigation and task performance, including the critical sub-task of automatic recharging. The LiDAR mapping, visual mapping, and LiDAR-vision fusion mapping results are shown in Figure 10.
Figure 11 showcases the navigation performance of the robot using two different sensor configurations: LiDAR-only mapping (top row) and LiDAR-visual fusion mapping (bottom row). The six images highlight the robot’s path planning and navigation from three different starting positions, with the goal of reaching the charging station. In the LiDAR-only mapping, the robot successfully detects random obstacles and adjusts its path accordingly using the improved A* algorithm combined with the dynamic window approach, ensuring smooth navigation to the target. This method is effective for obstacle avoidance and path optimization, as LiDAR provides precise environmental mapping.
On the other hand, the visual-only mapping system struggled to complete the navigation task due to its sensitivity to lighting variations and difficulty in recognizing environmental features in low-light or cluttered areas. Pure visual data often led to incomplete maps, causing the robot to fail in planning a reliable path. The fusion of LiDAR and visual sensors significantly improved the mapping quality and navigation accuracy. The bottom row demonstrates how combining LiDAR’s accurate distance measurement with the visual system’s environmental details provides the robot with a more robust and comprehensive map, allowing it to navigate complex environments more effectively and reliably than using either sensor independently.

4.2. Infrared Docking

When the robot approaches the charging station and enters the close-range docking mode, the infrared emitters on the charging station emit 38 kHz infrared carrier signals of different durations. The infrared receivers on the robot identify these signals to determine its position and orientation relative to the charging station. The docking process is shown in Figure 12.
If the left-side receiver receives a signal from emitter A and the right-side receiver receives a signal from emitter B, the robot adjusts its movement according to the predefined logic to ensure it faces the charging interface of the charging station. Once the robot is aligned with the charging station, it will continue moving forward and make contact with the charging port to complete the docking. Throughout the process, the robot continuously monitors the changes in infrared signals and adjusts its direction and position to ensure successful docking.
The experiments were conducted to evaluate the performance of the AGV in terms of map construction, path planning, navigation, and automatic docking under different sensor configurations. The tested configurations include infrared sensors, LiDAR, visual systems, and a combination of these sensors. The AGV was tasked with autonomously navigating from various starting positions to a charging station, avoiding obstacles along the way, and performing an automatic docking maneuver at the station. Table 3 summarizes the results of the tests, showing the advantages of sensor fusion over single-sensor methods.
The mapping process was initiated using three different sensor configurations: LiDAR with the gmapping algorithm, a visual system with the RTAB-Map, and a fusion of LiDAR and visual data with the RTAB-Map. LiDAR-based mapping using gmapping resulted in accurate and robust maps, but visual-only mapping struggled in low-light conditions and narrow spaces, leading to incomplete maps. The fusion of LiDAR and visual systems provided superior map quality, with fewer blind spots and more detailed environmental information. Path planning and navigation tests showed that the AGV using sensor fusion navigated more efficiently, as reflected in the reduced path planning distance and time to reach the target. Additionally, the AGV was able to avoid obstacles more effectively compared to using either LiDAR or visual systems alone. Finally, the automatic docking task demonstrated that relying solely on infrared or visual sensors led to higher docking errors, while the multi-sensor fusion system significantly improved docking precision and success rate. The experimental results clearly demonstrate the benefits of sensor fusion, particularly when operating in complex indoor environments where lighting conditions and obstacle configurations vary.

5. Conclusions

This paper investigates automatic recharging technology for unmanned vehicles based on multi-sensor fusion and proposes a method that integrates LiDAR, depth cameras, and infrared sensors for autonomous navigation and automatic recharging. LiDAR enables accurate environmental perception and obstacle avoidance through remote scanning. The combination of the improved A* algorithm and the dynamic window approach allows the robot to effectively avoid random obstacles while ensuring global path optimality. Infrared sensors are employed for precise docking in the final stage, ensuring accurate alignment with the charging station under various lighting conditions.
However, it is important to acknowledge that sensors may be sensitive to different environmental conditions, such as lighting variations and surface reflectivity. This sensitivity can impact the practicality of the experimental results. Future studies should focus on conducting experiments in a variety of real-world conditions to evaluate and improve the robustness of the system. Additionally, assessing the impact of different environmental factors on sensor performance will be crucial.
Furthermore, the cost and complexity of the proposed sensors must be carefully evaluated against their benefits in enhancing navigation. Future work should explore more cost-effective alternatives or strategies for sensor integration that maintain performance while reducing overall system complexity.
Moreover, the multi-sensor fusion approach requires real-time processing, raising questions about computational demand and latency. Addressing these challenges will be essential for ensuring that the system operates efficiently and effectively in dynamic environments. Future research should investigate optimization techniques to enhance processing speed and reduce latency in sensor data fusion.
In conclusion, the fusion of multiple sensors in the automatic recharging system fully leverages the advantages of each sensor, significantly enhancing the stability and adaptability of the robotic system. The experimental results demonstrate that the proposed method can achieve efficient autonomous navigation and automatic recharging in complex indoor environments, greatly improving system efficiency and reliability. By implementing these suggested improvements, future research can further refine the system’s practicality and performance, making it more suitable for real-world applications.

Author Contributions

Conceptualization, Y.X. and L.W.; methodology, Y.X.; software, L.L.; validation, Y.X., L.W. and L.L.; formal analysis, Y.X.; investigation, Y.X.; resources, L.L.; data curation, L.W.; writing—original draft preparation, Y.X.; writing—review and editing, L.L.; visualization, Y.X.; supervision, L.W.; project administration, Y.X.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 41771487, and the Hubei Provincial Outstanding Young Scientist Fund, grant number 2019CFA086.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Su, K.L.; Liao, Y.L.; Lin, S.P.; Lin, S.F. An interactive auto-recharging system for mobile robots. Int. J. Autom. Smart Technol. 2014, 4, 43–53. [Google Scholar] [CrossRef]
  2. Moshayedi, A.J.; Jinsong, L.; Liao, L. AGV (automated guided vehicle) robot: Mission and obstacles in design and performance. J. Simul. Anal. Novel Technol. Mech. Eng. 2019, 12, 5–18. [Google Scholar]
  3. Song, G.; Wang, H.; Zhang, J.; Meng, T. Automatic docking system for recharging home surveillance robots. IEEE Trans. Consum. Electron. 2011, 57, 428–435. [Google Scholar] [CrossRef]
  4. Hao, B.; Du, H.; Dai, X.; Liang, H. Automatic recharging path planning for cleaning robots. Mobile Inf. Syst. 2021, 2021, 5558096. [Google Scholar] [CrossRef]
  5. Meena, M.; Thilagavathi, P. Automatic docking system with recharging and battery replacement for surveillance robot. Int. J. Electron. Comput. Sci. Eng. 2012, 1148–1154. [Google Scholar]
  6. Niu, Y.; Habeeb, F.A.; Mansoor, M.S.G.; Gheni, H.M.; Ahmed, S.R.; Radhi, A.D. A photovoltaic electric vehicle automatic charging and monitoring system. In Proceedings of the 2022 International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 20–22 October 2022; pp. 241–246. [Google Scholar]
  7. Rao, M.V.S.; Shivakumar, M. IR based auto-recharging system for autonomous mobile robot. J. Robot. Control (JRC) 2021, 2, 244–251. [Google Scholar] [CrossRef]
  8. Ding, G.; Lu, H.; Bai, J.; Qin, X. Development of a high precision UWB/vision-based AGV and control system. In Proceedings of the 2020 5th International Conference on Control and Robotics Engineering (ICCRE), Osaka, Japan, 24–26 April 2020; pp. 99–103. [Google Scholar]
  9. Liu, Y.; Piao, Y.; Zhang, L. Research on the positioning of AGV based on Lidar. J. Phys. Conf. Ser. 2021, 1920, 012087. [Google Scholar] [CrossRef]
  10. Yan, L.; Dai, J.; Tan, J.; Liu, H.; Chen, C. Global fine registration of point cloud in LiDAR SLAM based on pose graph. J. Geod. Geoinf. Sci. 2020, 3, 26–35. [Google Scholar]
  11. Gao, H.; Ma, Z.; Zhao, Y. A fusion approach for mobile robot path planning based on improved A* algorithm and adaptive dynamic window approach. In Proceedings of the 2021 IEEE 4th International Conference on Electronics Technology (ICET), Chengdu, China, 7–10 May 2021; pp. 882–886. [Google Scholar]
  12. Tang, G.; Tang, C.; Claramunt, C.; Hu, X.; Zhou, P. Geometric A-star algorithm: An improved A-star algorithm for AGV path planning in a port environment. IEEE Access 2021, 9, 59196–59210. [Google Scholar] [CrossRef]
  13. Sun, Y.; Zhao, X.; Yu, Y. Research on a random route-planning method based on the fusion of the A* algorithm and dynamic window method. Electronics 2022, 11, 2683. [Google Scholar] [CrossRef]
  14. Zhou, S.; Cheng, G.; Meng, Q.; Lin, H.; Du, Z.; Wang, F. Development of multi-sensor information fusion and AGV navigation system. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; pp. 2043–2046. [Google Scholar]
  15. Zhang, J.; Singh, S. Laser–visual–inertial odometry and mapping with high robustness and low drift. J. Field Robot. 2018, 35, 1242–1264. [Google Scholar] [CrossRef]
  16. Jiang, Y.; Leach, M.; Yu, L.; Sun, J. Mapping, navigation, dynamic collision avoidance and tracking with LiDAR and vision fusion for AGV systems. In Proceedings of the 2023 28th International Conference on Automation and Computing (ICAC), Birmingham, UK, 30 August–1 September 2023; pp. 1–6. [Google Scholar]
  17. Dai, Z.; Guan, Z.; Chen, Q.; Xu, Y.; Sun, F. Enhanced object detection in autonomous vehicles through LiDAR—Camera sensor fusion. World Electr. Vehicle J. 2024, 15, 297. [Google Scholar] [CrossRef]
  18. Labbé, M.; Michaud, F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. J. Field Robot. 2019, 36, 416–446. [Google Scholar] [CrossRef]
  19. Das, S. Simultaneous localization and mapping (SLAM) using RTAB-MAP. arXiv 2018, arXiv:1809.02989. [Google Scholar]
  20. Gomez-Ojeda, R.; Moreno, F.A.; Zuniga-Noël, D.; Scaramuzza, D.; Gonzalez-Jimenez, J. PL-SLAM: A stereo SLAM system through the combination of points and line segments. IEEE Trans. Robot. 2019, 35, 734–746. [Google Scholar] [CrossRef]
Figure 1. Hardware components of the multi-sensor fusion-based autonomous recharging system.
Figure 2. Robot positioning relative to the charging station: side vs. front view.
Figure 3. RTAB-Map system framework.
Figure 4. Path planning using traditional A* and improved A* from different starting positions to the charging station. (a) Traditional A* from Position 1. (b) Improved A* from Position 1. (c) Comparison of A* and improved A*. (d) Traditional A* from Position 2. (e) Improved A* from Position 2. (f) Comparison of A* and improved A*. (g) Traditional A* from Position 3. (h) Improved A* from Position 3. (i) Comparison of A* and improved A*.
Figure 5. The system diagram of the autonomous recharging system.
Figure 6. Comparison of path planning using improved A* and improved A* with DWA. (a) Improved A* from Position 1 to charging station. (b) Improved A* from Position 2 to charging station. (c) Improved A* from Position 3 to charging station. (d) Improved A* with DWA from Position 1 to charging station. (e) Improved A* with DWA from Position 2 to charging station. (f) Improved A* with DWA from Position 3 to charging station.
Figure 7. Infrared signal timing diagram.
Figure 8. Infrared docking process showing the stages from approach to successful docking. (a) Initial approach towards the charging station. (b) Robot begins aligning with the infrared signals. (c) Fine-tuning position for accurate docking. (d) Robot nearing the charging station for final adjustments. (e) Final alignment before docking. (f) Robot successfully docked with the charging station.
Figure 9. Illustration of the system components: (a) the robot and (b) the charging station.
Figure 10. Comparison of different mapping methods. (a) LiDAR mapping. (b) Visual mapping. (c) LiDAR and visual mapping.
Figure 11. Navigation paths starting from three different positions: (Top row) LiDAR-only mapping, (Bottom row) LiDAR and visual mapping. (a) LiDAR mapping (Start 1). (b) LiDAR mapping (Start 2). (c) LiDAR mapping (Start 3). (d) LiDAR and visual mapping (Start 1). (e) LiDAR and visual mapping (Start 2). (f) LiDAR and visual mapping (Start 3).
Figure 12. Real infrared docking process.
Table 1. Comparison of algorithms in three experimental groups.
Metric | Traditional A* (Experiment 1) | Traditional A* (Experiment 2) | Traditional A* (Experiment 3) | Improved A* (Experiment 1) | Improved A* (Experiment 2) | Improved A* (Experiment 3)
Planning time (s) | 0.020610 | 0.007665 | 0.004626 | 0.027963 | 0.020292 | 0.014692
Turn angle (degrees) | 45.0000 | 180.0000 | 135.0000 | 0.0000 | 80.3964 | 72.6313
Turn count | 1 | 4 | 3 | 0 | 2 | 2
Path length | 13.8995 | 21.2426 | 28.6569 | 13.0384 | 21.0105 | 28.2904
Node count | 81 | 84 | 123 | 53 | 73 | 100
Table 2. The infrared docking control logic.
Signal / Command | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8
L receives A | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
L receives B | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0
R receives A | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0
R receives B | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
Control action | Back/Reverse | Reverse | Forward | Forward | Back | Back | |
X-axis (m/s) | −0.1/0 | 0 | 0 | −0.1 | −0.1 | −0.1 | |
Z-axis (rad/s) | 0/0.1 | 0.1 | −0.1 | −0.1 | 0 | 0 | |
Table 3. Results comparing different sensor configurations.
Test Variable | Infrared Only | LiDAR Only | Vision Only | LiDAR + Vision | LiDAR + Vision + Infrared
Docking success rate (%) | 50 | 70 | 50 | 80 | 95
Average docking time (s) | 60 | 45 | 55 | 35 | 25
Docking error (cm) | 15 | 10 | N/A | 8 | 3
Obstacle adaptability | Poor | Medium | Poor | Excellent | Excellent
Lighting condition adaptability | Excellent | Excellent | Poor | Good | Excellent
Path planning distance (m) | N/A | 20 | N/A | 20 | 19
Time to reach destination (s) | N/A | 45 | N/A | 40 | 35

