Article

Development of Autonomous Driving Patrol Robot for Improving Underground Mine Safety

Department of Energy Resources Engineering, Pukyong National University, Busan 48513, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3717; https://doi.org/10.3390/app13063717
Submission received: 31 January 2023 / Revised: 10 March 2023 / Accepted: 13 March 2023 / Published: 14 March 2023
(This article belongs to the Special Issue Recent Advances in Smart Mining Technology, Volume II)

Abstract

To improve the working conditions in underground mines and eliminate the risk of human casualties, patrol robots that can operate autonomously are necessary. This study developed an autonomous patrol robot for underground mines and conducted field experiments at underground mine sites. The robot estimated its own location and drove autonomously using encoder, IMU, and LiDAR sensors, and it measured hazards using gas sensors, dust particle sensors, and a thermal imaging camera. The developed autonomous driving robot can perform waypoint-based path planning and can automatically return to the starting point after driving along the waypoints sequentially. In addition, the robot acquires dust and gas concentration levels along with thermal images and combines them with location data to create an environmental map. The results of the field experiment conducted in an underground limestone mine in Korea are as follows. The O2 concentration was maintained at a constant level of 15.7%; no toxic or combustible gases (H2S, CO, LEL) were detected; and the thermal imaging data showed that humans could be detected. The maximum dust concentration in the experimental area was about 0.01 mg/m3, and the dust concentration was highest in the 25–35 m section of the environmental map. The use of autonomous patrol robots to explore areas that are dangerous for humans to access is expected to improve work safety, and the automation of exploration tasks is expected to improve productivity.

1. Introduction

Underground mine sites are prone to several hazards, including explosions, ground subsidence, toxic gas leakage, and equipment collisions [1,2,3]. These hazards can hinder a safe working environment and lead to decreased productivity if mining activities are stopped because of such events. To improve the safety and productivity of work at an underground mine site, periodic examinations of hazard factors must be performed. Various efforts have been made to prevent accidents by investigating these hazard factors in advance. For instance, Hanson et al. [4] presented methods for detecting hazard factors in underground coal mines, and Grychowski [5] analyzed data using fuzzy logic theory to prevent fire accidents in underground coal mines.
Recently, various ICT-based technologies have been used to explore areas that are dangerous for humans to access [6,7,8,9,10,11,12,13]. Several studies have been conducted to explore underground mines using remotely controlled mobile robots [14,15,16,17,18,19,20]. A mobile robot equipped with environmental sensors can investigate hazard factors while driving around in a tunnel and prepare for danger by analyzing them. To this end, Zhao et al. [21] developed a remote-controlled robot that can be used in underground coal mines. A gas sensor was installed on the robot to measure the concentration of toxic gases, and the hazard factors in the pit were investigated by analyzing the toxic gas concentration. Huh et al. [22] developed a remote-controlled robot that could investigate hazard factors in underground mines and evaluated its effectiveness by conducting experiments in underground coal mines.
Several studies have been conducted on autonomous driving robots that can investigate the conditions of underground mines [23,24,25,26,27]. Autonomous driving robots were used for mapping to evaluate the stability of tunnels [28,29,30], and their own location estimation technologies were used to record the position of the robot where hazard factors were detected [31,32,33]. In addition, sensors that can measure the gas, temperature, and humidity conditions were installed on a robot to investigate the environmental characteristics of the tunnel [34,35,36]. Thrun et al. [37] developed a mobile robot capable of navigation, path planning, and mapping in underground mines and conducted a field experiment on abandoned mines. Baker et al. [38] developed an autonomous driving robot that performed driving, exploration, and mapping in abandoned underground mines to investigate gas concentration and sinkage. Kim and Choi [36] analyzed and visualized environmental data obtained from an autonomous driving robot equipped with humidity and gas sensors in an underground mine.
Previous studies on the autonomous exploration of underground mines using autonomous robots did not verify robot performance at intersections or on slopes, as the experimental areas consisted of straight sections. Furthermore, the robot’s ability to return to its origin after driving to its destination was not verified. As autonomous exploration must be carried out periodically to ensure the safety of workers in underground mines, an automated system in which robots return to their origin after mine exploration is essential to improve safety and productivity. Therefore, the robot must be able to return to the starting point after sequentially traveling to several points designated by the user. In addition, a field test of an autonomous exploration system using an autonomous driving robot in an actual underground mine should be conducted to evaluate the effectiveness of autonomous driving robots in areas such as intersections and slopes, as well as straight shafts.
This study aims to develop an autonomous driving robot for the autonomous exploration of underground mines and to conduct field experiments in actual underground mine sites. Encoder, light detection and ranging (LiDAR), and inertial measurement unit (IMU) sensors are used to perform location estimation and autonomous driving of the robot. Hazard factors in a mine are investigated using gas sensors, particle sensors, and thermal cameras. A field experiment is conducted with the developed robot, wherein sequential driving of the robot to various points in the mine and its return to the origin is demonstrated. Thus, the route planning and driving ability of the robot in terrains such as intersections and ramps is evaluated. In addition, this study evaluates the sensor fusion technology to improve the accuracy of robot location estimation and create a safety map for the experimental area by combining the acquired safety data with the robot position data.

2. Materials and Methods

2.1. System Configuration of the Autonomous Driving Robot

Figure 1 shows the hardware and communication system of the autonomous driving robot developed in this study. The robot checks for hazard factors, such as hazardous gas and dust, while automatically driving to specific points inside the underground mine shaft that have been designated in advance; furthermore, it can detect the presence of workers via thermal imaging. In addition, the robot automatically returns to the starting point after completing the exploration and visualizes the environmental data about the tunnel upon its return. The autonomous driving robot developed in this study has three functions: autonomous driving and localization, safety data processing, and motor control.
Sensors are connected to the main controller through which data acquisition and processing are performed to realize autonomous driving and location estimation. By using the LiDAR sensor, a map of the experimental area is created before performing autonomous driving; during autonomous driving, obstacle recognition and position correction are performed via map matching. The IMU sensor measures the 3-axis attitude change when the robot is driving, and the encoder sensor measures the robot’s moving distance and heading by calculating the number of rotations of the robot’s wheels. An RGB-D camera is used to convey information on the driving situation in the tunnel to workers outside the tunnel by capturing and recording RGB and depth images.
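As an illustration of this dead-reckoning step, the sketch below shows how a differential-drive pose is typically propagated from encoder tick increments; the tick resolution, wheel radius, and track width are assumed values for the sketch, not the Scout mini's actual parameters.

```python
import math

# Illustrative parameters (assumptions, not the Scout mini's specifications)
TICKS_PER_REV = 1024      # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.08       # wheel radius in meters
TRACK_WIDTH = 0.49        # distance between left and right wheels in meters

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Dead-reckon a differential-drive pose from encoder tick increments."""
    # Convert tick increments to wheel travel distances
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    # Distance travelled by the robot center and the change in heading
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TRACK_WIDTH
    # Integrate the pose, using the midpoint heading for better accuracy
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta
```

Accumulated wheel slip makes such an estimate drift, which is why the system described below corrects it with IMU and LiDAR data.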
For safety data processing, a dust sensor that measures the concentration of fine dust and a gas sensor that measures the concentrations of gases, including harmful gases, in the tunnel are used. In addition, a thermal camera is used to reliably determine the presence of workers in a dark tunnel where it is difficult to identify people visually. All data are stored in the data acquisition controller so that workers waiting outside the tunnel can check the safety data obtained while the robot moves through the tunnel. In addition, connecting the robot controller and the data acquisition controller via Wi-Fi facilitates remote control of the robot-mounted controller.
For motor control, the robot is operated by driving its motors according to the processing results of the main controller. It also performs functions such as controlling the lighting and the emergency stop switch installed on the front of the robot.
Table 1 lists the specifications of the sensors and the robot platform used in this study. Agilex’s Scout mini robot was used as the mobile robot platform [39]. The Scout mini is a four-wheeled, differential-drive robot with a maximum speed of 20 km/h and a payload of 20 kg, and it can be equipped with multiple sensors. As the robot can rotate in place, it can efficiently change direction when it encounters a dead end or a narrow tunnel inside a mine. In addition, as the vehicle body is raised approximately 10 cm from the floor, it can pass over small obstacles without avoiding them. With a driving time of 2 h and a range of 10 km, it can sequentially explore various points inside the mine and return to its origin without additional charging or maintenance.
The autonomous driving robot uses IMU, encoder, and LiDAR sensors to perform location estimation and autonomous driving; it uses an RGB-D camera to capture and record the driving environment. The IMU sensor measures the pose of the robot by combining data from the gyroscope, accelerometer, and magnetometer; the LiDAR sensor scans the shape of the tunnel walls to create a two-dimensional map and is used for autonomous navigation and location estimation. In addition, two PCs were used as controllers to control the robot and acquire data [40].
Figure 2 shows the developed autonomous driving robot. A LiDAR sensor is mounted on the front of the robot to measure the shape of the tunnel. Further, thermal and RGB-D cameras are placed in the robot’s driving direction to record the presence of workers and the driving environment. A built-in IMU sensor and an encoder sensor are used to measure the pose and position of the robot. Gas and dust sensors are mounted on the front and side parts of the robot, respectively, to examine the working environment in the tunnel and detect the presence or absence of harmful gases.
Figure 3 shows the overall data-processing configuration of the autonomous driving robot developed in this study. Data processing includes localization, safety data acquisition, and autonomous driving, and these functions share processed data and operate organically. First, for localization, the data from the gyroscope, accelerometer, and magnetometer are combined using a Kalman filter to obtain relatively accurate robot pose data. When estimating the robot’s position by fusing multiple sensors, the accuracy of pose estimation can be improved by using the Kalman filter to probabilistically calculate and correct errors that may occur in the sensors [36]. The standard Kalman filter cannot be applied because the robot’s motion and measurement models are non-linear; therefore, in this study, the extended Kalman filter (EKF) was used to correct the motion of the robot. In addition, the position of the robot is computed by combining the wheel rotation data measured by the encoder sensor, from which the moving distance and heading are derived, with the corrected pose data. The robot’s location and pose data are combined to improve accuracy.
For safety data acquisition, the data acquired by the gas and particle sensors are stored. Further, the data regarding the presence of workers that is detected via thermal imaging are stored. In addition, after aligning the acquired data according to time, they are combined with the position data of the robot to visualize the point at which toxic gases, dust, and workers are detected.
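A minimal sketch of this timestamp alignment, assuming pandas data frames with hypothetical column names and a 1 s matching tolerance, might look as follows:

```python
import pandas as pd

# Hypothetical timestamped logs (column names and values are assumptions)
poses = pd.DataFrame({
    "t": pd.to_datetime(["2023-01-31 10:00:00", "2023-01-31 10:00:01",
                         "2023-01-31 10:00:02"]),
    "x": [0.0, 0.4, 0.8],
    "y": [0.0, 0.0, 0.1],
})
dust = pd.DataFrame({
    "t": pd.to_datetime(["2023-01-31 10:00:00.5", "2023-01-31 10:00:01.6"]),
    "dust_mg_m3": [0.004, 0.007],
})

# Attach each dust reading to the nearest pose estimate within 1 s
aligned = pd.merge_asof(dust.sort_values("t"), poses.sort_values("t"),
                        on="t", direction="nearest",
                        tolerance=pd.Timedelta("1s"))
print(aligned)  # each row: time, dust level, and the robot (x, y) at that time
```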
For autonomous driving, a destination point is first set on a previously prepared 2D map. Subsequently, a route is planned, and the robot’s position and pose data calculated in real time are reflected to realize autonomous driving. In addition, the robot recognizes and avoids surrounding obstacles using the LiDAR sensor.

2.2. Autonomous Driving Algorithm and Software Configuration

This study used the robot operating system (ROS) to develop an autonomous robot. The ROS is an open-source platform that can perform data transmission, data processing, and hardware connections. It can easily perform tasks such as visualization and simulation and can efficiently apply algorithms and libraries to autonomous driving systems [41].
Figure 4 shows the overall data flow of the autonomous driving algorithm used in this study [40]. The autonomous driving system acquires and processes data from the sensors, plans the motion of the robot, and controls the motors used to move the robot. The position of the robot is estimated using odometry, the AMCL algorithm, and measured data from the encoder sensors. Routes are planned based on previously prepared maps. The area in which the robot can drive is selected through the local and global cost maps, and real-time obstacle avoidance, local path planning, and global path planning over a wide area are implemented simultaneously. In addition, recovery behaviors are executed when the robot can no longer drive because of proximity to an obstacle or a localization failure. Table 2 lists the representative topics used to implement the autonomous driving function in this study.
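As a concrete illustration, the rospy sketch below dispatches a sequence of goals through the navigation stack's move_base action interface, which is one common way to realize the sequential waypoint driving and origin return described above; the waypoint coordinates, frame name, and node name are placeholders rather than values from this study.

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Placeholder waypoints (x, y, quaternion z, quaternion w) in the map frame;
# the last entry sends the robot back to its starting point.
WAYPOINTS = [(15.0, 2.0, 0.0, 1.0), (30.0, 3.0, 0.0, 1.0), (0.0, 0.0, 1.0, 0.0)]

def patrol():
    rospy.init_node("waypoint_patrol")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y, qz, qw in WAYPOINTS:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.z = qz
        goal.target_pose.pose.orientation.w = qw
        client.send_goal(goal)
        client.wait_for_result()  # block until the waypoint is reached or aborted

if __name__ == "__main__":
    patrol()
```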
The robot’s location is estimated through the adaptive Monte Carlo localization (AMCL) algorithm, which compares and matches the previously prepared 2D map with the shape of the tunnel acquired in real time from the LiDAR sensor. AMCL estimates the robot’s pose and location using a particle filter, which represents uncertainty stochastically with weighted particles; the optimal position of the robot is estimated by evaluating the sensor data probabilistically. The particle set used for position prediction is expressed in Equations (1) and (2), where $X_t$ denotes the particle set predicted at time $t$, and $x_t^{[m]}$ denotes the $m$-th of $M$ pose hypotheses, sampled from the posterior conditioned on the measurements $z_{1:t}$ and controls $u_{1:t}$. Among the AMCL parameters, the minimum number of particles was set to 100 and the maximum to 5000; the resample interval was set to 1, and the transform tolerance was set to 0.3 [42]. Additionally, the number of particles used in the filter was set to 80, the resampling threshold was set to 0.5, and the linear and angular update thresholds were set to 0.5.
$X_t = \{ x_t^{[1]}, x_t^{[2]}, \ldots, x_t^{[M]} \}$ (1)

$x_t^{[m]} \sim p(x_t \mid z_{1:t}, u_{1:t})$ (2)
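To make the particle filter concrete, the following Python sketch shows the predict-weight-resample cycle behind Monte Carlo localization in schematic form; the fixed particle count, motion noise, Gaussian range likelihood, and the expected_range map-matching function are illustrative placeholders, not the AMCL implementation used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle set X_t: each row is a pose hypothesis (x, y, yaw). AMCL adapts
# the particle count (100-5000 in this study); here it is fixed for brevity.
M = 500
particles = rng.uniform([0.0, 0.0, -np.pi], [100.0, 6.0, np.pi], size=(M, 3))
weights = np.full(M, 1.0 / M)

def predict(particles, d_dist, d_yaw):
    """Propagate every particle through a noisy odometry motion model."""
    particles[:, 0] += d_dist * np.cos(particles[:, 2])
    particles[:, 1] += d_dist * np.sin(particles[:, 2])
    particles[:, 2] += d_yaw
    return particles + rng.normal(0.0, [0.02, 0.02, 0.01], size=particles.shape)

def update(particles, weights, z, expected_range):
    """Reweight particles by the likelihood p(z | x) of a range measurement."""
    likelihood = np.exp(-0.5 * ((z - expected_range(particles)) / 0.1) ** 2)
    weights = weights * likelihood + 1e-300        # guard against all-zero weights
    weights /= weights.sum()
    # Multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < M / 2:
        idx = rng.choice(M, size=M, p=weights)
        particles, weights = particles[idx], np.full(M, 1.0 / M)
    return particles, weights
```

The pose estimate is then taken as a weighted average over the particle set, which corresponds to the dense cluster of position hypotheses visualized in the interface described below.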
In this study, the position of the robot was estimated by the encoder, and the IMU data were fused through the EKF to improve position accuracy by correcting the rotation angle of the robot. When estimating the position of the robot using the EKF, the state vector shown in Equation (3) is used. Here, $X$, $Y$, and $Z$ represent the position of the robot; $roll$, $pitch$, and $yaw$ represent the orientation about the x-, y-, and z-axes, respectively; $\dot{X}$, $\dot{Y}$, and $\dot{Z}$ represent the linear velocities along the x-, y-, and z-axes; $\dot{roll}$, $\dot{pitch}$, and $\dot{yaw}$ represent the angular velocities about the x-, y-, and z-axes; and $\ddot{X}$, $\ddot{Y}$, and $\ddot{Z}$ represent the linear accelerations along the x-, y-, and z-axes. In this study, the $X$ and $Y$ values were used to estimate the position of the robot in 2D, and the yaw value was taken from the IMU sensor. Since the rotation angle of the IMU data is reflected by applying the EKF-based position correction, the rotation angle error due to wheel slip can be corrected [42].
$[X, Y, Z, roll, pitch, yaw, \dot{X}, \dot{Y}, \dot{Z}, \dot{roll}, \dot{pitch}, \dot{yaw}, \ddot{X}, \ddot{Y}, \ddot{Z}]^{T}$ (3)
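A minimal sketch of this fusion, reduced to the planar $[X, Y, yaw]$ subset actually used, is shown below: the prediction step integrates encoder odometry through the non-linear motion model, and the update step corrects yaw with the IMU reading. All noise covariances are illustrative assumptions, and angle wrapping is omitted for brevity.

```python
import numpy as np

# Minimal planar EKF: state [x, y, yaw]; encoder odometry drives the
# prediction, the IMU yaw reading drives the correction.
x = np.zeros(3)                    # state estimate
P = np.eye(3) * 0.1                # state covariance
Q = np.diag([0.01, 0.01, 0.005])   # process noise (odometry uncertainty, assumed)
R_yaw = np.array([[0.002]])        # IMU yaw measurement noise (assumed)

def ekf_predict(x, P, d_dist, d_yaw):
    """Non-linear motion update; F is the motion model's Jacobian."""
    x_new = np.array([x[0] + d_dist * np.cos(x[2]),
                      x[1] + d_dist * np.sin(x[2]),
                      x[2] + d_yaw])
    F = np.array([[1.0, 0.0, -d_dist * np.sin(x[2])],
                  [0.0, 1.0,  d_dist * np.cos(x[2])],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_update_yaw(x, P, yaw_meas):
    """Linear measurement update: the IMU observes yaw directly."""
    H = np.array([[0.0, 0.0, 1.0]])
    y = np.array([yaw_meas - x[2]])                # innovation
    S = H @ P @ H.T + R_yaw                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

Because the IMU yaw enters through the update step, a wheel-slip episode that corrupts the encoder-derived heading is pulled back toward the gyro-derived value, which is the correction effect described above.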
In this study, a two-dimensional map of the tunnel was created by driving through the experimental area once under remote control before performing autonomous driving. Simultaneous localization and mapping (SLAM) technology was used to create the two-dimensional map of the experimental area. SLAM combines point cloud data obtained from the LiDAR and additional sensors during the robot’s motion to estimate its location and simultaneously create a map of the driving area. Figure 5 shows the interface screen visualizing the autonomous driving robot planning a route, estimating its location, and driving toward a destination. The red line overlaid on the map is the real-time LiDAR data, and the green arrow indicates the estimated robot position and heading. The red circular area near the green arrow represents the probabilistically estimated set of robot positions: the denser the red area in one place, the more accurate the position estimate. The interface also allows users to select which data layers to display, and the camera’s RGB and depth images are visualized in real time.
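As a schematic illustration of the mapping idea only (not the SLAM package used in this study), the sketch below accumulates LiDAR beams into a log-odds occupancy grid: cells crossed by a beam become more likely free, and beam endpoints more likely occupied. The resolution, map extent, and log-odds increments are assumed values, and poses are assumed to stay within the map bounds.

```python
import numpy as np

RES = 0.1                              # meters per cell (assumed)
HALF = 300                             # grid is 600 x 600 cells, centered on the origin
grid = np.zeros((2 * HALF, 2 * HALF))  # log-odds map covering 60 m x 60 m
L_FREE, L_OCC = -0.4, 0.85             # log-odds increments (illustrative)

def to_cell(wx, wy):
    """World coordinates (m) to grid indices (row, col)."""
    return int(wy / RES) + HALF, int(wx / RES) + HALF

def update_grid(grid, pose, ranges, angles, max_range=20.0):
    """Fold one LiDAR scan, taken at a known pose, into the occupancy grid."""
    px, py, yaw = pose
    for r, a in zip(ranges, angles):
        if r <= 0.0 or r >= max_range:
            continue                                 # drop invalid/out-of-range beams
        ex = px + r * np.cos(yaw + a)                # beam endpoint in the world frame
        ey = py + r * np.sin(yaw + a)
        n = int(r / RES)
        for i in range(n):                           # cells along the beam are free
            row, col = to_cell(px + (ex - px) * i / n, py + (ey - py) * i / n)
            grid[row, col] += L_FREE
        row, col = to_cell(ex, ey)                   # the endpoint cell is occupied
        grid[row, col] += L_OCC
    return grid
```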

2.3. Autonomous Exploration System Configuration

Figure 6 shows the dust sensor (Figure 6a), gas sensor (Figure 6b), and thermal camera (Figure 6c) used in this study. The particle sensor measures the dust particle concentration in the tunnel, and the gas sensor measures CO, combustible gases (LEL), O2, and H2S. The detection ranges of the particle and gas sensors cover the environmental limits recommended by the Korean government for underground mines [43]. At a mine site, it is necessary to ensure the absence of workers in the mine before carrying out blasting work, and the thermal camera has the advantage of accurately detecting human presence even in a dark underground environment. Table 3 lists the detailed specifications of the environmental detection sensors used in this study.
Figure 7 shows the data processing flowchart for creating an environmental map. While the robot travels along the planned path, the safety sensors acquire data. When the robot returns, the position and safety data are aligned by timestamp to match the robot’s position with the safety data measured at that point. The safety data are then visualized along the robot’s movement path and, where meaningful variation exists, rendered as a map using a geographic information system (GIS). In this study, an environmental map of the experimental area was created using the inverse distance weighting (IDW) algorithm [44].
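The IDW interpolation itself is simple enough to sketch directly; the sample points, dust values, power parameter, and grid below are hypothetical and serve only to show the computation.

```python
import numpy as np

def idw(sample_xy, sample_vals, grid_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting: each grid value is a distance-weighted
    average of the measured values, so nearby samples dominate."""
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # closer samples receive larger weights
    return (w * sample_vals).sum(axis=1) / w.sum(axis=1)

# Hypothetical dust readings (mg/m3) at points along the robot's path
path_xy = np.array([[0.0, 0.0], [10.0, 0.5], [25.0, 1.0], [35.0, 1.2]])
dust = np.array([0.002, 0.004, 0.010, 0.009])

# Interpolate onto a coarse grid covering the tunnel section
gx, gy = np.meshgrid(np.linspace(0.0, 35.0, 8), np.linspace(0.0, 1.2, 4))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dust_map = idw(path_xy, dust, grid).reshape(gx.shape)
print(dust_map)
```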
The robot acquires thermal images while it drives through the entire experimental area. It recognizes people using thermal images and machine-vision-based image processing algorithms. In the image processing step, the background is removed, leaving only the people, who appear at a relatively high temperature. When two or more people are detected, they are recognized as independent objects, and the number of people is counted. Figure 8 shows the process of human recognition using the acquired thermal image and a machine vision system. As shown in Figure 8a, a thermal image is acquired and then converted to a grayscale image to clearly separate subjects from the background (Figure 8b). After converting the grayscale image to a binary image, each object is recognized independently, and the number of people is counted (Figure 8c). The morphology and particle analysis algorithms of the LabVIEW software were used for this process. Thermal imaging can be used not only to recognize people but also to monitor fire hazards, which frequently occur in underground mines.
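The study implemented this pipeline with LabVIEW's morphology and particle analysis tools; the OpenCV sketch below reproduces the same grayscale-threshold-count idea for illustration, with Otsu thresholding and the minimum blob area as assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def count_people(thermal_gray: np.ndarray, min_area: int = 200) -> int:
    """Count warm blobs in an 8-bit grayscale thermal image."""
    # Otsu's threshold separates warm foreground (people) from the cooler background
    _, binary = cv2.threshold(thermal_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small hot speckles
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Connected-component labeling: each remaining large blob counts as one person
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    return sum(1 for i in range(1, n_labels)          # label 0 is the background
               if stats[i, cv2.CC_STAT_AREA] >= min_area)
```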

3. Field Experiment

3.1. Experimental Area

In this study, a field experiment was conducted in an underground limestone mine (37°03′37” N, 128°19′44” E) in Korea. The experimental area is a limestone mine that is currently in operation. For safety reasons, a part of the entire mine shaft with a length of approximately 100 m, a width of 6 m, and a height of 8 m was selected as the experimental section (Figure 9). The experiment was planned with the robot moving in a straight section of approximately 30 m along the slope from the intersection and finally returning to the origin. The path selection function at the intersection, the driving function in the slope area, and the origin return function were verified.

3.2. Experimental Method

The autonomous driving robot starts driving from the origin along the planned path while estimating its own location. At this time, the current position is corrected by comparing the previously prepared map with the shape of the currently measured tunnel, and the presence or absence of obstacles in front is checked in real time. The sensor data acquired when the robot was driving were stored, and the driving process of the robot was recorded using an external camera. In addition, while the robot was driving, the videos captured by the RGB and thermal cameras installed on the front of the robot were recorded. External lighting was installed to accurately record the external driving of the robot, along with the robot’s own lighting.

4. Results

Figure 10 shows the motion of the developed autonomous driving robot in the experimental area at the underground mine site, along with the RGB and depth images recorded through the RGB-D camera mounted on the robot. The autonomous driving robot arrived at the waypoint located at the intersection after departure and then drove safely to the destination point located at the top of the slope. After changing its driving direction through self-rotation at the destination, it passed the waypoint and returned safely to the starting point. The robot took approximately 126 s to travel through the entire experimental section. The robot recorded RGB images while driving through the entire section, and when it returned, the entire driving environment could be reviewed.
During the experiment, external lighting was used to photograph the driving of the robot. However, in an actual mine, it is difficult to grasp the overall shape of the underground tunnel or the presence of cavities because no external lighting is available and the only light is mounted on the robot itself. In this case, the overall shape of the tunnel could be determined using the depth images: the parts marked in white were recognized as open areas of the tunnel, and the black parts were recognized as tunnel walls, so the overall direction and shape of the tunnel could be recognized. In addition, if the robot can explore the blasting area, the size of the blasted area can be measured even without an additional light source.
Figure 11 shows the motion of the self-driving robot as it corrects its position using the data obtained by matching the LiDAR sensor data with the map together with its own location estimation data. Before starting autonomous driving, the robot corrects its position and pose by matching the LiDAR data with the map (Figure 11a). When it arrived at the first waypoint, it was confirmed that it moved to the coordinates and pose set in advance and corrected its position and pose while matching the corners of the intersection (Figure 11b). The robot then passed the slope and arrived at the destination (Figure 11c), turned around in the returning area (Figure 11d), and returned to the starting point (Figure 11e).
Figure 12 shows the driving path estimated by the encoder sensor alone and the driving path estimated by fusing the encoder, IMU, and LiDAR sensors. With the fused estimate, the outbound path and the return path were almost identical, whereas the encoder-only estimate showed a relatively large difference between them. Additionally, when the robot rotated, the encoder-only estimate showed a larger rotation than the sensor fusion estimate. This suggests that the wheel rotations measured at that point exceeded the robot’s actual rotation, causing position and heading errors through wheel slip between the floor and the robot’s wheels.
Figure 12 also shows the image of a human detected by the thermal camera. For safety reasons, the movement of workers in the tunnel was restricted while the experiment was conducted; however, the videographers and equipment operators at the intersection and starting point, respectively, were captured by the thermal camera. The thermal images were processed using a machine vision processing algorithm to accurately classify and recognize the human form. In addition, by merging these data with the estimated robot’s position coordinates, it was possible to confirm the location where the human was detected in the mine. Two people detected at the starting point were recognized when the robot returned almost to the starting point after completing its run, and one person was detected when it entered an intersection. The x and y coordinates of the points where the persons were detected were recorded as the origin return (1.53, 0.62) and intersection (30.57, 3.13), respectively. It is believed that the location of a person in the mine can be predicted more accurately if the heading angle of the robot at the point where the person is detected and the pixel coordinates of the person recognized in the thermal image are fused.
During the field experiment, the autonomous driving robot measured the concentrations of H2S, CO, O2, and LEL using the gas sensor, measured the dust concentration in the tunnel using the particle sensor, and acquired thermal images. Hazardous gases, such as H2S, CO, and combustible gases (LEL), were not detected, and the O2 concentration remained constant at approximately 15%. Figure 13 shows graphs of the dust and O2 concentrations obtained by the autonomous patrol robot during the underground mine field experiment. Figure 14 shows the particle concentration map for the experimental area. The estimated robot position coordinates and particle concentration data were aligned according to time, and the particle concentration was visualized through color differences at the robot position points in the GIS. Because the robot passed through the same points twice, first traveling to the destination and then returning to the origin, only the dust data obtained while traveling to the destination were reflected on the map to prevent overlapping data on one map. By reflecting the robot’s location coordinates in the data obtained while the robot was traveling, the environmental map made it possible to intuitively visualize the dusty areas and effectively examine how the dust concentration changed as the robot moved.

5. Discussion

The robot system developed in this study was compared with the mobile robots used in underground mines in three previous studies (Table 4). The localization function is essential in an underground mining environment in which GPS cannot be used. In two of the three comparison cases, localization was realized; in the third, localization was not possible because the robot was remotely controlled by an operator. Of the two studies that achieved localization, one used multi-waypoint driving, and the other used one-way navigation via the wall-following method. To facilitate safe mine exploration, it must be possible to explore the mine without requiring a worker to enter the tunnel directly; therefore, the multi-waypoint navigation function is required.
Functions such as environmental monitoring and human detection are needed to detect hazard factors and recognize workers in dangerous areas. Among the previous studies considered in this comparison, one performed environmental monitoring and another performed human detection, but no study performed both simultaneously, and neither used multi-waypoint navigation. Furthermore, these studies did not demonstrate the return of the robot to its origin. In short, the mobile robots previously used at underground mine sites either had mature driving and returning functions but lacked safety-related functions, or had safety functions but driving capabilities that were difficult to apply in an underground mine environment.
In an underground mine, there are several roads to move to the mining and loading areas, and there are many intersections at the point where the roads are divided. Most underground mining sites include slope areas as they must reach deeper underground points for mining operations. Therefore, to utilize an autonomous robot in an actual underground mine environment, it is essential to verify and conduct an experiment not only on straight paths but also on intersections and slope areas. While other studies conducted field experiments on straight sections without intersections and slopes, this study conducted field experiments on intersections and slope areas, and as a result, stable driving performance was confirmed.
In the case of an underground mine environment, since dust and gas are continuously generated due to repetitive blasting for mining, exploration for workers’ safety must be performed periodically. If an unmanned autonomous driving robot is used instead of a human in the initial exploration after the blasting, safe exploration work can be performed. In addition, to explore underground mines safely using robots, the technology of autonomous driving to a specific point, along with an autonomous return to the starting point, is essential. In this study, an automated system was developed in which an autonomous robot returns to the starting point after performing autonomous exploration. By conducting field experiments on underground mines, it was possible to improve the possibility of using the robot system in actual work sites.
The underground mining environment has rough roads designed to reach deeper areas while performing mining operations, as well as several roads made for different transport efficiencies. Therefore, a stable driving capability is essential for an autonomous robot to efficiently explore an underground mine site, especially when going down a slope or returning after exploration, and experiments must be conducted to verify this. Additionally, the robot should be able to drive and return along a planned route among several roads. Through field experiments of the autonomous patrol robot developed in this study, the ability to drive on slopes and the function of intersection path planning were confirmed.

6. Conclusions

This study developed a multi-sensor-based patrol robot for autonomous exploration and conducted a field experiment in an actual underground mine. The robot’s location was estimated using encoder, IMU, and LiDAR sensors together with a sensor fusion algorithm. In addition, the location was calibrated by comparing the map prepared in advance with real-time LiDAR data. The robot sequentially drove through multiple waypoints, including an intersection and a slope section, and returned to the starting point. The robot was equipped with gas sensors, dust sensors, and a thermal camera to recognize hazard factors and workers in the mine. A safety map was created using the GIS after combining the data acquired by these sensors with the robot location data. The results of the experiment showed that the autonomous driving robot safely traveled through the experimental area and returned to its origin. No toxic gas was detected in the tunnel, and the O2 concentration was almost constant. The maximum dust concentration measured was about 0.01 mg/m3, and the highest dust concentrations were measured in the section between 25 m and 35 m.
As the autonomous driving system sets waypoints on a 2D map, position-estimation errors may occur when driving on an inclined surface. In addition, because only 2D position coordinates are estimated, the data acquisition points become inaccurate as the robot moves deeper. Therefore, in future research, a more efficient autonomous driving system should be developed by estimating the robot’s 3D position in addition to its 2D coordinates. To this end, a 3D point cloud acquisition sensor, such as 3D LiDAR, should be added, and the location estimation accuracy should be improved by combining it with the existing sensors.
In this study, the experiment covered only a short and simple section of an entire underground mine and was conducted at a single site. However, the same method is expected to be applicable to other mines that include slopes and intersections, so additional experiments should be performed in the future. In addition, the accuracy of the localization method should be evaluated by comparing the estimated driving path with the actual driving path. A gas sensor was installed on the robot in this study, but to measure gases lighter than air, a gas sensor should be installed at the top of the tunnel wall and its data acquired through wireless communication when the robot reaches the sensor.
In this study, there were no dynamically moving obstacles when the robot drove in the underground mine, but in actual underground mining environments, there are many dynamic obstacles, such as workers and trucks. Therefore, to improve robot utilization, experiments on obstacle avoidance and driving in the presence of dynamic obstacles should be conducted in the future. Furthermore, camera sensors and vision technologies should be applied to efficiently detect areas or dynamic obstacles that are not reflected in the pre-prepared map. Additionally, in actual underground mines, the floor surface is often uneven, causing reduced location estimation accuracy and decreased autonomous driving performance due to wheel slippage. Therefore, in the future, more precise location correction technology should be developed to address wheel slippage.
Various risk factors, such as toxic gas, falling rocks, and ground collapse, exist at underground mine sites, and many areas are difficult for people to access. At a mine work site, periodic exploration must therefore be performed to ensure the safety of workers, and safe rescue work must be carried out when an accident occurs. Deploying an autonomous robot for periodic exploration can improve productivity because it can repeatedly explore the mining site, and it can significantly enhance safety because the robot can explore dangerous areas that humans cannot access. The results of this study are expected to serve as important references in various fields related to the autonomous exploration of underground mines.

Author Contributions

Conceptualization, Y.C.; Data curation, Y.C.; Funding acquisition, Y.C.; Investigation, H.K. and Y.C.; Methodology, H.K. and Y.C.; Project administration, Y.C.; Resources, Y.C.; Software, Y.C.; Supervision, Y.C.; Validation, H.K.; Visualization, H.K.; Writing—original draft, H.K.; Writing—review and editing, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (2021R1A2C1011216).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bahn, S. Workplace hazard identification and management: The case of an underground mining operation. Saf. Sci. 2013, 57, 129–137.
2. Centers for Disease Control and Prevention. Available online: https://www.cdc.gov/niosh/mining/works/statistics/factsheets/miningfacts2014.html (accessed on 9 January 2023).
3. Mining Health Safety—7 Common Risks to Protect Yourself Against. Available online: https://www.miningreview.com/health-and-safety/mining-health-safety-7-common-risks-to-protect-yourself-against/ (accessed on 9 January 2023).
4. Hanson, D.R.; Vandergrift, T.L.; DeMarco, M.J.; Hanna, K. Advanced techniques in site characterization and mining hazard detection for the underground coal industry. Int. J. Coal Geol. 2002, 50, 275–301.
5. Grychowski, T. Multi sensor fire hazard monitoring in underground coal mine based on fuzzy inference system. J. Intell. Fuzzy Syst. 2014, 26, 345–351.
6. Moczulski, W.; Przystałka, P.; Sikora, M.; Zimroz, R. Modern ICT and mechatronic systems in contemporary mining industry. In Lecture Notes in Computer Science; Vol. 9920 LNAI; Springer: Berlin/Heidelberg, Germany, 2016; pp. 33–42.
7. Baek, J.; Choi, Y. Bluetooth-beacon-based underground proximity warning system for preventing collisions inside tunnels. Appl. Sci. 2018, 8, 2271.
8. Choi, Y. Analysis of Patent Trend for ICT-based Underground Mine Safety Management Technology. J. Korean Soc. Miner. Energy Resour. Eng. 2018, 55, 159–164.
9. Baek, J.; Choi, Y. Simulation of truck haulage operations in an underground mine using big data from an ICT-based mine safety management system. Appl. Sci. 2019, 9, 2639.
10. Wu, Y.; Chen, M.; Wang, K.; Fu, G. A dynamic information platform for underground coal mine safety based on internet of things. Saf. Sci. 2019, 113, 9–18.
11. Grabowski, A.; Jankowski, J. Virtual Reality-based pilot training for underground coal miners. Saf. Sci. 2015, 72, 310–314.
12. Jha, A.; Tukkaraja, P. Monitoring and assessment of underground climatic conditions using sensors and GIS tools. Int. J. Min. Sci. Technol. 2020, 30, 495–499.
13. Singh, N.; Gunjan, V.K.; Chaudhary, G.; Kaluri, R.; Victor, N.; Lakshmanna, K. IoT enabled HELMET to safeguard the health of mine workers. Comput. Commun. 2022, 193, 1–9.
14. Bharathi, B.; Samuel, B.S. Design and Construction of Rescue Robot and Pipeline Inspection Using Zigbee. Int. J. Sci. Eng. Res. 2013, 1, 75–78.
15. Novák, P.; Kot, T.; Babjak, J.; Konečný, Z.; Moczulski, W.; Rodriguez López, Á. Implementation of Explosion Safety Regulations in Design of a Mobile Robot for Coal Mines. Appl. Sci. 2018, 8, 2300.
16. Reddy, A.H.; Kalyan, B.; Murthy, C.S.N. Mine Rescue Robot System—A Review. Procedia Earth Planet. Sci. 2015, 11, 457–462.
17. Szrek, J.; Zimroz, R.; Wodecki, J.; Michalak, A.; Góralczyk, M.; Worsa-Kozak, M. Application of the infrared thermography and unmanned ground vehicle for rescue action support in underground mine—The AMICOS project. Remote Sens. 2021, 13, 69.
18. Yang, X.; Lin, X.; Yao, W.; Ma, H.; Zheng, J.; Ma, B. A Robust LiDAR SLAM Method for Underground Coal Mine Robot with Degenerated Scene Compensation. Remote Sens. 2023, 15, 186.
19. Miller, I.D.; Cladera, F.; Cowley, A.; Shivakumar, S.S.; Lee, E.S.; Jarin-Lipschitz, L.; Bhat, A.; Rodrigues, N.; Zhou, A.; Cohen, A.; et al. Mine tunnel exploration using multiple quadrupedal robots. IEEE Robot. Autom. Lett. 2020, 5, 2840–2847.
20. Topolsky, D.; Topolskaya, I.; Plaksina, I.; Shaburov, P.; Yumagulov, N.; Fedorov, D.; Zvereva, E. Development of a Mobile Robot for Mine Exploration. Processes 2022, 10, 865.
21. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A search-and-rescue robot system for remotely sensing the underground coal mine environment. Sensors 2017, 17, 2426.
22. Huh, S.; Lee, U.; Shim, H.; Park, J.B.; Noh, J.H. Development of an unmanned coal mining robot and a tele-operation system. In Proceedings of the 2011 11th International Conference on Control, Automation and Systems, Gyeonggi-do, Republic of Korea, 26–29 October 2011; pp. 31–35.
23. Shang, W.; Cao, X.; Ma, H.; Zang, H.; Wei, P. Kinect-Based vision system of mine rescue robot for low illuminous environment. J. Sens. 2016, 2016, 8252015.
24. Kim, H.; Choi, Y. Development of a LiDAR Sensor-based Small Autonomous Driving Robot for Underground Mines and Indoor Driving Experiments. J. Korean Soc. Miner. Energy Resour. Eng. 2019, 56, 407–415.
25. Kim, H.; Choi, Y. Review of Autonomous Driving Technology Utilized in Underground Mines. J. Korean Soc. Miner. Energy Resour. Eng. 2019, 56, 480–489.
26. Kim, H.; Choi, Y. Field Experiment of a LiDAR Sensor-based Small Autonomous Driving Robot in an Underground Mine. Tunn. Undergr. Space 2020, 30, 76–86.
27. Chi, H.; Zhan, K.; Shi, B. Automatic guidance of underground mining vehicles using laser sensors. Tunn. Undergr. Sp. Technol. 2012, 27, 142–148.
28. Lösch, R.; Grehl, S.; Donner, M.; Buhl, C.; Jung, B. Design of an autonomous robot for mapping, navigation, and manipulation in underground mines. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1407–1412.
29. Bakambu, J.N.; Polotski, V. Autonomous System for Navigation and Surveying in Underground Mines. J. Field Robot. 2007, 24, 829–847.
30. Neumann, T.; Ferrein, A.; Kallweit, S.; Scholl, I. Towards a Mobile Mapping Robot for Underground Mines. In Proceedings of the 2014 PRASA, RobMech and AfLaT International Joint Symposium, Cape Town, South Africa, 27–28 November 2014; Puttkammer, M., Eiselen, R., Eds.; Pattern Recognition Association of South Africa (PRASA): Cape Town, South Africa, 2014.
31. Kim, H.; Choi, Y. Comparison of three location estimation methods of an autonomous driving robot for underground mines. Appl. Sci. 2020, 10, 4831.
32. Kim, H.; Choi, Y. Location estimation of autonomous driving robot and 3D tunnel mapping in underground mines using pattern matched LiDAR sequential images. Int. J. Min. Sci. Technol. 2021, 31, 779–788.
33. Ghosh, D.; Samanta, B.; Chakravarty, D. Multi sensor data fusion for 6D pose estimation and 3D underground mine mapping using autonomous mobile robot. Int. J. Image Data Fusion 2016, 8, 173–187.
34. Günther, F.; Mischo, H.; Lösch, R.; Grehl, S.; Güth, F. Increased safety in deep mining with IoT and autonomous robots. In Proceedings of the 39th International Symposium ‘Application of Computers and Operations Research in the Mineral Industry’ (APCOM 2019), Wroclaw, Poland, 4–6 June 2019; Mueller, C., Assibey-Bonsu, W., Baafi, E., Dauber, C., Doran, C., Jaszczuk, M.J., Nagovitsyn, O., Eds.; CRC Press: London, UK, 2019; pp. 101–105.
35. Li, Y.; Li, M.; Zhu, H.; Hu, E.; Tang, C.; Li, P.; You, S. Development and applications of rescue robots for explosion accidents in coal mines. J. Field Robot. 2020, 37, 466–489.
36. Kim, H.; Choi, Y. Self-driving algorithm and location estimation method for small environmental monitoring robot in underground mines. Comput. Model. Eng. Sci. 2021, 127, 943–964.
37. Thrun, S.; Thayer, S.; Whittaker, W.; Baker, C.; Burgard, W.; Ferguson, D.; Hahnel, D.; Montemerlo, D.; Morris, A.; Omohundro, Z.; et al. Autonomous exploration and mapping of abandoned mines. IEEE Robot. Autom. Mag. 2004, 11, 79–91.
38. Baker, C.; Morris, A.; Ferguson, D.; Thayer, S.; Whittaker, C.; Omohundro, Z.; Reverte, C.; Whittaker, W.; Thrun, S. A Campaign in Autonomous Mine Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’04), New Orleans, LA, USA, 26 April–1 May 2004; IEEE: New York, NY, USA, 2004.
39. AGILEX ROBOTICS—Scout Mini. Available online: https://global.agilex.ai/products/scout-mini (accessed on 9 January 2023).
40. Kim, H.; Choi, Y. Development of a ROS-Based Autonomous Driving Robot for Underground Mines and Its Waypoint Navigation Experiments. Tunn. Undergr. Space 2022, 32, 230–241.
41. ROS Wiki—Navigation Stack. Available online: http://wiki.ros.org/navigation (accessed on 9 January 2023).
42. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; The MIT Press: Cambridge, MA, USA, 2005.
43. Lee, C.; Kim, J.; Kim, J.D.; Chon, S.W.; Kim, S.J.; Cheong, M.C.; Lim, G.J.; Cheong, Y.W. Mine Environmental Engineering; CIR Press: Seoul, Republic of Korea, 2014.
44. Lu, G.Y.; Wong, D.W. An adaptive inverse-distance weighting spatial interpolation technique. Comput. Geosci. 2008, 34, 1044–1055.
Figure 1. Overall structure of the autonomous driving robot developed in this study.
Figure 2. Autonomous driving robot and sensors used in this study.
Figure 3. System architecture of the data processing procedure for the autonomous driving robot used in this study.
Figure 4. System architecture of the ROS-based autonomous navigation system used in this study.
Figure 5. User interface of the autonomous driving robot system in ROS.
Figure 6. Environmental sensors used in this study: (a) particle sensor; (b) gas sensor; (c) thermal camera.
Figure 7. Processing diagram of the environmental mapping system to visualize the data.
Figure 8. Example of thermal image processing to detect human presence: (a) thermal image is obtained; (b) it is converted into a grayscale image; (c) it is converted into a binary image, and the number of humans is counted.
Figure 9. Conceptual diagram of the field experiment area in the underground mine. Circled numbers indicate the order of the robot’s moving points.
Figure 10. View of the autonomous driving robot, RGB image, and depth image at the (a) starting point, (b) intersection, (c) destination, and (d) arrival point.
Figure 11. View of the robot’s heading, position, and LiDAR data at the (a) starting point, (b) intersection, (c) destination, (d) returning area, and (e) arrival point.
Figure 12. The robot’s driving paths estimated by the sensor fusion method and the encoder odometry method, as well as the position of the human detected by the thermal image and vision system.
Figure 13. Graphs showing (a) particle concentration and (b) O2 concentration in the underground mine.
Figure 14. Particle concentration map of the study area created using the measured data and GIS.
Table 1. Specifications of sensors and robot platform used in this study.

| Equipment | Model | Specification |
|---|---|---|
| Robot platform | Scout mini | Size: 612 mm (length) × 580 mm (width) × 245 mm (height); Drive: four-wheel differential drive; Max travel: 10 km; Max speed: 20 km/h; Climbing ability: 30°; Payload capacity: 50 kg; Code wheel: Hall encoder |
| Main controller | Laptop PC (Ubuntu 18.04) | Intel Core i7-9750H CPU 4.50 GHz (Intel, Santa Clara, CA, USA), 16 GB RAM, NVIDIA GeForce 1650 4 GB (NVIDIA, Santa Clara, CA, USA) |
| Data acquisition and remote controller | Laptop PC (Windows 10) | Intel Core i5-10210U, 1.6 GHz, 8 GB RAM, NVIDIA MX250 (NVIDIA, Santa Clara, CA, USA) |
| LiDAR sensor | LMS-111 | Field of view: 270°; Interface: TCP/IP; Operating range: 0.5–20 m; Scanning frequency: 25 Hz/50 Hz |
| IMU sensor | RB-SDA-v1 | Interface: UART2; Gyroscope range: ±2000 dps; Accelerometer range: ±16 g; Magnetometer range: ±4900 μT |
| RGB-D camera | D435i | RGB sensor FOV (H × V): 69° × 42°; RGB frame resolution: 1920 × 1080; Depth FOV: 87° × 58°; Depth output resolution: 1280 × 720 |
Table 2. Representative ROS topics that make up the autonomous driving robot developed in this study.

| Function | Topic | Description |
|---|---|---|
| Robot setting | Cmd_vel | Controls the robot's linear and angular velocity |
| Robot setting | Tf | Relative transform between coordinate systems |
| Robot setting | Scout_status | Robot's current status |
| Localization | IMU/data | Robot's pose measured by the IMU sensor |
| Localization | Amcl_pose | Robot's position calibrated through map matching |
| Localization | Odometry/filtered | Calibrated robot position |
| Localization | Map | Pre-generated map |
| Autonomous driving | Local/Global planner | Local/global driving path |
| Autonomous driving | Local/Global costmap | Occupancy grid of the map |
| Autonomous driving | waypoints | Robot's driving waypoints |
Table 3. Specifications of environmental sensors used in this study.

| Sensor | Particle Sensor | Gas Sensor | Thermal Camera |
|---|---|---|---|
| Model | Digital dust monitor model 3443 | Gas Alert Max XT II | Vue Pro 640 19 mm 9 Hz |
| Manufacturer | KANOMAX | BW Technologies | Teledyne FLIR |
| Measuring factor | Particle concentration | H2S, CO, O2, combustible gas (LEL) | Thermal image/video |
| Dimensions (mm) | 162 × 60 × 109 | 131 × 70 × 52 | 45 × 45 × 68 |
| Weight (g) | 1300 | 328 | 113 |
| Operating temperature | −20~50 °C | 5~40 °C | −20~55 °C |
| Detection range | 0.001~10 mg/m3 | H2S: 0~200 ppm; CO: 0~1000 ppm; O2: 0~30% vol; LEL: 0~100% LEL | 0~40 °C |
| Accuracy | ±10% | N/A | ±5 °C |
Table 4. Comparison of mobile robot systems developed in this study with those developed in previous studies.

| Function | Lösch et al. [28] | Kim and Choi [36] | Szrek et al. [17] | This Study |
|---|---|---|---|---|
| Localization | O | O | X | O |
| Driving method | Autonomous driving | Autonomous driving | Remote control | Autonomous driving |
| Navigation method | Multi-waypoint navigation | One-way navigation | Remote control | Multi-waypoint navigation |
| Environmental monitoring | X | O | X | O |
| Human detection | X | X | O | O |
| Intersection, slope area verification | X | X | X | O |