Article

Research on the Body Positioning Method of Bolting Robots Based on Monocular Vision

1 College of Mechanical and Electrical Engineering, China University of Mining and Technology-Beijing, Beijing 100083, China
2 Key Laboratory of Intelligent Mining and Robotics, Ministry of Emergency Management, Beijing 100083, China
3 Beijing Institute of Control and Electronics Technology, Beijing 100038, China
4 Beijing Tianma Intelligent Control Technology Co., Ltd., Beijing 100029, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10183; https://doi.org/10.3390/app131810183
Submission received: 4 August 2023 / Revised: 6 September 2023 / Accepted: 7 September 2023 / Published: 11 September 2023

Abstract

Aiming at the intelligent design of underground roadway support and the precise positioning of unmanned fully mechanized excavation faces, a positioning and measurement method for bolt robots based on the monocular vision principle was proposed. In this paper, a vehicle body positioning model based on image data was established. The data were obtained with a camera, and the conversion between image coordinates and world coordinates was carried out through coordinate system conversion. A monocular vision positioning system for the bolt robot was designed, and a simulation experimental model was established. Under the simulation experimental conditions, the effective positioning distance of the monocular vision positioning system was measured. An experimental platform for the bolt robot was designed, and real-time positioning data of the vehicle body were measured. The experimental error was analyzed, and the reliability of the method was proven. This method realizes the real-time positioning of the bolt robot in underground mines, improves the accuracy and efficiency of positioning, and lays a foundation for the positioning control of the heading face and the unmanned bolt robot.

1. Introduction

Coal is the dominant energy source in China, but 51.34% of the coal is buried at depths greater than 1000 m [1], and the risk coefficient of deep coal seams is very high. Deep coal seams are prone to disasters and accidents, which brings a great risk of injury to mining workers. Accidents occur frequently in the excavation process, and about 70% of the accidents occur in the mining operation area [2]. The positioning method proposed in this paper is of great significance to the development of intelligent unmanned technology [3] in fully mechanized mining faces.
By the end of 2020, 550 intelligent fully mechanized mining faces had been built in China. However, there are few successful examples of intelligent, fully mechanized heading faces, and the development of the intelligent mining industry is uneven. In the process of roadway excavation, tunneling, support, and bolt support are the three main steps. The bolting operation usually accounts for the main part of the whole tunneling time, and it is also the bottleneck in realizing fast and intelligent tunneling. In order to excavate the roadway safely and quickly, intelligent bolting technology should be developed.
The bolt robot is one of the core pieces of equipment of the modern excavation face, and its working time is directly related to the safety of personnel and the efficiency of roadway excavation. Therefore, how to improve the intelligence and automation level of bolt robots has become the focus of much research. Autonomous positioning and orientation is the key technology to realize the unmanned automatic operation of the bolt robot [4]. In order to realize autonomous positioning and orientation of the bolt robot, the relationship between the body coordinates and the roadway coordinates must be established first.
In recent years, the positioning and measurement of underground mobile equipment using ultra-wideband (UWB), intelligent global positioning systems [5], machine vision, and multi-sensor fusion has been reported. In the ultra-wideband method, UWB time-of-flight (TOF) ranging technology [6] is used to realize the autonomous positioning and orientation of the roadheader (RH). Because the four UWB base stations occupy a large space, they easily interfere with the supporting equipment of the roadheader in narrow and crowded roadways, so it is difficult to realize positioning under actual conditions. In the intelligent global positioning system method, the position and attitude parameters of the RH are obtained through the space rendezvous measurement technique; this method has the advantages of strong autonomy and strong resistance to obstacles. In the machine vision method [7,8,9,10,11], a cross-laser image on the fuselage target is captured with an explosion-proof camera; however, because of the laser's long optical path and the thick dust produced during tunneling, the beam is easily blocked, which affects image acquisition. The multi-sensor fusion method [10,12] involves fusing several kinds of data, which increases the complexity of the positioning system. The above methods are mainly applied to the positioning of the RH, the shearer, and the rock-drilling robot, but there are few studies that investigate the positioning of the bolt robot.
With the rapid development of computer vision and image-enhancement technology [13,14,15], visual positioning technology has been applied to the fuselage positioning of mobile robots. Eduardo Montijano proposed a fully distributed solution to drive robots into a desired formation without an external positioning system [16]. First, a three-dimensional distributed control law designed at the kinematic level was proposed, which used two simultaneous consensus controllers. Second, in order to apply the controller to a group of aerial robots, they combined this idea with a new sensor fusion algorithm that uses onboard cameras and information from inertial measurement units to estimate the relative positions of the robots. The algorithm eliminates the influence of rolling and pitching in the camera image and uses a structure-from-motion method to estimate the relative positions between robots. Mohammad M. Aref proposed a real-time vision-based navigation system for a nonholonomic mobile robot [17], in which the vision-based controller improves the positioning accuracy of the robot near its workstation. For a mobile manipulator, this method strengthens the role of the mobile platform as the main contributor to the large-scale movement of the manipulator's end effector. The method was then combined with the authors' macro and micro architecture for mobile manipulation, and the results showed that synchronous visual feedback gives the controller sufficient accuracy to track an object correctly and grasp it. This accuracy makes the performance of the mobile robot comparable to that of the manipulator.
According to the research of the abovementioned scholars in recent years, the laser beam is easily obstructed, making it difficult to improve accuracy. A total station can only detect one point at a time, making it more suitable for static pose measurement of tunneling machines. The drift and other accumulated errors produced by inertial navigation after long-term use pose significant challenges for its application in the navigation of coal mining robots. Visual cameras have a better performance-to-price ratio than LiDAR, IMU, and UWB and can obtain richer image information to directly perceive the environment of the roadway. The application of visual positioning technology in underground working faces has therefore become possible [7], especially in fully mechanized mining faces with poor light, thick dust, water mist, and vibration hazards. At the same time, the bolts in the roadway roof and side plates can be used as the basis for bolt robot positioning and map navigation.
Although there are some differences in principle between monocular vision and binocular vision, the positioning accuracy of monocular vision is similar to that of binocular vision when measuring the vehicle posture of mobile robots with prescribed shapes, and the accuracy and stability of the monocular vision feature point coordinate calculation algorithm are much better than those of the binocular vision positioning method. Panoramic vision has an omni-directional perspective and a wide field of view, which means it can obtain rich and complete environmental information; however, panoramic images have large distortion and strong nonlinear changes, the extracted environmental features have poor robustness and are difficult to match, and the large amount of information increases the complexity of the algorithm and reduces real-time performance. Binocular vision uses two cameras with known relative positions, which closely matches the structure of biological vision; it can obtain image depth information with simple calculations and high accuracy, but the calibration of binocular cameras is more complex. The monocular vision positioning system, by contrast, needs only one camera, is simple in structure, and is easy to calibrate. Compared with the binocular position measurement system, the monocular vision positioning system has good real-time performance and a fast processing speed and can meet the time requirements of mobile robots. In order to realize unmanned anchor rod support at general mining faces and achieve fast and accurate bolt robot positioning, an automatic bolt-robot positioning method based on monocular machine vision is proposed. The positioning system collects images of the bolt array on the roadway roof with a charge-coupled device camera installed on the fuselage of the mobile robot. By extracting image feature points and performing linear fitting, combined with a visual localization estimation model, self-positioning of the robot body was achieved on an experimentally simulated roadway.
For this method, the positioning accuracy is affected by many factors, but the positioning error mainly comes from visual system errors, positioning algorithm errors, and camera installation errors. Among them, the errors caused by camera installation can be controlled within the allowable accuracy range through repeated measurement and can be ignored in this system. Visual system errors can be reduced by the feature point extraction algorithm to meet the needs of practical engineering applications. In this paper, the error caused by the positioning algorithm was studied, the influence of the error sources on the positioning system error was analyzed using MATLAB simulation, and the positioning distance and accuracy of the positioning system were verified through tests on the visual measurement system platform. The experimental results showed that the monocular machine vision localization method could meet the basic requirements of automatic, rapid, and accurate localization.

2. Bolting Robot Vision Positioning System

2.1. Overall Measurement Principle

In the roadway heading face, the roof and rib bolts must be used together so as to achieve combined support. At present, drilling and anchoring are mainly carried out manually, and automatic bolt drilling is the development direction of intelligent tunneling. Due to the complex conditions of the heading face, positioning and orientation methods based on ultra-wideband, intelligent global positioning systems, multi-sensor fusion, and the like face challenges.
Using machine vision to identify the end features of the roadway roof bolt array is one of the effective ways to realize the autonomous navigation of bolt robots. The bolt arrangement of the mining face and roof is shown in Figure 1, in which the mining direction is indicated according to the matching line of the anchor end or the anchor plate.
The machine vision positioning system of the bolt robot is composed of a charge-coupled device camera, laser meter, computer system, auxiliary light source, and other components. Before the bolt robot drills the bolt hole forward according to the bolt spacing, the robot first needs to identify the bolt array and determine the direction of movement through the machine vision system. The back end of the bolt robot is equipped with a charge-coupled device camera to synchronously photograph the roof bolt array of the roadway environment at the top of the roadway. The computer system receives the image from the charge-coupled device camera and the depth data of the laser meter and calculates the relative displacement of the current position relative to the previous position. Through the accumulation of relative displacement, the position of the robot in the process of moving is determined. The technical roadmap is shown in Figure 2:
The computer system realizes the conversion from image data to location data. The positioning principle is shown in Figure 3. When the camera was at position i, it captured image i and compared it with image i − 1. The pixel coordinates of the same feature points differ between images i − 1 and i, and each point has a set of image coordinates in each of the two images. By combining the image coordinates of multiple feature points, the change in the plane angle and the displacement of the image were determined, and relative positioning was realized. The AB line in Figure 4 represents the change in the direction angle, and the plane composed of ABC represents the offset and rotation of the coordinate system between image i and image i − 1.
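As a minimal illustration of how such relative measurements could be accumulated into a global position, the following Python sketch chains per-step outputs (a heading change and a body-frame displacement) into a trajectory in the roadway frame; the frame conventions and the toy step values are assumptions of this sketch, not values taken from the paper.

```python
import math

def accumulate_pose(steps, x0=0.0, z0=0.0, psi0=0.0):
    """Accumulate per-step relative motion (d_psi, d_z, d_x) into a
    global pose in the roadway frame. Each step is assumed to be
    expressed in the body frame at the previous position."""
    x, z, psi = x0, z0, psi0
    trajectory = [(z, x, psi)]
    for d_psi, d_z, d_x in steps:
        psi += d_psi                                   # update heading
        # rotate the body-frame displacement into the roadway frame
        z += d_z * math.cos(psi) - d_x * math.sin(psi)
        x += d_z * math.sin(psi) + d_x * math.cos(psi)
        trajectory.append((z, x, psi))
    return trajectory

# toy usage: three 750 mm forward steps with a slight heading drift
print(accumulate_pose([(0.01, 750.0, 0.0)] * 3))
```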

2.2. Bolt Robot Positioning Coordinate System

For the positioning of the bolt robot, the main purpose is to determine the position and attitude of the bolt robot relative to the roadway, that is, to determine the position coordinate of the origin of the fuselage coordinate system in the roadway coordinate system and the rotation angle of the fuselage coordinate system relative to the world coordinate system. In the process of visual positioning, it is necessary to determine the relationship among the world coordinate system, the fuselage coordinate system, the camera coordinate system, image coordinate system, and the pixel coordinate system. The coordinate system is shown in Figure 5.
To establish the coordinate systems, the roadway coordinate system $O_wX_wY_wZ_w$ was established on the roof, with its origin $O_w$ located at the projection onto the roof of the origin of the heading machine's laser pointer. The roadway coordinate system depends on the specific driving route: the $O_wZ_w$ axis points along the laser pointer, i.e., the driving direction; the $O_wX_w$ axis is perpendicular to the driving direction, horizontally to the right; and the $O_wY_w$ axis is perpendicular to the $O_wX_wZ_w$ plane. The origin $O_c$ of the camera coordinate system $O_cX_cY_cZ_c$ was established at the camera's optical center: the $O_cZ_c$ axis points to the right along the image plane, the $O_cY_c$ axis points upward along the optical axis center line (the camera-to-scene direction is positive), and the $O_cX_c$ axis points downward along the image plane. The body coordinate system $O_bX_bY_bZ_b$ coincides with the camera coordinate system. For the image coordinate system $xoz$, the origin $o$ is at the image center, $oz$ points to the right along the image plane, and $ox$ points downward along the image plane. For the pixel coordinate system $O_fUV$, the origin $O_f$ is at the upper left corner of the image, $U$ points to the right along the image plane, and $V$ points downward along the image plane.

2.3. Visual Positioning Estimation Model of Bolting Robot

As shown in Figure 4, the heading angle changes between adjacent images are $\Delta\psi = \psi_{i+1} - \psi_i$ and $\Delta\theta = \theta_{i+1} - \theta_i$. The pixel coordinates of feature point 1 in the adjacent images are $(u_i^1, v_i^1)$ and $(u_{i+1}^1, v_{i+1}^1)$, and those of feature point 2 are $(u_i^2, v_i^2)$ and $(u_{i+1}^2, v_{i+1}^2)$; the deflection angles of the straight line formed by feature points 1 and 2 in the two images are $\psi_i$ and $\psi_{i+1}$, respectively, and the heading angles of the bolt robot when collecting the adjacent images are $\theta_i$ and $\theta_{i+1}$, respectively. Then
$$\psi_i = \arctan\frac{v_i^2 - v_i^1}{u_i^2 - u_i^1}, \qquad \psi_{i+1} = \arctan\frac{v_{i+1}^2 - v_{i+1}^1}{u_{i+1}^2 - u_{i+1}^1}$$
The relationship between the pixel coordinates of acquisition i and acquisition i + 1 is
$$K \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} u_{i+1} \\ v_{i+1} \\ 1 \end{bmatrix}$$
Here,
$$K = \begin{bmatrix} \cos\Delta\psi_i & -\sin\Delta\psi_i & \Delta p_u^i \\ \sin\Delta\psi_i & \cos\Delta\psi_i & \Delta p_v^i \\ 0 & 0 & 1 \end{bmatrix}$$
The translation vector between the two sampling pixel coordinate systems is
$$\Delta p_u^i = u_{i+1} - u_i\cos\Delta\psi_i + v_i\sin\Delta\psi_i, \qquad \Delta p_v^i = v_{i+1} - u_i\sin\Delta\psi_i - v_i\cos\Delta\psi_i$$
Accordingly, the pixel coordinates are related to the roadway coordinates by
$$\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} \rho & 0 & u_0 \\ 0 & \rho & v_0 \\ 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} \cos\theta_i & -\sin\theta_i & p_z^i \\ \sin\theta_i & \cos\theta_i & p_x^i \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} z_w \\ x_w \\ 1 \end{bmatrix}$$
The final position coordinates relative to the roadway are $({}^{w}p_z^{i+1}, {}^{w}p_x^{i+1})$:
$${}^{w}p_z^{i+1} = p_z^{i+1}\cos\theta_{i+1} - p_x^{i+1}\sin\theta_{i+1} - u_0\cos\theta_{i+1} - v_0\sin\theta_{i+1}, \qquad {}^{w}p_x^{i+1} = p_z^{i+1}\sin\theta_{i+1} + p_x^{i+1}\cos\theta_{i+1} - v_0\cos\theta_{i+1} + u_0\sin\theta_{i+1}$$
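The relations above lend themselves to a compact implementation. The following Python/NumPy sketch shows the per-step estimation: the deflection of the line through two bolt-end pixel points, the heading change between acquisitions, and the planar transform K with its translation components. The sign conventions and the use of arctan2 (for numerical robustness) are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def heading_from_points(p1, p2):
    """Deflection angle of the line through two bolt-end pixel points:
    psi = arctan((v2 - v1) / (u2 - u1))."""
    (u1, v1), (u2, v2) = p1, p2
    return np.arctan2(v2 - v1, u2 - u1)

def relative_transform(prev_pts, curr_pts):
    """Estimate the planar rotation d_psi and translation (dp_u, dp_v)
    between two acquisitions from a matched feature point pair; the
    exact sign conventions are assumptions of this sketch."""
    d_psi = heading_from_points(*curr_pts) - heading_from_points(*prev_pts)
    u_i, v_i = prev_pts[0]
    u_n, v_n = curr_pts[0]
    dp_u = u_n - np.cos(d_psi) * u_i + np.sin(d_psi) * v_i
    dp_v = v_n - np.sin(d_psi) * u_i - np.cos(d_psi) * v_i
    K = np.array([[np.cos(d_psi), -np.sin(d_psi), dp_u],
                  [np.sin(d_psi),  np.cos(d_psi), dp_v],
                  [0.0,            0.0,           1.0]])
    return d_psi, K

# toy usage with two matched feature points in consecutive images
prev = [(100.0, 200.0), (180.0, 205.0)]
curr = [(90.0, 230.0), (170.0, 236.0)]
d_psi, K = relative_transform(prev, curr)
print(np.degrees(d_psi), K @ np.array([100.0, 200.0, 1.0]))  # maps onto curr[0]
```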

2.4. Bolting Robot Positioning Parameter

The bolt robot was simplified as a rigid body, and its position and attitude relative to the roadway coordinate system were described by three positioning parameters: $\Delta Z$ represents the forward error between the geodetic coordinate system and the body coordinate system, $\Delta X$ represents the horizontal error between the geodetic coordinate system and the body coordinate system, and $\theta$ represents the heading angle error. These are the three positioning parameters of the ideal roadway. The target body positioning parameters are shown in Table 1.
According to the 2018 "Technical Specification for Bolt Support in Coal Mine Roadways", the row-distance error between bolt holes must not exceed 100 mm, and the bolt strike error must not exceed 5°. For a roadway with a width of 5.5 m and a height of 3.6 m, the bolts on both sides of the roof are no more than 250 mm from the edge of the roadway, and the bolt spacing is 800 mm. The deviation angle of a whole row is not specified in the national standard; calculated from the roadway section size, the angle error of a whole row of bolts is less than 2.29°. The analysis process is shown in Figure 6, in which the circles represent the bolt ends. Under actual working conditions, the distance between bolt ends is about 700 mm, so the average forward-direction error between adjacent bolts is 33 mm and the average horizontal-direction error is 13 mm, as shown in Figure 7.
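One plausible reconstruction of the 2.29° figure, assuming the whole-row deflection is bounded by the 100 mm row-spacing tolerance acting in opposite directions at the two ends of a bolt row that spans the roadway width minus the two 250 mm edge margins, is
$$L = 5500\ \text{mm} - 2 \times 250\ \text{mm} = 5000\ \text{mm}, \qquad \Delta\alpha = \arctan\frac{2 \times 100\ \text{mm}}{5000\ \text{mm}} \approx 2.29^{\circ}$$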

2.5. Camera Model

The camera model aims to realize the transformation from the image coordinate system to the geodetic coordinate system, including translation and rotation transformations.
We set the coordinates of the space point P in the roadway coordinate system as $(X_w, Y_w, Z_w)$, the corresponding coordinates in the camera coordinate system as $(X_c, Y_c, Z_c)$, the coordinates in the image coordinate system as $(x, z)$, and the coordinates in the pixel coordinate system as $(u, v)$.
The relationship between the pixel coordinate system and the image coordinate system is
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} k_z & 0 & u_0 \\ 0 & k_x & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} z \\ x \\ 1 \end{bmatrix}$$
$$\begin{bmatrix} z \\ x \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} z_c/y_c \\ x_c/y_c \\ 1 \end{bmatrix}$$
This is the relationship between the image coordinate system and the camera coordinate system, where f is the effective focal length of the camera. The relationship between the camera coordinate system and the roadway coordinate system is
$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
where R and T are the rotation matrix and translation vector between the camera coordinate system and the roadway coordinate system. Combining the above formulae gives
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} k_z & 0 & u_0 \\ 0 & k_x & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} k_z f & 0 & u_0 \\ 0 & k_x f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
After the abovementioned formulae are combined and simplified, the relationship between the pixel coordinate system and the roadway coordinate system is obtained:
$$\begin{bmatrix} \rho u_d \\ \rho v_d \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & p_z \\ \sin\theta & \cos\theta & p_x \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} z_w \\ x_w \\ 1 \end{bmatrix}$$
In the ideal roadway model, if the roof is parallel to the top of the fuselage of the drilling-and-anchoring robot and the height from the camera to the roof is constant, then $k = k_x f = k_z f$, $\rho = y_c / k$, $u_d = u - u_0$, and $v_d = v - v_0$, the coordinates of a roof point P of the roadway can be written as $P(X_w, Z_w, 1)$, and the above equation gives the relationship between the pixel coordinate system and the roadway coordinate system.
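Under the ideal-roadway assumptions just stated, the simplified relation can be inverted to recover roadway coordinates from a pixel. The following Python sketch does this for a single point; the scale $\rho$, the sign convention of the rotation, and the toy numbers in the usage line are assumptions of this sketch.

```python
import numpy as np

def pixel_to_roadway(u, v, u0, v0, rho, theta, p_z, p_x):
    """Invert the simplified pixel-to-roadway relation
    [rho*u_d, rho*v_d, 1]^T = T(theta, p_z, p_x) [z_w, x_w, 1]^T,
    with u_d = u - u0 and v_d = v - v0."""
    T = np.array([[np.cos(theta), -np.sin(theta), p_z],
                  [np.sin(theta),  np.cos(theta), p_x],
                  [0.0,            0.0,           1.0]])
    lhs = np.array([rho * (u - u0), rho * (v - v0), 1.0])
    z_w, x_w, _ = np.linalg.solve(T, lhs)
    return z_w, x_w

# toy usage: principal point, an assumed scale of 3.6 mm per pixel,
# and an assumed body pose (theta, p_z, p_x)
print(pixel_to_roadway(1500, 1300, 1231.48, 1207.50, 3.6, 0.02, 500.0, 20.0))
```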

3. Image Processing and Feature Point Recognition

3.1. Image Processing

Because the bolt end on the roadway roof appears as a black circle, the black edge of the bolt end must be extracted in order to extract the feature points. The circular feature of the bolt end was taken as the target, and the center of the circle was detected. In the roadway, not only is the environment dark, the dust concentration high, and the water vapor heavy, but the bolt robot's body also vibrates at low frequency during its work, which easily causes image blur and degrades the final feature extraction. Therefore, given the above problems, it is necessary to preprocess the image, which is the first step to improving the accuracy of feature extraction and the positioning accuracy of the visual system [18,19]. Image preprocessing mainly includes two parts, image enhancement and filtering, to improve the positioning accuracy of the visual system.
In order to enhance the contrast of the bolt end feature points relative to the roadway roof surface [20] and make the bolt end feature points clearer, this paper used the histogram equalization algorithm to enhance the image [21,22,23] and filtered the image with the minimum mean square error filter, that is, the Wiener filter [24], to eliminate the low-frequency vibration and image blur during the motion of the bolt robot.
Considering the portability across camera lens fields of view, and in order to ensure that different wide-angle cameras obtain the same results, the random sample consensus (RANSAC) algorithm was used to correct the image distortion [25]. The RANSAC algorithm differs from traditional smoothing: the traditional method uses as much of the data as possible to obtain an initial solution and then tries to eliminate invalid data points with optimization algorithms, whereas RANSAC starts from a relatively small data set and expands the initial set with as much consistent data as possible.
Compared with the least squares method, the RANSAC algorithm can still produce a visibly correct fitted line in the presence of large errors, where the line fitted by least squares would be wrong; even when much of the data is abnormal, it can estimate and fit with high precision. The purpose here is to compensate for the image edge distortion caused by the wide-angle camera: by calculating and compensating the edge pixels, the distortion caused by the internal parameters of the camera can be corrected. The effect of distortion correction is shown in Figure 8: Figure 8a shows the wide-angle image before distortion correction, and Figure 8b shows the image after distortion correction.
Because the color of the coal’s surface is similar to that of the bolt end face, the difference between them is small and not conducive to feature extraction of the bolt end face. Therefore, the histogram equalization algorithm was used to enhance the image so that the bolt end could show a higher brightness gray value, making it ready for feature extraction.
Histogram equalization was conducted through nonlinear stretching of the image: the gray levels of the image were redistributed so that they were roughly evenly distributed over the full range of pixel values, and the uniformity of the distribution can be adjusted as needed [26].
This method changes the histogram of the image so that the gray levels of the target pixels are adjusted to enhance the contrast where the dynamic range is small. The basic principle is that the gray levels of the video key-frame image that contain many pixels are expanded and widened, while the gray levels that contain few pixels are compressed and merged, which increases the contrast, makes the image clearer, and enhances the image. The algorithmic steps of this method are shown in Algorithm 1.
The image’s gray distribution histogram and the image’s enhancement effect are shown in Figure 9a,b.
Algorithm 1. Algorithm steps of histogram equalization
Input: original target image
Output: histogram-equalized image with histograms
1. Obtaining the width and height of the source target image;
2. Scanning the pixel points of the image line by line and performing gray level homogenization;
3. Analyzing and calculating the probability density of each gray level.
In order to solve the problem of image blurring caused by the low-frequency vibration and motion of the bolt robot, the minimum mean square error filter, i.e., the Wiener filter [27], was used to estimate the blurred image so as to minimize the root mean square error between the estimated image and the actual image. The algorithm achieved good results in mine image restoration, with a fast processing speed and good robustness. The effect is shown in Figure 9c.
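A minimal Python sketch of this preprocessing pipeline is given below, using OpenCV's histogram equalization and SciPy's adaptive Wiener filter as a stand-in for the minimum mean square error restoration described here; the window size and the choice of libraries are assumptions.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def preprocess(gray):
    """Preprocessing sketch for roof images: histogram equalization to
    raise the contrast of the bolt ends, then a Wiener filter to reduce
    the blur introduced by low-frequency body vibration."""
    equalized = cv2.equalizeHist(gray)                    # spread the gray levels
    restored = wiener(equalized.astype(np.float64), (5, 5))
    return np.clip(restored, 0, 255).astype(np.uint8)

# usage: img = cv2.imread("roof.png", cv2.IMREAD_GRAYSCALE); out = preprocess(img)
```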

3.2. Identification of Feature Points

First of all, it is necessary to recognize features in the image after the abovementioned processing. Multiple feature point extraction algorithms were then compared against the actual environment of the underground coal mine, and the Hough circle transform detection algorithm was selected as the feature point extraction algorithm. Combined with the circular feature of the bolt end face, the bolt end face was detected based on the Hough transform, and the circular boundary and center coordinates of the end face were extracted.
For the detection of the artificial yellow circular beacon stickers, the stickers were described mathematically as circles. Consistent parameters were selected to match the circular stickers on the roadway roof; that is, each yellow circular sticker on the roof in the original image generates a corresponding circle in the parameter space for parameter matching. If the parameters a and b are consistent, the same circular sticker is matched in the original space and the parameter space, so the same point (a, b) in the parameter space can be used to determine whether it is a circle. According to the requirements of the monocular vision orientation and positioning system, the detected circles lie within a certain size range: because the ratio of actual size to pixels is 3.6:1, the size of a roof circle detected in the image is 5 to 10 pixels.
Based on the least squares method and the random sample consensus algorithm, the bolt-head feature points extracted from the image were fitted into relative positioning reference lines [26]. In Figure 10a, the white circles are the recognized feature points, the red circles at the circle edges represent the fitted feature circles, and the red line is the line fitted through the centers of the fitted circles. Figure 10b shows an enlarged view of the part in the green box.
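A compact Python/OpenCV sketch of this feature extraction step is shown below: Hough circle detection constrained roughly to the size range mentioned above, followed by a robust line fit through the detected centres. The Hough parameters and the use of cv2.fitLine with a Huber loss (in place of the least-squares/RANSAC combination described in the text) are assumptions.

```python
import cv2
import numpy as np

def detect_bolt_ends(gray, r_min=5, r_max=10):
    """Detect circular bolt-end features with the Hough circle transform
    and fit a reference line through the detected centres; the radius
    bounds are assumed from the 5-10 pixel figure in the text."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=20,
                               minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None, None
    centres = circles[0, :, :2]                      # (u, v) of each circle
    if len(centres) < 2:
        return centres, None
    # robust line fit through the centres (Huber loss)
    vx, vy, x0, y0 = cv2.fitLine(centres.astype(np.float32),
                                 cv2.DIST_HUBER, 0, 0.01, 0.01).ravel()
    return centres, (vx, vy, x0, y0)
```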

3.3. Tracking of Feature Points

The tracking principle is shown in Figure 11 and Figure 12. Feature point i − 1 and feature point i represent the pixel coordinates of the same feature point in different images. During the slow and continuous movement of the bolt robot, the homogeneous coordinates of the edge feature points in the image collected at acquisition i were $M_i^1 = (u_i^1, v_i^1, 1)^T$ and $M_i^2 = (u_i^2, v_i^2, 1)^T$, and the homogeneous coordinates of the corresponding feature points at acquisition i + 1 were $M_{i+1}^1 = (u_{i+1}^1, v_{i+1}^1, 1)^T$ and $M_{i+1}^2 = (u_{i+1}^2, v_{i+1}^2, 1)^T$. As the bolt robot moves, the horizontal coordinate of a given edge feature point keeps decreasing until the point leaves the field of view. If $u_{i+1}^1 > u_i^1$ in the two images, the corresponding edge feature point has jumped out of the field of view; otherwise, the feature point is still in the field of view at acquisition i + 1, and $M_{i+1}^1$ corresponds to $M_i^1$.
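The following Python sketch illustrates one way to implement this frame-to-frame association, assuming the robot moves slowly enough that a tracked bolt-end centre shifts by only a small number of pixels between acquisitions; the distance threshold and the greedy nearest-neighbour strategy are assumptions of the sketch.

```python
import numpy as np

def match_feature_points(prev_pts, curr_pts, max_dist=60.0):
    """Greedily associate bolt-end centres between consecutive frames.
    A tracked point is assumed to shift by less than max_dist pixels;
    detections left unmatched are treated as points that have newly
    entered the view or have left it."""
    matches = []
    used = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)     # distances to all candidates
        j = int(np.argmin(d))
        if d[j] < max_dist and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

prev_pts = np.array([[620.0, 400.0], [610.0, 1150.0]])
curr_pts = np.array([[585.0, 402.0], [574.0, 1153.0], [2400.0, 800.0]])
print(match_feature_points(prev_pts, curr_pts))      # [(0, 0), (1, 1)]
```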

4. Bolting Robot Vision Estimation and Positioning System

4.1. Visual Positioning Experiment of Bolting Robot

According to our current situation and past experience, the roadway environment in a coal mine tends to be stable rather than highly variable. Moreover, under current conditions the dust concentration gradually decreases and does not affect the positioning accuracy of the anchor-drilling robot. Where light is lacking, we can supplement it with auxiliary equipment or enhance the identifiability of the bolt end. In the following experiments, we further test the positioning accuracy of the anchor-drilling robot based on the monocular vision system.
Based on the actual positioning needs of the bolt robot and the underground working conditions of a coal mine, an experimental environment and a mobile-car experimental system were established, and a simulated tunnel layout was built in a corridor. The mobile car was used to simulate the bolt robot and carry the visual system. To mimic the dark conditions underground as closely as possible, a black background was arranged, and the anchor rod end was represented by a yellow reflector; the layout environment of the tunnel is shown in Figure 13. When arranging the feature points, the initial points were first determined, and subsequent feature points were randomly marked within a range of 800 ± 50 mm through measurement to ensure that the center positions of the anchor rod end feature points were within this range. The layout points were arranged over a distance of 20 m (at least 25 coordinate points and 100 sets of data).
The visual measurement system of the experimental platform consisted of a hardware system and a software system. The hardware system included a Mako camera assembly, a leveling mobile platform, a laptop, a network cable, and a 50 W projector. The Mako camera assembly was installed on the camera mounting base of the adjustable leveling mobile platform, and the camera mounting base was connected to the mobile platform with bolts through a mating surface. During positioning measurement, the camera was powered by a 12 V low-voltage supply, and its GigE interface was connected to the computer through a network cable to transfer the digital images from the camera to the computer. The floodlight followed the movement of the bolt robot and projected light toward the roof; during the experiment, appropriate positions and projection directions were selected based on the imaging effect. The software system included an image processing platform based on Visual Studio 2013 and OpenCV 3.0, the Vimba Viewer camera SDK, and a positioning error simulation platform developed in MATLAB. The Vimba Viewer 2.1.3 SDK was connected to the camera through the GigE interface, and its main function was to complete image acquisition and storage. The main functions of the image processing platform based on Visual Studio 2013 and OpenCV 3.0 were to preprocess the collected images, extract feature points, and store coordinates. The main function of the MATLAB positioning error simulation platform was to verify the positioning algorithm and to develop error programs and error models based on the positioning algorithm so as to complete the positioning error simulation of the bolt robot.
According to the positioning requirements of the bolt robot in coal mine tunnels, as many anchor rod end feature points as possible should be visible within the field of view. During camera selection, the distance between the camera and the roof was 2500 mm, and five rows of anchor rods were required to be visible in the field of view above the roof, corresponding to a distance of 4000 mm or more; a 2/3″ lens and a 1/1.8″ sensor were selected, and the camera focal length was determined to be 5 mm from the geometric relationships. The selected feature was a circular feature with a size of 30 mm; the circular feature was extracted first, and the circle center was then extracted from it. The required detection accuracy for the 30 mm target was 0.06 mm, so the pixel size needed to be less than 0.06 mm and the camera resolution needed to be greater than 10,800 pixels. In order to improve the accuracy and stability of the system as much as possible, it was necessary to increase the image resolution while considering the speed of image processing. By selecting a camera with 5 million pixels, the detection accuracy of feature points could reach 3.45 μm.
According to the camera model, it was necessary at this stage to determine the internal and external parameters and the distortion parameters of the camera, which meant that the camera needed to be calibrated. The calibration was carried out with the Zhang Zhengyou calibration method on the Visual Studio 2013 and OpenCV 3.0 platform. In this article, the camera calibration test used a 6 × 9 checkerboard with 60 mm × 60 mm squares. The captured images are shown in Figure 14.
Through calibration, the Mako-G507B camera’s internal parameter matrix was
$$G = \begin{bmatrix} 19829.19 & 0 & 1231.48 \\ 0 & 19885.77 & 1207.50 \\ 0 & 0 & 1 \end{bmatrix}$$
The camera’s external parameter matrix was
$$R = \begin{bmatrix} 0.893 & 0.450 & 0.003 \\ 0.450 & 0.893 & 0.001 \\ 0.002 & 0.002 & 1 \end{bmatrix}; \qquad T = \begin{bmatrix} 2.189 \\ 2.120 \\ 0.014 \end{bmatrix}$$
The camera distortion coefficient was
[k1, k2, k3, p1, p2] = [5.4991, −583.8657, −0.132, −0.0488, 2.831.8715]
Here, $k_1$, $k_2$, and $k_3$ are radial distortion coefficients, and $p_1$ and $p_2$ are tangential distortion coefficients.
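For reference, a minimal Python sketch of the Zhang-style calibration workflow on the OpenCV side is given below. The 9 × 6 inner-corner pattern, the calib/*.png image folder, and the use of Python (the paper's platform was Visual Studio 2013 with OpenCV 3.0) are assumptions.

```python
import glob
import cv2
import numpy as np

# A 6 x 9 board with 60 mm squares is assumed to expose a 9 x 6 grid of
# inner corners; the image folder "calib/*.png" is hypothetical.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 60.0

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# returns the RMS reprojection error, the intrinsic matrix, and the
# distortion coefficients (k1, k2, p1, p2, k3 in OpenCV's ordering)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS:", rms, "\nintrinsics:\n", K, "\ndistortion:", dist.ravel())
```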
The experiment was carried out in a dark environment. The corridor was used to simulate the roadway, black paper was used to simulate the black background at the top of the roadway, and yellow reflective circles were used to simulate the bolt ends. The mobile trolley was used to simulate the bolt robot, and the laser rangefinder was used to measure the depth, so the depth information of each image was obtained synchronously; the camera was connected to the computer through the data line to obtain the image data. The data were processed on the computer, and the feature point measurements, the longitudinal tilt fitting line, the step displacement, and other data were obtained.
In the course of the experiment, the experimental data were collected, and we then compared the data, including the spatial position of the bolt end and that of the bolt anchor, at each acquisition of image and depth data. In addition, the starting position of the forward direction of the anchor robot was set as the origin of the world coordinate system, and the spatial position conversions among the world coordinate system, the body coordinate system, and the camera coordinate system were carried out. At each data acquisition position, the horizontal position, forward position, and longitudinal tilt of the bolt robot were measured. To ensure that the optical axis of the camera was vertical, a horizontal collimator was used to level the camera lens, as shown in Figure 13.
By comparing the experimental data with the measured data of the bolt robot test-bed, the horizontal, forward, and longitudinal-tilt errors of the bolt robot were obtained, and error analysis and feasibility analysis of the experimental results were carried out.

4.2. Experiment Data Processing

The following are the steps for the positioning experiments:
(1) Set the positioning reference point and positioning axis. Set the positioning reference point and heading angle calibration reference and use the centerline of the floor tile as the calibration heading reference line during the calibration process. Using the reference point as the origin, establish a tunnel coordinate system and set a positioning point every 750 mm along the tunnel direction using a laser rangefinder and angle-measuring instrument.
(2) Select the bottom right corner of the positioning platform as the positioning point and align it with the calibration point, with the right boundary aligned with the heading angle line. Place the inclinometer on the top plane of the camera. Using the three leveling nuts, level the camera plane along the forward and right planes of the body, then use the Vimba Viewer to collect and store images in the corresponding order. Then, move the camera platform and repeat the operations at each calibration point position.
(3) Based on the image processing platform developed with Visual Studio and OpenCV, preprocess the stored images and sequentially extract the features. Store the extracted center pixel coordinates accordingly.
(4) Substitute the pixel coordinates of the images collected between adjacent positioning points into the positioning model and output the position and heading angle of the anchor drilling robot at each positioning point relative to the initial calibration point.
(5) Repeat steps (2)–(4) and ensure that each calibration point is measured five times.
(6) Take the average of the position coordinates and heading angles measured in the experiment and compare them with the target positioning attitude parameters to obtain the variation pattern of the positioning error of the camera positioning system with positioning distance in the simulated tunnel (a minimal sketch of this step follows the list).
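A minimal Python sketch of the averaging and comparison in step (6) is given below; the 750 mm step comes from step (1) and the five repeats from step (5), while the noise levels and measurement values are illustrative assumptions.

```python
import numpy as np

# Sketch of step (6): average the five repeated measurements taken at one
# calibration point and compare the mean with the target pose.
rng = np.random.default_rng(2)
measured = rng.normal(loc=[750.0, 0.0, 0.0],      # target (z, x, psi) at this point
                      scale=[30.0, 20.0, 0.5],    # assumed measurement scatter
                      size=(5, 3))                # five repeated measurements
target = np.array([750.0, 0.0, 0.0])
mean_pose = measured.mean(axis=0)
print("mean pose:", mean_pose, "error (z, x, psi):", mean_pose - target)
```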
We carried out the experiment at a constant speed and did not, for the time being, test the robot at different speeds. First, we simulated the environment of the coal mine roadway under ideal conditions and proved the feasibility of the positioning method in the experiment. Second, in the roadway the anchor point must be located accurately, and too high a speed may lead to anchor positioning errors; in later experiments we will also determine a robot moving speed that meets the needs of the project. Finally, having proven the feasibility of the localization method, we plan to carry out experiments at different machine speeds to verify its robustness. The steps, methods, and data analysis of the experiment are as follows.
We recorded the position of the experimental car during driving and collected image data through the visual system. Using the positioning method, the feature points in the images were extracted and the step distance and angle of the walking car were calculated on the computer; detailed data are shown in Table 2. Then, the trajectory of the car was fitted through the cumulative positioning method. In the experiment, five images were taken at each fixed point, the average value of the image data was calculated, and the location and measured position of each point were calculated.
The data were analyzed in MATLAB, and the longitudinal tilt error and X- and Z-direction error were obtained, respectively. From the experimental data, it can be seen that the longitudinal tilt positioning error of the bolt robot vision positioning system increased with the increase in the positioning distance, but in the range of 17,250 mm, the longitudinal tilt positioning error was less than 2° and the maximum longitudinal tilt positioning error was 1.845°. The positioning accuracy was high and met the positioning requirements of the bolting robot.
According to the results of Figure 15 and Figure 16, under the influence of the positioning errors in the x-direction and z-direction, the measurement trajectory of the visual positioning system shifted toward the negative direction of the x-axis. In the range of 17,250 mm, the z-direction positioning error of the bolt robot vision positioning system fluctuated within a certain range and was essentially kept within 100 mm. However, the x-direction positioning error was less stable and showed a linearly increasing trend; within the range of 6750 mm, the x-direction positioning error was controlled within 100 mm. The experiments fully proved that the visual positioning system had higher accuracy in the z-direction and lower positioning accuracy in the x-direction. In the range of 6750 mm, the accuracy of the visual positioning system met the positioning requirements of actual working conditions.

5. Error Analysis and Simulation Experiments

According to the nature of the measurement error, errors can be divided into systematic errors, random errors, and gross errors. Gross errors need to be excluded, while the distribution laws of systematic and random errors need to be explored and studied systematically to lay the foundation for the later compensation and correction of the measurement system. In the visual measurement system, the distinction between systematic error and random error is not absolute, and the nature of an error can change with the observation conditions. For example, in the visual measurement process of the drilling-and-anchoring robot, ambient light influences the measurement results: when the light intensity has a constant deviation from the optimal lighting conditions, the resulting measurement error appears as a systematic error; when the distance between the visual system and the light source changes continuously and the light intensity gradually increases or decreases, the systematic error changes accordingly; and when the light intensity changes randomly during operation, the measurement system error changes randomly.
The visual measurement system error mainly came from inaccurate measurement of the image feature point coordinates by the visual system. The main factors with a significant impact on the visual system are the errors caused by the environment and the feature point extraction algorithm, as well as the errors caused by image distortion. Environmental variables are random variables and produce random errors in the measurement process of the visual positioning system. The feature point extraction algorithm is greatly influenced by environmental variables, which directly affect the accuracy of feature point extraction and even determine whether a feature can be extracted at all; the effects of these two factors were therefore considered together. Image distortion belongs to systematic error; however, in this system, the distances of the feature points from the image center differ within the same image, and the corresponding degree of distortion also differs. Under given distortion parameters, the selection of feature points within the image is random, so the distortion error was treated as a random error.
The positioning algorithm error mainly came from the depth-of-field variation error and the non-vertical error between the camera’s optical axis and the roof. There is a certain margin of error in the height and width of the roadway during the excavation process of the coal mine comprehensive excavation working face. Under the conditions of high ground pressure in deep coal seams, the roof and side plates of the excavated roadway would move closer, and there might even be a certain degree of bulging. Therefore, during the operation of the bolt robot, the distance between the camera and the roof would randomly change according to the working conditions of the roadway, which was defined as random error in this system. Due to local variations in the roof and floor of the roadway, the bolt robot had pitch and roll angles. During the operation of the bolt robot, it also randomly changed with the conditions of the roof and floor of the roadway. Therefore, the errors caused were also defined as random errors. Whether it was depth-of-field error or non-vertical error between the camera and the roof, it could cause the pixel coordinates of feature points to shift, and its impact on monocular visual positioning systems cannot be ignored.
The simulated roadway in this experiment was close to the ideal case and was verified experimentally in a dark environment, so part of the above error is negligible. During the experiment, due to the accuracy of the experimental platform, there were optical-axis non-verticality errors and depth errors; therefore, it was necessary to analyze and simulate the errors in these directions.

5.1. Optical Axis Non-Vertical Error

The line perpendicular to the focal plane of the camera and passing through the focal point is called the optical axis. A vertical optical axis refers to an axis perpendicular to the image plane, which is the ideal configuration for the experiments. There are many reasons for a non-perpendicular optical axis, such as the roof not being parallel to the ground, insufficient leveling accuracy, low measurement accuracy, and so on.
The error of the non-vertical optical axis generally exists in an angle less than 1°, and the direction of the error occurs randomly. At the ideal height, the error range was first obtained.
From Formula (11):
$$u = (k_z f n_x + s f n_y + u_0 n_z) X_w + (k_z f o_x + s f o_y + u_0 o_z) Y_w + (k_z f a_x + s f a_y + u_0 a_z) Z_w$$
$$v = (k_x f n_y + v_0 n_z) X_w + (k_x f o_y + v_0 o_z) Y_w + (k_x f a_y + v_0 a_z) Z_w$$
$$1 = n_z X_w + o_z Y_w + a_z Z_w$$
In the formula, s is the twist coefficient.
The rotation coefficient of the optical axis was obtained by combining the rotation angle of the optical axis along the x-axis and the z-axis:
$$R_s = R_{rot,x}(\theta) \times R_{rot,z}(\omega) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\omega & -\sin\omega & 0 \\ \sin\omega & \cos\omega & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\omega & -\sin\omega & 0 \\ \cos\theta\sin\omega & \cos\theta\cos\omega & -\sin\theta \\ \sin\theta\sin\omega & \sin\theta\cos\omega & \cos\theta \end{bmatrix}$$
$$\theta^2 + \omega^2 = \alpha^2$$
In the formula, θ is the rotation angle about the x-axis, ω is the rotation angle about the z-axis, and α is taken as 1°.
Firstly, 1000 single-point error simulation experiments were carried out, and the positioning error distribution cloud map of the single positioning measurement is shown in Figure 17. According to the simulation results, the positioning error had a circular distribution. The probability of the positioning error in the x-direction being within 8 mm was 100%, the probability of the positioning error being within 4 mm was 91.4%, and the probability of the positioning error being within 2 mm was 72.7%. The longitudinal tilt error was so small that it could be ignored. Therefore, the point feature extraction error had little influence on the positioning accuracy of a single measurement, and the positioning error was within the allowable range of engineering application requirements.
In order to explore the effect of the non-vertical error of the optical axis, 1000 simulation experiments were carried out on the cumulative positioning error within 50 m, and the probability diagram of the error distribution within 50 m was obtained. The simulation results are shown in Figure 18. In the range of 50 m, the positioning error of the monocular vision estimation increased with the positioning distance. The error was generally distributed symmetrically, with the center of symmetry at an error of 0 mm. It can be seen from Figure 18a that the positioning error in the z-direction was controlled within 80 mm; from Figure 18b, that the positioning error in the x-direction was controlled within 60 mm; and from Figure 18c, that the longitudinal tilt positioning error was controlled within 2°. The simulation results showed that, within the range of 50 m and under the influence of the point feature extraction error, the x-direction, z-direction, and longitudinal tilt positioning errors of the monocular vision positioning measurement system met the positioning accuracy requirements of the bolt robot.
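To make the sampling of the tilt explicit, the following Python sketch draws random tilt angles satisfying $\theta^2 + \omega^2 \le \alpha^2$ with α = 1° and evaluates the purely geometric shift of a roof point viewed along the tilted optical axis. It is only an illustration of the sampling scheme: the camera-to-roof distance of 2500 mm is taken from the camera-selection discussion, the simple ray-plane model is an assumption, and the sketch does not reproduce the error figures reported here, which are obtained by propagating the tilt through the full positioning model.

```python
import numpy as np

def tilt_rotation(theta, omega):
    """R_s = R_x(theta) @ R_z(omega), as in the optical-axis tilt model."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
    rz = np.array([[np.cos(omega), -np.sin(omega), 0],
                   [np.sin(omega),  np.cos(omega), 0],
                   [0, 0, 1]])
    return rx @ rz

rng = np.random.default_rng(0)
h = 2500.0                                           # assumed camera-to-roof distance, mm
alpha = np.deg2rad(1.0)
errors = []
for _ in range(1000):
    a = rng.uniform(0, alpha)                        # total tilt magnitude
    phi = rng.uniform(0, 2 * np.pi)                  # tilt direction
    theta, omega = a * np.cos(phi), a * np.sin(phi)
    ray = tilt_rotation(theta, omega) @ np.array([0.0, 1.0, 0.0])  # nominal optical axis
    hit = ray * (h / ray[1])                         # intersection with the roof plane y = h
    errors.append(np.hypot(hit[0], hit[2]))          # lateral shift of the viewed point
errors = np.array(errors)
print("mean / max apparent shift [mm]:", errors.mean(), errors.max())
```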

5.2. Depth Error

Depth is the distance from the camera lens to the object plane (the roadway roof) and can be obtained with a laser rangefinder. A laser rangefinder is an instrument that uses a laser to accurately measure the distance to a target (also known as laser ranging). When working, it emits a very thin laser beam toward the target, and a photoelectric element receives the laser beam reflected by the target. A timer measures the time from emission to reception of the laser beam and calculates the distance from the observer to the target. Two methods are generally used to measure distance: the pulse method and the phase method. In pulse ranging, the laser emitted by the rangefinder is reflected by the measured object and received again by the rangefinder, which records the round-trip time of the laser; half of the product of the speed of light and the round-trip time is the distance between the rangefinder and the measured object. The accuracy of the pulse method is generally about ±10 cm. In this paper, a laser rangefinder was used to set the distance between positioning points and to measure the distance between the camera and the roof, which changed because of the uneven ground. Ideally, the measured depth is constant. The depth was measured 10 times during the experiment, and the results are shown in Table 3.
The depth-of-field error mainly comes from the change in the distance between the camera and the top plate due to the uneven ground during the movement of the anchor-drilling robot. Therefore, the pixel coordinates of the corresponding feature points and the pixel length between the feature points changed. This caused an estimated positioning error. In order to study the influence of depth-of-field error on the visual measurement system, the distribution law of the depth-of-field error was substituted into the body position solution model, and a simulation experiment of a single positioning error was carried out.
Ideally, the distance between the lens and the roof (the object plane) is 2636 mm; the measured depth of the 20 points ranged from 2604 mm to 2655 mm, giving a depth error of 51 mm. According to the characteristics of monocular vision, and considering the influence of depth on the measurement data, 1000 simulation experiments were carried out on a single positioning error. The simulation results showed that the positioning point error was linearly distributed with depth, and within the 51 mm depth range, the error was less than 2 mm.
Taking 0.6 mm as the variation amplitude, the errors over 50 m in the x-direction and z-direction were simulated, and the probability distribution of the error at 50 m was obtained. The positioning error in the z-direction was distributed within 2 mm, and that in the x-direction within 1.5 mm; the longitudinal tilt error was very small and could be ignored. The depth-of-field error had a relatively large influence on the x-direction positioning error of a single measurement and little influence on the z-direction and longitudinal tilt positioning. The overall positioning error was within the allowable range of engineering application requirements.

6. Conclusions

The experimental data were measured using the bolt robot test-bed and the error was analyzed. For the single visual bolt robot body positioning system, the following conclusions can be drawn:
• The monocular vision positioning system proposed in this paper could realize the function of fuselage positioning. The specific functions are as follows: (a) obtain image data information through the camera; (b) improve image quality through image processing methods such as distortion correction, image enhancement, and image denoising; (c) realize the position detection of the bolt robot through the feature extraction algorithm, straight-line fitting algorithm, and position estimation algorithm. In summary, the contrast of the image is increased through equalization processing, the blur produced during image motion is removed through the Wiener filtering algorithm, and the circular bolt-end feature is effectively and stably extracted based on the Hough circle algorithm. Through simulation, within a range of 100 m, the positioning algorithm error of the monocular vision positioning model was almost zero, which meets the positioning requirements of the anchor-drilling robot.
  • According to the analysis of the results obtained from the experiment, the effects of optical axis non-vertical error and depth error on the experimental results met the requirements. Through the analysis of the experimental results, the results obtained are as follows: (a) the single-point error of the comprehensive error was less than 8 mm, the comprehensive error of the x-direction and the z-direction was less than 100 mm, and the cumulative longitudinal tilt error was less than 1.6°, which met the needs of the project. (b) The horizontal synthesis error and forward synthesis error obeyed normal distribution. The probability that the combined error was within 50 mm was more than 98%. (c) The longitudinal tilt error was only affected by the non-vertical optical axis. (d) The error was mainly caused by the error of the non-vertical optical axis, accounting for more than 95% of the error. In summary, the non-vertical error between the camera and the roof had a great influence, but the comprehensive simulation results showed that the error was still within the range of practical engineering requirements.
In addition, there are some issues that need to be addressed here. First, in recent years, IMUs have been used in the field of mobile robot positioning. An IMU has a high update frequency and high estimation accuracy over short periods, but long-term use produces error accumulation, and the motion error increases with time, so it can only localize for a short time and provide relative positioning information. Its function is to measure the route of motion relative to a starting point, and it cannot provide specific information about absolute location. It is therefore often combined with GPS, but an underground coal mine lacks the necessary surface conditions, and the absence of GPS leads to a decline in IMU positioning accuracy. The method we designed is based on the arrangement of the anchor network, selects the bolt end as the feature, and uses vision to identify the feature points so as to obtain the current position of the robot. Of course, with the continuous development of IMUs, in future research we will also consider fusing them with machine vision for positioning and explore a fusion localization method that can be used in coal mines. Second, in some environments where the light is not strong, we can use auxiliary equipment to supplement the light so that the positioning conditions are met. Finally, with regard to the problem of dust, according to our existing experience, in the environment of a coal roadway or semi-coal roadway the dust problem has been solved, so that in the underground environment of a coal mine the anchor-drilling robot can work normally and complete the positioning.
The monocular vision body positioning and detection method for the anchor robot proposed in this paper could realize the positioning function, the error was within a reasonable range, and the method is feasible.

Author Contributions

Conceptualization, X.H.; Methodology, X.H. and Y.Z.; Software, X.Y.; Validation, Y.Z. and X.Y.; Formal analysis, X.H., Y.Z. and J.Z.; Investigation, J.Z., R.W., Z.W. and H.J.; Resources, R.W., Z.W. and H.J.; Data curation, R.W., Z.W. and H.J.; Writing—original draft, Y.Z.; Writing—review & editing, X.H., X.Y. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Innovation Research Group Project of the National Natural Science Foundation of China (grant number: 52121003), National Key Research and Development Program of China (grant number: 2022YFC2904105) and Key Research and Development Program of Hebei Province (grant number: 23311805D).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, B. Analysis of the current situation and disaster prevention and control of deep mining in China’s coal mines. China Pet. Chem. Stand. Qual. 2020, 16, 192–193. [Google Scholar]
  2. Cheng, J.; Zhu, M.; Wang, Y.; Yun, H.; Cui, W. Cascade construction and key technologies of geological model for intelligent and precise coal mining face. Coal Sci. Technol. 2019, 44, 2285–2295. [Google Scholar] [CrossRef]
  3. Wang, G.; Meng, L. Development of coal mine intelligence and its technical equipment. China Coal. 2023, 49, 1–13. [Google Scholar]
  4. Ma, H.; Chao, Y.; Xue, X.; Mao, Q.; Wang, C. Binocular vision-based displacement detection method for anchor digging robot. J. Mine Autom. 2022, 48, 16–25. [Google Scholar]
  5. Tian, Y. Present situation and development direction of navigation technology of boom-type roadheader. J. Mine Autom. 2017, 43, 37–43. [Google Scholar]
  6. Fu, S.; Li, Y.; Zhang, M.; Zong, K.; Cheng, L.; Wu, M. Ultra-Wideband pose detection system for boom-type roadheader based on Caffery transform and Taylor series expansion. Meas. Sci. Technol. 2018, 29, 015101. [Google Scholar] [CrossRef]
  7. Wang, Z. Research on the key technology of autonomous positioning of mining cantilever roadheader based on machine vision. Autom. Appl. 2023, 64, 177–179. [Google Scholar]
  8. Huang, D.; Yang, L.; Luo, W.; Zhang, X.; Shi, Z.; Huang, J.; Wang, J. Research on real-time positioning measurement method of road header based on vision/inertial navigation. Laser Technol. 2017, 41, 19–23. [Google Scholar] [CrossRef]
  9. Ma, H.; Zhou, Z.; Sun, L. Mathematical model of automatic positioning of coal road bolt drilling rig. Min. Mach. 2008, 40, 24–27. [Google Scholar]
  10. Cao, D.; Zhuang, X. Research on robot visual image enhancement and obstacle recognition in coal mine roadway environment. Coal Min. Mach. 2017, 36, 39–41. [Google Scholar] [CrossRef]
  11. Tian, Y. Talking about the application prospect of machine vision technology in coal mines. Ind. Mine Autom. 2010, 36, 30–32. [Google Scholar]
  12. Shao, K. Research on Automatic Positioning Drilling Technology of Tunnel Rock Drill Trolley. Constr. Mach. Maint. 2021, 3, 48–50. [Google Scholar]
  13. Zhang, K.; Tian, Y.; Jia, Q. Application Status and Trend of Machine Vision in Coal Machinery Equipment. Coal Mine Mach. 2020, 41, 123–125. [Google Scholar]
  14. Zhi, N.; Mao, S.; Li, M. Low-luminance image enhancement algorithm for coal mine underground based on double gamma function. J. Liaoning Tech. Univ. (Nat. Sci. Ed.) 2018, 37, 191–197. [Google Scholar] [CrossRef]
  15. Miao, L.; Che, Z. Vision positioning technology of mobile robot based on adaptive down sampling. Appl. Opt. 2017, 38, 429–433. [Google Scholar] [CrossRef]
  16. Montijano, E.; Cristofalo, E.; Zhou, D.; Schwager, M.; Saguees, C. Vision-based distributed formation control without an external positioning system. IEEE Trans. Robot. 2016, 32, 339–351. [Google Scholar] [CrossRef]
  17. Aref, M.M.; Oftadeh, R.; Ghabcheloo, R.; Mattila, J. Real-time vision-based navigation for nonholonomic mobile robots. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 515–522. [Google Scholar]
  18. Wu, G.; Zheng, J.; Bao, J.; Li, S. Mobile robot location algorithm based on image processing technology. EURASIP J. Image Video Process. 2018, 1, 1–8. [Google Scholar] [CrossRef]
  19. Huang, C.; Chen, D.; Tang, X. Implementation of Workpiece Recognition and Location Based on Opencv. In Proceedings of the 2015 8th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 12–13 December 2015; pp. 228–232. [Google Scholar]
  20. Apedo, K.L.; Ronel, S.; Jacquelin, E.; Massenzio, M.; Bennani, A. Theoretical analysis of inflatable beams made from orthotropic fabric. Thin-Walled Struct. 2009, 47, 1507–1522. [Google Scholar] [CrossRef]
  21. Zhou, S.; Chen, J.; Wen, Q. A weight-based optimization algorithm of histogram equalization. In Proceedings of the SPIE 10843, 9th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optoelectronic Materials and Devices for Sensing and Imaging, 108430T, Chengdu, China, 8 February 2019. [Google Scholar] [CrossRef]
  22. He, Y.-X. The Influence of Image Enhancement Algorithm on Face Recognition System. In Proceedings of the 2021 2nd International Conference on Computer Engineering and Intelligent Control (ICCEIC), Chongqing, China, 12–14 November 2021; pp. 20–24. [Google Scholar]
  23. Al-Shemarry, M.S.; Li, Y.; Abdulla, S. Identifying License Plates in Distorted Vehicle Images: Detecting Distorted Vehicle Licence Plates Using a Novel Preprocessing Methods With Hybrid Feature Descriptors. IEEE Intell. Transport. Syst. Mag. 2023, 15, 6–25. [Google Scholar] [CrossRef]
  24. Alimagadov, K.A.; Umnyashkin, S.V. Application of Wiener Filter to Suppress White Noise in Images: Wavelet vs Fourier Basis. In Proceedings of the 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), St. Petersburg, Moscow, Russia, 26–29 January 2021; pp. 2059–2063. [Google Scholar]
  25. Liu, S.; Chai, Y.; Yuan, R.; Miao, H. Laser 3D Tightly Coupled Mapping Method Based on Visual Information. Ind. Robot. Int. J. Robot. Res. Appl. 2023, 7–25. [Google Scholar] [CrossRef]
  26. He, B.; Geng, X.; Hu, W. An improved LDR image enhancement algorithm based on visual characteristics and gray scale compensation. J. Univ. Chin. Acad. Sci. 2018, 35, 391–401. [Google Scholar]
  27. Le, H.M.; Cao, L.; Do, T.N.; Phee, S.J. Design and modelling of a variable stiffness manipulator for surgical robots. Mechatronics 2018, 53, 109–123. [Google Scholar] [CrossRef]
Figure 1. Excavation face and roof bolt array of roadway.
Figure 2. Technology roadmap.
Figure 3. The visual positioning system.
Figure 4. The improved reckoning positioning model.
Figure 5. Diagram of the coordinate system of drilling and the anchor unit.
Figure 6. Ideal roadway environment positioning model.
Figure 7. Analysis of the angle error of anchor rod spacing.
Figure 8. Image distortion correction effect. (a) Wide-angle image. (b) Image after distortion correction.
Figure 9. Image enhancement and filtering effect. (a) Original image. (b) Histogram equalization. (c) Wiener filtering.
Figure 10. Image enhancement and filtering effect. (a) Characteristic line fitting tunnel environment. (b) Green block diagram detail amplification.
Figure 11. Feature point image coordinates diagram.
Figure 12. Feature point tracing principle.
Figure 13. Experimental scene and platform. (a) Image feature points of the bolt head, (b) simulated roadway environment, (c) level meter, (d) camera and bodywork, (e) horizontal goniometer, (f) roadway.
Figure 14. Camera calibration and acquisition of images. (a–f) Different inter-frame images.
Figure 15. Mean error curve of heading angle positioning.
Figure 16. Z-, x-orientation error average curve.
Figure 17. Optical axis non-vertical error single-point positioning simulation error.
Figure 18. Optical axis non-vertical error within 50 m accumulated error. (a) Simulation data in the z-direction and error distribution at 50 m. (b) Simulation data in the x-direction and error distribution at 50 m. (c) Simulation data of the course angle and the course angle error distribution at 50 m.
Table 1. Target body positioning parameters list.

Parameter | Symbol | Significance
Bodywork CSYS | O_b X_b Y_b Z_b | —
Horizontal error | x | The deviation distance between the bodywork CSYS and X_w of the roadway CSYS
Advance error | z | The deviation distance between the bodywork CSYS and Z_w of the roadway CSYS
Course angle | θ | The deviation angle between the bodywork CSYS and Y_w of the roadway CSYS
Table 2. Table of position and pose parameters for the positioning experiment.

Mark Point | Parameter of Marking Position (z/mm, x/mm, θ/°) | Parameter of Survey Location (z/mm, x/mm, θ/°) | Positioning System Error (Δz/mm, Δx/mm, Δθ/°)
1 | 750, 136.5, 20 | 1189.2, 98.9, 19.8 | −37.6, 59.2, −0.228
2 | 1500, 136.5, 20 | 1945.1, 88.7, 19.8 | −47.8, 65.1, −0.162
3 | 2250, 136.5, 20 | 2702.2, 69.2, 20.2 | −67.3, 72.2, 0.172
4 | 3000, 136.5, 20 | 3479.0, 37.4, 21.0 | −99.1, 99.0, 1.015
5 | 3750, 136.5, 20 | 4182.6, 53.3, 20.2 | −83.9, 71.3, 0.186
6 | 4500, 136.5, 20 | 4923.4, 59.9, 20.1 | −77.2, 62.1, 0.086
7 | 5250, 136.5, 20 | 5660.7, 50.8, 19.9 | −86.3, 49.4, −0.150
8 | 6000, 136.5, 20 | 6411.5, 29.2, 19.9 | −107.9, 50.2, −0.145
9 | 6750, 136.5, 20 | 7118.1, 41.2, 19.2 | −95.9, 6.8, −0.833
10 | 7500, 136.5, 20 | 7871.6, 6.4, 19.5 | −130.7, 10.3, −0.495
11 | 8250, 136.5, 20 | 8588.2, −1.2, 19.0 | −138.3, −23.1, −0.957
12 | 9000, 136.5, 20 | 9337.1, −34.4, 19.4 | −171.6, −24.2, −0.610
13 | 9750, 136.5, 20 | 10150.7, −98.0, 21.2 | −234.5, 20.7, 1.173
14 | 10500, 136.5, 20 | 10728.9, −19.9, 19.0 | −153.5, −120.9, −0.986
15 | 11250, 136.5, 20 | 11584.3, −109.5, 20.2 | −243.1, −15.5, 0.177
16 | 12000, 136.5, 20 | 12236.4, −99.8, 19.6 | −233.3, −113.4, −0.422
17 | 12750, 136.5, 20 | 13107.0, −161.9, 21.0 | −295.5, 7.2, 1.038
18 | 13500, 136.5, 20 | 13814.3, −147.6, 20.4 | −281.2, −35.5, 0.407
19 | 14250, 136.5, 20 | 14395.4, −76.6, 18.3 | −210.1, −204.4, −1.715
20 | 15000, 136.5, 20 | 15258.5, −174.8, 19.5 | −308.4, −91.3, −0.483
21 | 15750, 136.5, 20 | 16050.4, −221.6, 20.5 | −355.2, −49.4, 0.535
22 | 16500, 136.5, 20 | 16804.2, −240.6, 20.8 | −374.2, −45.6, 0.843
23 | 17250, 136.5, 20 | 17580.4, −279.8, 21.8 | −413.4, −19.4, 1.845
Table 3. Depth measurement of the experiment.

Distance of Z-axis/mm | 0 | 700 | 1400 | 2100 | 2800 | 3500 | 4200 | 4900 | 5600 | 6300
Mean value/mm | 2604 | 2603.6 | 2621.6 | 2655.2 | 2650.4 | 3628.6 | 2646.4 | 2649 | 2646 | 2647.4
Distance of Z-axis/mm | 7000 | 7700 | 8400 | 9100 | 9800 | 10500 | 11200 | 11900 | 12600 | 13300
Mean value/mm | 2635 | 2646.6 | 2642 | 2640.4 | 2633.2 | 2631 | 2641.8 | 2640 | 2621.6 | 2642.6