Article

Fast Detection of Idler Supports Using Density Histograms in Belt Conveyor Inspection with a Mobile Robot

Department of Cybernetics and Robotics, Faculty of Electronics, Photonics and Microsystems, Wrocław University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(23), 10774; https://doi.org/10.3390/app142310774
Submission received: 27 September 2024 / Revised: 8 November 2024 / Accepted: 19 November 2024 / Published: 21 November 2024

Abstract

The automatic inspection of belt conveyors is attracting increasing attention in the mining industry. Using mobile robots to perform the inspection makes it possible to increase the frequency and precision of inspection data collection. One of the issues that needs to be solved is locating the inspected objects, such as conveyor idlers, in the vicinity of the robot. This paper presents a novel approach to analyzing 3D LIDAR data to detect idler frames in real time with high accuracy. Our method processes a point cloud to determine the positions of the frames relative to the robot. The detection algorithm utilizes density histograms, Euclidean clustering, and a dimension-based classifier. The proposed data flow processes each scan independently to minimize the computational load, which is necessary for real-time performance. The algorithm is verified with data recorded in a raw material processing plant by comparing the results with human-labeled objects. The proposed process is capable of detecting idler frames in a single 3D scan with an accuracy above 83%. The average processing time of a single scan is under 22 ms, with a maximum of 75 ms, ensuring that idler frames are detected within the scan acquisition period, allowing continuous operation without delays. These results demonstrate that the algorithm enables the fast and accurate detection and localization of idler frames in real-world scenarios.

1. Introduction

Belt conveyors are among the most important means of transporting bulk material in a variety of industries, from mining to food processing. They owe this position to combining high efficiency, which allows large volumes to be transported in a short time, with cost effectiveness. The report [1] indicates that, due to the importance of conveyors to the operation of mines, an interruption of conveyor operation caused by malfunction, repairs, or replacement can lead to losses of up to EUR 1 million per hour. Nevertheless, like any other machinery, conveyors rely on regular maintenance for smooth functioning, and neglecting this aspect can have severe consequences. An analysis of accident data in the report of the French Ministry for an Ecological and Solidarity Transition [2] shows an increasing number of reported belt conveyor fires; these were mainly (67% of cases) the result of equipment defects, and in 43% of those cases the leading cause was an inadequate frequency and quality of preventive maintenance. In [3], the authors give an example of the catastrophic consequences of inadequate maintenance: a belt conveyor fire at a Brazilian mining company. Therefore, regular inspection and proper maintenance are expected not only to reduce the costs of unexpected malfunctions but also to prevent more serious accidents.
Currently, the primary maintenance method involves periodic inspections carried out by human workers, who check elements along conveyors that can span many kilometers in a typical industrial plant. Such inspections are inefficient and costly, and they pose an additional hazard to personnel operating in the vicinity of running machinery, which collectively leads to infrequent inspection schedules. Consequently, a number of research and development efforts are underway to automate belt conveyor inspection. Replacing human inspection with automated inspection is expected to be beneficial in many respects. It should reduce both the time and cost of unexpected downtime thanks to an increased inspection frequency. Moreover, it will improve the allocation of human resources, as fewer personnel will be needed to search for faulty elements, allowing more focus on actual repair tasks. Lastly, the shift to automated inspection is expected to reduce safety risks by moving personnel from in-mine presence to remote operation.
Up to now, the greatest progress has been made in automating the inspection of the main active components that can be continuously monitored using stationary equipment, such as the drive systems and belts. These individual units can be equipped with dedicated monitoring systems. However, monitoring the idlers remains a challenge. The idlers form the support carrying the belt and typically consist of a frame and a number of idler rolls (cf. Figure 1).
With idlers positioned every several meters along the conveyor, the number of idler rolls may exceed thousands per kilometer. Damage to an idler roll can lead to severe issues, such as belt rips, overheating, and even fires. Various types of idler roller damage and their potential consequences for belt integrity are presented in [6]. Therefore, it is crucial to detect issues with idlers before they deteriorate into failures. Because the deterioration of idler rolls is a source of vibrations, excessive friction, noise, and heat [7], various solutions have been developed for inspection and pre-failure detection in idler rolls. The technologies used include the analysis of audio signals from a microphone array processed with convolutional neural networks (CNNs) [8], support vector machines (SVMs), or other neural networks [9], infrared (IR) or RGB images [4,10], load and speed sensors, and combinations of these [11]. These solutions allow for automation of the idler roll state analysis. However, there remains the issue of delivering the measurement unit to each individual idler roll. The review of automated solutions for belt conveyor inspection and maintenance in [1,7,12] presents several approaches that have been tested; these can be divided into three main groups.
  • The direct approach, in which sensors are mounted along the conveyor. Some solutions within this approach use discrete sensors, others distributed ones. In the former case, the sensors are either integrated with the idler rolls or mounted on the idler frames. The sensors communicate using Internet of Things (IoT) technology, such as a mesh network. This approach allows for the precise, continuous monitoring of each idler’s state. In the latter case, a fiber optic cable is installed along the conveyor for distributed acoustic sensing of overheating [13] or general damage of idlers [14]. However, due to the length of the conveyors and the sheer number of idlers, this approach may not be the most economically efficient.
  • The second approach uses the belt as a transport medium for sensors. Examples include solutions where sensors are integrated with the belt (e.g., Honeywell BeltAIS) or IMU-based sensor devices inserted into transported material [15]. The measurements are indirect, as they measure the belt’s reaction to the idler rollers’ states, making them less precise than direct observations of the rollers. On the other hand, employing a single measurement unit for multiple idlers reduces the overall cost of monitoring.
  • The third approach aims to combine the advantages of direct measurement with the cost effectiveness of a single sensor unit by mounting sensors on a mobile platform. Within this approach, various solutions have been proposed. Some use a rail along the conveyor that transports a sensor module [16] or a track with an inspection robot on it, with either an RGB camera to detect belt displacement [17] or an audio signal and infrared image sensors to detect idler malfunction [18]. Others employ various mobile platforms for the transportation and operation of sensors. For example, Machinery Automation and Robotics (currently Scott) [1] presented the Robotic Idler Replacement System—a tracked platform with an industrial manipulator capable of inspecting and replacing idler rolls. Lösch et al. [19] introduced a manipulator mounted on a four-wheel platform for environmental monitoring, which cooperates with an Internet of Things (IoT) sensor network and is capable of autonomous driving. Among platforms investigated for carrying sensor modules are four-wheeled robots [20,21], legged robots [22], and a hybrid wheeled–tracked mobile robot [23]. Furthermore, some researchers explore the use of unmanned aerial vehicles (UAVs) to create an integrated monitoring and inspection system employing quadrocopters [3,24] or combined systems of mobile robots with UAV landing pads [25]. Most of these platforms rely on teleoperation. Even in cases when the measurement process with placement of the sensor next to an idler roll is automated, the platform’s motion is usually controlled by the human operator. This is caused by the lack of sufficiently reliable localization, which is crucial for an autonomous robot to move along the predefined paths and execute tasks in set locations.
The comparison of approaches presented in [12] emphasizes the usability of mobile robots due to their precision and ability to carry versatile sensors. At the same time, navigation and mobility are indicated as the main challenges of such platforms. In environments where precise absolute localization systems are not available, robots cannot rely on predefined motion plans. Instead, on-board sensors must be used to drive a robot to expected destinations and to aim its sensory system towards the measured objects.
Currently, mobile industrial robots are frequently equipped with Light Detection and Ranging (LIDAR) sensors to scan their surroundings using infrared laser beams. The LIDAR scanners most often used for localization are categorized into two types based on the shape formed by the transmitted beams: two-dimensional (2D or planar), where the light beams are confined to a single plane, and three-dimensional (3D or spatial), where the beams are distributed across three dimensions.
Long range, field of view, and precision have made spatial LIDAR scanners applicable in a variety of fields, including urban and underground exploration, archaeology, forestry, agriculture, environmental monitoring, and more [26]. This is supported by a growing body of research on data processing methods and an increasing number of sensor models from numerous manufacturers [27]. Additionally, 3D LIDAR scanners play an important role in autonomous driving [28,29]. Typically, however, 3D point clouds are used to build a full 3D map of the whole area, which greatly increases computation time and memory utilization. Examples of this approach for localization in underground mines can be found in [30], and for outdoor and indoor scenarios in [31,32]. Yet the localization time may vary greatly: in [32], the median processing time of the localization algorithm is about 1 s, while in [31] the authors report about 0.1 s. The latter may be sufficient in many applications, but it may also introduce unnecessary constraints on robot performance. It is a natural expectation that reducing the amount of data to be processed, particularly in the complex stages of algorithms, will decrease the processing time. However, it remains crucial that the retained data are sufficient to achieve acceptable precision.
In this paper, we investigate the applicability of point density in detecting the idler frames of belt conveyors. Filtering regions with an increased density of point cloud points is expected to indicate the regions where the searched objects may be situated. The initial selection of these regions significantly reduces the number of points processed further in the detection and classification stages.
Some researchers have used density-based filtering to detect long objects, both natural, such as trees [33,34,35], and artificial, such as street poles [36] or bridge elements [37]. However, in all cases, the authors processed dense point clouds obtained from geodesic 3D LIDAR, often using point clouds aggregated from multiple locations. Such an approach is not suitable for mobile robots: in [34], the authors state that a single scan contains 30 million points and its collection lasts 8 min, and in [33] it takes 2.5 min to collect 50 million points, with 3 h needed to process a single scan. Another common element of those cases is that the authors search for objects that extend in one direction. After the first clustering based on density analysis, the selected regions are sliced with one or more planes perpendicular to the direction in which the object extends, and further analysis is performed on 2D images.
The idler frames that are to be detected by the inspection robot differ from such long objects. They occupy a rather compact space and are placed regularly along the belt conveyor. Consequently, the mentioned methods dedicated to long objects cannot be applied directly. Motivated by that fact, this paper extends the previous approach by proposing a density analysis in two orthogonal directions rather than one, which allows the method to be applied to compact objects. Additionally, the whole processing operates on 3D points and is not restricted to slice planes.
The contribution of the paper is a novel processing workflow for 3D LIDAR scans for the detection of idler supports. The main properties of the proposed algorithm are as follows:
  • The method utilizes selection of the objects based on density segmentation, but the processing scheme proposed in the paper differs from the previously mentioned works in that it is tailored with consideration of the requirements of mobile robots. In particular, it is assumed that the data are obtained with a portable LIDAR scanner with a short scanning time and generating a relatively low number of points for each scan.
  • The focus of the algorithm was on minimizing the computational load. Consequently, the total processing time of the proposed workflow is short enough to allow processing of the scans in real time with low latency.
  • The proposed solution is purposely limited to the data from a single sensor.
The proposed solution is expected to become a component of a future multi-sensor navigation system for a mobile robot dedicated to idler roller inspection.

2. Materials and Methods

2.1. Process Overview

The proposed method for identifying the positions of idler frames in the image from 3D LIDAR consists of several stages, illustrated in Figure 2.
The process encompasses the following:
  • Data Acquisition (DA): In this stage, distance measurements from the LIDAR are filtered and transformed into a point cloud.
  • Cropping and Alignment (CA): The points are cropped to reduce their number and aligned to match the local coordinate system with the global one.
  • Density Segmentation (DS): This stage determines the regions where the objects (belts and idler supports) are most likely located based on the density of points in the regions.
  • Support Clustering (SCt): The points from regions of interest found in the segmentation stage are grouped into spatial clusters for further analysis.
  • Support Classification (SCs): A classification of the clusters of points is performed in this stage.
  • Position Estimation (PE): The final stage of the process calculates the location of the detected objects.
The data processing software was built as nodes for ROS 2 Humble [38] on Ubuntu 22.04 [39]. The core software was implemented in C++ using the Point Cloud Library version 1.12 with the visualization written in Python. The source code of the ROS package is available at https://github.com/wust-dcr/fast_idler_supports_detection (accessed on 8 November 2024).
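For illustration, a minimal sketch of how such a processing node can be structured in ROS 2 Humble with PCL is given below. The class name, topic name, and the placeholders for the individual stages are illustrative assumptions and do not reproduce the actual package, which is available in the repository linked above.

#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

// Illustrative node: receives a single LIDAR scan and passes it through the stages.
class IdlerSupportDetectionNode : public rclcpp::Node {
public:
  IdlerSupportDetectionNode() : Node("idler_support_detection") {
    sub_ = create_subscription<sensor_msgs::msg::PointCloud2>(
        "velodyne_points", rclcpp::SensorDataQoS(),
        [this](sensor_msgs::msg::PointCloud2::SharedPtr msg) {
          auto cloud = std::make_shared<pcl::PointCloud<pcl::PointXYZ>>();
          pcl::fromROSMsg(*msg, *cloud);  // Data Acquisition: scan as a point cloud
          // Cropping and Alignment -> Density Segmentation -> Support Clustering
          // -> Support Classification -> Position Estimation would be called here,
          // each stage consuming the output of the previous one.
        });
  }
private:
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr sub_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<IdlerSupportDetectionNode>());
  rclcpp::shutdown();
  return 0;
}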
In this section, we describe the processing stages and methods selected, along with the rationale for their choice, highlighting parameters influencing process performance. Section 3 will present the test parameters’ values and justifications.

2.2. Data Acquisition

The first stage of data processing has the role of reading the data from the LIDAR interface and transforming it to a suitable format for further processing. After the collection of the raw sensor data from the LIDAR with the ROS driver, the points are transformed to Cartesian coordinates and represented in sensor_msgs/PointCloud2 format.
A rotating 3D LIDAR outputs a scan as an array of distances measured from a single point sweeping the angular field of view. The returned values may be interpreted as points in spherical coordinates
$$ P^S_{i,j} = (d, \theta_i, \phi_j)^T, $$
where d is the measured distance at polar angle θ and azimuth angle ϕ . The angles depend on the mechanical construction of the sensor and are the same in each consecutive scan. To obtain the point cloud representation, the points from the 3D scan are transformed to the Cartesian coordinates. To conform to the ROS standards, the points are represented in the local coordinate system with the X axis directed to the front of the sensor, the Y axis to the left, and the Z axis up. The transformation to the Cartesian coordinates is given by
$$ P^C_{i,j} = (x, y, z)^T = (d \cos\theta_i \cos\phi_j,\ d \cos\theta_i \sin\phi_j,\ d \sin\theta_i)^T. $$
In Section 3.1, we provide the details on the data transformation for the specific 3D LIDAR used in the experiments.
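As a minimal illustration of this conversion (assuming the per-channel elevation angles θ_i and azimuth angles ϕ_j are already known in radians; in the actual workflow this step is performed by the ROS driver of the sensor), the transformation can be sketched as follows; the struct and function names are illustrative.

#include <cmath>

struct CartesianPoint { float x, y, z; };

// Convert a single range measurement (distance d at elevation theta_i and
// azimuth phi_j) to Cartesian coordinates in the sensor frame:
// X forward, Y left, Z up.
CartesianPoint sphericalToCartesian(float d, float theta_i, float phi_j) {
  CartesianPoint p;
  p.x = d * std::cos(theta_i) * std::cos(phi_j);
  p.y = d * std::cos(theta_i) * std::sin(phi_j);
  p.z = d * std::sin(theta_i);
  return p;
}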

2.3. Cropping and Alignment

The second stage of the process fulfills two tasks: it reduces the number of points to process by clipping those outside the region of interest (ROI), and it transforms the points by aligning the scan to a global reference frame via rotation. Due to the uneven terrain on which the robot moves, the relative position of the LIDAR to the ground changes for each scan, and the rotation angles must be calculated at each step.
Those goals are achieved in three steps: first, distant points are removed; then, the position of the ground is estimated; finally, the rotation matrix is calculated to align the ground with the horizontal plane, and the scan points are rotated accordingly.
The position of the ground plane is estimated using the Random Sample Consensus (RANSAC) method. The method detects predefined shapes based on hypotheses generated by random sampling of the dataset. Hypotheses are then accepted or rejected by a voting mechanism involving all remaining data samples. The method was developed by Fischler and Bolles [40] and was later modified to increase its applicability and efficiency. In [41], a version of the RANSAC algorithm was proposed to effectively detect several basic shapes, such as planes, spheres, cylinders, cones, and tori. The method was successfully applied in a similar scenario of ground detection with 3D LIDAR in an industrial environment in [42]. A review of the state-of-the-art developments in the RANSAC algorithm and its variants for robotic applications is presented in [43].
In the proposed workflow, the ground plane $\pi_G$ is calculated using the implementation of the RANSAC algorithm from the PCL library [44]. The points are matched to the ground plane represented by the four-parameter pcl::SACMODEL_PLANE model, equivalent to the Hessian normal form
$$ \pi_G : n_x x + n_y y + n_z z + n_0 = 0, $$
where $n_G = (n_x, n_y, n_z)^T$ is a vector normal to the plane and $n_0$ is the distance between the plane and the sensor origin.
Based on the parameters obtained for the ground plane, the points within a predefined minimum distance $d_{gmin}$ from the ground are excluded. The remaining points are then rotated and translated to align the ground plane with the XY plane:
$$ P^A = R(n_G) P^C - (0, 0, n_0)^T. $$
The $3 \times 3$ rotation matrix $R(n_G)$ is obtained from the Rodrigues formula
$$ R(n_G) = \cos\alpha \, \mathrm{id}_3 + (1 - \cos\alpha) \, v v^T + \sin\alpha \, [v], $$
where $\mathrm{id}_3$ denotes the $3 \times 3$ identity matrix and $[v]$ is the antisymmetric matrix generated from $v = (v_x, v_y, v_z)^T$ as
$$ [v] = \begin{pmatrix} 0 & -v_z & v_y \\ v_z & 0 & -v_x \\ -v_y & v_x & 0 \end{pmatrix}. $$
The rotation axis $v$ is obtained from the vector product of the ground plane normal $n_G$ and the unit vector parallel to the sensor Z axis, and the rotation angle $\alpha$ from their scalar product:
$$ v = n_G \times (0, 0, 1)^T, \qquad \alpha = \arccos \langle n_G, (0, 0, 1)^T \rangle. $$
An example of the transformation results at each step is presented in Section 3.2. The results of the cropping and alignment stage can be adjusted by tuning the distance-to-ground threshold $d_{gmin}$.
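A condensed sketch of how this stage can be realized with the PCL interfaces mentioned above is given below. The function name and the handling of the normal direction are illustrative assumptions; the removal of the ground inliers within $d_{gmin}$ is omitted for brevity.

#include <memory>
#include <Eigen/Geometry>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the ground plane with RANSAC and rotate/translate the cloud so that the
// ground coincides with the XY plane (z = 0). Illustrative sketch only.
pcl::PointCloud<pcl::PointXYZ>::Ptr alignToGround(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud, double d_g_min = 0.05) {
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  pcl::ModelCoefficients coeffs;
  pcl::PointIndices ground_inliers;      // points within d_g_min of the plane
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(d_g_min);
  seg.setInputCloud(cloud);
  seg.segment(ground_inliers, coeffs);

  // Plane coefficients (n_x, n_y, n_z, n_0); orient the normal upward so that
  // the sign of the vertical shift is unambiguous.
  Eigen::Vector3f n(coeffs.values[0], coeffs.values[1], coeffs.values[2]);
  float n0 = coeffs.values[3];
  if (n.z() < 0.0f) { n = -n; n0 = -n0; }

  Eigen::Affine3f transform = Eigen::Affine3f::Identity();
  transform.rotate(Eigen::Quaternionf::FromTwoVectors(n, Eigen::Vector3f::UnitZ()));
  transform.translation() = Eigen::Vector3f(0.0f, 0.0f, n0);  // ground satisfies z + n_0 = 0 after rotation

  auto aligned = std::make_shared<pcl::PointCloud<pcl::PointXYZ>>();
  pcl::transformPointCloud(*cloud, *aligned, transform);
  // The ground inliers would subsequently be removed from the aligned cloud.
  return aligned;
}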

2.4. Density Segmentation

Previous research has shown that density segmentation is an effective tool for detecting objects that extend in one direction. It was applied to both objects extending vertically, such as trees or poles [34,36], and horizontally, like a bridge construction [37]. The core idea of this approach involves using a parallel projection along the line of span to reduce spatial information to a 2D plane and subsequent analysis of the created planar object. However, in the considered context of belt conveyor inspection, the direct application of this approach is not sufficient. While the belt qualifies as an elongated object and this method can be applied directly to it, the idlers and their supports occupy small, confined areas. Therefore, detecting them through density segmentation requires a new approach, which is developed in this study.
Observing industrial sites with belt conveyors and considering the environment along the conveyor, objects may be categorized into three main groups: long objects, such as belts or protective barriers, which occupy most of the space along the considered line; periodic objects of similar shape, like idlers or pillars; and other objects that appear irregularly. The density graph, created with a parallel projection of 3D points to a plane along the selected line, is expected to reflect this categorization. It is expected to show high density for elongated objects, medium density for periodic objects, and low density for irregular ones. Moreover, the density for periodic objects should correspond to the distance between consecutive instances of those objects. The parameters that distinguish these three classes depend on the sensor resolution, placement relative to the objects, and the grid size used in the analysis.
To exploit this property, the proposed solution uses 2D histograms representing the density of points. The histogram is constructed by a parallel projection of points onto a plane along a selected line. This plane is partitioned into a grid with square cells of size $a$, and each cell in the histogram is associated with the number of points within it. Since the analyzed point cloud is the result of the alignment in the previous stage, the 3D points are projected along the main axes. Specifically, two projections are used: the top one along the Z axis and the front one along the X axis of the robot. The resulting 2D histograms are denoted as $H_{XY}$ and $H_{YZ}$, respectively. The $(i, j)$ cell of a histogram is given by
$$ H_{XY}(i,j) = \#\left\{ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} P^A : (i-1)a \le x < ia \,\wedge\, (j-1)a \le y < ja \right\}, $$
$$ H_{YZ}(i,j) = \#\left\{ \begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} P^A : (i-1)a \le y < ia \,\wedge\, (j-1)a \le z < ja \right\}, $$
where $\#\{\cdot\}$ denotes the number of elements in the set.
Considering the shape of the idler frames, they are expected to appear as periodic objects in the $H_{YZ}$ histogram and as locally elongated objects in the $H_{XY}$ histogram. The first step of the density segmentation stage involves identifying objects that extend along the Z axis in the $H_{XY}$ histogram, creating a set of point groups considered as idler candidates. The cells are classified into an idler candidate group if
$$ H_{XY}(i, j) - H_{XY}(i + b_r, j) \ge b_i \quad \text{and} \quad H_{XY}(i, j) - H_{XY}(i - b_r, j) \ge b_i, $$
i.e., they establish a local maximum along the X axis with a number of points higher by at least $b_i$ than the cells in the $b_r$ range. The second step entails processing $H_{YZ}$ to find objects extended along the X axis and detect conveyor belts. Classification as an elongated object is done through thresholding, where points within cells satisfying $H_{YZ}(i, j) > b_{th}$ constitute the set of elongated objects. It is important to note that, while the identified points may belong to the belts, they could also represent other objects, such as cables and protective barriers running alongside the conveyors. To improve the performance of the subsequent clustering stage, points classified as belonging to elongated objects are removed from the idler candidates.
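The following sketch illustrates the construction of the top-view histogram $H_{XY}$ and the local-maximum test from the condition above. The grid structure, the assumed region of interest (x in [0, x_max), y in [-y_max, y_max)), and all names are illustrative assumptions rather than the actual implementation; boundary handling and the front-view histogram $H_{YZ}$ are analogous.

#include <cmath>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Top-view density histogram H_XY with square cells of size a.
struct DensityGrid {
  int nx = 0, ny = 0;
  std::vector<int> counts;                       // row-major: counts[i * ny + j]
  int &at(int i, int j) { return counts[i * ny + j]; }
};

DensityGrid buildHistogramXY(const pcl::PointCloud<pcl::PointXYZ> &cloud,
                             float a, float x_max, float y_max) {
  DensityGrid h;
  h.nx = static_cast<int>(std::ceil(x_max / a));
  h.ny = static_cast<int>(std::ceil(2.0f * y_max / a));
  h.counts.assign(static_cast<std::size_t>(h.nx) * h.ny, 0);
  for (const auto &p : cloud) {
    int i = static_cast<int>(std::floor(p.x / a));
    int j = static_cast<int>(std::floor((p.y + y_max) / a));  // shift y to start at 0
    if (i >= 0 && i < h.nx && j >= 0 && j < h.ny) ++h.at(i, j);
  }
  return h;
}

// A cell is an idler candidate if it is a local maximum along X, exceeding the
// cells b_r bins away by at least b_i points (the condition given above).
bool isIdlerCandidateCell(DensityGrid &h, int i, int j, int b_r, int b_i) {
  if (i - b_r < 0 || i + b_r >= h.nx) return false;
  return (h.at(i, j) - h.at(i + b_r, j) >= b_i) &&
         (h.at(i, j) - h.at(i - b_r, j) >= b_i);
}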

2.5. Clustering

A drawback of the histogram approach is that a cell border may occasionally pass through an idler structure. Consequently, some points belonging to the idlers might not be included among the idler candidates during the density segmentation stage. To address this issue, the clustering stage was introduced. This stage uses a clustering algorithm to recover points missing from the idler candidates identified during segmentation. A specific feature of this stage is that the objects forming the clusters have irregular shapes; therefore, assignment to a cluster is based on the distance from the point in question to the points already classified as part of an idler frame. The initial cluster cores are formed by the idler candidates from the previous stage. The algorithm used for this purpose is Euclidean clustering from the PCL library. The algorithm was proposed in [45] and is presented in Listing 1.
The behavior of the algorithm can be modified by setting three parameters: the neighborhood distance $d_{th}$, and the minimum $c_{min}$ and maximum $c_{max}$ sizes of accepted clusters.
The points that passed the density filter become the idler frame candidates for Euclidean clustering. The remaining points are tested to determine whether their nearest neighbor belongs to a frame candidate and whether the distance to that neighbor is smaller than a predefined maximum. After processing all the points, the clusters are verified to contain a number of points within the predefined limits. The accepted clusters are passed to the next stage.
Listing 1. Euclidean clustering [45], PCL library implementation.
let C be an empty list of clusters and Q a list of points to be checked
for every point $p_i \in P$:
  add $p_i$ to Q
  for every point $p_i \in Q$ do:
    search for the set $P^k_i$ of point neighbors of $p_i$ in a sphere with radius $r < d_{th}$
    for every neighbor $p^k_i \in P^k_i$:
      check if the point has already been processed, and if not, add it to Q
  when the list of all points in Q has been processed:
    if the number of points in Q is within the limits $(c_{min}, c_{max})$:
      add Q to the list of clusters C
    reset Q to an empty list
STOP when all points $p_i \in P$ have been processed and are part of the list of point clusters
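For reference, this step can be invoked through the EuclideanClusterExtraction class of the PCL library, as sketched below with the parameter values reported later in Section 3.2 used as defaults. In the actual workflow the idler candidates act as cluster cores within the wider point set; here, for brevity, only a single candidate cloud is clustered, and the function name is illustrative.

#include <memory>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Group idler frame candidate points into Euclidean clusters; d_th, c_min, and
// c_max correspond to the parameters described above.
std::vector<pcl::PointIndices> clusterCandidates(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &candidates,
    double d_th = 0.5, int c_min = 20, int c_max = 500) {
  auto tree = std::make_shared<pcl::search::KdTree<pcl::PointXYZ>>();
  tree->setInputCloud(candidates);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(d_th);  // neighborhood radius d_th
  ec.setMinClusterSize(c_min);   // reject clusters that are too small
  ec.setMaxClusterSize(c_max);   // or too large
  ec.setSearchMethod(tree);
  ec.setInputCloud(candidates);

  std::vector<pcl::PointIndices> cluster_indices;  // one entry per accepted cluster
  ec.extract(cluster_indices);
  return cluster_indices;
}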

2.6. Object Detection and Position Estimation

At the final stage, the clusters representing idler frame candidates are verified with a binary classifier. A simple classifier based on the dimensions of a candidate cluster was used. A candidate is considered a valid idler if it passes all conditions of the cascade classifier:
  • The width of the cluster (along the Y axis) exceeds the minimum width $i_w$;
  • The highest point of the cluster is within the $i_{rh}$ tolerance of the idler height $i_h$, or it is below $i_h$ but was collected on the top plane of the sensor’s field of view;
  • The lowest point of the cluster is within the vertical angular resolution range from the ground, or it is above the ground but was collected on the bottom plane of the sensor’s field of view;
  • The center of the cluster is located in the vicinity of an elongated object of width above $b_w$ (possibly a belt), which was found on the 2D histogram in the density segmentation phase.
The clusters not conforming to all these conditions are designated as non-idlers.
In the results section, we verify the importance of the last condition by comparing the operation of the algorithm with and without considering the belt location.
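The cascade can be expressed as a sequence of early-exit checks. The sketch below assumes that a bounding-box summary of each cluster and flags indicating contact with the sensor's vertical field-of-view limits are already available; the structure and names are illustrative, and $i_{rh}$ is interpreted as a relative tolerance.

#include <cmath>

// Bounding-box summary of a candidate cluster (illustrative structure).
struct ClusterBox {
  float width_y;            // extent along the Y axis
  float z_min, z_max;       // lowest and highest points of the cluster
  bool touches_top_fov;     // cluster reaches the top plane of the sensor FOV
  bool touches_bottom_fov;  // cluster reaches the bottom plane of the sensor FOV
  bool near_belt;           // elongated object of width > b_w found nearby in the histogram
};

// Dimension-based cascade classifier: a cluster is accepted as an idler frame
// only if it passes every condition; otherwise it is rejected immediately.
bool isIdlerFrame(const ClusterBox &c, float i_w, float i_h, float i_rh,
                  float ground_tolerance) {
  if (c.width_y < i_w) return false;                           // too narrow
  bool top_ok = std::fabs(c.z_max - i_h) <= i_rh * i_h ||
                (c.z_max < i_h && c.touches_top_fov);           // cut off by the FOV
  if (!top_ok) return false;
  bool bottom_ok = c.z_min <= ground_tolerance ||
                   (c.z_min > 0.0f && c.touches_bottom_fov);    // cut off by the FOV
  if (!bottom_ok) return false;
  return c.near_belt;                                           // must lie next to a belt
}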

3. Results

3.1. Experiment Setup

The proposed algorithm was verified using data recorded in an industrial hall with mineral resource storage. The dataset allowed testing the solution in a realistic scenario of the actual industrial site including a moving belt conveyor. The data were collected previously for the purpose of developing the idler malfunction detection presented in [4,20]. The authors granted permission for the use of the recorded 3D LIDAR data in this study.
The experiment comprised two runs, designated as Path A and Path B. The scheme of the experiment location is depicted in Figure 3. Illustrative images of the robot path and the conveyors with marked idler supports to be detected are presented in Figure 4.
Path A passed between two conveyors 0.7 m in height. Path B was traversed in the opposite direction, with the same conveyor on the left as in Path A and no conveyor on the right. Each path was about 150 m long.
The mobile platform used for data collection is shown in Figure 5. The platform has a skid-steered kinematic system, i.e., four independently driven wheels on fixed axles. This type of mobile platform is characterized by wheel slippage at turns; therefore, the localization of the robot with odometry, which is common for differential drive or car-like kinematic structures, is subject to significant error and is unreliable without additional sensors like inertial measurement units (IMUs) or global localization modules such as GPS. The experimental platform was not equipped with any of these sensors, which further supports the need for reliable localization using the available data.
The platform carried a measurement module containing several sensors: an RGB and depth camera, a thermal imaging camera, and 2D and 3D LIDAR sensors. However, for the needs of this study, only the data from the 3D LIDAR sensor were used. The 3D LIDAR sensor used in the experiment was a Velodyne Puck. It belongs to a popular type of spinning LIDAR sensors that use the time-of-flight (ToF) method to measure distance. The sensor uses a spinning array of 16 pairs of infrared (IR) lasers and detectors. Each pair measures distance within a full angle on a single cone’s surface, with measurement surfaces vertically separated by an angular distance of 2°. As a result, the field of view of the sensor is 30° vertically (from −15° to 15°) and 360° horizontally, with angular resolutions of 2° and 0.1–0.4° (depending on settings), respectively, and a measurement range of up to 100 m. The sensor returns up to 300,000 points per second in the Single Return Mode that was used in the experiment. The transformations of measured distances to point cloud coordinates were performed by the ROS driver of the LIDAR sensor.
The sensor was mounted at a height of 1.45 m above the ground, above the conveyor belts, and tilted downward at approximately 26°. Research on the thermal stability of the sensor [46] confirmed that the distance measurement error at a range of 1.5 m is below 2 cm.
The dataset used in the tests was collected during a single traverse by the robot. The processing relies solely on the LIDAR data. The collected set contains Velodyne scans acquired every 100 ms. The traverse of Path A contains 150 s of recording and Path B 94 s.
To provide the ground truth for evaluating the algorithm, the supports were labeled by a human operator. During the process, the point cloud from each scan was inspected using 3D visualization. For each observed support, the operator marked a set of points perceived as belonging to that support. Then, in postprocessing, the mean points and bounding boxes for each support were calculated. To utilize prior knowledge, the Z coordinates of the bounding boxes were adjusted to span from the ground up to 70 cm. As a result, a reference set was created that associates each timestamped scan with a list of human-labeled supports, described by their center, bounding box, and a set of points from the point cloud belonging to that support. It should be noted that the idler frames in the dataset consist of a pair of supports on both sides of the belt, but due to the LIDAR sensor’s field of view and occlusions, in many cases, only one of two supports is visible and labeled.

3.2. Single-Scan Point Cloud Processing

This section presents the results of processing of a single scan. The stages are presented and commented on separately to allow the evaluation of the influence of each stage on the final results.
The cropping and alignment stage transforms an acquired point cloud through the four processing steps. Figure 6 illustrates how the processed point cloud changes through the elimination of points at each step.
Figure 6a presents the points obtained from the scanner. The red rectangle indicates the actual area to be considered, where the robot path and the conveyors are located. It may be noticed that the long range of the scanner provides a huge number of points that are irrelevant to idler frame detection; even distant walls and the ceiling of the hall can be observed. For that reason, in the first step, the distant points are removed. The threshold distance for removal was 5 m. Figure 6b shows the remaining points. The red markings indicate the location of the idlers that are to be found. In the next step, the RANSAC algorithm is used to detect the ground plane. The distance threshold $d_{gmin}$ was set to 0.05 m. The result of ground detection is the set of points shown in red in Figure 6c. These points, which lie within $d_{gmin}$ of the ground plane, are removed. After removing the ground points and aligning the ground plane with the horizontal plane, the processed point cloud takes the form shown in Figure 6d, with colors indicating the Z coordinate. The region of interest was defined as l = 5 m long, h = 1.5 m high, and w = 4 m wide.
One of the main purposes of this stage is a reduction in the number of points to process. To evaluate how many points are eliminated in each step of this stage, a scan-by-scan analysis was performed for both paths. The results for Path A are presented in Table 1 and for Path B in Table 2.
The tables present the minimum and maximum values encountered on each path, the average, and the standard deviation. The consecutive rows present the initial number of points in the data from the LIDAR and the number remaining after each filtering step. It may be noted that there is little difference between the two paths in the minimum, maximum, and average values. However, the standard deviation is approximately doubled for Path B. This corresponds to the more variable environment to the right of the robot along Path B (cf. Figure 4b).
Figure 7 presents the 2D histograms of points for the top and front projections for a cell size of $a = 0.1$ m. The red markings on the 2D histograms indicate the cells containing the idler frames and conveyor belts.
In the top projection, the idler cells are clearly visible as local maxima of the 2D histogram along the X axis. The parameters used to classify a histogram cell as containing an idler candidate were $b_r = 3$ and $b_i = 30$. However, in the front view, the maximum is occupied by the elongated objects, i.e., belts and barriers. To obtain the cells representing elongated objects, a threshold of $b_{th} = 30$ was applied.
Figure 8 presents the steps of density classification to obtain idler frame candidates. The points selected from the top projection segmentation are highlighted in Figure 8a. Figure 8b displays the elongated objects obtained from the front projection. Finally, the result of the set difference operation between the points from the two projections is presented in Figure 8c.
Figure 9 illustrates the results of the Euclidean clustering. The parameters of the clustering function were set as follows: distance tolerance $d_{th} = 0.5$ m, minimum cluster size $c_{min} = 20$, and maximum cluster size $c_{max} = 500$.
Figure 10 presents examples of the results of the final classification. Clusters with a width of more than $i_w = 0.1$ m and a height within $i_{rh} = 10\%$ tolerance of $i_h = 0.65$ m (bigger frames) or $i_h = 0.55$ m (smaller frames) were classified as idlers and marked in green. The remainder were considered non-idlers and marked in red.
It may be concluded that the green marking was correctly assigned to idler frames. Also, the object to the left marked in red was correctly marked as a non-idler. However, some other idler frames were not classified correctly as such. This happened both for the distant one in the left image and for the close idler in the right image. Closer investigation has shown that the height-based detector prohibits the positive classification of the frames which are not completely visible. This happens often in the case of distant frames as they may be occluded by closer objects, but it also happens when the idler frame is close and it is not visible in full height due to the limited vertical angle of view of the LIDAR.

3.3. Detection Along the Paths

The detection results were evaluated by comparing them with manually labeled data. Matching was based on the distance between the mean points of the objects projected onto the XY plane. During the evaluation process, for each LIDAR scan, the objects detected by the algorithm and those labeled by a human were categorized into three classes: correct, missed, and wrong. Objects detected by the algorithm that had a corresponding object in the human-labeled set within a distance of 25 cm were classified as correct (true positives, TP). Human-labeled objects that did not have corresponding objects detected by the algorithm were marked as missed (false negatives, FN). Finally, objects detected by the algorithm that did not have a match in the human-labeled set were marked as wrong detections (false positives, FP). It is expected that the correctness of the detection depends on the volume of the object present in the sensor’s field of view and that it will increase with the number of LIDAR points corresponding to the idlers.
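A sketch of this per-scan matching is given below, assuming a greedy one-to-one assignment of detections to labels within the 25 cm radius; the data structures and names are illustrative.

#include <cmath>
#include <cstddef>
#include <vector>

// 2D position of an object's mean point projected onto the XY plane (illustrative).
struct ObjectXY { float x, y; };

struct EvalCounts { int tp = 0, fn = 0, fp = 0; };

// Match detections to human labels within a single scan: a detection within
// match_radius (25 cm) of an unmatched labeled support counts as correct (TP),
// unmatched labels as missed (FN), and unmatched detections as wrong (FP).
EvalCounts evaluateScan(const std::vector<ObjectXY> &detections,
                        const std::vector<ObjectXY> &labels,
                        float match_radius = 0.25f) {
  EvalCounts counts;
  std::vector<bool> label_used(labels.size(), false);
  for (const auto &det : detections) {
    bool matched = false;
    for (std::size_t i = 0; i < labels.size(); ++i) {
      float dx = det.x - labels[i].x, dy = det.y - labels[i].y;
      if (!label_used[i] && std::hypot(dx, dy) <= match_radius) {
        label_used[i] = true;
        matched = true;
        break;
      }
    }
    matched ? ++counts.tp : ++counts.fp;
  }
  for (bool used : label_used)
    if (!used) ++counts.fn;
  return counts;
}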
Positions of the detected objects from each class in the robot’s local coordinates for both paths are presented in Figure 11. The additional lines in the figure indicate the borders of areas with the same theoretical number of active LIDAR channels. This number is defined as the number of LIDAR measurement surfaces that can intersect a vertical line between 0 and 0.7 m, assuming no occlusions are present. This corresponds to the maximum theoretical number of LIDAR channels capable of registering the idler frame at a given location. It may be observed that correct detections appear in the first row of idler supports, as the supports in the second row are occluded by the belts and other closer objects. Also, as expected, better results are observed in areas with a higher number of active LIDAR channels.
To determine the area of the effective operation of the algorithm, an evaluation was performed for different regions of interest. The region of interest, denoted as X+, is defined as the area where the theoretical number of active LIDAR channels is greater than or equal to X and bounded by horizontal lines between the robot and the middle of the belt. A channel is considered theoretically active if the surface of the cone for this channel intersects the object, even if due to occlusions and other factors no actual measurement of the object for that channel is registered in the scan. A summary of the results for various regions of interest is shown in Figure 12.
It may be observed that the areas with four or fewer active channels have only a minimal influence on the results. Starting from regions of interest restricted to five or more active channels, the number of objects decreases, but the percentage of correct detections grows. The biggest change is observed when limiting the output to areas with six or more active channels. Further restrictions have a smaller influence, especially for Path A, and at the same time substantially decrease the number of objects in the region of interest. Figure 13 presents the distribution of the detection results for the region of interest limited to six or more active channels, clipped to the first row of supports, i.e., between the robot path and the conveyor’s belt.
A comparison of the detection results in the whole scan and in the limited region is presented in Table 3. It may be noted that the accuracy within the 6+ region is above 83% for Path A and 85% for Path B. In both cases, the main source of failure is missed objects, which exceed 10% of the total number of objects on both paths. The detection of objects which are not actual supports is low, below 4% for Path A and below 1% for Path B.
Apart from the summary of results, the detection quality over time can also be tracked. Figure 14 shows X locations of the objects within the region of interest, with color indicating the correctness of detection. For clearer visualization, the plots for objects on the left and right sides of the robot were separated.
The track corresponding to a single idler support begins when it is first registered, at a distance of over 3 m ahead of the robot. As time progresses, this distance decreases until the X coordinate of the support is around 1 m, at which point the support is no longer visible.
It can be observed that, in most cases, correct and missed detections occur for the same support at different time instances. Most supports are correctly detected for the majority of the time, which allows us to conclude that each support can be detected for at least a part of the time it is visible. An analysis of the plots also reveals issues with the algorithm that require further investigation. Groups of false positives marked by red dots (visible, for example, in Figure 14a between 45 and 50 s and after 135 s, or in the lower part of Figure 11) indicate that objects with shapes similar to the supports may be incorrectly classified. However, there are also cases where the algorithm detects supports in correct locations that the human operator failed to identify, as shown in Figure 14a between 115 and 120 s.

3.4. Processing Time

The processing software was run on a laptop with an Intel Core i5-10300H CPU at 2.50 GHz and 32 GB of RAM. The processing time divided by stage for all scans along Path A and statistics of the processing time in each stage are shown in Figure 15. The aggregated values, including the minimum, maximum, and average time, and the standard deviation, are collected in Table 4. Analogously, the results for Path B are presented in Figure 16 and Table 5.
While traversing Path A, the total processing time does not exceed 32 ms, which is significantly shorter than the 100 ms scan acquisition period of the LIDAR sensor used. The average processing time of less than 14 ms allows for real-time processing of the scans. The performance on Path B is slightly lower, with more scans requiring over 20 ms of processing and a maximum time of 75 ms. However, even at more than twice the time of Path A, the maximum processing time remains well below the point cloud acquisition period.
An analysis of the duration of each stage of the process shows that the cropping and alignment stage contributes most to the total processing time. Comparing the graphs with camera images helps to identify situations where the processing time increases. Two types of such situations were observed: pillars close to the robot’s path (visible in the background of Figure 4b) and mounds of material encroaching on the robot’s path. Due to the smooth shapes of the material mounds, the RANSAC algorithm required more iterations to detect the ground, increasing the processing time.

4. Discussion

The results demonstrate that the proposed workflow based on density segmentation allows detection of the idler frames by a mobile robot in real time. In the experiments, two main aspects were considered: the accuracy of the method and the scan processing time.
The experiments indicated how the chances of detection are influenced by the number of LIDAR planes crossing the frame and by occlusion by other objects, particularly the belts of the conveyors. It was observed that, when the number of LIDAR channels intersecting the frame was four or fewer, the accuracy of detection was relatively low, with the number of missed frames exceeding 25% and 35%, depending on the test path. Starting from five active channels, the accuracy improves, with the largest change visible for six channels. However, a higher number of active channels is associated with a smaller detection area. The performed tests allowed a compromise to be found between the accuracy of the detection and its range, namely the area with six active channels. This finding may be used practically in the design of inspection robots, providing guidelines for the selection of the sensors and their placement on the robot to optimize the area of detection.
The performed tests also provided an evaluation of the influence of each stage of the workflow on the processing time. The major contribution to the total time comes from the cropping and alignment phase. This is caused mainly by the RANSAC algorithm, which has a complexity of $O(kN)$, where $k$ is the number of iterations and $N$ is the number of points. To alleviate that, the RANSAC algorithm is preceded by clipping the region of interest, reducing the number of points by almost half. The second largest share of time belongs to the density segmentation phase, contributing about 30% of the total processing time, with a complexity linearly dependent on the number of points and the number of cells in the 2D histograms. These observations indicate that the highest potential for a further reduction in processing time lies in the method of ground detection and removal. Future works should focus on improving this step.
The approach proposed in this paper may be an alternative to other methods of object detection in the industrial environment known from the literature. While we are not aware of available benchmark scenarios that might be used for a direct comparison, we may draw expectations from similar research. The most popular alternative for object detection uses computer vision and includes methods based on machine learning, such as YOLO (You Only Look Once), as well as traditional algorithms such as edge detection. An advantage of these methods is the availability of the sensors and the rapid development of the methods, especially those based on machine learning. The application of YOLO and AFC (adaptive frame control) to general detection in [47] has shown that with a similar processing unit the method can achieve a 30 fps processing frequency; however, the overall latency from image acquisition to detection result exceeded 170 ms even in the best cases for images of resolution 416 × 416. A similar latency was obtained for a process that combined edge detection and convolutional neural networks (CNNs) applied to the detection of conveyor belt deviation in [17]. Additionally, vision-based systems are more vulnerable to insufficient and varying lighting. Compared with vision-based systems, the proposed framework is expected to be more reliable in the variable conditions of industrial sites.

5. Conclusions

The results obtained from the experiments in the industrial plant have shown that combining density segmentation in multiple directions allows an effective detection of conveyor idlers. In general, this approach may also be applied to other non-elongated but periodically appearing objects. Density segmentation, as a single stage of the overall process, is fast and does not require significant computational power, making it suitable for mobile robots.
The overall performance of the idler frame detection process is reliable, with more than 83% correct detections in the clipped regions and the ability to detect all idlers for at least part of the time they are visible.
The obtained efficiency of the algorithm confirms the validity of our approach. As a result, we plan two directions of further development: the integration of the proposed workflow with the navigation system of a mobile robot and further improvement of the workflow.
The future work toward incorporating the workflow into a mobile robot navigation system includes integration with a simultaneous localization and mapping (SLAM) algorithm, such as the particle filter-based FastSLAM, using detected idler frames as map features. Reliable navigation in rough terrain will allow further integration with the measurement unit dedicated to idler roller monitoring.
The short processing time, even in less favorable areas like Path B, provides room for further improvements to the algorithm by incorporating additional processing steps. These enhancements could include combining data from multiple steps, applying filtering techniques such as an extended Kalman filter or similar methods, and fusion with data from other sensors.
Other areas of possible improvement include the detection range and the algorithm’s processing time. The experiments showed that efficient performance is limited to regions where idler frames are within the theoretical field of view of at least six LIDAR channels. Expanding this area could be achieved by using a LIDAR with a higher number of channels or tuning the placement of the sensor. Although the processing time is well below the required threshold, data on the processing duration indicate that the main contribution to the overall time comes from ground detection in the cropping and alignment stage. It is expected that optimization of that stage, particularly by tuning the RANSAC algorithm, can further reduce processing time.
Future work will focus on improving classification by dynamically tracking idlers across consecutive scans, incorporating 3D shape matching, and sensor fusion with image processing from an RGB camera.

Author Contributions

Conceptualization, J.J.; methodology, J.D. and J.J.; software, J.D.; validation, J.D. and J.J.; writing—original draft preparation, J.J.; writing—review and editing, J.J. and J.D.; visualization, J.D.; supervision, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the NDA agreement with the processing plant at which the data were collected.

Acknowledgments

The authors would like to thank Professor Radosław Zimroz and his team from the Faculty of Geoengineering, Mining and Geology for providing the access to the data used for algorithm validation in the results section.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Thieme, K.R.; Liu, X. Economic Justification of Automated Idler Roll Maintenance Applications in Large-Scale Belt Conveyor Systems; Delft University of Technology: Delft, The Netherlands, 2014.
  2. Accidentology of Conveyors, Elevators, and Transporters; Lessons from the ARIA Database; Ministry for an Ecological and Solidarity Transition: Lyon, France, 2019.
  3. Nascimento, R.; Carvalho, R.; Delabrida, S.; Bianchi, A.G.; Oliveira, R.A.R.; Garcia, L.G. An integrated inspection system for belt conveyor rollers advancing in an enterprise architecture. In Proceedings of the ICEIS 2017—Proceedings of the 19th International Conference on Enterprise Information Systems, SciTePress, Porto, Portugal, 26–29 April 2017; Volume 2, pp. 190–200.
  4. Siami, M.; Barszcz, T.; Wodecki, J.; Zimroz, R. Design of an Infrared Image Processing Pipeline for Robotic Inspection of Conveyor Systems in Opencast Mining Sites. Energies 2022, 15, 6771.
  5. Alharbi, F.; Luo, S.; Zhang, H.; Shaukat, K.; Yang, G.; Wheeler, C.A.; Chen, Z. A Brief Review of Acoustic and Vibration Signal-Based Fault Detection for Belt Conveyor Idlers Using Machine Learning Models. Sensors 2023, 23, 1902.
  6. Bortnowski, P.; Kawalec, W.; Król, R.; Ozdoba, M. Types and causes of damage to the conveyor belt—Review, classification and mutual relations. Eng. Fail. Anal. 2022, 140, 106520.
  7. Morales, A.S.; Aqueveque, P.; Henriquez, J.A.; Saavedra, F.; Wiechmann, E.P. A technology review of idler condition based monitoring systems for critical overland conveyors in open-pit mining applications. In Proceedings of the 2017 IEEE Industry Applications Society Annual Meeting, Cincinnati, OH, USA, 1–5 October 2017; pp. 1–8.
  8. Peng, C.; Li, Z.P.; Yang, M.; Fei, M.; Wang, Y. An audio-based intelligent fault diagnosis method for belt conveyor rollers in sand carrier. Control Eng. Pract. 2020, 105, 104650.
  9. Yang, M.; Peng, C.; Li, Z. An Audio-based Intelligent Fault Classification System for Belt Conveyor Rollers. In Proceedings of the Chinese Control Conference, CCC, Shanghai, China, 26–28 July 2021; pp. 4647–4652.
  10. Siami, M.; Barszcz, T.; Wodecki, J.; Zimroz, R. Semantic segmentation of thermal defects in belt conveyor idlers using thermal image augmentation and U-Net-based convolutional neural networks. Sci. Rep. 2024, 14, 5748.
  11. Chamorro, J.; Vallejo, L.; Maynard, C.; Guevara, S.; Solorio, J.A.; Soto, N.; Singh, K.V.; Bhate, U.; Kumar, G.V.R.; Garcia, J.; et al. Health monitoring of a conveyor belt system using machine vision and real-time sensor data. CIRP J. Manuf. Sci. Technol. 2022, 38, 38–50.
  12. Alharbi, F.; Luo, S. A Review of Fault Detecting Devices for Belt Conveyor Idlers. J. Mech. Eng. Sci. Technol. (JMEST) 2024, 8, 39.
  13. Hoff, H. Using Distributed Fibre Optic Sensors for Detecting Fires and Hot Rollers on Conveyor Belts. In Proceedings of the 2017 2nd International Conference for Fibre-optic and Photonic Sensors for Industrial and Safety Applications (OFSIS), Brisbane, Australia, 8–10 January 2017; pp. 70–76.
  14. Wijaya, H.; Rajeev, P.; Gad, E.; Vivekanantham, R. Automatic fault detection system for mining conveyor using distributed acoustic sensor. Measurement 2022, 187, 110330.
  15. Yasutomi, A.Y.; Enoki, H.; Yamaguchi, S.; Tamura, K. Inspection System for Automatic Measurement of Level Differences in Belt Conveyors Using Inertial Measurement Unit. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 6155–6161.
  16. Staab, H.; Botelho, E.; Lasko, D.T.; Shah, H.; Eakins, W.; Richter, U. A Robotic Vehicle System for Conveyor Inspection in Mining. In Proceedings of the 2019 IEEE International Conference on Mechatronics (ICM), Ilmenau, Germany, 18–20 March 2019; Volume 1, pp. 352–357.
  17. Liu, Y.; Miao, C.; Li, X.; Xu, G. Research on Deviation Detection of Belt Conveyor Based on Inspection Robot and Deep Learning. Complexity 2021, 2021, 3734560.
  18. Liu, Y.; Miao, C.; Li, X.; Ji, J.; Meng, D. Research on the fault analysis method of belt conveyor idlers based on sound and thermal infrared image features. Measurement 2021, 186, 110177.
  19. Lösch, R.; Grehl, S.; Donner, M.; Buhl, C.; Jung, B. Design of an Autonomous Robot for Mapping, Navigation, and Manipulation in Underground Mines. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1407–1412.
  20. Shiri, H.; Wodecki, J.; Ziętek, B.; Zimroz, R. Inspection Robotic UGV Platform and the Procedure for an Acoustic Signal-Based Fault Detection in Belt Conveyor Idler. Energies 2021, 14, 7646.
  21. Kim, H.; Choi, Y. Development of Autonomous Driving Patrol Robot for Improving Underground Mine Safety. Appl. Sci. 2023, 13, 3717.
  22. Skoczylas, A.; Stefaniak, P.; Anufriiev, S.; Jachnik, B. Belt Conveyors Rollers Diagnostics Based on Acoustic Signal Collected Using Autonomous Legged Inspection Robot. Appl. Sci. 2021, 11, 2299.
  23. Rocha, F.; Garcia, G.; Pereira, R.F.; Faria, H.D.; Silva, T.H.; Andrade, R.H.; Barbosa, E.S.; Almeida, A.; Cruz, E.; Andrade, W.; et al. ROSI: A Robotic System for Harsh Outdoor Industrial Inspection—System Design and Applications. J. Intell. Robot. Syst. Theory Appl. 2021, 103, 1–22.
  24. Carvalho, R.; Nascimento, R.; D’Angelo, T.; Delabrida, S.; Bianchi, A.G.C.; Oliveira, R.A.R.; Azpúrua, H.; Uzeda Garcia, L.G. A UAV-Based Framework for Semi-Automated Thermographic Inspection of Belt Conveyors in the Mining Industry. Sensors 2020, 20, 2243.
  25. Tatsch, C.; Bredu, J.A.; Covell, D.; Tulu, I.B.; Gu, Y. Rhino: An Autonomous Robot for Mapping Underground Mine Environments. In Proceedings of the 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Seattle, WA, USA, 27 June–1 July 2023; pp. 1166–1173.
  26. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomat. Nat. Hazards Risk 2021, 12, 2387–2429.
  27. Yang, T.; Hu, J.; Li, Y.; Zhao, C.; Sun, L.; Krajnik, T.; Yan, Z. 3D ToF LiDAR for Mobile Robotics in Harsh Environments: A Review. Unmanned Syst. 2024, 1–23.
  28. Ghasemieh, A.; Kashef, R. 3D object detection for autonomous driving: Methods, models, sensors, data, and challenges. Transp. Eng. 2022, 8, 100115.
  29. Tang, Y.; He, H.; Wang, Y.; Mao, Z.; Wang, H. Multi-modality 3D object detection in autonomous driving: A review. Neurocomputing 2023, 553, 126587.
  30. Baek, J.; Park, J.; Cho, S.; Lee, C. 3D Global Localization in the Underground Mine Environment Using Mobile LiDAR Mapping and Point Cloud Registration. Sensors 2022, 22, 2873.
  31. Wang, W.; Wang, B.; Zhao, P.; Chen, C.; Clark, R.; Yang, B.; Markham, A.; Trigoni, N. PointLoc: Deep Pose Regressor for LiDAR Point Cloud Localization. IEEE Sens. J. 2021, 22, 959–968.
  32. Sun, L.; Adolfsson, D.; Magnusson, M.; Andreasson, H.; Posner, I.; Duckett, T. Localising Faster: Efficient and precise lidar-based robot localisation in large-scale environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4386–4392.
  33. Aschoff, T.; Spiecker, H. Algorithms for the automatic detection of trees. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 36, 71–75.
  34. Bienert, A.; Scheller, S.; Keane, E.; Mohan, F.; Nugent, C. Tree detection and diameter estimations by analysis of forest terrestrial laserscanner point clouds. In Proceedings of the ISPRS Workshop on Laser Scanning 2007 and SilviLaser, Espoo, Finland, 12–14 September 2007; pp. 50–55.
  35. Bienert, A.; Georgi, L.; Kunz, M.; von Oheimb, G.; Maas, H.G. Automatic extraction and measurement of individual trees from mobile laser scanning point clouds of forests. Ann. Bot. 2021, 128, 787–804.
  36. Rodríguez-Cuenca, B.; García-Cortés, S.; Ordóñez, C.; Alonso, M.C. Automatic Detection and Classification of Pole-Like Objects in Urban Point Cloud Data Using an Anomaly Detection Algorithm. Remote Sens. 2015, 7, 12680–12703.
  37. Lu, R.; Brilakis, I.; Middleton, C.R. Detection of Structural Components in Point Clouds of Existing RC Bridges. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 191–212. [Google Scholar] [CrossRef]
  38. Documentation of ROS2 Humble Hawksbill. Available online: https://docs.ros.org/en/humble/index.html (accessed on 8 November 2024).
  39. Ubuntu 22.04 LTS. Available online: https://ubuntu.com/blog/tag/ubuntu-22-04 (accessed on 8 November 2024).
  40. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  41. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  42. Miądlicki, K.; Saków, M. LiDAR Based System for Tracking Loader Crane Operator. In Advances in Manufacturing II, MANUFACTURING 2019, Poznań, Poland, 19–22 May 2019; Springer: Cham, Switzerland, 2019; pp. 406–421. [Google Scholar] [CrossRef]
  43. Martínez-Otzeta, J.M.; Rodríguez-Moreno, I.; Mendialdua, I.; Sierra, B. RANSAC for Robotic Applications: A Survey. Sensors 2022, 23, 327. [Google Scholar] [CrossRef]
  44. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar] [CrossRef]
  45. Rusu, R.B. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. Ph.D. Thesis, Computer Science Department, Technische Universitaet Muenchen, Munich, Germany, 2009. [Google Scholar]
  46. Chan, T.O.; Cosandier, D.; Durgham, K.; Lichti, D.D.; Roesler, G.; Al-Durgham, K. Range scale-factor calibration of the Velodyne VLP-16 LIDAR system for position tracking applications. In Proceedings of the 11th International Conference on Mobile Mapping Technology (MMT 2019), Shenzhen, China, 6–8 May 2019. [Google Scholar]
  47. Lee, J.; Hwang, K.i. YOLO with adaptive frame control for real-time object detection applications. Multimed. Tools Appl. 2022, 81, 36375–36396. [Google Scholar] [CrossRef]
Figure 1. Idler supports of typical belt conveyors: (a) [4], (b) [5].
Figure 2. Activity diagram of point cloud processing.
Figure 3. A scheme of the experiment location with marked robot path segments A and B.
Figure 4. Images of the experiment location. (a) Path A. (b) Path B. Green rectangles mark the idlers’ supports to be detected.
Figure 5. Mobile platform with the sensor module at the experiment site [4].
Figure 6. Transformation of a point cloud in the preprocessing stage. (a) Original image from the LIDAR sensor. The red rectangle indicates the area with conveyors. (b) Point cloud with distant points clipped. The boxes show the locations of idler supports. (c) The results of the RANSAC algorithm: ground points marked in red. (d) Aligned point cloud with the ground removed.
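For readers who want to prototype the preprocessing stage shown in Figure 6, the sketch below (Python, not the authors' implementation) clips points beyond the 5 m range used in Tables 1 and 2 and removes the dominant plane found by RANSAC [40]. The RANSAC tolerance, the iteration count, and the synthetic test cloud are assumed values, and the alignment step of panel (d) is omitted for brevity.

```python
import numpy as np

def clip_range(points: np.ndarray, max_range: float = 5.0) -> np.ndarray:
    """Keep only points within max_range metres of the sensor origin."""
    return points[np.linalg.norm(points, axis=1) <= max_range]

def ransac_ground(points: np.ndarray, n_iter: int = 200, tol: float = 0.05,
                  seed: int = 0) -> np.ndarray:
    """Boolean inlier mask of the dominant plane found by a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        mask = np.abs((points - p1) @ normal) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic stand-in for one scan: a flat ground patch plus a few vertical posts.
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(-6, 6, (5000, 2)), rng.normal(0.0, 0.01, 5000)]
posts = np.c_[rng.normal(2.0, 0.05, (300, 2)), rng.uniform(0.0, 0.6, 300)]
cloud = clip_range(np.vstack([ground, posts]))   # distant points clipped at 5 m
ground_mask = ransac_ground(cloud)               # plane inliers, cf. Figure 6c
objects = cloud[~ground_mask]                    # point cloud with the ground removed
```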
Figure 7. Two-dimensional histograms for a single scan. (a) Projection onto the horizontal plane H_XY with manually marked support locations. (b) Projection onto the front plane H_YZ with elongated objects marked.
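The projections of Figure 7 amount to counting points per cell of a regular 2D grid. A minimal numpy version is given below; the 0.1 m bin size and the stand-in cloud are assumptions for illustration only.

```python
import numpy as np

def density_histogram(points, axes=(0, 1), bin_size=0.1):
    """2D occupancy histogram of the cloud projected onto the chosen pair of axes."""
    a, b = points[:, axes[0]], points[:, axes[1]]
    bins = [np.arange(a.min(), a.max() + bin_size, bin_size),
            np.arange(b.min(), b.max() + bin_size, bin_size)]
    hist, edges_a, edges_b = np.histogram2d(a, b, bins=bins)
    return hist, edges_a, edges_b

# Stand-in for a ground-free cloud (in practice, the output of the preprocessing step).
objects = np.random.default_rng(2).uniform(0.0, 4.0, (5000, 3))
h_xy, *_ = density_histogram(objects, axes=(0, 1))   # H_XY, top view (Figure 7a)
h_yz, *_ = density_histogram(objects, axes=(1, 2))   # H_YZ, front view (Figure 7b)
```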
Figure 8. The results of the density-based segmentation (the points of interest marked in blue-green). (a) Points from the XY segmentation. (b) Points from the YZ segmentation. (c) The set difference of points from the XY and YZ segmentations.
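One possible reading of the density-based segmentation in Figure 8, sketched under assumed thresholds: a point survives a projection when its histogram cell is sufficiently dense, and the two per-projection selections are then combined with a set operation (panel (c) corresponds to a set difference).

```python
import numpy as np

def dense_mask(points, axes, bin_size=0.1, min_count=30):
    """Mark points whose projected histogram cell holds at least min_count points."""
    a, b = points[:, axes[0]], points[:, axes[1]]
    ia = np.floor((a - a.min()) / bin_size).astype(int)
    ib = np.floor((b - b.min()) / bin_size).astype(int)
    counts = np.zeros((ia.max() + 1, ib.max() + 1), dtype=int)
    np.add.at(counts, (ia, ib), 1)                 # per-cell occupancy
    return counts[ia, ib] >= min_count             # look up each point's own cell

# Stand-in for a ground-free cloud.
objects = np.random.default_rng(3).uniform(0.0, 4.0, (5000, 3))
mask_xy = dense_mask(objects, (0, 1))              # dense columns in the top view
mask_yz = dense_mask(objects, (1, 2))              # elongated objects in the front view
candidates = objects[mask_xy & ~mask_yz]           # one possible set difference (Figure 8c)
```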
Figure 9. Clusters representing idler frame candidates.
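The grouping of the remaining points into idler frame candidates (Figure 9) and the dimension-based classifier mentioned in the abstract can be approximated as below. The 0.15 m cluster tolerance, the minimum cluster size, and the expected bounding-box extents are illustrative assumptions, not the calibrated values used in the paper.

```python
from collections import deque

import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.15, min_size=20):
    """Group points by flood fill over a k-d tree: neighbours closer than tol join a cluster."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        unvisited[seed] = False
        queue, members = deque([seed]), []
        while queue:
            i = queue.popleft()
            members.append(i)
            for j in tree.query_ball_point(points[i], tol):
                if unvisited[j]:
                    unvisited[j] = False
                    queue.append(j)
        if len(members) >= min_size:
            clusters.append(np.asarray(members))
    return clusters

def looks_like_frame(pts, size_min=(0.05, 0.3, 0.3), size_max=(0.5, 1.5, 1.2)):
    """Dimension-based check: bounding-box extents must fall within the expected range."""
    extent = pts.max(axis=0) - pts.min(axis=0)
    return bool(np.all(extent >= size_min) and np.all(extent <= size_max))

# Toy candidate points forming a single frame-like blob.
candidates = np.random.default_rng(4).normal([1.0, 2.0, 0.4], [0.05, 0.2, 0.15], (500, 3))
frames = [candidates[c] for c in euclidean_clusters(candidates) if looks_like_frame(candidates[c])]
positions = [f.mean(axis=0) for f in frames]       # one position estimate per detected frame
```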
Figure 10. Examples of detection.
Figure 11. Spatial distribution of detection results in robot local coordinates with unrestricted range. (a) Along Path A. (b) Along Path B.
Figure 12. Detection results in areas with various theoretical numbers of active LIDAR channels. (a) Along Path A. (b) Along Path B.
Figure 13. Spatial distribution of detection results in robot local coordinates in the region limited to 6 or more LIDAR planes. (a) Along Path A. (b) Along Path B.
Figure 14. Detection of the supports in time—X coordinate of the objects. (a) Path A—in the first row to the left of the robot. (b) Path A—in the first row to the right of the robot. (c) Path B—in the first row to the left of the robot.
Figure 15. Duration of processing stages along Path A. (a) For each scan along the trajectory. (b) Box plots of the duration of stages.
Figure 16. Duration of processing stages along Path B. (a) For each scan along the trajectory. (b) Box plots of the duration of stages.
Table 1. The number of points at each stage of point reduction along Path A.
Filtration Step | Min    | Max    | Mean   | Std Dev
Initial PC      | 21,350 | 22,266 | 21,880 | 205
5 m Filter      | 9170   | 14,395 | 11,133 | 865
Ground Filter   | 2327   | 7173   | 4862   | 618
Table 2. The number of points at each stage of point reduction along Path B.
Filtration Step | Min    | Max    | Mean   | Std Dev
Initial PC      | 21,453 | 22,341 | 21,906 | 225
5 m Filter      | 8062   | 15,612 | 11,309 | 1804
Ground Filter   | 2114   | 8085   | 4772   | 1307
Table 3. Detection results and accuracy in a whole scan and within the region of interest 6+.
Case   | Objects | Correct (TP)  | Missed (FN)   | Wrong (FP)
No restrictions
Path A | 11,337  | 4959 (43.7%)  | 5836 (51.5%)  | 542 (4.8%)
Path B | 4446    | 1388 (31.2%)  | 2106 (47.4%)  | 952 (21.4%)
Region of interest 6+
Path A | 3930    | 3289 (83.7%)  | 490 (12.5%)   | 151 (3.8%)
Path B | 1101    | 936 (85.0%)   | 157 (14.3%)   | 8 (0.7%)
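As a quick sanity check of Table 3, each percentage is simply the corresponding count divided by the number of objects in that case, for example for Path A within the 6+ region:

```python
# Path A, region of interest 6+ (Table 3): counts reported by the detector.
objects, tp, fn, fp = 3930, 3289, 490, 151
assert tp + fn + fp == objects
print(round(100 * tp / objects, 1),   # 83.7 % correct
      round(100 * fn / objects, 1),   # 12.5 % missed
      round(100 * fp / objects, 1))   # 3.8 % wrong
```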
Table 4. The durations at each stage of the point cloud processing along Path A.
Process Step                  | Min [ms] | Max [ms] | Mean [ms] | Std Dev [ms]
Cropping and alignment (CA)   | 3.905    | 26.689   | 8.709     | 2.979
Density segmentation (DS)     | 0.126    | 9.819    | 3.959     | 1.639
Support clustering (SCt)      | 0.021    | 4.564    | 0.801     | 0.467
Support classification (SCs)  | 0.000    | 0.019    | 0.009     | 0.002
Position estimation (PE)      | 0.000    | 0.022    | 0.009     | 0.003
Whole process (Total)         | 4.299    | 31.45    | 13.457    | 3.963
Table 5. The durations at each stage of the point cloud processing along Path B.
Process Step                  | Min [ms] | Max [ms] | Mean [ms] | Std Dev [ms]
Cropping and alignment (CA)   | 3.776    | 56.781   | 13.395    | 8.787
Density segmentation (DS)     | 0.197    | 27.711   | 6.798     | 3.559
Support clustering (SCt)      | 0.0      | 7.585    | 1.107     | 0.873
Support classification (SCs)  | 0.0      | 0.620    | 0.018     | 0.022
Position estimation (PE)      | 0.0      | 0.098    | 0.015     | 0.007
Whole process (Total)         | 6.258    | 74.940   | 21.334    | 10.658
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
