Article

Real-Time Wild Horse Crossing Event Detection Using Roadside LiDAR

Department of Civil and Environmental Engineering, University of Nevada, Reno, 1664 N. Virginia Str., Reno, NV 89557, USA
* Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3796; https://doi.org/10.3390/electronics13193796
Submission received: 30 July 2024 / Revised: 19 September 2024 / Accepted: 23 September 2024 / Published: 25 September 2024

Abstract

Wild horse crossing events are a major highway safety concern in rural and suburban areas in many states of the United States. This paper provides a practical, real-time approach to detecting wild horses crossing highways using 3D light detection and ranging (LiDAR) technology. The developed LiDAR data processing procedure includes background filtering, object clustering, object tracking, and object classification. Because the background information collected by LiDAR may change over time, an automatic background filtering method that updates the background in real time was developed so that the background can be subtracted effectively over long deployments. After standard object clustering and a fast object tracking method, eight features were extracted from each clustered object to describe it, including a vertical point distribution feature developed specifically to identify wild horses. The classification results of four classifiers were compared, and the experiments showed that the support vector machine (SVM) produced the most reliable results. Field test results showed that the developed method can accurately detect a wild horse within the detection range of the LiDAR. The wild horse crossing information can be used to warn drivers of the risk of wild horse–vehicle collisions in real time.

1. Introduction

Wildlife–vehicle collision (WVC) is a global issue. With the continuous increase in the number of WVCs in many countries in recent years, it has begun to receive more attention. In Nevada, vehicle collisions with wild and domestic animals result in an average of more than 500 reported collisions each year, costing the Nevada public more than USD 19 million in collision costs and an estimated 5032 wildlife deaths. Of these wildlife–vehicle collisions, those involving deer, cattle, and wild horses average over 340 per year and are responsible for the vast majority of severe wildlife–vehicle collisions [1]. Therefore, to improve traffic safety, reduce property damage caused by such accidents, and protect these animals, there is an urgent need to develop appropriate countermeasures. Some state departments of transportation have chosen to install warning signs at multiple incident locations to alert road users to wildlife crossing the street [2], but previous studies have shown that simply installing warning signs is ineffective in reducing wildlife–vehicle collisions [3]. In contrast, the rectangular rapid flashing beacon (RRFB) installed at crosswalks is widely known to increase drivers’ yield rates effectively [4]. The difference between the two types of equipment may explain this contradictory result: the RRFB flashes only when a pedestrian is actually crossing the street, whereas a wild horse warning sign either flashes all the time or never flashes, so it may influence only drivers passing the location for the first time.
However, if drivers often pass through this location and see the warning sign without wild horses appearing, the sign may have little effect. Therefore, a system must be developed that automatically detects wild horse crossing events so that the warning sign flashes only when a crossing event is detected, providing a better warning effect. However, compared with the automatic detection of other objects, such as pedestrians, the automatic detection of wildlife is more complicated. One reason detecting pedestrians who need to cross the street is relatively simple is that only the crosswalk and its surrounding area need to be monitored to activate the RRFB. To detect wildlife crossing events automatically, the required detection range is extensive, as we cannot predict where and in what direction wildlife will cross the street. Furthermore, the difference between wildlife, especially wild horses, and some vehicles is significantly smaller than the difference between pedestrians and vehicles, making the detection of wild horse crossing events more challenging.
Many methods and techniques have been used to detect wildlife crossing events automatically. For example, many papers have used cameras to detect wildlife automatically, thanks to the recent rapid development of image processing methods [5,6,7], but cameras are susceptible to changes in lighting, making it difficult to identify targets at night [8]. Meanwhile, the detection of wildlife crossing events requires a much more extensive range than other image processing tasks, and it is difficult for a camera to obtain accurate distance information, making the identification algorithm very difficult. To address the camera’s sensitivity to lighting, some scholars have proposed using thermal cameras to identify wildlife. A thermal camera senses the infrared radiation emitted by an object, which does solve the lighting problem. However, the thermal camera still fails to solve the detection range problem: experimental results from [9] show that when the distance exceeds 20 m, it is difficult for a thermal camera to detect the target effectively. Radar is the most widely used technique for detecting wildlife crossing events in practice. Viani et al. [10] used a Doppler radar to detect deer crossing streets and claimed that the error rate was below the required level while identifying targets up to 16 m away with a response time of 1 s. However, the radar’s insensitivity to tangential movement and its limited detection distance lead to relatively homogeneous application scenarios. At the same time, they used only speed as the feature for detecting animals, which led them to misidentify the vibration of leaves as animals. There are also methods that detect the locations of animals through GPS, radio waves, etc. [11,12], by attaching special equipment to the animals. However, this type of method is impractical, as it is not only time-consuming but also impossible to guarantee that all animals can be tracked.
LiDAR is a sensor technology that has emerged in recent years. Compared to conventional sensors, LiDAR is virtually unaffected by changes in light and has a significantly greater effective detection range than other sensors. It creates a point cloud by rotating laser beams and receiving the returns from objects within range through a receiver, producing a large amount of information. Recent advancements in point cloud object detection have led to the development of both single-stage and two-stage detectors. PointPillars [13] is a single-stage method that transforms point clouds into sparse pseudo-image representations using vertical pillars, allowing for efficient object detection through 2D convolutional layers. This method has proven effective for real-time applications due to its low computational complexity. Another significant approach is VoxelNet [14], which converts raw point clouds into voxel grids and applies 3D convolutional layers to extract spatial features, leading to improved accuracy in detecting objects in 3D space. On the two-stage detection side, PointRCNN [15] generates 3D proposals directly from point clouds in a bottom-up manner. By using semantic segmentation to validate points and regress 3D bounding boxes, PointRCNN significantly improves detection accuracy, particularly for complex scenes. These methods highlight diverse approaches to leveraging point clouds for accurate and efficient 3D object detection. Ref. [16] has also explored the potential of using large language models (LLMs) for 3D object detection tasks. There has been much research using onboard LiDAR for various tasks [17], but not enough research has been carried out using roadside LiDAR. The authors of [18] evaluated roadside LiDAR and a vision-based model for trajectory extraction and reported that roadside LiDAR performs better. The authors of [19] used roadside LiDAR to classify road users. The authors of [20] used roadside LiDAR for traffic statistics, and the authors of [21] used it to identify vehicle–pedestrian near-collision events. These studies demonstrate the potential of roadside LiDAR for identifying various types of objects. Therefore, this paper considers the use of roadside LiDAR for the automatic detection of wildlife crossing events. It uses wild horses, which are more difficult to detect than many other road users, as an example and provides a system for the real-time detection of wild horse crossing events. At the same time, due to the limited availability of annotated point cloud data for wild horses, popular deep learning methods may not recognize the target effectively, so this paper adopts traditional machine learning approaches to address the detection task. It also summarizes the possible drawbacks of this system and future research directions.

2. Methods

Automatic wild horse crossing detection requires extracting the points of the object from the raw LiDAR data and distinguishing and tracking wild horses from all objects. In this paper, a complete procedure is developed, and the whole process is shown in Figure 1, including background filtering, object clustering, object tracking, and object classification. The individual steps are described in detail below.

2.1. Background Filtering

Background filtering is the first step in detecting wild horses. Target tracking requires background filtering to remove the background completely, while accurate object classification requires it to retain the information about the object of interest so that correct and valid features can be extracted. Unlike roadside LiDAR data collected for urban traffic, roadside LiDAR used to detect wild horses is often set up in rural areas and therefore faces relatively complex background information (including bushes, rocks, woods, ditches, etc.). Moreover, due to wind, sensor vibrations, or even the temporal stability of the LiDAR itself, the LiDAR may not report exactly the same background information at the same location over time. These two factors make background filtering more difficult for roadside LiDAR data used to detect wild horses. Some previous methods [22,23] failed to take the background drift of LiDAR into account, so their background filtering results become increasingly poor over long periods. Some scholars [24] have recently started to consider these problems, but they assume that the background only fluctuates within a small range, whereas our data show that the background can shift significantly within a few days or even a single day. Figure 2 shows the distance information collected by the LiDAR over a week with fixed vertical and horizontal angles. Based on the changes in distance shown in the figure, it can be inferred that the variation in distance is positively correlated with changes in lighting conditions. At a detection distance of around 28 m, the difference in distance can reach 1 m within a day, and this difference increases with distance. The other information collected by the LiDAR, including the values along the x, y, and z coordinates, is derived from the distance and the detection angle, so this information also changes when the detected distance changes, which confirms that the background changes over time. Human-induced changes in the background (e.g., the installation of signs) must also be considered. Therefore, for roadside LiDAR that detects wild horse crossing events over long periods, the background information needs to be updated in real time to ensure that background filtering remains valid.
Firstly, in our method, we divide the entire detection area into a grid of square subspaces and take the road, the road shoulder, and the surrounding area as the region of interest. Because our goal is to detect wild horse crossing events in order to improve road safety and protect wild horses, we are not interested in wild horse activity in other areas. Within this region, the background is mainly the road or the ground itself, so once the ground height of each subspace is known in the LiDAR data, the background can be filtered directly by height.
After defining the region of interest, we manually select a period of time during which essentially no objects appear within the detection range and use it to generate the initial background. Specifically, we collect the z-values (the height in the LiDAR data) of all points occurring in each subspace during this time and select the 95th-percentile value as the ground height of that subspace, which yields the initial background file. This background is suitable for background filtering at the beginning, but if the background file were kept unchanged, the background filtering would degrade over time, so the background should be updated in real time.
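As an illustration, the initial background extraction described above could be implemented roughly as follows (a minimal Python sketch, not the authors' code; the 1 m grid resolution and the point-array layout are assumptions):

```python
import numpy as np

def build_initial_background(frames, cell_size=1.0, percentile=95):
    """Estimate the ground height of each square subspace from object-free frames.

    frames: iterable of (N, 3) arrays of x, y, z points inside the region of interest.
    Returns a dict mapping (col, row) grid indices to the estimated ground height.
    """
    heights = {}  # (col, row) -> list of z values observed in that subspace
    for points in frames:
        cols = np.floor(points[:, 0] / cell_size).astype(int)
        rows = np.floor(points[:, 1] / cell_size).astype(int)
        for c, r, z in zip(cols, rows, points[:, 2]):
            heights.setdefault((c, r), []).append(z)
    # The 95th-percentile z value is taken as the ground height of each subspace.
    return {cell: float(np.percentile(zs, percentile)) for cell, zs in heights.items()}
```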
The primary consideration is to ensure that background filtering subtracts as much of the background as possible: if the background is not entirely removed, the remaining background points may be treated as objects in subsequent steps, which may affect the final detection accuracy. This requires constantly updating the background height to track possible changes. Of course, we cannot update the background height as soon as points above the background appear, because these points may belong to passing objects. Given that background changes are generally slow and long-lasting, we continuously monitor the background in each subspace, and if points above the background persist for a time interval α in any subspace, we assume that the background in that subspace may have changed. However, if the run of consecutive frames with points above the background is interrupted, i.e., there is a frame within α in which no points appear above the background, we assume that the points previously above the background belonged to passing objects and reset the time interval. Nevertheless, an object may occasionally remain within a subspace long enough for the background to be updated incorrectly. To mitigate this error, the lowest point above the background during this period is chosen as the new background height, and a method to correct this error is described afterward. The process of updating the background upwards is shown in Equation (1).
$$z_i = \begin{cases} \min(z_p), & \text{if points with } z_p > z_o \text{ appear in every frame } f_i,\ 0 < i < \alpha \\ z_o, & \text{otherwise} \end{cases} \quad (1)$$
Here, $z_i$ is the updated background height of the subspace, $z_p$ is the height of a detected point, and $f_i$ is the $i$-th frame in the time interval; the background is updated only if a point above the current background height appears in every frame of the interval $\alpha$. Otherwise, the background remains at the original height $z_o$.
However, if the set background height is higher than the actual background, there is a risk that target information will be lost in the subsequent steps. Also, to correct the error mentioned above, we monitor the change in background height in each subspace over a period of time β in the same way; if all points remain below the set background height during that period, the background is updated downwards, using the height of the highest point below the set background height as the new background. The process of updating the background downwards is shown in Equation (2), where the symbols have the same meaning as in Equation (1).
$$z_i = \begin{cases} \max(z_p), & \text{if all points satisfy } z_p < z_o \text{ in every frame } f_i,\ 0 < i < \beta \\ z_o, & \text{otherwise} \end{cases} \quad (2)$$
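The per-subspace update logic of Equations (1) and (2) could look roughly like the following sketch (a simplified Python illustration under assumed data structures; the handling of frames without points is an assumption, and the gap-region rule with δ described below is omitted):

```python
class CellBackground:
    """Per-subspace background height with the upward/downward updates of Eqs. (1) and (2)."""

    def __init__(self, z0, alpha=100, beta=100):
        self.z = z0           # current background height z_o
        self.alpha = alpha    # frames required before an upward update
        self.beta = beta      # frames required before a downward update
        self.above = []       # lowest point above the background in each consecutive frame
        self.below = []       # highest point below the background in each consecutive frame

    def observe(self, zs):
        """zs: list of heights of all points falling in this subspace in the current frame."""
        above = [z for z in zs if z > self.z]
        below = [z for z in zs if z <= self.z]
        # Upward update (Eq. 1): points persist above the background for alpha frames.
        if above:
            self.above.append(min(above))
        else:
            self.above.clear()        # run interrupted: treat earlier points as objects
        if len(self.above) >= self.alpha:
            self.z = min(self.above)  # lowest point seen above the old background
            self.above.clear()
            self.below.clear()
        # Downward update (Eq. 2): all points stay below the background for beta frames.
        # (Frames with no points reset the run here; this is a simplification.)
        if zs and not above:
            self.below.append(max(below))
        else:
            self.below.clear()
        if len(self.below) >= self.beta:
            self.z = max(self.below)  # highest point seen below the old background
            self.below.clear()
            self.above.clear()
```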
From our updating method, it is clear that we do not intend to significantly update the background over a short period, as this could lead to the misidentification of stationary objects such as feeding animals or people working in the area. Therefore, the background is updated gradually to avoid such issues.
At the same time, another situation must be considered. Due to the intrinsic nature of LiDAR, there are gaps between the laser beams where no information can be acquired, and when the background changes, a subspace that could previously be scanned may become such a gap area. As no information can be obtained for these gaps, the background there cannot be updated. In general, whenever any points appear in such subspaces, they should be considered object points. Thus, if the time interval between the first occurrence of points in a subspace and their next occurrence is greater than δ, we consider this subspace to be a gap region and use the height of the lowest of the points received on the second occurrence as its background.

2.2. Object Clustering

After background filtering, the points retained from the raw LiDAR data should all belong to objects that appear within the detection range. Points belonging to the same object need to be clustered for the subsequent processing steps. This paper uses a commonly used point clustering method: density-based spatial clustering of applications with noise (DBSCAN) [25]. This method does not require a predetermined number of clusters and is insensitive to noise. Its runtime efficiency is also satisfactory for a real-time system.
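A minimal clustering sketch using the DBSCAN implementation in scikit-learn is shown below; the eps and min_samples values are illustrative assumptions, as the paper does not report its clustering parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_objects(points, eps=0.8, min_samples=5):
    """Group foreground points into objects with DBSCAN.

    points: (N, 3) array of x, y, z coordinates after background filtering.
    Returns a list of (M_i, 3) arrays, one per detected cluster (noise is discarded).
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]
```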

2.3. Object Tracking

Object tracking helps the system obtain information beyond shape, such as speed and direction. In [26,27,28,29], novel object tracking algorithms were proposed, demonstrating excellent multi-object tracking performance for tasks with limited annotated data and complex environments. However, because the system must be deployed in an outdoor environment where the processing equipment is relatively basic, a much simpler method was chosen for object tracking. When an object first appears in the detection range, its exact position is recorded. In the next frame, the nearest object to this position within a limited range is searched for and considered to be the same object; its speed and direction of movement can then be deduced from its positions in these two frames, so that in subsequent frames its position can be searched for within a limited distance of the predicted position. Even if there is an occlusion, the speed and direction can be used to predict its position and continue tracking it the next time it appears. Since the detection site is not in an urban area, the occlusion problem is not severe, and object tracking performed well with this method.
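A simplified sketch of this nearest-neighbour tracking step is given below (Python; the search radius, track data structure, and handling of unmatched tracks are assumptions, not the deployed implementation):

```python
import numpy as np

def track_objects(prev_tracks, detections, dt=0.1, max_dist=3.0):
    """One step of the simple nearest-neighbour tracker described above.

    prev_tracks: list of dicts with keys 'pos' (x, y) and 'vel' (vx, vy).
    detections:  list of (x, y) object positions in the current frame.
    dt is the LiDAR frame period (10 Hz); max_dist is an assumed search radius.
    """
    used = set()
    for track in prev_tracks:
        # Predict the position from the last known position and velocity.
        predicted = np.asarray(track['pos']) + dt * np.asarray(track['vel'])
        best, best_d = None, max_dist
        for i, det in enumerate(detections):
            d = np.linalg.norm(predicted - np.asarray(det))
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            new_pos = np.asarray(detections[best])
            # Update speed and direction from the positions in two consecutive frames.
            track['vel'] = (new_pos - np.asarray(track['pos'])) / dt
            track['pos'] = tuple(new_pos)
    # Unmatched detections start new tracks with zero initial velocity.
    new_tracks = [{'pos': d, 'vel': (0.0, 0.0)}
                  for i, d in enumerate(detections) if i not in used]
    return prev_tracks + new_tracks
```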

2.4. Object Classification

Automatic wild horse detection requires distinguishing wild horses from the other objects that appear within the detection range. Thus, the final step of the automatic wild horse detection algorithm is a two-class classification. Classification algorithms need effective and discriminative features: good features help the classifier distinguish between targets, while poor features can lead the classifier to make incorrect classifications. Drawing on the experience of related papers [30,31], and having designed a feature specifically for detecting wild horses, we selected the following features to describe each object (a minimal code sketch of the feature extraction is provided after the feature descriptions below):
Length, width, and height
Length, width, and height are essential features in classification algorithms previously used for roadside LiDAR data and played an important role in earlier papers on classifying vehicles. They are similarly discriminative for separating wild horses from other objects. To obtain the length and width, we select four corner points of the target, namely maxX, minX, maxY, and minY. We then take the farthest of these four points from the sensor as the reference point and compute its distances to the two nearest of the remaining three points; the longer distance is taken as the length and the shorter as the width. The height is obtained simply by subtracting the lowest point of the target from the highest point.
Number of points
The number of points is essentially an extension of the length, width, and height features, since targets of greater length, width, and height generally contain more points. However, due to the differences in shape between horses, vehicles, and other types of objects, and also due to the uneven vertical angular spacing of the LiDAR laser beams, the number of points does not follow this pattern exactly.
Distance
The distance itself is not used as a basis for discriminating the object class, but because of the way LiDAR acquires target information, an object's other features vary with its distance from the sensor, so distance is also provided as an input feature to the classifier.
Direction
Direction is a distinguishing feature for two-lane roads because, in most cases, objects other than wild horses (mainly vehicles) move along the lane, whereas wild horses typically appear in order to cross the street, so their directions are clearly different. However, if the detection location is an intersection (such as the one we tested), the directions of vehicle movement are more varied, and the directional feature certainly cannot be the only feature used for differentiation. We again use the point of the target farthest from the sensor as the reference point and take the angle of the line connecting the reference points in two consecutive frames as the direction.
Speed
In our calibration data and other rough observations, wild horses appear to be slow when on the side of the road or crossing the street, in most cases not exceeding five mph. However, speed cannot be used as the only feature, for two reasons: firstly, during peak periods, vehicles are also often slow-moving in congested traffic; secondly, a wild horse does not necessarily stay slow, as the maximum speed of a wild horse exceeds 50 mph. As with direction, we pick the farthest point as the reference point and calculate the speed from the distance between the reference points in two consecutive frames.
Vertical point distribution
The previous features are sufficient for classifying vehicles and pedestrians, but for wild horses they may not be sufficiently discriminating. Examining the shape characteristics of wild horses, it can be seen that the legs make up a small proportion of the total body, as shown in Figure 3. As a result, the ratio of points below the body's height center to the total number of points on the target is significantly lower in the raw LiDAR data than for the other categories, so we use this ratio as a new feature, calculated as in Equation (3). Figure 4 shows the distribution of wild horses and other objects in each discriminative feature.
$$r_h = \frac{N_C}{N_P}, \qquad z_c = \frac{1-2\gamma}{2}\, z_{max} + \frac{1+2\gamma}{2}\, z_{min} \quad (3)$$
where $z_c$ is the height center value, calculated from the set ratio $\gamma$ and the highest and lowest points of the object, $N_P$ is the total number of points for this object, and $N_C$ is the number of points of this object below the height center $z_c$.
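A minimal sketch of the shape-related feature computation, including the vertical point distribution of Equation (3), is shown below (Python; the interpretation of the corner-based length/width procedure, the use of the cluster centroid for the distance feature, and the function name are assumptions; direction and speed come from tracking and are omitted):

```python
import numpy as np

def extract_shape_features(cluster, sensor_xy=(0.0, 0.0), gamma=0.1):
    """Shape features for one clustered object.

    cluster: (N, 3) array of x, y, z points belonging to one object.
    """
    x, y, z = cluster[:, 0], cluster[:, 1], cluster[:, 2]
    # Four corner points: maxX, minX, maxY, minY (x, y coordinates only).
    corners = np.array([cluster[np.argmax(x), :2], cluster[np.argmin(x), :2],
                        cluster[np.argmax(y), :2], cluster[np.argmin(y), :2]])
    # Reference point: the corner farthest from the sensor.
    d_sensor = np.linalg.norm(corners - np.asarray(sensor_xy), axis=1)
    ref_idx = int(np.argmax(d_sensor))
    ref, others = corners[ref_idx], np.delete(corners, ref_idx, axis=0)
    # Distances from the reference to the two nearest remaining corners:
    # the longer is taken as length, the shorter as width.
    d_ref = np.sort(np.linalg.norm(others - ref, axis=1))[:2]
    width, length = float(d_ref[0]), float(d_ref[1])
    height = float(z.max() - z.min())
    distance = float(np.linalg.norm(cluster[:, :2].mean(axis=0) - np.asarray(sensor_xy)))
    # Vertical point distribution, Eq. (3): fraction of points below the height centre z_c.
    z_c = (1 - 2 * gamma) / 2 * z.max() + (1 + 2 * gamma) / 2 * z.min()
    r_h = float(np.sum(z < z_c)) / len(z)
    return {'length': length, 'width': width, 'height': height,
            'n_points': len(z), 'distance': distance, 'r_h': r_h}
```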

2.5. Classifier

Once the extracted features have been determined, a suitable classifier must be selected to classify the target. Previous studies on roadside LiDAR have used many classifiers, including RF, NB, KNN, and SVM [30], and many of these papers have analyzed the advantages and disadvantages of the various classifiers and their appropriate applications. However, those studies focused on classifying various vehicles and pedestrians. For this paper, the main objective is to distinguish wild horses from other targets, i.e., only a two-class classification is required.
Decision Tree
Decision trees (DTs) are suitable for handling high-dimensional data samples with missing attributes; however, for data with inconsistent sample sizes across categories, the information gain is biased towards features with more values. DTs do not support online learning, as the tree must be completely rebuilt when new samples arrive, and they are prone to overfitting. On the other hand, they are computationally simple, easy to understand, and interpretable; they can deal with uncorrelated features, produce feasible and good results in a relatively short time on large datasets, and ignore correlations between data.
Naive Bayes
Naive Bayes (NB) performs well on small datasets, can handle multi-class classification tasks, is suitable for incremental training, is relatively insensitive to missing data, and is commonly used as a generative model for text classification. It has low variance and high bias, assumes that all features within a given class are independent of each other, requires the calculation of prior probabilities, carries an inherent error rate in its classification decisions, and is sensitive to the representation of the input data. Relatively few parameters need to be estimated.
K-nearest Neighbor
K-nearest neighbor (KNN) gives strongly consistent results and is suitable for the automatic classification of class domains with relatively large sample sizes. A larger K value reduces the effect of noise but blurs the boundaries between classes. KNN is also computationally intensive, and when the samples are unbalanced, a new sample may have a majority of its K neighbors in the larger class.
Support Vector Machine
Support vector machine (SVM) has advantages in small-sample, non-linear, and high-dimensional pattern recognition problems; even if the data are not linearly separable in the original feature space, it can perform well given a suitable kernel function. SVM is a method based on classification boundaries: points in the low-dimensional space are mapped to a higher-dimensional space in which they become linearly separable, and the classification boundary is then determined by linear separation. It can handle interactions of non-linear features without relying on the whole dataset and generalizes well. However, it is not very efficient when there are many observed samples, there is no general solution for non-linear problems, and it is sometimes difficult to find a suitable kernel function.
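For illustration, the four candidate classifiers could be compared with scikit-learn roughly as follows (a sketch, not the authors' code; the KNN neighbour count and the non-SVM settings are assumed defaults, while the SVM settings follow those reported in Section 3.2):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def compare_classifiers(X_train, y_train, X_test, y_test):
    """Fit the four candidate classifiers and return their confusion matrices.

    X_* are arrays of the eight-feature vectors; y_* are labels
    (1 = wild horse, 0 = other object).
    """
    classifiers = {
        'DT': DecisionTreeClassifier(),
        'NB': GaussianNB(),
        'KNN': KNeighborsClassifier(n_neighbors=5),
        'SVM': SVC(kernel='rbf', C=1.0, gamma=0.1),  # settings reported in Section 3.2
    }
    results = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        results[name] = confusion_matrix(y_test, clf.predict(X_test))
    return results
```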

2.6. Wild Horse Detection

After the classifier has labeled each object in each frame, object tracking is used to improve the accuracy of wild horse detection. An object is identified as a wild horse only if its trajectory exceeds five frames and the number of frames in which it is classified as a wild horse exceeds 90% of the total number of frames in which it is detected. This step significantly reduces the rate at which other objects are misidentified as wild horses.
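A minimal sketch of this trajectory-level decision rule might look as follows (Python; the function and variable names are illustrative):

```python
def is_wild_horse_track(frame_labels, min_frames=5, min_ratio=0.9):
    """Trajectory-level decision rule described above.

    frame_labels: per-frame classifier outputs for one tracked object
                  (True if that frame was classified as a wild horse).
    """
    if len(frame_labels) <= min_frames:
        return False                      # trajectory must exceed five frames
    ratio = sum(frame_labels) / len(frame_labels)
    return ratio > min_ratio              # wild horse in more than 90% of frames
```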

3. Results

The test site was selected at the USA Pkwy and Pittsburgh Ave intersection in Sparks, Nevada, USA. Historical data and field observations show wild horse activity at this location. The team from the University of Nevada, Reno, deployed a Velodyne 32-channel LiDAR sensor (10 Hz) on a trailer at the east side of the intersection. The LiDAR is powered by four solar panels and a wind turbine, as shown in Figure 5a. Extra energy is stored in a battery, which is used when there is no sunshine or wind. The installation location and the geometric information of the site are shown in Figure 5b. A pond located east of the intersection attracts the surrounding wild horses to drink there. The deployment period was from 22 April 2022 to 30 April 2022. The local weather conditions during this time were mostly sunny, with occasional light to moderate winds. The deployment site was located at the center of a gravel area beside a road, surrounded by a complex environment that included bushes, slopes, and ditches.

3.1. Parameter Setting

First, we need to determine the size of the subspace. As our detection targets are wild horses with an average body length of around 1–2 m, we divide the area into square subspaces with a side length of 1 m. A region of interest is set up for the experimental site, including the road itself and the area around the road. The detection range is from −100 m to +100 m along the X-axis and −80 m to +40 m along the Y-axis, with the LiDAR itself as the origin. Next, we determine the parameters used to update the background information. The upward update parameter α needs to account for possible slow-moving targets: if α is set too short, the updated background may end up higher than the actual background because of these objects, and object information will then be lost in the subsequent process. Possible slow-moving objects at the experimental site include peak-hour and turning vehicles, pedestrians, and various wildlife, including coyotes and wild horses. Given the speeds of these objects and the LiDAR's frame rate of 10 frames per second, we set α to 100 frames. For the downward update parameter β, relatively little consideration is needed, but it is generally not advisable to set β smaller than α, so we also set β to 100 frames. The gap-area parameter δ is set to 20,000 frames, considering the possibility that the corresponding laser rays may be blocked by a stationary object, leaving the corresponding area undetectable by the LiDAR. For the feature extraction step, γ determines which part of the object's body to focus on; in this case, γ was set to 0.1.

3.2. Model Evaluation

First, this paper calculates two types of errors to directly evaluate background filtering. Error Type 1 is the proportion of background points misrecognized as object points among all points, whereas Error Type 2 is the proportion of object points misrecognized as background points among all points. To calculate these two types of errors, we selected one frame of raw LiDAR data at random every half hour over ten consecutive days, covering various traffic conditions at the test site such as peak-hour, free-flow, weekday, and weekend traffic, and manually labeled the objects and background in each frame. Figure 6 shows an example of background filtering, and Table 1 shows the test results.
Because recent background filtering algorithms only account for background drift within a small range, their results on ten days of data are not comparable to those of the proposed method, so only the method in this paper is listed in Table 1. Also, to show the effect of this method's background updating, the error rates averaged over the ten days and those of the last day are presented separately in Table 1. The two types of errors share a common cause: the initially extracted background fails to fully represent the true background. In some subspaces, the background height may be overestimated, while in others, it may be underestimated. However, with the background update method introduced in this paper, the cause of these errors gradually diminishes, as seen in the transition from the background filtering results averaged over the ten days to the results on the final day.
Additionally, the first type of error, where background points are mistaken for object points, can be observed in Figure 6. These errors mainly occur in areas where the LiDAR sensor is further away and the ground surface is more complex, such as slopes and ditches. This is due to the inherent characteristics of LiDAR, where greater distances may lead to slight deviations in scanning accuracy at the same location. For steep surfaces, this deviation may affect the extracted height at those points. However, such deviations are discrete and unpredictable, and the resulting error points typically appear as isolated noise, which can be easily filtered out in the subsequent clustering process. Therefore, this error does not affect the overall method.
In contrast, the second type of error, where object points are mistaken for background, is primarily found at the boundaries between two subspaces, especially where neighboring subspaces have differing background heights. This error mainly affects objects such as vehicles, where a very small number of points near the bottom (e.g., around the tires) may be misclassified as background when passing through the subspace boundary. However, this type of error has minimal impact on the subsequent feature extraction process.
Then, this paper compares the classification results of the four previously mentioned classifiers using confusion matrices. We manually observed ten days of raw LiDAR data, labeled wild horses and other objects, and randomly selected samples so that the data were balanced between the two classes. We divided all samples into four parts and used three as training samples and one as a test sample. In the confusion matrix, positive samples are wild horses, and negative samples are other targets. After background filtering, object clustering, object tracking, and feature extraction, the features of each cluster were input to the four classifiers for classification, and the experimental results are shown in Table 2.
As can be seen from the table, the SVM has the fewest false negatives (actual wild horses incorrectly classified as other targets). DT has the fewest false positives, but in this system false negatives are unacceptable, as the system should be able to identify every wild horse, so this paper recommends using the SVM as the final classifier. For the SVM, we employed a radial basis function (RBF) kernel, as it effectively handles non-linearly separable data, which is common in object classification tasks within complex environments. The regularization parameter C was set to 1.0 to balance the trade-off between maximizing the margin and minimizing the classification error. A gamma value of 0.1 was used to control the influence of individual training examples; lower gamma values ensure that the model considers a broader range of points for classification, thus helping to prevent overfitting. Figure 7 presents the F1 score curve from the SVM testing results.
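For reference, the frame-level recall and precision implied by the SVM confusion matrix in Table 2 can be computed directly (these are classifier-level values, before the trajectory-level post-processing of Section 2.6):

$$\text{Recall} = \frac{TP}{TP + FN} = \frac{4898}{4898 + 15} \approx 99.7\%, \qquad \text{Precision} = \frac{TP}{TP + FP} = \frac{4898}{4898 + 256} \approx 95.0\%$$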
For the causes of misclassification, although false negative instances are rare, they typically occur when multiple wild horses are clustered together and have just entered the LiDAR detection range. At this moment, the point cloud data are aggregated into a single object, resulting in inaccurate measurements of dimensions such as length and width. Additionally, as the horses have only recently entered the detection range, their speed cannot be accurately estimated. However, as the horses continue to move, more accurate features are obtained, allowing the classifier to make more reliable decisions. Regarding false positives, these are more common at longer distances from the LiDAR sensor. The limited number of point clouds captured at such distances leads to occasional misclassifications. However, after applying the post-processing methods described in the wild horse detection section, these misclassifications do not result in new wild horse detection events.
We compared our results with PointPillars, a commonly used model in 3D object detection, in Table 3. PointPillars transforms point clouds into a pseudo-image format by dividing the 3D space into vertical pillars, allowing for efficient real-time detection. Using the PointPillars model pretrained on the KITTI dataset, we fine-tuned it on our manually labeled dataset and conducted detection on the same test set. As expected, due to the pretraining on the KITTI dataset, the model performed exceptionally well in recognizing pedestrians and vehicles. However, for wild horses, the limited amount of training data hindered the deep learning method’s ability to achieve effective detection.
Finally, this paper presents the real-time detection results. The proposed method detected six wild horse crossing events over ten days from 25 April 2022 to 4 May 2022, and after manually observing the raw LiDAR data for these ten days, it can be confirmed that the proposed method successfully detected all wild horse crossing events, which are listed in Table 4. A visualization of the detection results is shown in Figure 8.

4. Conclusions

This paper provides an effective and real-time method to detect wild horse crossing events using LiDAR sensors. The procedure developed in this paper includes background filtering, object clustering, object tracking, and object classification. This real-time information can be used to trigger flashing warning signs to provide warning messages to drivers. This innovative method can also collect comprehensive information on wild horses to help analyze their behavior. It should be mentioned that although the detection range of LiDAR is already extensive compared to other sensors, the cost of placing multiple LiDARs to monitor an entire area of wild horse activity is exceptionally high. Therefore, this method is more suitable for installation in specific locations where wild horse crossing events are frequent. Background filtering based solely on ground height may not be fully applicable in other locations, including forested areas. It is expected that more widely applicable background filtering methods will be developed and tested in more locations.

Author Contributions

Study conception and design: Z.W. and H.X.; data collection: Z.W. and F.G.; analysis and interpretation of results: Z.W., Z.C. and F.G.; draft manuscript preparation: Z.W. All authors reviewed the results and approved the final version of the manuscript.

Funding

This research was funded by the SOLARIS Institute, a Tier 1 University Transportation Center (UTC) under Grant No. DTRT13-G-UTC55, and matching funds by the Nevada Department of Transportation (NDOT) under Grant No. P224-14-803/TO #13.

Data Availability Statement

Data are unavailable due to privacy reasons.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cramer, P.; McGinty, C. Prioritization of Wild Horse-Vehicle Conflict in Nevada; No. 604-16-803; Nevada Department of Transportation: Las Vegas, NV, USA, 2018.
  2. Huijser, M.P.; Kociolek, A.V.; McGowen, P.T.; Ament, R.; Hardy, A.; Clevenger, A.P.; Western Transportation Institute. Wildlife-Vehicle Collision and Crossing Mitigation Measures: A Toolbox for the Montana Department of Transportation; Montana Department of Transportation: Helena, MT, USA, 2007. [CrossRef]
  3. Benten, A.; Hothorn, T.; Vor, T.; Ammer, C. Wild horse warning reflectors do not mitigate wild horse–vehicle collisions on roads. Accid. Anal. Prev. 2018, 120, 64–73. [Google Scholar] [CrossRef] [PubMed]
  4. Ross, J.; Serpico, D.; Lewis, R. Assessment of Driver Yield Rates Pre-and Post-RRFB Installation, Bend, Oregon; No. FHWA-OR-RD 12-05; Transportation Research Section: Salem, OR, USA, 2011. [Google Scholar]
  5. Lu, X.; Lu, X. An efficient network for multi-scale and overlapped wild horse detection. Signal Image Video Process. 2023, 17, 343–351. [Google Scholar] [CrossRef]
  6. Nguyen, H.; Maclagan, S.J.; Nguyen, T.D.; Nguyen, T.; Flemons, P.; Andrews, K.; Ritchie, E.G.; Phung, D. Animal recognition and identification with deep convolutional neural networks for automated wild horse monitoring. In Proceedings of the 2017 IEEE International Conference on Data Science & Advanced Analytics, Tokyo, Japan, 19–21 October 2017. [Google Scholar]
  7. Feng, W.; Ju, W.; Li, A.; Bao, W.; Zhang, J. High-efficiency progressive transmission and automatic recognition of wild horse monitoring images with WISNs. IEEE Access 2019, 7, 161412–161423. [Google Scholar] [CrossRef]
  8. Rashed, H.; Ramzy, M.; Vaquero, V.; El Sallab, A.; Sistu, G.; Yogamani, S. Fusemodnet: Real-time camera and lidar based moving object detection for robust low-light autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar]
  9. Christiansen, P.; Steen, K.A.; Jørgensen, R.N.; Karstoft, H. Automated detection and recognition of wild horse using thermal cameras. Sensors 2014, 14, 13778–13793. [Google Scholar] [CrossRef] [PubMed]
  10. Viani, F.; Robol, F.; Giarola, E.; Benedetti, G.; De Vigili, S.; Massa, A. Advances in wild horse road-crossing early-alert system: New architecture and experimental validation. In Proceedings of the 8th European Conference on Antennas and Propagation (EuCAP 2014), The Hague, The Netherlands, 6–11 April 2014. [Google Scholar]
  11. Urbano, F.; Cagnacci, F. (Eds.) Spatial Database for GPS Wildlife Tracking Data: A Practical Guide to Creating a Data Management System with PostgreSQL/PostGIS and R, 2014th ed.; Springer: Cham, Switzerland; Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  12. Ryota, R. Harmful Wild horse Detection System Utilizing Deep Learning for Radio Wave Sensing on Multiple Frequency Bands. In Proceedings of the 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 11–13 February 2019. [Google Scholar]
  13. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast Encoders for Object Detection From Point Clouds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12689–12697. [Google Scholar] [CrossRef]
  14. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499. [Google Scholar]
  15. Shi, S.; Wang, X.; Li, H. PointRCNN: 3D Object Proposal Generation and Detection From Point Cloud. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 770–779. [Google Scholar]
  16. Zhang, C.; Chen, J.; Li, J.; Peng, Y.; Mao, Z. Large language models for human–robot interaction: A review. Biomim. Intell. Robot. 2023, 3, 100131. [Google Scholar] [CrossRef]
  17. Arfaoui, A. Unmanned aerial vehicle: Review of onboard sensors, application fields, open problems and research issues. Int. J. Image Process 2017, 11, 12–24. [Google Scholar]
  18. Guan, F.; Xu, H.; Tian, Y. Evaluation of Roadside LiDAR-Based and Vision-Based Multi-Model All-Traffic Trajectory Data. Sensors 2023, 23, 537. [Google Scholar] [CrossRef] [PubMed]
  19. Wu, J.; Xu, H.; Zheng, Y.; Zhang, Y.; Lv, B.; Tian, Z. Automatic Vehicle Classification using Roadside LiDAR Data. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 153–164. [Google Scholar] [CrossRef]
  20. Lv, B.; Xu, H.; Wu, J.; Tian, Y.; Zhang, Y.; Zheng, Y.; Yuan, C.; Tian, S. LiDAR-Enhanced Connected Infrastructures Sensing and Broadcasting High-Resolution Traffic Information Serving Smart Cities. IEEE Access 2019, 7, 79895–79907. [Google Scholar] [CrossRef]
  21. Wu, J.; Xu, H.; Zheng, Y.; Tian, Z. A novel method of vehicle-pedestrian near-collision identification with roadside LiDAR data. Accid. Anal. Prev. 2018, 121, 238–249. [Google Scholar] [CrossRef] [PubMed]
  22. Zhang, Z.; Zheng, J.; Wang, X.; Fan, X. Background filtering and vehicle detection with roadside lidar based on point association. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018. [Google Scholar]
  23. Wu, J.; Xu, H.; Zheng, J. Automatic background filtering and lane identification with roadside LiDAR data. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017. [Google Scholar]
  24. Zhang, T.; Jin, P.J. Roadside LiDAR Vehicle detection and tracking using range and intensity background subtraction. J. Adv. Transp. 2022, 2022, 2771085. [Google Scholar] [CrossRef]
  25. Bäcklund, H.; Hedblom, A.; Neijman, N. A density-based spatial clustering of application with noise. Data Min. 2011, 33, 11–30. [Google Scholar]
  26. Qi, Y.; Yao, H.; Sun, X.; Sun, X.; Zhang, Y.; Huang, Q. Structure-aware multi-object discovery for weakly supervised tracking. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 466–470. [Google Scholar] [CrossRef]
  27. Ge, C.; Song, Y.; Ma, C.; Qi, Y.; Luo, P. Rethinking Attentive Object Detection via Neural Attention Learning. IEEE Trans. Image Process. 2023, 33, 1726–1739. [Google Scholar] [CrossRef] [PubMed]
  28. Yang, Y.; Li, G.; Qi, Y.; Huang, Q. Release the Power of Online-Training for Robust Visual Tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12645–12652. [Google Scholar] [CrossRef]
  29. Qi, Y.; Qin, L.; Zhang, S.; Huang, Q.; Yao, H. Robust visual tracking via scale-and-state-awareness. Neurocomputing 2018, 329, 75–85. [Google Scholar] [CrossRef]
  30. Zhao, J.; Xu, H.; Wu, D.; Liu, H. An artificial neural network to identify pedestrians and vehicles from roadside 360-degree LiDAR data. In Proceedings of the 97th Annual Transportation Research Board Meeting, Washington, DC, USA, 11–17 January 2018. [Google Scholar]
  31. Lee, H.; Coifman, B. Side-fire lidar-based vehicle classification. Transp. Res. Rec. J. Transp. Res. Board 2012, 2308, 173–183. [Google Scholar] [CrossRef]
Figure 1. Flow chart.
Figure 2. Distance collected over a week with fixed vertical and horizontal angles.
Figure 3. (a) Wild horse and (b) vehicle in raw LiDAR data.
Figure 4. Distribution of wild horses and other objects in each discriminatory feature; the y axis represents the sample number: (a) speed; (b) direction; (c) length; and (d) vertical point distribution.
Figure 5. (a) Trailer for data collection; (b) top view of the site and the location of the LiDAR.
Figure 6. Example of background filtering: blue points represent the background and red points represent the foreground.
Figure 7. F1 score curve for SVM validation result.
Figure 8. Example of wild horse crossing event detection: green points represent wild horses, red points represent other objects, and blue points represent background.
Table 1. Results of background filtering.

                Average Error Rate    Last Day Error Rate
Error Type 1    3.2%                  1.7%
Error Type 2    2.0%                  1.8%
Table 2. Classification result of each classifier (rows: predicted class; columns: real class).

DT
Predicted \ Real    Wild horse    Others
Wild horse          4741          231
Others              172           4260

NB
Predicted \ Real    Wild horse    Others
Wild horse          4633          389
Others              280           4102

KNN
Predicted \ Real    Wild horse    Others
Wild horse          3857          753
Others              1056          3738

SVM
Predicted \ Real    Wild horse    Others
Wild horse          4898          256
Others              15            4235
Table 3. Recall and precision comparison results with PointPillars.

                     Recall    Precision
PointPillars [13]    90.2%     82.3%
Our method           99.6%     95.0%
Table 4. Detection result of all wild horse crossing events.

Detected Time          Detected Frame    Start Position    End Position
30 April 2022 3:00     8773–10,610       (75.9, −54.7)     (35.7, 40.7)
30 April 2022 7:00     4941–5226         (−21.1, −56.9)    (−25.9, −15.7)
25 April 2022 11:00    1665–2749         (70.4, −77.6)     (84.2, −77.3)
25 April 2022 18:30    15,728–17,999     (−35.4, −24.9)    (−54.1, −64.0)
25 April 2022 7:00     2–1310            (17.2, 36.2)      (34.0, 39.0)
25 April 2022 7:30     2–937             (70.7, 12.0)      (91.9, −50.7)