Article

Developing a More Reliable Aerial Photography-Based Method for Acquiring Freeway Traffic Data

1 School of Highway, Chang’an University, Xi’an 710064, China
2 Engineering Research Center of Highway Infrastructure Digitalization, Ministry of Education, Xi’an 710000, China
3 College of Transportation Engineering, Chang’an University, Xi’an 710064, China
4 School of Engineering, RMIT University, Melbourne, VIC 3000, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(9), 2202; https://doi.org/10.3390/rs14092202
Submission received: 19 March 2022 / Revised: 29 April 2022 / Accepted: 29 April 2022 / Published: 5 May 2022

Abstract

Due to the widespread use of unmanned aerial vehicles (UAVs) in remote sensing, techniques for extracting vehicle speed and trajectory data from aerial video are well developed, using either traditional methods based on optical features or deep learning methods; however, few papers discuss how to solve the issue of video shaking, and existing vehicle data are rarely linked to lane lines. To address these deficiencies, in this study, we formulated a more reliable method for real traffic data acquisition that outperforms the traditional methods in terms of data accuracy and integrity. First, this method implements the scale-invariant feature transform (SIFT) algorithm to detect, describe, and match local features acquired from high-altitude fixed-point aerial photographs. Second, it applies “you only look once” version 5 (YOLOv5) and deep simple online and real-time tracking (DeepSORT) to detect and track moving vehicles. Next, it leverages the developed Python program to acquire data on vehicle speed and distance (to the marked reference line). The results show that this method achieved over 95% accuracy in speed detection and less than 20 cm tolerance in vehicle trajectory mapping. This method also addresses common problems involving the lack of quality aerial photographic data and accuracy in lane line recognition. Finally, this approach can be used to establish a Frenet coordinate system, which can further support analyses of driving behavior and road traffic safety.

1. Introduction

The question of how to increase highway driving safety is a major issue around the globe. In 2021, the Decade of Action for Road Safety campaign, launched by the World Health Organization (WHO), set a goal that by 2030, global road traffic fatalities should be reduced by 50% [1]. A thorough understanding of driving data is required to improve road safety. Numerous studies have shown that the driving behaviors of drivers on the same road are correlated; therefore, speed and trajectory data can be utilized to quantify the risk of driving accidents [2,3,4]. The traditional methods used for collecting vehicle speed and trajectory data primarily use driving simulations, cross-section speed measurements, and real driving on the road; however, the data obtained using the traditional methods can lack accuracy and continuity, and therefore, they cannot be used to measure data correlations. In recent years, with the wide application of unmanned aerial vehicles (UAVs) in remote sensing (RS), driven by their academic and commercial success [5], data acquisition methods based on surveillance and aerial video have become more popular [6,7,8]. Even though these methods have promoted the application of remote sensing technology, such as finer object detection and real-time tracking [9,10,11], they have yet to fully address problems with changing illumination, object occlusion/obstruction, and environmental noise.
Acquiring driving data based on video footage has been well researched, and a wide range of methods have been formulated, including the inter-frame difference, background difference, and optical flow methods; however, these methods are limited by their identification accuracy and vulnerability to environmental noise [12,13]. With the development of neural networks, such as the convolutional neural network (CNN) and the recurrent neural network (RNN), object detection speed and accuracy have improved significantly [12,14]. The “you only look once” (YOLO) model, adapted from the Darknet network, has proven effective for UAV-based object detection, and thus was used in this study. It is worth noting that improving driving safety is a multi-faceted process. In addition to the vehicle data themselves, the correlation between the road and the vehicle, such as the relative position of the vehicle and the lane, can also be used to examine road safety. Commonly used lane line recognition algorithms, such as edge detection and deep learning-based feature detection, are likely to fail when the object to be detected lacks clarity.
To achieve extraction of real vehicle data from aerial video with higher precision and better reliability, it is imperative to conduct an in-depth study in three areas: vehicle identification, vehicle tracking, and vehicle and lane line correlation. Section 2 examines the related research in these fields; Section 3 presents the settings of our traffic flow video capture technique; Section 4 elucidates the steps of acquiring high-precision real vehicle data and the data processing method; Section 5 describes the data results associated with the selected road sections; and Section 6 provides a discussion and conclusion.

2. Literature Review

Roads affect traffic safety in a variety of ways. Aside from the road alignment, which affects the driver’s judgment, the markings, pavement materials, and pavement damage all have an impact on the vehicle’s driving condition [15,16]. These factors are ultimately reflected in the vehicle speed and trajectory, upon which a comprehensive road safety evaluation can be based. These data can be obtained using different techniques, e.g., predicting vehicle speed, simulating driving, measuring cross-section speed, observing real driving behavior, examining trajectories, and so on [3,17]. The common problems of these techniques include questionable data authenticity and discontinuous acquisition [18]. Nowadays, a trending technique is analyzing natural traffic flow data (e.g., NGSIM, HighD, etc.) from driving footage acquired by CCTV surveillance cameras, drone cameras, and other photographic devices. This technique is limited as well, as it can only reflect the driving situation of a certain road section.
DADS was the first commercialized road driving simulation system, developed at the end of the 20th century, but it can only run on UNIX and has few functions [19]. As computer-aided design, geographic information systems (GIS), and building information modelling (BIM) technologies continue to evolve, a series of driving simulation systems, such as UC-win/road, CarSim, TruckSim, and CARRS-Q, have been developed to support a variety of road safety and driving behavior studies [20,21,22,23]. The rationale is that they can generate a wide range of simulated driving scenarios for testing and analysis, which can help address real-world driving challenges to some extent. The development of radar and sensor technology has further facilitated the acquisition and accuracy of vehicle information, such as real-time position, speed, and braking behavior [24,25]; however, this method also has limitations. First, it is not robust enough to examine the driving behavior of many vehicles on the same road. Second, it is based on cross-section speed measurements, and this technique cannot handle vehicle occlusion problems [26,27,28]. Emerging UAV equipment and image processing algorithms enable a novel approach for vehicle data collection and analysis using high-altitude aerial photography. NGSIM and HighD are representative datasets generated from this technology [7,8]. These datasets have high precision and are rich in traffic flow data [29,30,31]; however, there are still limitations. For instance, the road sections collected in these databases contain few bifurcation structures and little horizontal/vertical geometric variation, which cannot sufficiently support studies investigating car following and lane changing.
More literature on vehicle speed and trajectory data is summarized in Table 1. These data can play an important role in road safety and driving behavior research. Lili et al. [32] reviewed the traffic flow research and noted that the scarcity of real trajectory data is a key issue. Vehicle trajectory data with high precision and validity can be quite beneficial, but the acquisition, management, and application of such data are huge challenges [33]. The distance information between vehicle and lane line is another important indicator to understand driving safety and behavior, but the current research has not fully taken this into consideration; therefore, it is argued that obtaining reliable and precise data in this regard is another major issue that could benefit from the use of UAVs. There are two types of video-based vehicle recognition methods: one is based on optical properties, such as inter-frame difference, background difference, and optical flow, and the other is based on deep learning object detection algorithms. For example, the YOLO algorithm proposed by Redmon et al. [34] is widely used in the field of target recognition, and is also applied in this paper.
The inter-frame difference method makes use of the strong correlation between single-frame and multi-frame intervals in the image sequence to compare the grayscale difference and threshold value of pixels, from which moving objects can be identified [44,45]. When vehicles are moving fast, this method suffers from the so-called ghosting problem, in which vehicles are mistakenly identified as background objects. The background difference method leverages a constructed static background model to reveal a foreground moving object [46,47]. This method cannot continuously track a vehicle’s coordinates. The optical flow method can calculate the traveling speed and direction of a target through pixel intensity change and correlation analysis [48,49]; however, it is less reliable when illumination conditions change. An accurate vehicle trajectory extraction method using the Canny ensemble detector and kernelized correlation filter was developed [50]; however, this method tends to have data corruption problems, especially when applied to vehicle trajectory tracking. The deep learning target instance detection approach can effectively solve some of the abovementioned problems, and it makes use of trained vehicle classifiers to recognize different vehicle types from consecutive frames. Even in challenging environments, this approach can achieve better performance [12,14].
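For illustration only, the following minimal OpenCV sketch shows the classical inter-frame difference idea described above (not the method adopted in this paper): the grayscale difference between consecutive frames is thresholded to highlight moving pixels. The file name and threshold value are assumptions.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical aerial video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                        # grayscale difference between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # pixels above the threshold = candidate moving objects
    prev_gray = gray
cap.release()
```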
Obtaining the speed and trajectory data of all cars throughout an entire road segment is more favorable for traffic safety analysis. This research presents a reliable method to obtain speed and trajectory data that addresses some shortcomings of traditional methods. This method leverages UAVs to acquire real traffic videos. It is a valid and accurate method that can relate vehicle data to road information. The road alignment in this study varies, and includes straight roads, sharp turns, ramps, intersections, road exits, and small clear distance roads located in Sichuan and Shaanxi, China.
In this paper, we aim to develop a more reliable method to extract vehicle data. Because the aerial photography environment is a controllable condition, the UAV video was captured under ideal conditions, avoiding strong wind, rainfall, and low visibility, all of which adversely affect drone aerial photography. We collected a large volume of high-definition traffic video data to create a training set for YOLOv5 and Darknet. Based on the evaluation, the detection frame was more stable than that of the optical approaches. YOLO has an inferior detection effect when target objects are relatively small and close together; however, this is not the case in the free-flow condition of highway vehicle detection. It should be noted that the robustness of lane line recognition algorithms is affected by lighting, reflection, noise, and other factors. To ensure the accuracy of the vehicle distance data, in this study, we propose calibrating the lane line parameters at the beginning. An integration of video registration, vehicle recognition, and Python big data processing underpins the vehicle data extraction method of this study. The vehicle recognition rate and data extraction accuracy in different environments show that the developed method outperforms the traditional methods in terms of data integrity and accuracy. Finally, in this study, we also examine noise data and construct a Frenet coordinate system to display the data patterns in different environments.

3. Collection of Real Traffic Flow

3.1. Research Location

In order to verify the effectiveness of data extraction in different environments, a number of mountainous highways in Shaanxi and Sichuan, China, were selected. The data were collected in different road and traffic conditions (Table 2 and Figure 1).

3.2. Research Equipment

UAVs can be used in many applications, such as wide-area traffic monitoring and management, traffic parameter gathering, and so on. According to the study’s data requirements, it was ultimately determined that the data would be collected using UAV fixed-point aerial photography. The drone used in this field investigation was a DJI Mavic 2 Zoom; six batteries were used, and each battery lasts about 25 min. The UAV’s built-in 12-megapixel camera supports steady video recording at 4K and 60 fps. This UAV is equipped with a 3-axis mechanical gimbal (pitch, roll, and pan) (Figure 2) with a pitch range of −90° to +17° (extended).

3.3. Research Process and Data Acquisition

The aerial photography was conducted vertically (−90°) to eliminate calibration errors caused by the oblique viewing angle at different positions. A comparative experiment on the effectiveness of videos at various heights was conducted, and the findings are shown in Table 3. The drone height was kept between 210 and 230 m, and the photographed road range was about 300 m long and 160 m wide. To eliminate shooting angle errors, a fixed-point vertical aerial shooting approach was used.

4. Materials and Methods

The method outlined in this paper is divided into 5 steps, as shown in Figure 3:
(1)
Image registration: In drone aerial photography, camera shake is unavoidable and will result in significant mistakes. To solve this problem, we applied the SIFT algorithm to register the video.
(2)
Target detection: Following a comparative analysis, we employed the deep learning-based YOLOv5 to realize vehicle detection, labeling over 6000 images of cars of various types and obtaining the best recognition model after 100 training epochs.
(3)
Continuous vehicle tracking: After achieving high-precision vehicle identification, the DeepSORT algorithm was used for continuous vehicle tracking and trajectory extraction, and the vehicle speed was recovered using the distance calibration value and time interval.
(4)
Lane line calibration: To ensure a solid linkage of vehicle data with lane lines, we first calibrated lane lines and then computed the relationship with the vehicle.
(5)
Data extraction: We used Python, based on Heron’s formula, to extract vehicle and lane line distance data, and established a Frenet coordinate system for data display.
Figure 3. Data extraction process.

4.1. Image Registration Based on SIFT Algorithm

Both wind and the machine itself can cause the drone to wobble slightly, which can affect the video shooting quality (Figure 4a,b). In our study, this problem leads to certain errors in the extraction of vehicle trajectories (which will be discussed in greater depth later) but it does not affect vehicle speed extraction.
To tackle the wobbling issue, in this study, we applied the SIFT registration algorithm to ensure that the relative position of vehicles to the road did not change too much over time. The SIFT algorithm has been widely used in image recognition, image retrieval, and 3D reconstruction [51]. Unlike conventional target detection algorithms, which are very sensitive to image size and rotation, SIFT is not sensitive to image size and orientation, and has the capability to resist illumination change and environmental noise.
We set the reference image as f(x,y) and the registered image as g(x,y). If point ( x ^ , y ^ ) on the reference image corresponds to point (x,y) in the image to be registered, an affine relationship exists:
$$\begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} = k \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}$$
where $k$ is the scale parameter, $\theta$ is the rotation angle, and $\Delta x$ and $\Delta y$ are the translations along the two axes, respectively.
Using more than four matched feature points, the road can be matched and the images of all frames in the video can be corrected to the same position as the first frame according to the SIFT algorithm. Figure 5 shows that, in the registered video, the changes in the relative position of the same vehicle over time induced by camera shake no longer exist. One caveat is that SIFT may match vehicles across frames as if they were fixed elements, especially slow-moving vehicles, which would distort the vehicle’s apparent driving state after registration. In this study, this problem was avoided by restricting the region of interest to the non-lane area during registration. A before-and-after comparison experiment confirmed that the registration has no effect on the vehicle state under real operating conditions.
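A minimal sketch of this registration step, assuming OpenCV’s SIFT implementation, is given below; the function name and the 0.75 ratio-test threshold are illustrative choices, and a mask covering the non-lane region of interest can be passed to restrict the matched features as described above.

```python
import cv2
import numpy as np

def register_frame(ref_gray, frame_gray, frame_color, roi_mask=None):
    """Warp one frame so that it aligns with the reference (first) frame, estimating the
    scale/rotation/translation model given above from SIFT feature matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, roi_mask)   # roi_mask restricts features to the non-lane area
    kp2, des2 = sift.detectAndCompute(frame_gray, roi_mask)

    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC-estimated similarity transform (scale, rotation, translation), then warp the frame
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = ref_gray.shape
    return cv2.warpAffine(frame_color, M, (w, h))
```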
To show the effect of the registration more intuitively, a straight road portion was used for comparison, and 2 frames of photographs taken at different times were superimposed to evaluate the difference, as illustrated in Figure 6.
From Figure 6a, it can be seen that there is an offset between the images at the same position on the road before registration. It can be seen in Figure 6b that the image has a black border to cancel the shaking during registration, so that the image is consistent with the first frame, ensuring that the relative position of the vehicle and the lane line will not change with the shaking camera. The black borders formed by the registration are acceptable and have no effect on subsequent data gathering because the shaking is minimal.
The vehicle trajectory error in the unregistered video is primarily due to camera shake; despite our best efforts to fly in calm weather, the shake still amounts to approximately the width of a lane, an error of roughly 3–4 m. The trajectory error after registration is mainly caused by the oblique viewing angle of the aerial photograph and is generally about two pixels. When the drone height is 230 m, one pixel represents a real distance of about 10 cm; therefore, by processing the registered video, the trajectory error between the vehicle and the lane line can be kept within 20 cm.

4.2. Deep Learning-Based Vehicle Data Extraction

4.2.1. Calibration of the Pixel Distance Parameter

Video registration is a preprocessing step that ensures data extraction accuracy. After registration, calibration must be conducted, which involves calculating the ratio of real distance to pixel distance. Drawing software such as Photoshop can be used to read the pixel coordinates of any image point. In this study, we used the 6 m dashed lane marking as the standard: by measuring the pixel length of this known road distance, we obtained the pixel distance corresponding to 1 m of real distance, as shown in Figure 7. As the video is shot vertically, the image’s horizontal 1/4 or 3/4 position is used for computation and calibration to minimize the viewing angle error as much as possible.
$$k = \frac{\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}}{6}$$
where $k$ is the calibration parameter, $x_1$ and $y_1$ are the pixel coordinates of the first point, and $x_2$ and $y_2$ are the pixel coordinates of the second point. Here, the calibration parameter is $k = 6.34$; that is, a pixel distance of 6.34 represents a real distance of 1 m. To avoid errors caused by damage or blurring of a single marking line, length calibration should be performed on at least three marking lines in good condition, and the average is used as the final distance calibration value.
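A small sketch of this calibration is shown below, with hypothetical marking endpoint coordinates; averaging over several intact 6 m markings follows the recommendation above.

```python
import math

def pixels_per_metre(p1, p2, marking_length_m=6.0):
    """Calibration parameter k: the pixel distance corresponding to 1 m of real distance,
    from the two endpoints of a lane marking of known length."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / marking_length_m

# Hypothetical endpoint coordinates of three intact 6 m markings, read off in an image editor
markings = [((1520, 40), (1526, 78)), ((1518, 140), (1524, 178)), ((1516, 240), (1522, 278))]
k = sum(pixels_per_metre(a, b) for a, b in markings) / len(markings)
print(f"calibration parameter k = {k:.2f} pixels per metre")
```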

4.2.2. Vehicle Detection Based on YOLOv5

YOLO’s vehicle labeling and training are straightforward; vehicle bounding boxes were labeled on the training and analysis videos, as illustrated in Figure 8. Model training was done with a training:validation:test set ratio of 0.7:0.2:0.1, a total of 100 training epochs were completed, and the model with the best test performance was chosen for vehicle detection. After verification, vehicle detection accuracy reached about 98%, and the detection frame was stable and not affected by shadows, as shown in Figure 9; however, there were also misidentifications, in which non-vehicle elements were incorrectly identified as vehicles. Solutions from a data processing standpoint are suggested later. The results of the identification are shown in Table 4.
The results of vehicle recognition and tracking, taking 3 different types of road sections as examples, were analyzed. As the training and application sets were from the same video, the vehicle recognition and recall rates under diverse road circumstances could reach high values after adequate training.
The road markings and alignments on roads 1 and 2 were rather straightforward, and all indicators reached high values. ID switches mainly occurred while vehicles were entering the screen and could also occur due to occlusion between a very small number of vehicles. All of the indicators on road 3 had lower values, and the number of ID switches was much higher than on the other routes. The road markings on this part of the road are complicated, with relatively dense white horizontal markings and text, and when white vehicles interact with these complex noise elements, recognition accuracy suffers. At the same time, some non-vehicle objects in the image could be wrongly labeled as vehicles. A data cleaning method for this problem is proposed later.
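As a sketch of how a detector trained in this way might be applied to the registered frames, the following assumes the public ultralytics/yolov5 torch.hub interface and a hypothetical path to the trained weights; it is illustrative rather than the exact pipeline of this study.

```python
import cv2
import torch

# Load custom weights produced by the 100-epoch training run (weights path is hypothetical)
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

frame = cv2.imread("registered_frame.png")     # one registered video frame
results = model(frame[..., ::-1])              # convert BGR -> RGB before inference
boxes = results.xyxy[0].cpu().numpy()          # rows of [x1, y1, x2, y2, confidence, class]
centres = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2, *_ in boxes]  # vehicle centre points
```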

4.2.3. Continuous Vehicle Tracking Using the DeepSORT Algorithm

After YOLOv5 identifies vehicles in each frame, continuous tracking must be performed, even in difficult settings. The simple SORT algorithm, with a Kalman filter and the Hungarian algorithm at its core, is widely used; however, SORT produces many ID switches in some scenarios, such as occluded environments, resulting in poor tracking efficiency. Wojke et al. overcame this by adding appearance information to SORT, borrowing a re-identification (ReID) model to extract features and thereby reducing the number of ID switches, and proposed the DeepSORT algorithm, which is also the method used in this paper. Its main operation process is shown in Figure 10.
After the detection and continuous tracking of the vehicle target are completed, as shown in Figure 11, the time interval and calibration parameters obtained in Section 4.2.1 can be used to collect information such as speed and acceleration.
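A minimal sketch of this step is shown below: given two consecutive tracked centre points of the same vehicle ID, the calibration parameter k from Section 4.2.1 and the frame rate yield the instantaneous speed and acceleration. The function and variable names are illustrative.

```python
import math

def speed_kmh(p_prev, p_curr, k, fps=30.0):
    """Speed from two consecutive tracked centre points (pixel coordinates) of one vehicle,
    with k = pixels per metre and fps = video frame rate."""
    d_pixels = math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    d_metres = d_pixels / k
    return d_metres * fps * 3.6     # metres per frame -> m/s -> km/h

def acceleration_ms2(v_prev_kmh, v_curr_kmh, fps=30.0):
    """Acceleration from two consecutive speed samples one frame apart."""
    return (v_curr_kmh - v_prev_kmh) / 3.6 * fps
```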

4.2.4. Validation of Data

It is vital to validate the accuracy of the vehicle speed recovered by the program in order to assure the accuracy of aerial video data extraction. We randomly selected nine large and nine small cars from the data extracted from the exit section of the main line of the Yakang Expressway and recorded the time each took to pass a dashed marking line by playing the video frame by frame; as the real length of the dashed line on the road is known to be 6 m, the car’s true speed can be determined. The true speed was then compared with the speed extracted by machine vision to assess the detection accuracy.
$$v_r = \frac{6 \times 3.6 \times n_0}{n_{ic}}$$
where $v_r$ is the real speed of the vehicle (km/h), $n_0$ is the video frame rate ($n_0 = 30$ in this paper), and $n_{ic}$ is the number of frames in which the $i$th vehicle passed the dotted line. The results of the data test are shown in Table 5.
The data accuracy test table of real and detected speed values shows that the speed accuracy is high. The highest error value and the overall accuracy were assessed to ensure that the error did not have a substantial influence on the real findings. Overall accuracy is characterized by mean accuracy (T) and root mean square error (RMSE):
$$T = \frac{\bar{v}_r}{\bar{v}} \times 100\%$$
where $T$ is the accuracy of the average speed, $\bar{v}_r$ is the average detected vehicle speed, and $\bar{v}$ is the average real speed.
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(v_t - v_t'\right)^2}$$
where RMSE is the root mean square error of the vehicle speed, $n$ is the number of cars, and $v_t$ and $v_t'$ are the real and detected speeds of the $t$th vehicle, respectively.
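A small sketch of this validation is given below, with hypothetical speed values; it reproduces the frame-counting ground truth and the two accuracy measures defined above.

```python
import numpy as np

def real_speed_kmh(n_frames, fps=30, marking_length_m=6.0):
    # Ground-truth speed from counting the frames a vehicle needs to pass a 6 m marking
    return marking_length_m * fps / n_frames * 3.6

real = np.array([79.0, 86.0, 74.05, 64.8])       # hypothetical manually derived real speeds (km/h)
detected = np.array([77.0, 80.0, 79.0, 67.0])    # corresponding speeds extracted by the program

T = detected.mean() / real.mean() * 100           # mean accuracy (%)
rmse = np.sqrt(np.mean((real - detected) ** 2))   # root mean square error (km/h)
```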
As shown by the analysis results in Table 6, the maximum error values of large and small cars are not significantly different, and the maximum error rate of large cars is considerably higher. The maximum and minimum error values were deleted to reduce the overwhelming influence of data from specific circumstances on overall accuracy. The average accuracy of small cars is 98.5%, and the average accuracy of large cars is 95.7%. The RMSE of large cars is 2.94 km/h, and the RMSE of small cars is 3.74 km/h. The accuracy of the speed data extracted by deep learning is above 95% for large cars and 98% for small cars.

4.3. Vehicle Data Associated with Lane Lines

4.3.1. Lane Line Calibration

In this study, a method of pre-calibrating the lane line and then computing the distance is provided as a replacement for lane line identification, which can reliably obtain the relationship between vehicle speed and lane line distance. The benefits of this approach are that it is simple to use, has a consistent effect, and is not affected by the state of the lane lines. The procedure is as follows:
(1) Take an image from the aerial video and import it into CAD software, using the upper left corner as the coordinate origin. By scaling, make the coordinate values of the image length and width equal to the pixel dimensions; the coordinates of any point in the CAD drawing then correspond to the image pixel coordinates, which are the same as the vehicle X coordinates recovered in Section 4.2.3, with the Y coordinates being opposite numbers.
(2) Through the graphic design function of road design software (such as HintVR), fit a design line identical to the lane line in the picture and output the stake-by-stake coordinates of the design line at fixed intervals. With the video accuracy in this paper, coordinates can be output every 10 cm of real road.
(3) Take the negative of the Y-axis values in the stake-by-stake coordinate table to obtain the lane line’s pixel coordinate calibration file; its coordinates are then in the same coordinate system as the vehicle coordinates identified and extracted in Section 4.2 and can be used in calculations directly.
The lane line marking is shown in Figure 12.
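A minimal sketch of using the resulting calibration file is shown below; the file name and column layout are assumptions.

```python
import numpy as np

# Hypothetical stake-by-stake file exported from the road design software:
# one row per stake, columns = stake number, X, Y (in CAD units scaled to image pixels)
stakes = np.loadtxt("lane_line_stakes.csv", delimiter=",")
stake_no = stakes[:, 0]
lane_x = stakes[:, 1]
lane_y = -stakes[:, 2]   # negate Y so the stakes share the image pixel coordinate system
lane_line = np.column_stack([lane_x, lane_y])   # ready to compare against tracked vehicle coordinates
```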

4.3.2. Section Speed Extraction

The continuous acquisition coordinates and related speed values of vehicles obtained in Section 4.2 are shown in Table 7, along with the lane line calibration file produced in Section 4.3.1.
The Python programming language was used to extract cross-section speed.
There are three layers of loops in total. First, all vehicle IDs are traversed to extract the continuous pixel coordinates and speed values of each car in the video. Then, each stake of the calibrated lane line to be extracted (sampled every 1 m) is traversed. Finally, the distance between each stake and all collection-point coordinates of the vehicle selected in the outer loop is calculated.
$$l_k = \sqrt{(s_{kx} - car\_x_{ij})^2 + (s_{ky} - car\_y_{ij})^2}$$
where $l_k$ is the distance between stake $k$ and the $j$th collection point of the $i$th vehicle; $s_{kx}$ and $s_{ky}$ are the X- and Y-coordinates of the lane line at stake $k$, respectively; and $car\_x_{ij}$ and $car\_y_{ij}$ are the X- and Y-coordinates of the $j$th collection point of the $i$th vehicle, respectively.
The two collection points of the $i$th vehicle immediately before and after stake $k$ correspond to the two minimum values of $l_k$. Between these two collection points, the vehicle can be assumed to be driving at a constant speed. The speed $v_{ik}$ of the $i$th vehicle at stake $k$ is then calculated from the speeds at the two points and the distances between them:
$$v_{ik} = v_2 + \frac{l_1}{l}(v_1 - v_2)$$
where $v_{ik}$ is the speed of the $i$th vehicle at stake $k$, $v_2$ is the vehicle speed at the collection point before stake $k$, $v_1$ is the vehicle speed at the collection point after stake $k$, $l_1$ is the distance between stake $k$ and the collection point before stake $k$, and $l$ is the distance between the collection points before and after stake $k$.
The speed of each vehicle in each section can be determined by using this procedure to traverse each vehicle and section. Similarly, as indicated in Table 8, information such as vehicle speed and acceleration for all cars at each stake can be retrieved.
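A sketch of the three-loop procedure and the interpolation above is given below, under an assumed in-memory data layout (a dictionary of per-vehicle collection points and a list of stake coordinates); it illustrates the logic rather than the exact program used in this study.

```python
import math

def section_speeds(vehicles, lane_stakes):
    """vehicles: {vehicle_id: [(x, y, speed_kmh), ...]} ordered in time;
    lane_stakes: [(sx, sy), ...] pixel coordinates of the calibrated lane line stakes.
    Returns {(vehicle_id, stake_index): interpolated speed at that stake}."""
    result = {}
    for vid, points in vehicles.items():               # loop 1: every vehicle ID
        for k, (sx, sy) in enumerate(lane_stakes):     # loop 2: every stake
            # loop 3: distance from this stake to every collection point of the vehicle
            dists = [math.hypot(sx - x, sy - y) for x, y, _ in points]
            nearest = sorted(range(len(dists)), key=dists.__getitem__)[:2]
            i_before, i_after = sorted(nearest)        # collection points before and after the stake
            x1, y1, v_before = points[i_before]
            x2, y2, v_after = points[i_after]
            l = math.hypot(x2 - x1, y2 - y1)           # distance between the two collection points
            l1 = dists[i_before]                       # distance from the stake to the point before it
            # constant speed assumed between the two points (the interpolation formula above)
            result[(vid, k)] = v_before + (l1 / l) * (v_after - v_before) if l else v_before
    return result
```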

4.3.3. Extraction of Lane Line and Lateral Vehicle Distance

Through the method described in Section 4.3.2, the distance $l_k$ between stake $k$ and the $j$th collection point of the $i$th vehicle can be obtained. We take the two minimum values of $l_k$, denoted $l_{k1}$ and $l_{k2}$, which are the distances from stake $k$ to the two positions of the $i$th vehicle before and after the stake, as shown in Figure 13.
If the coordinates of the three locations in the triangle created by the two collection points and stake k are known, the lengths of the triangle’s three sides can be calculated. The height h of this triangle is the lateral distance of the car at stake k. Heron’s formula can be used to calculate the vertical distance h of each vehicle from the lane line at each stake:
$$h = \frac{2}{d}\sqrt{p(p-d)(p-l_{k1})(p-l_{k2})}$$
where $h$ is the distance between stake $k$ and the car; $d$ is the length between collection points 1 and 2 and is a known quantity, $d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$, where $x_1$, $x_2$, $y_1$, $y_2$ are the X- and Y-coordinates of collection points 1 and 2, respectively; and $p = \frac{l_{k1} + l_{k2} + d}{2}$ is the semi-perimeter of the triangle.
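A short sketch of this lateral-offset computation is given below (the argument layout is an assumption):

```python
import math

def lateral_distance(lk1, lk2, p1, p2):
    """Lateral offset h of the vehicle from the lane line at stake k, via Heron's formula.
    lk1, lk2: distances from the stake to the two nearest collection points;
    p1, p2: pixel coordinates of those two collection points."""
    d = math.hypot(p2[0] - p1[0], p2[1] - p1[1])   # side between the two collection points
    p = (lk1 + lk2 + d) / 2.0                      # semi-perimeter of the triangle
    area = math.sqrt(max(p * (p - d) * (p - lk1) * (p - lk2), 0.0))
    return 2.0 * area / d                          # triangle height over side d
```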
This method can be used at each lane line stake to determine the lateral distance between each section of the lane line and each vehicle, which is the vehicle’s offset. The lateral speed and lateral acceleration of the vehicle relative to the lane line can be calculated using the time difference. Table 9 shows the complete data composition.
In Table 9, Car 2 indicates the vehicle ID, and the longitudinal direction is the lane line stake, including the converted real distance stake and pixel coordinate stake. The lateral data include vehicle speed, acceleration, lateral distance, lateral velocity, and lateral acceleration information for each vehicle. The columns represent the vehicles’ continuous data, and the rows represent the speed and distance information for all cars at the stake. According to the vehicle number, collection coordinates, and collection time point, data such as the distance and the speed differential between adjacent vehicles can be collected in the follow-up.

4.3.4. Data Cleaning

After training on a large number of aerial videos, very high vehicle recognition accuracy can be achieved on the training data; however, further verification shows that, when the model is confronted with a new video, the following scenarios may arise:
  • Vehicle misidentification can occur, including the possibility of mistaking non-vehicle elements for vehicles, although this is uncommon.
  • A vehicle is temporarily blocked by a gantry or sign, resulting in intermittent recognition.
  • Data from vehicles outside the lane being studied will be collected.
  • When a vehicle first appears on the screen, there is a gradual recognition process. As the vehicle’s center is chosen as the detection point, the speed data recorded while the vehicle enters the screen may be erroneous.
  • A few vehicles may have number switching.
The following solutions and data cleaning procedures are used to address these identification issues and noisy data:
  • As the incorrectly identified items are all fixed objects on the screen, their detected speeds are near zero; all data with a detected speed of less than 5 km/h are exported as 0 km/h by the software and can then be deleted directly.
  • As long as the vehicle number does not change during the recognition interruption (caused by the obstruction of the gantry, sign, etc.) for around one second, there is no problem with the data.
  • The distance between every vehicle and the calibrated lane line can be calculated; records whose distance exceeds the lane width belong to vehicles in other lanes, and all data under those vehicle IDs can be deleted.
  • The gradual recognition of vehicles entering the screen is unavoidable. The collection range can be made slightly larger than the road section under study, and the data within the first 10 m of the entry process can then be deleted.
  • ID switching for a small number of vehicles can be corrected by unifying the IDs; when more vehicles are affected, the problem can be addressed by increasing the training data.
Following the data cleaning procedure outlined above, high-quality, high-precision continuous vehicle speed and trajectory data can be obtained.
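A sketch of these cleaning rules with pandas is shown below, assuming a hypothetical per-collection-point export with the column names shown; the 3.75 m lane width is also an assumption.

```python
import pandas as pd

df = pd.read_csv("vehicle_data.csv")   # hypothetical export: one row per vehicle per collection point

# Misidentified fixed objects: detections slower than 5 km/h were exported as 0 km/h, so drop them
df = df[df["speed_kmh"] > 0]

# Vehicles in other lanes: lateral distance larger than the lane width -> drop the whole vehicle ID
LANE_WIDTH_M = 3.75
other_lane_ids = df.loc[df["lateral_distance_m"] > LANE_WIDTH_M, "vehicle_id"].unique()
df = df[~df["vehicle_id"].isin(other_lane_ids)]

# Gradual recognition at screen entry: discard the first 10 m of each vehicle's displacement
df = df[df["displacement_m"] >= 10]
```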

4.3.5. Frenet Coordinate System

The Cartesian coordinate system is usually used to describe the position of objects; however, it is not the best choice for vehicles traveling on curved roadways. Currently, the vehicle coordinate information is in the form of pixel coordinates, and the origin of the Cartesian coordinate system is in the upper left corner of the image. When driving on a road with variable alignment, determining the position of the vehicle relative to the road can be challenging. The Frenet coordinate system describes the position of the car relative to the road, which can better describe the lane lines and the state of the vehicle on the road. In the Frenet coordinate system, s is the ordinate, representing the distance along the road, and d, the abscissa, is the distance from the longitudinal reference line, as shown in Figure 14. Traditionally, vehicle speed and trajectory data are stored in the Cartesian coordinate system, and converting them to the Frenet coordinate system is a time-consuming process. With the lane line calibration used here, the relationship between each vehicle and the lane line is already available, so the distribution of the vehicle data in the Frenet coordinate system can be established easily.
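A minimal sketch of such a conversion is shown below, projecting a vehicle point onto the calibrated lane reference line treated as a polyline of stake coordinates; it is one possible implementation under these assumptions, not the exact procedure of this paper.

```python
import numpy as np

def cartesian_to_frenet(x, y, reference_line):
    """reference_line: Nx2 array of pixel coordinates ordered along the road.
    Returns (s, d): arc length along the road and signed lateral offset."""
    pts = np.asarray(reference_line, dtype=float)
    seg_start, seg_end = pts[:-1], pts[1:]
    seg_vec = seg_end - seg_start
    seg_len = np.linalg.norm(seg_vec, axis=1)
    point = np.array([x, y], dtype=float)
    # projection parameter of the point onto each segment, clipped to the segment
    t = np.clip(((point - seg_start) * seg_vec).sum(axis=1) / seg_len**2, 0.0, 1.0)
    proj = seg_start + t[:, None] * seg_vec
    dist = np.linalg.norm(point - proj, axis=1)
    i = int(np.argmin(dist))                     # nearest segment of the reference line
    s = seg_len[:i].sum() + t[i] * seg_len[i]    # distance along the road
    cross = seg_vec[i, 0] * (y - proj[i, 1]) - seg_vec[i, 1] * (x - proj[i, 0])
    d = dist[i] if cross >= 0 else -dist[i]      # signed lateral offset
    return s, d
```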

5. Results

We plotted the data for the investigated road sections; each road section includes three parts: the aerial image, the trajectory data, and the vehicle speed data. From these data, the acceleration and deceleration behavior on each road section can be seen, as well as the changes in distance between the vehicle and the lane line at various points. As seen in Figure 15, there is a degree of consistency in driving behavior.
Using the Frenet coordinate system, the entire road segment can be divided into numerous cells of the same size along the s-axis and the d-axis. The average speed distribution of the entire road segment can then be derived by computing the average speed of all trajectories passing through each cell. The average vehicle speed graph clearly shows the impact of the interchange exit on the main line vehicle speed, as shown in Figure 16.
$$\bar{v}_{ij} = \frac{\sum_{k=1}^{n} v_{ijk}}{n}$$
where $\bar{v}_{ij}$ is the average vehicle speed at the position in row $i$ and column $j$, $v_{ijk}$ is the speed of the $k$th passing vehicle at the position of the $i$th row and $j$th column, $\sum_{k=1}^{n} v_{ijk}$ is the sum of the $n$ vehicle speeds, and $n$ is the total number of vehicles that passed the position of the $i$th row and $j$th column.
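A small sketch of this cell-averaging step on Frenet-coordinate trajectory samples is given below; the bin sizes and input layout are assumptions.

```python
import numpy as np

def average_speed_grid(s, d, v, s_bin=5.0, d_bin=0.5):
    """s, d, v: 1-D arrays with the Frenet coordinates and speed of every trajectory sample.
    Returns {(row, column): mean speed} for each occupied cell of the grid."""
    rows = (np.asarray(d) // d_bin).astype(int)   # row index i from the lateral offset
    cols = (np.asarray(s) // s_bin).astype(int)   # column index j from the distance along the road
    totals, counts = {}, {}
    for i, j, speed in zip(rows, cols, np.asarray(v)):
        totals[(i, j)] = totals.get((i, j), 0.0) + speed
        counts[(i, j)] = counts.get((i, j), 0) + 1
    return {cell: totals[cell] / counts[cell] for cell in totals}
```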

6. Discussion and Conclusions

In this study, we formulated a more reliable method for extracting freeway vehicle data. This method utilizes the SIFT, YOLOv5, and DeepSORT algorithms to counteract the vibration of UAVs and to realize whole-domain vehicle recognition and continuous tracking. An algorithm for lane line calibration and cross-section information acquisition was also proposed. Compared with current methods, our method effectively improves vehicle speed and trajectory data by eliminating the important but rarely mentioned video shaking problem, improving vehicle trajectory accuracy from roughly 3–4 m to within 20 cm. The poor stability and inaccuracy of current image-based lane line recognition, which cannot meet the needs of actual situations, are overcome by lane line calibration. This calibration also provides a stable association between vehicle data and lane lines and structures the vehicle speed and trajectory data, which is beneficial to the establishment of the Frenet coordinate system.
Although there has been progress in video-based target recognition and tracking with the advent of deep learning technology (and this work presents a more reliable data extraction approach), there are still some issues that need to be addressed in the future, such as the following:
(1)
Accurate recognition of small target vehicles is still a challenge when using a larger range of aerial video.
(2)
Target recognition training using deep learning is still a somewhat hard task, which we can look into simplifying.
(3)
The data gained through video-based target recognition are not entirely usable. Data filtering was carried out in this work, but a more effective way to evaluate and optimize speed and trajectory data remains a key research focus.

Author Contributions

C.Z., Z.T. and B.W. developed the method and designed the experiments; Z.T. performed the experiments; M.Z. and B.W. analyzed the results; L.H. contributed to the use of analysis tools; all authors contributed to writing and reviewing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chang’an University (Xi’an, China) through the National Key Research and Development Program of China (grant numbers 2020YFC1512005 and 2020YFC1512002) and the Sichuan Science and Technology Program (No. 2022YFG0048).

Institutional Review Board Statement

Not available.

Informed Consent Statement

Not available.

Data Availability Statement

Not available.

Acknowledgments

The authors would like to thank the Communication Surveying and Design Institute for providing us with the road design data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. Global Plan for the Decade of Action for Road Safety 2021–2030; WHO: Geneva, Switzerland, 2021. [Google Scholar]
  2. Qaid, H.; Widyanti, A.; Salma, S.A.; Trapsilawati, F.; Wijayanto, T.; Syafitri, U.D.; Chamidah, N. Speed choice and speeding behavior on Indonesian highways: Extending the theory of planned behavior. IATSS Res. 2021. [Google Scholar] [CrossRef]
  3. Vos, J.; Farah, H.; Hagenzieker, M. Speed behaviour upon approaching freeway curves. Accid. Anal. Prev. 2021, 159, 106276. [Google Scholar] [CrossRef] [PubMed]
  4. Yang, Q.; Overton, R.; Han, L.D.; Yan, X.; Richards, S.H. The influence of curbs on driver behaviors in four-lane rural highways—A driving simulator based study. Accid. Anal. Prev. 2013, 50, 1289–1297. [Google Scholar] [CrossRef] [PubMed]
  5. Pajares, G. Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–330. [Google Scholar] [CrossRef] [Green Version]
  6. Kim, E.-J.; Park, H.-C.; Ham, S.-W.; Kho, S.-Y.; Kim, D.-K. Extracting vehicle trajectories using unmanned aerial vehicles in congested traffic conditions. J. Adv. Transp. 2019, 2019, 9060797. [Google Scholar] [CrossRef]
  7. NGSIM: Next Generation Simulation; FHWA, U.S. Department of Transportation: Washington, DC, USA, 2007.
  8. Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highd dataset: A drone dataset of naturalistic vehicle trajectories on german highways for validation of highly automated driving systems. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2118–2125. [Google Scholar]
  9. Moranduzzo, T.; Melgani, F. Detecting cars in UAV images with a catalog-based approach. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6356–6367. [Google Scholar] [CrossRef]
  10. Rodríguez-Canosa, G.R.; Thomas, S.; Del Cerro, J.; Barrientos, A.; MacDonald, B. A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera. Remote Sens. 2012, 4, 1090–1111. [Google Scholar] [CrossRef] [Green Version]
  11. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443. [Google Scholar] [CrossRef] [Green Version]
  12. Xu, Y.; Yu, G.; Wu, X.; Wang, Y.; Ma, Y. An enhanced Viola-Jones vehicle detection method from unmanned aerial vehicles imagery. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1845–1856. [Google Scholar] [CrossRef]
  13. Ke, R.; Li, Z.; Kim, S.; Ash, J.; Cui, Z.; Wang, Y. Real-time bidirectional traffic flow parameter estimation from aerial videos. IEEE Trans. Intell. Transp. Syst. 2016, 18, 890–901. [Google Scholar] [CrossRef]
  14. Liu, K.; Mattyus, G. Fast multiclass vehicle detection on aerial images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1938–1942. [Google Scholar]
  15. Pomoni, M.; Plati, C.; Kane, M.; Loizos, A. Polishing behaviour of asphalt surface course containing recycled materials. Int. J. Transp. Sci. Technol. 2021. [Google Scholar] [CrossRef]
  16. Kogbara, R.B.; Masad, E.A.; Kassem, E.; Scarpas, A.T.; Anupam, K. A state-of-the-art review of parameters influencing measurement and modeling of skid resistance of asphalt pavements. Constr. Build. Mater. 2016, 114, 602–617. [Google Scholar] [CrossRef]
  17. He, Y.; Sun, X.; Zhang, J.; Hou, S. A comparative study on the correlation between highway speed and traffic safety in China and the United States. China J. Highw. Transp. 2010, 23, 73–78. [Google Scholar]
  18. Singh, H.; Kathuria, A. Analyzing driver behavior under naturalistic driving conditions: A review. Accid. Anal. Prev. 2021, 150, 105908. [Google Scholar] [CrossRef] [PubMed]
  19. Chang, K.-H. Motion Analysis. In e-Design; Academic Press: Cambridge, MA, USA, 2015; pp. 391–462. [Google Scholar] [CrossRef]
  20. Bruck, L.; Haycock, B.; Emadi, A. A review of driving simulation technology and applications. IEEE Open J. Veh. Technol. 2020, 2, 1–16. [Google Scholar] [CrossRef]
  21. Ali, Y.; Bliemer, M.C.; Zheng, Z.; Haque, M.M. Comparing the usefulness of real-time driving aids in a connected environment during mandatory and discretionary lane-changing manoeuvres. Transp. Res. Part C Emerg. Technol. 2020, 121, 102871. [Google Scholar] [CrossRef]
  22. Deng, W.; Ren, B.; Wang, W.; Ding, J. A Survey of Automatic Generation Methods for Simulation Scenarios for Autonomous Driving. China J. Highw. Transp. 2022, 35, 316–333. [Google Scholar]
  23. Sharma, A.; Zheng, Z.; Kim, J.; Bhaskar, A.; Haque, M.M. Is an informed driver a better decision maker? A grouped random parameters with heterogeneity-in-means approach to investigate the impact of the connected environment on driving behaviour in safety-critical situations. Anal. Methods Accid. Res. 2020, 27, 100127. [Google Scholar] [CrossRef]
  24. Stipancic, J.; Miranda-Moreno, L.; Saunier, N. Vehicle manoeuvers as surrogate safety measures: Extracting data from the gps-enabled smartphones of regular drivers. Accid. Anal. Prev. 2018, 115, 160–169. [Google Scholar] [CrossRef]
  25. Yang, G.; Xu, H.; Tian, Z.; Wang, Z. Vehicle speed and acceleration profile study for metered on-ramps in California. J. Transp. Eng. 2016, 142, 04015046. [Google Scholar] [CrossRef]
  26. Zhang, Z.; Hao, X.; Wu, W. Research on the running speed prediction model of interchange ramp. Procedia-Soc. Behav. Sci. 2014, 138, 340–349. [Google Scholar] [CrossRef] [Green Version]
  27. Nguyen, T.V.; Krajzewicz, D.; Fullerton, M.; Nicolay, E. DFROUTER—Estimation of vehicle routes from cross-section measurements. In Modeling Mobility with Open Data; Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–23. [Google Scholar]
  28. Zhang, C.; Yan, X.; Li, X.; Pan, B.; Wang, H.; Ma, X. Running speed model of passenger cars at the exit of a single lane of an interchange. China J. Highw. Transp. 2017, 6, 279–286. [Google Scholar]
  29. Kurtc, V. Studying car-following dynamics on the basis of the HighD dataset. Transp. Res. Rec. 2020, 2674, 813–822. [Google Scholar] [CrossRef]
  30. Lu, X.-Y.; Skabardonis, A. Freeway traffic shockwave analysis: Exploring the NGSIM trajectory data. In Proceedings of the 86th Annual Meeting of the Transportation Research Board, Washington, DC, USA, 21–25 January 2007. [Google Scholar]
  31. Li, Y.; Wu, D.; Lee, J.; Yang, M.; Shi, Y. Analysis of the transition condition of rear-end collisions using time-to-collision index and vehicle trajectory data. Accid. Anal. Prev. 2020, 144, 105676. [Google Scholar] [CrossRef]
  32. Li, L.; Jiang, R.; He, Z.; Chen, X.M.; Zhou, X. Trajectory data-based traffic flow studies: A revisit. Transp. Res. Part C Emerg. Technol. 2020, 114, 225–240. [Google Scholar] [CrossRef]
  33. Feng, Z.; Zhu, Y. A survey on trajectory data mining: Techniques and applications. IEEE Access 2016, 4, 2056–2067. [Google Scholar] [CrossRef]
  34. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  35. Chen, Q.; Gu, R.; Huang, H.; Lee, J.; Zhai, X.; Li, Y. Using vehicular trajectory data to explore risky factors and unobserved heterogeneity during lane-changing. Accid. Anal. Prev. 2021, 151, 105871. [Google Scholar] [CrossRef]
  36. Hu, Y.; Li, Y.; Huang, H.; Lee, J.; Yuan, C.; Zou, G. A high-resolution trajectory data driven method for real-time evaluation of traffic safety. Accid. Anal. Prev. 2022, 165, 106503. [Google Scholar] [CrossRef]
  37. Raju, N.; Kumar, P.; Arkatkar, S.; Joshi, G. Determining risk-based safety thresholds through naturalistic driving patterns using trajectory data on expressways. Saf. Sci. 2019, 119, 117–125. [Google Scholar] [CrossRef]
  38. Wang, L.; Abdel-Aty, M.; Ma, W.; Hu, J.; Zhong, H. Quasi-vehicle-trajectory-based real-time safety analysis for expressways. Transp. Res. Part C Emerg. Technol. 2019, 103, 30–38. [Google Scholar] [CrossRef]
  39. Ali, G.; McLaughlin, S.; Ahmadian, M. Quantifying the effect of roadway, driver, vehicle, and location characteristics on the frequency of longitudinal and lateral accelerations. Accid. Anal. Prev. 2021, 161, 106356. [Google Scholar] [CrossRef]
  40. Liu, T.; Li, Z.; Liu, P.; Xu, C.; Noyce, D.A. Using empirical traffic trajectory data for crash risk evaluation under three-phase traffic theory framework. Accid. Anal. Prev. 2021, 157, 106191. [Google Scholar] [CrossRef]
  41. Reinolsmann, N.; Alhajyaseen, W.; Brijs, T.; Pirdavani, A.; Hussain, Q.; Brijs, K. Investigating the impact of dynamic merge control strategies on driving behavior on rural and urban expressways—A driving simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 469–484. [Google Scholar] [CrossRef] [Green Version]
  42. Yu, R.; Han, L.; Zhang, H. Trajectory data based freeway high-risk events prediction and its influencing factors analyses. Accid. Anal. Prev. 2021, 154, 106085. [Google Scholar] [CrossRef]
  43. Hu, X.; Yuan, Y.; Zhu, X.; Yang, H.; Xie, K. Behavioral responses to pre-planned road capacity reduction based on smartphone GPS trajectory data: A functional data analysis approach. J. Intell. Transp. Syst. 2019, 23, 133–143. [Google Scholar] [CrossRef]
  44. Singla, N. Motion detection based on frame difference method. Int. J. Inf. Comput. Technol. 2014, 4, 1559–1565. [Google Scholar]
  45. Xue, L.-X.; Luo, Y.-L.; Wang, Z.-C. Detection algorithm of adaptive moving objects based on frame difference method. Appl. Res. Comput. 2011, 28, 1551–1552. [Google Scholar]
  46. Piccardi, M. Background subtraction techniques: A review. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), The Hague, The Netherlands, 10–13 October 2004; pp. 3099–3104. [Google Scholar]
  47. Barnich, O.; Van Droogenbroeck, M. ViBe: A universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 2010, 20, 1709–1724. [Google Scholar] [CrossRef] [Green Version]
  48. Zhou, Y.; Zhang, L.; Liu, T.; Gong, S. Structural System Identification Based on Computer Vision. China Civ. Eng. J. 2018, 51, 21–27. [Google Scholar]
  49. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef] [Green Version]
  50. Chen, X.; Li, Z.; Yang, Y.; Qi, L.; Ke, R. High-resolution vehicle trajectory extraction and denoising from aerial videos. IEEE Trans. Intell. Transp. Syst. 2020, 22, 3190–3202. [Google Scholar] [CrossRef]
  51. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
Figure 1. Research locations: (a) main section of Baomao expressway; (b) Tianquan interchange exit of Yakang expressway; (c) Duogong interchange ring ramp of Yakang expressway; (d) small net distance section of Huangguan tunnel of Beijing–Kunming expressway; (e) Luding interchange ring ramp of Yakang expressway.
Figure 2. DJI-Mavic 2 Zoom.
Figure 4. Camera wobbling issue: (a) vehicle trajectory at previous frame; (b) vehicle trajectory at a later frame.
Figure 5. Registration effects: (a) trajectory of previous frame after registration; (b) trajectory of later frame after registration.
Figure 6. Comparison of different frames (a) before and (b) after registration.
Figure 7. Distance parameter calibration.
Figure 8. Labeling vehicles for training.
Figure 9. Result of vehicle detection.
Figure 10. Continuous vehicle tracking process using DeepSORT algorithm.
Figure 11. Continuous vehicle trajectory tracking.
Figure 12. Lane marking.
Figure 13. Schematic diagram of lateral distance extraction.
Figure 14. Converting Cartesian coordinate system to the Frenet coordinate system.
Figure 15. Data of collected road sections: trajectory and speed data of (a) main line of Baomao Expressway; (b) Tianquan interchange; (c) Duogong interchange ramp; (d) Luding interchange ramp.
Figure 16. Average speed of vehicles at the exit of the Baomao Expressway.
Table 1. Summary of vehicle speed and trajectory data studies.
Literature | Data Source | Conclusions and Results
[3] | Various speed profiles on 153 Dutch expressway curves | Deflection angle and curve length are connected to curve speed. Regardless of horizontal radius or speed, vehicles come to a halt and decelerate roughly 135 m into the curve.
[35] | NGSIM dataset | Examines elements that influence likelihood of collision during lane changes from the perspective of vehicle groups, as well as unobserved heterogeneity of individual lane change movements.
[36] | HighD dataset | Active safety management strategy developed by combining traffic conditions and conflicts.
[32] | Review | Use of traffic flow research based on trajectory data from microscopic, mesoscopic, and macroscopic perspectives is reviewed, with paucity of data at this stage identified as a major issue.
[37] | Combines road collection with driving simulation data | Using trajectory data, quantifies back-end risk and proposes thresholds.
[38] | Less accurate quasi-vehicle trajectory data | Bayesian matching case control logistic regression model is established to explore impact of traffic parameters along quasi-trajectory of vehicles on real-time collision risk.
[39] | Second Strategic Highway Research Program Naturalistic Driving Study (SHRP2 NDS) | Understanding and quantifying effects of factors such as road speed, driver age and gender, vehicle class, and location on longitudinal and lateral acceleration cycle rates.
[40] | Empirical vehicle trajectory data collected from Interstate 80 in California, USA, and Yingtian Expressway in Nanjing, China | According to the three-phase theory, three regression models were created to quantify impact of traffic flow factors and traffic states on collision risk, allowing the evaluation of collision risk for distinct traffic phases and phase transitions.
[41] | Driving simulator | Investigates efficiency of static and dynamic merging control in management of urban and rural expressway traffic.
[42] | HighD | Collision prediction using a random parameter logistic regression model that takes into account data heterogeneity problem.
[43] | Smartphone GPS trajectory data | Investigates behavior of vehicles when faced with traffic congestion and road closures.
Table 2. Road profiles.
Province | Highway | Road Section | Features
Shaanxi | Baomao expressway | General main line segment | Free-flow straight sections, with stable and fast speed, and a small amount of lane-changing behavior.
Sichuan | Yakang expressway | Tianquan interchange expressway exit | A number of lane-changing behaviors on the main line, which has a big radius and a right-biased curve.
Sichuan | Yakang expressway | Duogong interchange ring ramp | Vehicle direction and speed fluctuate dramatically in the circular curve section with a small radius.
Shaanxi | Jingkun expressway | Huangguan tunnel small net distance exit | Main line is straight with a lane change and clear acceleration and deceleration characteristics.
Sichuan | Yakang expressway | Luding interchange ring ramp | Heavy traffic, slow speed, and many interfering vehicles.
Table 3. Flight altitude analysis.
Height (h) | Effect of Vehicle Recognition | Error Analysis
<200 m | 95% recognition, continuous tracking | Road section’s shooting range is narrow, vehicle’s process time entering and leaving the screen is long, and the process speed error is considerable, resulting in more erroneous data during the entry and exit process.
>250 m | 93% recognition, some vehicles have identification (ID) switching | When faced with a more complex road environment, the vehicle is too small at this height, and it is easy to cause ID switching when a white car and white marking line overlap; if the height is too high, wind speed is high and drone stability is poor, the real distance represented by one pixel is large, and the system error increases.
200–250 m | 98% recognition, continuous tracking | At this height, it can be adjusted according to the road environment and wind speed to achieve continuous and stable tracking of the vehicle, with tiny errors in vehicle speed and trajectory.
Table 4. Result of vehicle detection.
RoadPrecision (%)Recall (%)True NegativeID Switch (%)
Main section of Baomao expressway (road 1)95.0098.5335
Tianquan interchange exit of Yakang expressway (road 2)92.3597.60712
Duogong interchange ring ramp of Yakang expressway (road 3)95.0096.0045
Table 5. Data accuracy test.
Type | Small Car (9 vehicles) | Large Car (9 vehicles)
Real speed (km/h) | 79, 79, 64.8, 86, 86, 74.05, 86.4, 69, 64.8 | 47.12, 45, 45, 64.2, 39.8, 43.2, 39, 57.6, 61.5
Detected speed (km/h) | 77, 79, 67, 80, 82, 79, 91, 66, 68 | 45, 41, 40.5, 65, 36, 39, 39, 59, 60
Table 6. Error analysis.
Type | Max Error (km/h) | Maximum Error Rate (%) | Mean Error (km/h) | Mean Error Rate (%) | T (%) | RMSE (km/h)
Large car | 5 | 10 | 2.98 | 6.06 | 95.7 | 2.94
Small car | 6 | 6.98 | 3.89 | 5.08 | 98.5 | 3.74
Table 7. Data file.
Time (s) | Vehicle ID | Speed (km/h) | Displacement (m) | X | Y | Acceleration (m/s²)
91.09 | 10 | 45.00 | 2.72 | 1518.00 | 39.00 | 2.00
91.42 | 10 | 47.42 | 7.22 | 1511.00 | 75.00 | 2.01
91.76 | 10 | 48.29 | 11.55 | 1505.00 | 110.00 | 0.72
92.09 | 10 | 48.30 | 15.52 | 1499.00 | 142.00 | 0.01
92.43 | 10 | 47.10 | 19.50 | 1493.00 | 174.00 | −1.00
……
Note: X and Y coordinates shown in the table are pixel coordinates.
Table 8. Speed data file.
Real Stake | Pixel Stake | Car 2 Speed | Car 2 Acceleration | Car 3 Speed | Car 3 Acceleration | Car 9 Speed | Car 9 Acceleration
0 | 0 | 102.535 | 6.696 | 99.271 | −0.641 | 91.397 | 0.081
5 | 40 | 102.535 | 6.696 | 100.281 | 0.842 | 92.816 | 1.183
10 | 80 | 102.444 | −0.0759 | 100.281 | 0.842 | 92.816 | 1.183
15 | 120 | 102.444 | −0.0759 | 100.435 | 0.128 | 92.860 | 0.037
20 | 160 | 102.911 | 0.389 | 100.435 | 0.128 | 92.860 | 0.037
……
Table 9. Data file for vehicle detection.
Real Stake | Pixel Stake | Car 2 Speed | Car 2 Acceleration | Car 2 Lateral Distance | Car 2 Lateral Speed | Car 2 Lateral Acceleration | Car 3 Speed | ……
0 | 0 | 102.535 | 6.696 | 1.962 | −0.274 | −0.175 | ……
5 | 40 | 102.535 | 6.696 | 1.908 | −0.274 | −0.175 | ……
10 | 80 | 102.444 | −0.0759 | 1.854 | −0.332 | −0.175 | ……
15 | 120 | 102.444 | −0.0759 | 1.814 | −0.332 | −0.175 | ……
20 | 160 | 102.911 | 0.389 | 1.826 | −0.021 | 0.933 | ……
……
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
