
Search Results (96)

Search Parameters:
Keywords = lane recognition

15 pages, 1106 KiB  
Article
End-to-End Lane Detection: A Two-Branch Instance Segmentation Approach
by Ping Wang, Zhe Luo, Yunfei Zha, Yi Zhang and Youming Tang
Electronics 2025, 14(7), 1283; https://doi.org/10.3390/electronics14071283 - 25 Mar 2025
Viewed by 286
Abstract
To address the challenges of lane line recognition failure and insufficient segmentation accuracy in complex autonomous driving scenarios, this paper proposes a dual-branch instance segmentation method that integrates multi-scale modeling and dynamic feature enhancement. By constructing an encoder-decoder architecture and a cross-scale feature fusion network, the method enhances the representation of multi-scale information by integrating high-level feature maps (rich in semantic information) with low-level feature maps (which retain spatial localization details), thereby improving the prediction accuracy of lane line morphology and its variations. Additionally, hierarchical dilated convolutions (with dilation rates of 1/2/4/8) achieve exponential expansion of the receptive field, enabling better fusion of multi-scale features. Experimental results demonstrate that the proposed method achieves F1-scores of 76.0% and 96.9% on the CULane and TuSimple datasets, respectively, significantly enhancing the accuracy and reliability of lane detection. This work provides a high-precision, real-time solution for autonomous driving perception in complex environments.
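The receptive-field arithmetic behind the hierarchical dilation rates (1/2/4/8) mentioned in this abstract can be checked in a few lines; this is a back-of-the-envelope sketch assuming stacked 3x3 convolutions with stride 1, not the paper's implementation:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 convolutions.

    Each layer with dilation d widens the field by (kernel_size - 1) * d.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

print(receptive_field(3, [1, 2, 4, 8]))   # 31: near-exponential growth
print(receptive_field(3, [1, 1, 1, 1]))   # 9: the same stack without dilation
```

Four dilated layers cover a 31-pixel span where four plain 3x3 layers cover only 9, which is the "exponential expansion" the abstract refers to.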

30 pages, 16455 KiB  
Article
Automated Detection of Pedestrian and Bicycle Lanes from High-Resolution Aerial Images by Integrating Image Processing and Artificial Intelligence (AI) Techniques
by Richard Boadu Antwi, Prince Lartey Lawson, Michael Kimollo, Eren Erman Ozguven, Ren Moses, Maxim A. Dulebenets and Thobias Sando
ISPRS Int. J. Geo-Inf. 2025, 14(4), 135; https://doi.org/10.3390/ijgi14040135 - 23 Mar 2025
Viewed by 486
Abstract
The rapid advancement of computer vision technology is transforming how transportation agencies collect roadway characteristics inventory (RCI) data, yielding substantial savings in resources and time. Traditionally, capturing roadway data through image processing was seen as both difficult and error-prone. However, given recent improvements in computational power and image recognition techniques, there are now reliable methods to identify and map various roadway elements from multiple imagery sources. Notably, comprehensive geospatial data for pedestrian and bicycle lanes are still lacking across many state and local roadways, including those in the State of Florida, despite the essential role this information plays in optimizing traffic efficiency and reducing crashes. Developing fast, efficient methods to gather these data is essential for transportation agencies, as such methods also support objectives like identifying outdated or obscured markings, analyzing pedestrian and bicycle lane placements relative to crosswalks, turning lanes, and school zones, and assessing crash patterns in the associated areas. This study introduces an approach using deep neural network models in image processing and computer vision to detect and extract pedestrian and bicycle lane features from very high-resolution aerial imagery, with a focus on public roadways in Florida. Using YOLOv5 and MTRE-based deep learning models, the study extracts and segments bicycle and pedestrian features from high-resolution aerial images, creating a geospatial inventory of these roadway features. Detected features were post-processed and compared with ground truth data from Leon County, Florida, where the models achieved accuracy rates of 73% for pedestrian lanes and 89% for bicycle lanes. This initiative is vital for transportation agencies, enhancing infrastructure management by enabling timely identification of aging or obscured lane markings, which are crucial for maintaining safe transportation networks.
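The comparison of detected features against ground truth described above is typically an IoU-based matching; the following is a generic, hypothetical sketch (the box format and 0.5 threshold are assumptions, not details from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_accuracy(detections, ground_truth, threshold=0.5):
    """Share of ground-truth features matched by some detection at IoU >= threshold."""
    matched = sum(
        any(iou(gt, det) >= threshold for det in detections) for gt in ground_truth
    )
    return matched / len(ground_truth)

# Hypothetical boxes in pixel coordinates: one of two ground-truth lanes matched.
print(detection_accuracy([(0, 0, 2, 2)], [(0, 0, 2, 2), (5, 5, 6, 6)]))  # 0.5
```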
(This article belongs to the Special Issue Spatial Information for Improved Living Spaces)

20 pages, 343 KiB  
Article
Mathematical Modeling and Parameter Estimation of Lane-Changing Vehicle Behavior Decisions
by Jianghui Wen, Yebei Xu, Min Dai and Nengchao Lyu
Mathematics 2025, 13(6), 1014; https://doi.org/10.3390/math13061014 - 20 Mar 2025
Viewed by 222
Abstract
Lane changing is a crucial scenario in traffic environments, and accurately recognizing and predicting lane-changing behavior is essential for ensuring the safety of both autonomous vehicles and drivers. By considering the multi-vehicle information interaction characteristics of lane-changing behavior and the impact of driver experience needs on lane-changing decisions, this paper proposes a lane-changing model that achieves safe and comfortable driving. Firstly, a lane-changing intention recognition model incorporating interaction effects was established to obtain the initial lane-changing intention probability of the vehicles. Secondly, by accounting for individual driving styles, a lane-changing behavior decision model was constructed based on a Gaussian mixture hidden Markov model (GMM-HMM), along with a parameter estimation method. The initial lane-changing intention probability serves as the input to the decision model, and the final lane-changing decision is made by comparing the probabilities of the lane-changing and non-lane-changing scenarios. Finally, the model was validated using real-world data from the Next Generation Simulation (NGSIM) dataset, with empirical results demonstrating its high accuracy in recognizing and predicting lane-changing behavior. This study provides a robust framework for enhancing lane-changing decision making in complex traffic environments.
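The final decision step described above — comparing the probabilities of the lane-changing and non-lane-changing hypotheses — can be illustrated with a toy HMM likelihood comparison. All parameters below are invented for illustration; the paper's fitted GMM-HMM is not reproduced here:

```python
import math

def gauss(x, mean, var):
    """Gaussian probability density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def log_likelihood(obs, pi, trans, means, varis):
    """Forward algorithm for an HMM with 1-D Gaussian emissions."""
    n = len(pi)
    alpha = [pi[i] * gauss(obs[0], means[i], varis[i]) for i in range(n)]
    for x in obs[1:]:
        alpha = [gauss(x, means[j], varis[j])
                 * sum(alpha[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    return math.log(sum(alpha))

def decide(obs, model_change, model_keep):
    """Final decision: whichever hypothesis assigns the data more likelihood."""
    if log_likelihood(obs, *model_change) > log_likelihood(obs, *model_keep):
        return "change"
    return "keep"

# Invented 2-state models over lateral velocity (m/s) — not fitted parameters.
model_change = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [0.3, 0.6], [0.05, 0.05])
model_keep = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [0.0, 0.1], [0.05, 0.05])
print(decide([0.4, 0.5, 0.6], model_change, model_keep))
```

A sustained lateral drift favors the "change" model; near-zero drift favors "keep".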

11 pages, 798 KiB  
Article
Understanding Bicycle Riding Behavior and Attention on University Campuses: A Hierarchical Modeling Approach
by Wenyun Tang, Yang Tao, Jiayu Gu, Jiahui Chen and Chaoying Yin
Behav. Sci. 2025, 15(3), 327; https://doi.org/10.3390/bs15030327 - 7 Mar 2025
Viewed by 642
Abstract
The traffic behavior characteristics within university campuses have received limited scholarly attention, despite their distinct differences from external road networks. These differences include the predominance of non-motorized vehicles and pedestrians in the traffic flow, as well as traffic peaks primarily coinciding with class transition periods. To investigate the riding behavior of cyclists on university campuses, this study examines cyclist attention, proposes a novel method for constructing a rider attention recognition framework, utilizes a hierarchical ordered logistic model to analyze the factors influencing attention, and evaluates the model's performance. The findings reveal that traffic density and riding style significantly influence cyclists' eye-tracking characteristics, which serve as indicators of their attention levels. The covariates of lane gaze time and the coefficient of variation in pupil diameter exhibited significant effects, indicating that a hierarchical ordered logistic model incorporating these covariates can more effectively capture the impact of influencing factors on cyclist attention. Moreover, the hierarchical ordered logistic model achieved a 7.22% improvement in predictive performance compared to the standard ordered logistic model. Additionally, cyclists exhibiting a "conservative" riding style were found to be more attentive than those adopting an "aggressive" riding style. Similarly, cyclists navigating "sparse" traffic conditions were more likely to maintain attention than those in "dense" traffic scenarios. These findings provide valuable insights into the riding behavior of university campus cyclists and have significant implications for improving traffic safety within such environments.
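The ordered logistic model above predicts ordered attention levels; the core cumulative-probability mechanics of a (non-hierarchical) ordered logit can be sketched as follows, with hypothetical cutpoints rather than the paper's estimates:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(score, cutpoints):
    """Class probabilities under an ordered logit model.

    P(y <= k) = sigmoid(cutpoint_k - score); class probabilities are the
    successive differences of these cumulative probabilities.
    """
    cumulative = [sigmoid(c - score) for c in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cumulative:
        probs.append(c - prev)
        prev = c
    return probs

# Hypothetical three attention levels (low/medium/high) with cutpoints -1 and 1;
# a larger linear-predictor score shifts probability mass toward "high".
print(ordered_logit_probs(0.0, [-1.0, 1.0]))
```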

15 pages, 2430 KiB  
Article
Research on Vehicle Lane Change Intent Recognition Based on Transformers and Bidirectional Gated Recurrent Units
by Dan Zhou, Yujie Chen, Kexing Fan, Qi Bai, Yong Luo and Guodong Xie
World Electr. Veh. J. 2025, 16(3), 155; https://doi.org/10.3390/wevj16030155 - 6 Mar 2025
Viewed by 502
Abstract
To quickly and accurately identify the lane-changing intention of vehicles, while accounting for the time-series characteristics of the vehicle driving process and the interactions between vehicles, a lane-changing intention recognition model, Model_TA, was constructed by combining the time-series feature extraction ability of the Transformer encoder, the bidirectional gating mechanism of the bidirectional gated recurrent unit, and an additive attention mechanism. Model_TA was trained and validated on the I-80 dataset in NGSIM. The experimental results showed that intent recognition accuracy was 97.01%, exceeding the SVM, LSTM, and Transformer models by 20.3%, 4.73%, and 1.73%, respectively, and that prediction accuracies at 2.0 s, 2.5 s, and 3.0 s were 90.15%, 84.58%, and 83.13%, respectively, outperforming comparable models. These results demonstrate that the model can better predict the lane-changing intention of vehicles.
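The additive attention mechanism named in this abstract can be sketched in plain Python; the projection matrix and score vector below are arbitrary illustrative values, not Model_TA parameters:

```python
import math

def additive_attention(hidden_states, v, W):
    """Additive (Bahdanau-style) attention over a sequence of hidden vectors.

    score_t = v . tanh(W @ h_t); weights = softmax(scores);
    context = sum_t weights[t] * h_t.
    """
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    scores = [sum(vi * math.tanh(ui) for vi, ui in zip(v, matvec(W, h)))
              for h in hidden_states]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]   # numerically stable softmax
    weights = [e / sum(exps) for e in exps]
    context = [sum(w * h[i] for w, h in zip(weights, hidden_states))
               for i in range(len(hidden_states[0]))]
    return weights, context

# Illustrative 2-D hidden states; W and v are arbitrary, untrained values.
W = [[1.0, 0.0], [0.0, 1.0]]
v = [1.0, 0.0]
weights, context = additive_attention([[2.0, 0.0], [0.0, 0.0]], v, W)
print(weights)
```

The timestep whose projected features align with v receives the larger weight, which is how the model emphasizes the most informative frames of the trajectory.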

21 pages, 1226 KiB  
Article
RSS Tracking Control for AVs Under Bayesian-Network-Based Intelligent Learning Scheme
by Kun Zhang, Kezhen Han and Nanbin Zhao
Actuators 2025, 14(1), 37; https://doi.org/10.3390/act14010037 - 17 Jan 2025
Viewed by 721
Abstract
In complex real-world traffic environments, the task of automatic lane changing becomes extremely challenging for vehicle control systems. Traditional control methods often lack the flexibility and intelligence to accurately capture and respond to dynamic changes in traffic flow. Therefore, developing intelligent control strategies that can accurately predict the behavior of surrounding vehicles and make corresponding adjustments is crucial. This paper presents an intelligent driving control scheme for autonomous vehicles (AVs) based on a responsibility-sensitive safety (RSS) tracking control mechanism within a Bayesian network intelligent learning framework. Initially, a Bayesian evidence construction method for vehicle lane-changing scenarios is studied. Using this method, prior probability tables for lane-changing vehicles are constructed, and the Bayesian formula is applied to predict the lane-changing probabilities of surrounding vehicles. Subsequently, an optimal control method is employed to integrate the Bayesian lane-changing probabilities into the design of performance indices and auxiliary systems, transforming tracking and safety-avoidance tasks into an optimization control problem. Additionally, a critic learning optimal control algorithm is developed to determine the control law. Finally, the proposed tracking control scheme is validated through simulations, demonstrating its reliability and effectiveness.
(This article belongs to the Special Issue Advances in Intelligent Control of Actuator Systems)
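The Bayesian prediction step described above — combining a prior table with observed evidence — reduces to Bayes' rule; the cue names and probabilities below are hypothetical illustrations, not values from the paper:

```python
def lane_change_posterior(prior, likelihoods, evidence):
    """P(change | evidence) via Bayes' rule with conditionally independent cues.

    prior: P(change); likelihoods: {cue: (P(cue | change), P(cue | no change))};
    evidence: names of the cues actually observed.
    """
    p_change, p_keep = prior, 1.0 - prior
    for cue in evidence:
        p_given_change, p_given_keep = likelihoods[cue]
        p_change *= p_given_change
        p_keep *= p_given_keep
    return p_change / (p_change + p_keep)

# Hypothetical prior-table entries — not values from the paper.
likelihoods = {
    "turn_signal": (0.8, 0.1),
    "lateral_drift": (0.7, 0.2),
}
print(lane_change_posterior(0.2, likelihoods, ["turn_signal", "lateral_drift"]))
```

With both cues observed, the posterior rises from the 0.2 prior to 0.875, which is the kind of probability the control scheme would fold into its performance index.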

24 pages, 3494 KiB  
Article
Grid Anchor Lane Detection Based on Attribute Correlation
by Qiaohui Feng, Cheng Chi, Fei Chen, Jianhao Shen, Gang Xu and Huajie Wen
Appl. Sci. 2025, 15(2), 699; https://doi.org/10.3390/app15020699 - 12 Jan 2025
Viewed by 669
Abstract
The detection of road features is a necessary step toward autonomous driving, and lane lines are important two-dimensional road features that are crucial to achieving it. Current research on lane detection mainly focuses on localizing local features without considering the association of long-distance lane line features. A grid anchor lane detection model based on attribute correlation is proposed to address this issue. Firstly, a grid anchor lane line representation containing attribute information is proposed, establishing the association between adjacent features at the data layer. Secondly, a convolutional reordering upsampling method is proposed, and the model integrates the global feature information generated by a multi-layer perceptron (MLP), fusing long-distance lane line features; the upsampling and MLP enhance the feature pyramid network's dual perception of detail and global features. Finally, an attribute correlation loss function is designed to construct feature associations between different grid anchors, enhancing the interdependence of anchor recognition results. Experimental results show that the proposed model achieved first-place F1 scores of 93.05 and 73.27 in the normal and curved scenes of the CULane dataset, respectively. The model balances the robustness of lane detection in both normal and curved scenarios.

25 pages, 3292 KiB  
Article
Lane Detection Based on CycleGAN and Feature Fusion in Challenging Scenes
by Eric Hsueh-Chan Lu and Wei-Chih Chiu
Vehicles 2025, 7(1), 2; https://doi.org/10.3390/vehicles7010002 - 1 Jan 2025
Cited by 2 | Viewed by 1074
Abstract
Lane detection is a pivotal technology of intelligent driving systems. By identifying the position and shape of the lane, the vehicle can stay in the correct lane and avoid accidents. Image-based deep learning is currently the most advanced method for lane detection, and models using it already recognize lanes well in ordinary daytime scenes, almost achieving real-time detection. However, these models often fail to accurately identify lanes in challenging scenarios such as night, dazzle, or shadows. Furthermore, the lack of diversity in the training data restricts the capacity of the models to handle different environments. This paper proposes a novel method to train CycleGAN with existing daytime and nighttime datasets. This method can extract features of different styles and multiple scales, thereby enriching the model input. CycleGAN is used as a domain adaptation model, combined with an image segmentation model, to boost performance across different styles of scenes, and the proposed consistency loss function is employed to mitigate performance disparities between scenarios. Experimental results indicate that the method enhances the detection performance of existing lane detection models in challenging scenarios. This research helps improve the dependability and robustness of intelligent driving systems, ultimately making roads safer and enhancing the driving experience.

22 pages, 1781 KiB  
Article
Micro-Mobility Safety Assessment: Analyzing Factors Influencing the Micro-Mobility Injuries in Michigan by Mining Crash Reports
by Baraah Qawasmeh, Jun-Seok Oh and Valerian Kwigizile
Future Transp. 2024, 4(4), 1580-1601; https://doi.org/10.3390/futuretransp4040076 - 10 Dec 2024
Cited by 4 | Viewed by 1307
Abstract
The emergence of micro-mobility transportation in urban areas has led to a transformative shift in mobility options, yet it has also brought about heightened traffic conflicts and crashes. This research addresses these challenges by pioneering the integration of image-processing techniques with machine learning methodologies to analyze crash diagrams. The study aims to extract latent features from crash data, specifically focusing on understanding the factors influencing injury severity among vehicle and micro-mobility crashes in Michigan's urban areas. The micro-mobility devices analyzed in this study are bicycles, e-wheelchairs, skateboards, and e-scooters. The AlexNet Convolutional Neural Network (CNN) was utilized to identify attributes from crash diagrams, enabling the recognition and classification of micro-mobility device collision locations into three categories: roadside, shoulder, and bicycle lane. The study used the 2023 Michigan UD-10 crash reports, comprising 1174 diverse micro-mobility crash diagrams. Subsequently, the Random Forest classification algorithm was utilized to pinpoint the primary factors, and their interactions, that affect the severity of micro-mobility injuries. The results suggest that roads with speed limits exceeding 40 mph are the most significant factor in determining the severity of micro-mobility injuries. In addition, micro-mobility rider violations and motorists' left-turning maneuvers are associated with more severe crash outcomes. Moreover, the findings emphasize the overall effect of variables such as improper lane use, violations, and hazardous actions by micro-mobility users; these factors are more prevalent among younger micro-mobility users and are associated with distracted motorists, elderly motorists, or riding at night.
(This article belongs to the Special Issue Emerging Issues in Transport and Mobility)

16 pages, 9530 KiB  
Article
Development of Robust Lane-Keeping Algorithm Using Snow Tire Track Recognition in Snowfall Situations
by Donghyun Kim and Yonghwan Jeong
Sensors 2024, 24(23), 7802; https://doi.org/10.3390/s24237802 - 5 Dec 2024
Viewed by 772
Abstract
This study proposes a robust lane-keeping algorithm designed for snowy road conditions, utilizing a machine learning-based snow tire track detection model. The algorithm is structured into two primary modules: a snow tire track detector and a lane center estimator. The snow tire track detector utilizes YOLOv5, trained on custom datasets generated from public videos captured on snowy roads. Video frames are annotated with the Computer Vision Annotation Tool (CVAT) to identify pixels containing snow tire tracks. To mitigate overfitting, the detector is trained on a combined dataset that incorporates both snow tire track images and road scenes from the Udacity dataset. The lane center estimator uses the detected tire tracks to estimate a reference line for lane keeping. Detected tracks are binarized and transformed into a bird's-eye view image; skeletonization and Hough transformation are then applied to extract tire track lines from the classified pixels. Finally, a Kalman filter estimates the lane center from the tire track lines. Evaluations conducted on unseen images demonstrate that the proposed algorithm provides a reliable lane reference, even under heavy snowfall conditions.
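The final stage of the pipeline above — Kalman filtering of the lane center extracted from tire track lines — can be illustrated with a scalar filter; the noise parameters and measurements below are invented for illustration, not taken from the paper:

```python
def kalman_lane_center(measurements, q=0.01, r=0.25):
    """Scalar Kalman filter smoothing noisy lane-center measurements.

    q: process noise (how fast the true lane center may drift);
    r: measurement noise (jitter of the extracted track-line midpoint).
    """
    x, p = measurements[0], 1.0          # initialize from the first measurement
    estimates = [x]
    for z in measurements[1:]:
        p += q                           # predict: center assumed locally constant
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # correct with the new midpoint
        p *= 1.0 - k
        estimates.append(x)
    return estimates

noisy_midpoints = [1.8, 2.2, 1.9, 2.1, 2.0, 2.3, 1.7]   # metres, invented
print(kalman_lane_center(noisy_midpoints))
```

The filtered sequence stays inside the range of the raw midpoints but varies far less, which is what makes the estimated reference line usable for steering.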

20 pages, 5568 KiB  
Article
A Method of Intelligent Driving-Style Recognition Using Natural Driving Data
by Siyang Zhang, Zherui Zhang and Chi Zhao
Appl. Sci. 2024, 14(22), 10601; https://doi.org/10.3390/app142210601 - 17 Nov 2024
Viewed by 1314
Abstract
At present, the pursuit of efficient, sustainable, and safe transportation has brought increasing attention to driving behavior recognition and advancements in autonomous driving. Identifying diverse driving styles and their corresponding types is crucial for providing targeted training and assistance to drivers, enhancing safety awareness, optimizing driving costs, and improving the responses of autonomous driving systems. However, current studies mainly focus on specific driving scenarios, such as free driving, car-following, and lane-changing, and lack a comprehensive, systematic framework for identifying diverse driving styles. This study proposes a novel, data-driven approach to driving-style recognition utilizing the NGSIM naturalistic driving dataset. Specifically, the NGSIM data are used to categorize car-following and lane-changing groups according to driving-state extraction conditions. Characteristic parameters that fully represent driving styles are then optimized through correlation analysis and principal component analysis for dimensionality reduction. The K-means clustering algorithm is applied to categorize the car-following and lane-changing groups into three driving styles: conservative, moderate, and radical. Based on the clustering results, a comprehensive evaluation of the driving styles is conducted. Finally, a comparative evaluation of SVM, Random Forest, and KNN recognition indicates the superiority of the SVM algorithm and highlights the effectiveness of dimensionality reduction in optimizing characteristic parameters. The proposed method achieves over 97% accuracy in identifying car-following and lane-changing behaviors, confirming that the approach based on naturalistic driving data can effectively and intelligently recognize driving styles.
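The K-means step above, which groups drivers into conservative, moderate, and radical styles, can be sketched in one dimension; the feature values below are made up for illustration and the paper clusters reduced multi-dimensional features, not a single raw one:

```python
def kmeans_1d(values, k=3, iters=50):
    """Plain 1-D k-means: cluster a single driving feature into k style groups."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]   # spread-out init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Invented mean-acceleration values (m/s^2) for nine drivers.
styles = kmeans_1d([0.1, 0.2, 0.15, 0.5, 0.55, 0.6, 1.0, 1.1, 0.95])
print(styles)   # one centre per style: conservative, moderate, radical
```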

18 pages, 8219 KiB  
Article
Evolution of the “4-D Approach” to Dynamic Vision for Vehicles
by Ernst Dieter Dickmanns
Electronics 2024, 13(20), 4133; https://doi.org/10.3390/electronics13204133 - 21 Oct 2024
Viewed by 1032
Abstract
Spatiotemporal models for the 3-D shape and motion of objects enabled large progress in the 1980s in the visual perception of moving objects observed from a moving platform. Despite the successes demonstrated with several vehicles, the "4-D approach" has not been accepted generally. Its advantage is that only the last image of the sequence needs to be analyzed in detail to allow the full state vectors of moving objects, including their velocity components, to be reconstructed by the feedback of prediction errors. The vehicle carrying the cameras can thus, together with conventional measurements, directly create a visualization of the situation encountered. In 1994, at the final demonstration of the project PROMETHEUS, two sedan vehicles using this approach were the only ones worldwide capable of driving autonomously in standard heavy traffic on three-lane Autoroutes near Paris at speeds up to 130 km/h (convoy driving, lane changes, passing). Up to ten vehicles nearby could be perceived. In this paper, the three-layer architecture of the perception system is reviewed. At the end of the 1990s, the system evolved from mere recognition of objects in motion to understanding complex dynamic scenes by developing behavioral capabilities, like fast saccadic changes in gaze direction for flexible concentration on objects of interest. By analyzing the motion of objects over time, the situation for decision making was assessed. In the third-generation system "EMS-vision", the behavioral capabilities of agents were represented at an abstract level to characterize their potential behaviors; these maneuvers form an additional knowledge base. The system has proven capable of driving in networks of minor roads, including off-road sections, with avoidance of negative obstacles (ditches). Results are shown for road vehicle guidance. Potential transitions to a robot mind and to the now-favored CNNs are touched on.
(This article belongs to the Special Issue Advancement on Smart Vehicles and Smart Travel)

19 pages, 20082 KiB  
Article
An Ontology-Based Vehicle Behavior Prediction Method Incorporating Vehicle Light Signal Detection
by Xiaolong Xu, Xiaolin Shi, Yun Chen and Xu Wu
Sensors 2024, 24(19), 6459; https://doi.org/10.3390/s24196459 - 6 Oct 2024
Viewed by 1281
Abstract
Although deep learning techniques have potential in vehicle behavior prediction, they struggle to integrate traffic rules and environmental information, and their black-box nature makes the prediction process opaque and difficult to interpret, limiting acceptance in practical applications. In contrast, ontology reasoning, which can utilize human domain knowledge and mimic human reasoning, can provide reliable explanations for inferred results. To address these limitations, this paper proposes a front vehicle behavior prediction method that combines deep learning techniques with ontology reasoning. Specifically, YOLOv5s is first selected as the base model for recognizing the brake light status of vehicles. To further enhance the model's performance on complex scenes and small targets, the Convolutional Block Attention Module (CBAM) is introduced. In addition, to balance feature information across scales more efficiently, a weighted bi-directional feature pyramid network (BiFPN) replaces the original PANet structure in YOLOv5s. Next, using a four-lane intersection as an application scenario, multiple factors affecting vehicle behavior are analyzed, and an ontology model for predicting front vehicle behavior is constructed from these factors. Finally, to validate the effectiveness of the proposed method, we built our own brake light detection dataset. The accuracy and mAP@0.5 of the improved model on this dataset are 3.9% and 2.5% higher, respectively, than those of the original model. Representative validation scenarios were then selected for inference experiments, in which the ontology model accurately inferred that the target vehicle would slow to a stop and turn left, verifying the reasonableness and practicality of the proposed method.
(This article belongs to the Section Vehicular Sensing)
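Ontology reasoning as described above can be approximated, at a much smaller scale, by forward chaining over if-then rules; the rules and fact names below are hypothetical stand-ins, not the paper's ontology:

```python
def infer(facts, rules):
    """Tiny forward-chaining reasoner standing in for ontology inference.

    rules: list of (premises, conclusion) pairs; fire rules until a fixpoint.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules for a left-turn-lane scenario — not the paper's ontology.
rules = [
    ({"brake_light_on"}, "decelerating"),
    ({"decelerating", "in_left_turn_lane"}, "will_stop_then_turn_left"),
]
print(infer({"brake_light_on", "in_left_turn_lane"}, rules))
```

Unlike a neural predictor, each conclusion here traces back to explicit premises, which is the interpretability advantage the abstract emphasizes.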

20 pages, 6767 KiB  
Article
Highly Accurate Deep Learning Models for Estimating Traffic Characteristics from Video Data
by Bowen Cai, Yuxiang Feng, Xuesong Wang and Mohammed Quddus
Appl. Sci. 2024, 14(19), 8664; https://doi.org/10.3390/app14198664 - 26 Sep 2024
Viewed by 1592
Abstract
Traditionally, traffic characteristics such as speed, volume, and travel time are obtained from a range of sensors and systems such as inductive loop detectors (ILDs), automatic number plate recognition (ANPR) cameras, and GPS-equipped floating cars. However, many issues associated with these data have been identified in the existing literature. Although roadside surveillance cameras cover most road segments, especially on freeways, existing techniques to extract traffic data (e.g., speed measurements of individual vehicles) from video are not accurate enough to be employed in a proactive traffic management system. Therefore, this paper aims to develop a technique for estimating traffic data from video captured by surveillance cameras. It develops a deep learning-based video processing algorithm for detecting, tracking, and predicting highly disaggregated vehicle-based data, such as trajectories and speed, and transforms such data into aggregated traffic characteristics such as speed variance, average speed, and flow. Taking traffic observations from a high-quality LiDAR sensor as 'ground truth', the results indicate that the developed technique estimates lane-based traffic volume with an accuracy of 97%. With the application of the deep learning model, the computer vision technique can estimate individual vehicle speeds with an accuracy of 90-95% for different angles when the objects are within 50 m of the camera. The developed algorithm was then utilised to obtain dynamic traffic characteristics from a freeway in southern China and employed in a statistical model to predict monthly crashes.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Transportation Engineering)
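The aggregation step described above — turning per-vehicle trajectories into average speed, speed variance, and flow — is straightforward; this sketch assumes positions sampled at a fixed interval and a single aggregation period, and is not the paper's algorithm:

```python
def traffic_stats(trajectories, dt=0.1, period_s=60.0):
    """Aggregate vehicle trajectories into average speed, speed variance, and flow.

    trajectories: per-vehicle position lists (metres), sampled every dt seconds,
    all observed within one aggregation period of period_s seconds.
    """
    speeds = []
    for positions in trajectories:
        v = [(b - a) / dt for a, b in zip(positions, positions[1:])]
        speeds.append(sum(v) / len(v))            # mean speed of this vehicle
    mean = sum(speeds) / len(speeds)              # average speed (m/s)
    variance = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    flow = len(trajectories) * 3600.0 / period_s  # vehicles per hour
    return mean, variance, flow

# Two invented constant-speed vehicles (10 m/s and 12 m/s) over 0.2 s.
mean, variance, flow = traffic_stats([[0.0, 1.0, 2.0], [0.0, 1.2, 2.4]])
print(mean, variance, flow)
```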

24 pages, 5578 KiB  
Article
Study on Nighttime Pedestrian Trajectory-Tracking from the Perspective of Driving Blind Spots
by Wei Zhao, Congcong Ren and Ao Tan
Electronics 2024, 13(17), 3460; https://doi.org/10.3390/electronics13173460 - 31 Aug 2024
Cited by 1 | Viewed by 1220
Abstract
With the acceleration of urbanization and the growing demand for traffic safety, developing intelligent systems capable of accurately recognizing and tracking pedestrian trajectories at night or under low-light conditions has become a research focus in the field of transportation. This study aims to improve the accuracy and real-time performance of nighttime pedestrian detection and tracking. A method that integrates the multi-object detection algorithm YOLOP with the multi-object tracking algorithm DeepSORT is proposed. The improved YOLOP algorithm incorporates the C2f-faster structure in the Backbone and Neck sections, enhancing feature extraction capabilities. Additionally, a BiFormer attention mechanism is introduced to focus on the recognition of small-area features, the CARAFE module is added to improve shallow feature fusion, and the DyHead dynamic target-detection head is employed for comprehensive fusion. For tracking, the ShuffleNetV2 lightweight module is integrated to reduce model parameters and network complexity. Experimental results demonstrate that the proposed FBCD-YOLOP model improves lane detection accuracy by 5.1%, increases the IoU metric by 0.8%, and raises detection speed by 25 FPS compared to the baseline model. The accuracy of nighttime pedestrian detection reached 89.6%, representing improvements of 1.3%, 0.9%, and 3.8% over the single-task YOLOv5, multi-task TDL-YOLO, and original YOLOP models, respectively. These enhancements significantly improve the model's detection performance in complex nighttime environments. The enhanced DeepSORT algorithm achieved an MOTA of 86.3% and an MOTP of 84.9%, with ID switch occurrences reduced to 5; compared to the ByteTrack and StrongSORT algorithms, MOTA improved by 2.9% and 0.4%, respectively. Network parameters were also reduced by 63.6%, significantly enhancing real-time performance and making the system well suited for deployment on intelligent edge computing surveillance platforms.
(This article belongs to the Section Artificial Intelligence)
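The MOTA figure reported above follows the standard CLEAR-MOT definition; the error counts below are illustrative (chosen to reproduce an 86.3% score on a hypothetical sequence), not the paper's actual counts:

```python
def mota(false_negatives, false_positives, id_switches, total_gt):
    """Multiple Object Tracking Accuracy (CLEAR-MOT definition)."""
    return 1.0 - (false_negatives + false_positives + id_switches) / total_gt

# Illustrative counts over a sequence with 1000 ground-truth pedestrian boxes.
print(mota(false_negatives=80, false_positives=52, id_switches=5, total_gt=1000))
```

Because ID switches enter the numerator directly, cutting them to 5, as reported, lifts MOTA even when detection errors stay fixed.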
