Search Results (184)

Search Parameters:
Keywords = traffic sign detection

6 pages, 310 KB  
Proceeding Paper
Simulated Attacks and Defenses Using Traffic Sign Recognition Machine Learning Models
by Chu-Hsing Lin, Chao-Ting Yu and Yan-Ling Chen
Eng. Proc. 2025, 108(1), 11; https://doi.org/10.3390/engproc2025108011 - 1 Sep 2025
Abstract
Physically simulated attack experiments were conducted using LED lights of different colors, the You Only Look Once (YOLO) v5 model, and the German Traffic Sign Recognition Benchmark (GTSRB) dataset. We attacked and interfered with the traffic sign detection model and tested its recognition performance under LED-light interference, calculating the model’s object-identification accuracy under these conditions. We conducted a series of experiments to test the interference effects of colored lighting. Attacks with different colored lights interfered with the machine learning model to a measurable degree, impairing the self-driving vehicle’s ability to recognize traffic signs: the self-driving system either failed to detect the traffic sign or made recognition errors. To defend against this attack, we fed the attacked traffic sign images back into the training dataset and re-trained the machine learning model, enabling it to resist related attacks and avoid such disturbances. Full article
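
A minimal sketch of the colored-light interference simulation and the retraining-style defense described above, under stated assumptions: the GTSRB-style file paths, blend weights, and LED colors are illustrative placeholders, not the paper's setup. The tinted copies produced here are the kind of images that could be appended to the training set before re-training the detector.

# Assumption: sign crops stored as PNGs under the hypothetical paths below.
from pathlib import Path
import numpy as np
from PIL import Image

LED_COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def tint_with_led(img: Image.Image, rgb, alpha: float = 0.4) -> Image.Image:
    """Alpha-blend a solid colored layer over the sign image to mimic LED light."""
    base = np.asarray(img.convert("RGB"), dtype=np.float32)
    layer = np.broadcast_to(np.asarray(rgb, dtype=np.float32), base.shape)
    blended = (1.0 - alpha) * base + alpha * layer
    return Image.fromarray(blended.clip(0, 255).astype(np.uint8))

def augment_folder(src_dir: str, dst_dir: str, alpha: float = 0.4) -> None:
    """Write one tinted copy per LED color for every sign image in src_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path)
        for name, rgb in LED_COLORS.items():
            tint_with_led(img, rgb, alpha).save(dst / f"{path.stem}_{name}.png")

if __name__ == "__main__":
    augment_folder("gtsrb/train/images", "gtsrb/train/images_led")  # hypothetical paths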

22 pages, 5825 KB  
Article
Development of a Smart Energy-Saving Driving Assistance System Integrating OBD-II, YOLOv11, and Generative AI
by Meng-Hua Yen, You-Xuan Lin, Kai-Po Huang and Chi-Chun Chen
Electronics 2025, 14(17), 3435; https://doi.org/10.3390/electronics14173435 - 28 Aug 2025
Viewed by 191
Abstract
In recent years, generative AI and autonomous driving have been highly popular topics. Additionally, with the increasing global emphasis on carbon emissions and carbon trading, integrating autonomous driving technologies that can instantly perceive environmental changes with vehicle-based generative AI would enable vehicles to better understand their surroundings and provide drivers with recommendations for more energy-efficient and comfortable driving. This study employed You Only Look Once version 11 (YOLOv11) for visual detection of the driving environment, integrating it with vehicle speed data received from the OBD-II system. All information is integrated and processed using the embedded Nvidia Jetson AGX Orin platform. For visual detection validation, part of the test set includes standard Taiwanese road signs. Experimental results show that incorporating Squeeze-and-Excitation Attention (SEAttention) into YOLOv11 improves the mAP50–95 accuracy by 10.1 percentage points. The generative AI processes this information in real time and provides the driver with appropriate driving recommendations, such as braking gently, noting a pedestrian ahead, or warning of excessive speed. These recommendations are delivered through voice output to prevent driver distraction caused by looking at an interface. When a red light or pedestrian is detected, early deceleration is suggested, effectively reducing fuel consumption while also enhancing driving comfort, ultimately achieving the goal of energy-efficient driving. Full article
(This article belongs to the Special Issue Intelligent Computing and System Integration)
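
For reference, a PyTorch sketch of the canonical Squeeze-and-Excitation block named in the abstract; where exactly it is inserted into YOLOv11 and with which reduction ratio are the paper's choices, so the r=16 default below is an assumption.

import torch
import torch.nn as nn

class SEAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                            # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature maps channel-wise

if __name__ == "__main__":
    # usage: features from any backbone stage, e.g. a (2, 256, 40, 40) tensor
    out = SEAttention(256)(torch.randn(2, 256, 40, 40))
    print(out.shape)  # torch.Size([2, 256, 40, 40])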

18 pages, 7729 KB  
Article
A Lightweight Traffic Sign Detection Model Based on Improved YOLOv8s for Edge Deployment in Autonomous Driving Systems Under Complex Environments
by Chen Xing, Haoran Sun and Jiafu Yang
World Electr. Veh. J. 2025, 16(8), 478; https://doi.org/10.3390/wevj16080478 - 21 Aug 2025
Viewed by 814
Abstract
Traffic sign detection is a core function of autonomous driving systems, requiring real-time and accurate target recognition in complex road environments. Existing lightweight detection models struggle to balance accuracy, efficiency, and robustness under computational constraints of vehicle-mounted edge devices. To address this, we propose a lightweight model integrating FasterNet, Efficient Multi-scale Attention (EMA), Bidirectional Feature Pyramid Network (BiFPN), and Group Separable Convolution (GSConv) based on YOLOv8s (FEBG-YOLOv8s). Key innovations include reconstructing the Cross Stage Partial Network 2 with Focus (C2f) module using FasterNet blocks to minimize redundant computation; integrating an EMA mechanism to enhance robustness against small and occluded targets; refining the neck network based on BiFPN via channel compression, downsampling layers, and skip connections to optimize shallow–deep semantic fusion; and designing a GSConv-based hybrid serial–parallel detection head (GSP-Detect) to preserve cross-channel information while reducing computational load. Experiments on Tsinghua–Tencent 100K (TT100K) show FEBG-YOLOv8s improves mean Average Precision at Intersection over Union 0.5 (mAP50) by 3.1% compared to YOLOv8s, with 4 million fewer parameters and 22.5% lower Giga Floating-Point Operations (GFLOPs). Generalizability experiments on the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB) validate robustness, with 3.3% higher mAP50, demonstrating its potential for real-time traffic sign detection on edge platforms. Full article
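
A sketch of the partial convolution (PConv) that FasterNet blocks are built around, since those blocks are used to rebuild the C2f module here; the 1/4 channel split is FasterNet's usual default and is assumed rather than taken from FEBG-YOLOv8s.

import torch
import torch.nn as nn

class PConv(nn.Module):
    """Apply a 3x3 conv to the first `channels // n_div` channels only; the
    remaining channels pass through untouched, cutting FLOPs and memory
    access compared with a full convolution."""
    def __init__(self, channels: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = channels // n_div
        self.dim_keep = channels - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_conv, x_keep = torch.split(x, [self.dim_conv, self.dim_keep], dim=1)
        return torch.cat([self.conv(x_conv), x_keep], dim=1)

if __name__ == "__main__":
    y = PConv(128)(torch.randn(1, 128, 80, 80))
    print(y.shape)  # torch.Size([1, 128, 80, 80])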

22 pages, 5403 KB  
Article
SSF-Roundabout: A Smart and Self-Regulated Roundabout with Right-Turn Bypass Lanes
by Marco Guerrieri and Masoud Khanmohamadi
Appl. Sci. 2025, 15(16), 8971; https://doi.org/10.3390/app15168971 - 14 Aug 2025
Viewed by 240
Abstract
This paper presents the novel, smart, commutable, and self-regulated SSF-Roundabout as a potential solution for smart mobility. The SSF-Roundabout implements traffic counting systems, smart cameras, LED road markers, and Variable Message Signs (VMS) on its arms. Based on real-time detection of the traffic demand level, vehicles can be channelled into, or kept out of, the right-turn bypass lanes with which every arm is equipped, to guarantee the required capacity, Level of Service (LOS), and safety. In total, fifteen distinct layout configurations of the SSF-Roundabout are available. Several traffic analyses were performed using ad hoc closed-form traffic engineering models and case studies based on many origin-destination traffic matrices (MO/D(t)) and proportions of CAVs in the traffic stream (from 0% to 100%). Simulation results demonstrate the correlation between layout scenarios, traffic intensity, distribution among arms, and CAV composition, and their impact on entry and total capacity, control delay, and LOS of the SSF-Roundabout. For instance, activating the right-turn bypass lanes may increase entry capacity by 48% and total capacity by 50% when CAVs make up 100% of the traffic stream. Full article
(This article belongs to the Special Issue Communication Technology for Smart Mobility Systems)
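
An illustrative-only worked example of why activating a right-turn bypass lane raises approach capacity, using a generic exponential gap-acceptance entry-capacity model of the form c = A·exp(−B·q_c); the coefficients and the assumed bypass-lane capacity are placeholders, not the paper's closed-form models or calibrated CAV parameters.

import math

def entry_capacity(q_circ: float, A: float = 1380.0, B: float = 1.02e-3) -> float:
    """Entry capacity (veh/h) as a function of circulating flow (veh/h); A, B illustrative."""
    return A * math.exp(-B * q_circ)

def approach_capacity(q_circ: float, bypass_active: bool, c_bypass: float = 1200.0) -> float:
    """Total capacity of one arm: entry lane plus, if activated, the bypass lane."""
    return entry_capacity(q_circ) + (c_bypass if bypass_active else 0.0)

if __name__ == "__main__":
    q_c = 600.0  # circulating flow in front of the entry, veh/h
    base = approach_capacity(q_c, bypass_active=False)
    smart = approach_capacity(q_c, bypass_active=True)
    print(f"bypass off: {base:.0f} veh/h, bypass on: {smart:.0f} veh/h "
          f"(+{100 * (smart - base) / base:.0f}%)")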

24 pages, 3254 KB  
Article
Ghost-YOLO-GBH: A Lightweight Framework for Robust Small Traffic Sign Detection via GhostNet and Bidirectional Multi-Scale Feature Fusion
by Jingyi Tang, Bu Xu, Jue Li, Mengyuan Zhang, Chao Huang and Feng Li
Eng 2025, 6(8), 196; https://doi.org/10.3390/eng6080196 - 7 Aug 2025
Viewed by 310
Abstract
Traffic safety is a significant global concern, and traffic sign recognition (TSR) is essential for the advancement of intelligent transportation systems. Traditional YOLO11s-based methods often struggle to balance detection accuracy and processing speed, particularly in the context of small traffic signs within complex environments. To address these challenges, this study presents Ghost-YOLO-GBH, an innovative lightweight model that incorporates three key enhancements: (1) the integration of a GhostNet backbone, which substitutes the conventional YOLO11s architecture and utilizes Ghost modules to exploit feature redundancy, resulting in a 40.6% reduction in computational load while ensuring effective feature extraction for small targets; (2) the development of a HybridFocus module that combines large separable kernel attention with multi-scale pooling, effectively minimizing background interference and improving contextual feature aggregation by 4.3% in isolated tests; and (3) the implementation of a Bidirectional Dynamic Multi-Scale Feature Pyramid Network (BiDMS-FPN) that allows for bidirectional cross-stage feature fusion, significantly enhancing the accuracy of small target detection. Experimental results on the TT100K dataset indicate that Ghost-YOLO-GBH achieves an impressive 81.10% mean Average Precision (mAP) at a threshold of 0.5, along with an 11.7% increase in processing speed (45 FPS) and an 18.2% reduction in model parameters (7.74 M) compared to the baseline YOLO11s. Overall, Ghost-YOLO-GBH effectively balances accuracy, efficiency, and lightweight deployment, demonstrating superior performance in real-world applications characterized by small signs and cluttered backgrounds. This research provides a novel framework for resource-constrained TSR applications, contributing to the evolution of intelligent transportation systems. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)
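
A sketch of GhostNet's Ghost module, the building block the abstract's backbone substitution relies on: a small "primary" convolution produces intrinsic feature maps and cheap depthwise operations generate the remaining "ghost" maps. The ratio s=2 and kernel sizes are GhostNet's common defaults, assumed here rather than taken from Ghost-YOLO-GBH.

import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_kernel: int = 3):
        super().__init__()
        init_ch = math.ceil(out_ch / ratio)           # intrinsic maps
        cheap_ch = init_ch * (ratio - 1)              # ghost maps from cheap ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),    # depthwise: one filter per channel
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))
        self.out_ch = out_ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        out = torch.cat([y, self.cheap(y)], dim=1)
        return out[:, : self.out_ch]                  # trim to the requested width

if __name__ == "__main__":
    print(GhostModule(64, 128)(torch.randn(1, 64, 52, 52)).shape)  # (1, 128, 52, 52)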

15 pages, 4592 KB  
Article
SSAM_YOLOv5: YOLOv5 Enhancement for Real-Time Detection of Small Road Signs
by Fatima Qanouni, Hakim El Massari, Noreddine Gherabi and Maria El-Badaoui
Digital 2025, 5(3), 30; https://doi.org/10.3390/digital5030030 - 29 Jul 2025
Viewed by 561
Abstract
Many traffic-sign detection systems are available to assist drivers under challenging conditions such as small and distant signs, multiple signs on the road, and objects that resemble signs. Real-time object detection is an indispensable aspect of these systems, with detection speed and efficiency being critical parameters. To improve these parameters and enhance road-sign detection under diverse conditions, we propose SSAM_YOLOv5, a comprehensive methodology addressing feature extraction and small-road-sign detection, based on a modified version of YOLOv5s. First, we introduce attention modules into the backbone to focus on the regions of interest within video frames; second, we replace the activation function with the SwishT_C activation function to enhance feature extraction and balance inference speed, precision, and mean average precision (mAP@50). Compared to the YOLOv5 baseline, the proposed improvements achieve increases of 1.4% and 1.9% in mAP@50 on the Tiny LISA and GTSDB datasets, respectively, confirming their effectiveness. Full article
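
A sketch of the integration mechanism only: walk a loaded YOLOv5s model and swap its default SiLU activations for a custom module. A generic Swish with a learnable beta stands in for SwishT_C, whose exact formulation is defined in the cited work; the torch.hub load of ultralytics/yolov5 needs internet access on first use.

import torch
import torch.nn as nn

class SwishStandIn(nn.Module):
    """Placeholder activation (not SwishT_C itself): x * sigmoid(beta * x), learnable beta."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)

def replace_silu(module: nn.Module) -> None:
    """Recursively replace every nn.SiLU child with the custom activation."""
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, SwishStandIn())
        else:
            replace_silu(child)

if __name__ == "__main__":
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    replace_silu(model)
    n = sum(isinstance(m, SwishStandIn) for m in model.modules())
    print(f"replaced {n} activations")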

19 pages, 7674 KB  
Article
Development of Low-Cost Single-Chip Automotive 4D Millimeter-Wave Radar
by Yongjun Cai, Jie Bai, Hui-Liang Shen, Libo Huang, Bing Rao and Haiyang Wang
Sensors 2025, 25(15), 4640; https://doi.org/10.3390/s25154640 - 26 Jul 2025
Viewed by 752
Abstract
Traditional 3D millimeter-wave radars lack target height information, leading to identification failures in complex scenarios. Upgrading to 4D millimeter-wave radars enables four-dimensional information perception, enhancing obstacle detection and improving the safety of autonomous driving. Given the high cost-sensitivity of in-vehicle radar systems, single-chip 4D millimeter-wave radar solutions, despite technical challenges in imaging, are of great research value. This study focuses on developing single-chip 4D automotive millimeter-wave radar, covering system architecture design, antenna optimization, signal processing algorithm creation, and performance validation. The maximum measurement error is approximately ±0.2° for azimuth angles within the range of ±30° and around ±0.4° for elevation angles within the range of ±13°. Extensive road testing has demonstrated that the designed radar is capable of reliably measuring dynamic targets such as vehicles, pedestrians, and bicycles, while also accurately detecting static infrastructure like overpasses and traffic signs. Full article
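
A textbook FMCW processing sketch for context, not the paper's algorithm: a range FFT over fast time, a Doppler FFT over chirps, then angle FFTs over the virtual-array axes to estimate azimuth and elevation. The cube dimensions and FFT lengths below are illustrative placeholders for a single-chip MIMO radar frame.

import numpy as np

def process_frame(cube: np.ndarray) -> np.ndarray:
    """cube shape: (samples, chirps, azimuth_ch, elevation_ch), complex IF data.
    Returns a 4D magnitude spectrum over (range, Doppler, azimuth, elevation)."""
    rng = np.fft.fft(cube, axis=0)                               # range bins from fast time
    dop = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)       # velocity bins from slow time
    az = np.fft.fftshift(np.fft.fft(dop, n=32, axis=2), axes=2)  # azimuth spectrum
    el = np.fft.fftshift(np.fft.fft(az, n=8, axis=3), axes=3)    # elevation spectrum
    return np.abs(el)

if __name__ == "__main__":
    # synthetic frame: 128 samples x 64 chirps x 8 azimuth x 2 elevation channels
    frame = np.random.randn(128, 64, 8, 2) + 1j * np.random.randn(128, 64, 8, 2)
    spectrum = process_frame(frame)
    r, d, a, e = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    print(f"strongest return at range bin {r}, Doppler bin {d}, az bin {a}, el bin {e}")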

20 pages, 1816 KB  
Article
A Self-Attention-Enhanced 3D Object Detection Algorithm Based on a Voxel Backbone Network
by Zhiyong Wang and Xiaoci Huang
World Electr. Veh. J. 2025, 16(8), 416; https://doi.org/10.3390/wevj16080416 - 23 Jul 2025
Viewed by 680
Abstract
3D object detection is a fundamental task in autonomous driving. In recent years, voxel-based methods have demonstrated significant advantages in reducing computational complexity and memory consumption when processing large-scale point cloud data. A representative method, Voxel-RCNN, introduces Region of Interest (RoI) pooling on voxel features, successfully bridging the gap between voxel and point cloud representations for enhanced 3D object detection. However, its robustness deteriorates when detecting distant objects or in the presence of noisy points (e.g., traffic signs and trees). To address this limitation, we propose an enhanced approach named Self-Attention Voxel-RCNN (SA-VoxelRCNN). Our method integrates two complementary attention mechanisms into the feature extraction phase. First, a full self-attention (FSA) module improves global context modeling across all voxel features. Second, a deformable self-attention (DSA) module enables adaptive sampling of representative feature subsets at strategically selected positions. After extracting contextual features through attention mechanisms, these features are fused with spatial features from the base algorithm to form enhanced feature representations, which are subsequently input into the region proposal network (RPN) to generate high-quality 3D bounding boxes. Experimental results on the KITTI test set demonstrate that SA-VoxelRCNN achieves consistent improvements in challenging scenarios, with gains of 2.49 and 1.87 percentage points at Moderate and Hard difficulty levels, respectively, while maintaining real-time performance at 22.3 FPS. This approach effectively balances local geometric details with global contextual information, providing a robust detection solution for autonomous driving applications. Full article
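
A minimal sketch of a full self-attention (FSA) pass over non-empty voxel features, treating each voxel's feature vector as one token. Head count and dimensions are assumed for illustration; the deformable (DSA) branch and the fusion with Voxel-RCNN's spatial features follow the paper.

import torch
import torch.nn as nn

class VoxelSelfAttention(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        """voxel_feats: (batch, num_voxels, channels) gathered from non-empty voxels."""
        ctx, _ = self.attn(voxel_feats, voxel_feats, voxel_feats)
        return self.norm(voxel_feats + ctx)           # residual keeps local geometry

if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)                  # 1024 non-empty voxels per sample
    print(VoxelSelfAttention()(feats).shape)          # torch.Size([2, 1024, 64])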

16 pages, 2108 KB  
Article
One Possible Path Towards a More Robust Task of Traffic Sign Classification in Autonomous Vehicles Using Autoencoders
by Ivan Martinović, Tomás de Jesús Mateo Sanguino, Jovana Jovanović, Mihailo Jovanović and Milena Djukanović
Electronics 2025, 14(12), 2382; https://doi.org/10.3390/electronics14122382 - 11 Jun 2025
Viewed by 781
Abstract
The increasing deployment of autonomous vehicles (AVs) has exposed critical vulnerabilities in traffic sign classification systems, particularly against adversarial attacks that can compromise safety. This study proposes a dual-purpose defense framework based on convolutional autoencoders to enhance robustness against two prominent white-box attacks: Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Experiments on the German Traffic Sign Recognition Benchmark (GTSRB) dataset show that, although these attacks can significantly degrade system performance, the proposed models are capable of partially recovering lost accuracy. Notably, the defense demonstrates strong capabilities in both detecting and reconstructing manipulated traffic signs, even under low-perturbation scenarios. Additionally, a feature-based autoencoder is introduced, which—despite a high false positive rate—achieves perfect detection in critical conditions, a tradeoff considered acceptable in safety-critical contexts. These results highlight the potential of autoencoder-based architectures as a foundation for resilient AV perception while underscoring the need for hybrid models integrating visual-language frameworks for real-time, fail-safe operation. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
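
A hedged sketch of the two roles an autoencoder can play against FGSM, as described above: flag an input whose reconstruction error exceeds a threshold, and pass the reconstruction (a partially "cleaned" image) onward to the classifier. The architecture, threshold, and epsilon are illustrative, not the paper's; `model` in fgsm stands for any differentiable traffic-sign classifier.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Standard FGSM: perturb along the sign of the input gradient of the loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def detect_and_clean(ae: ConvAutoencoder, x: torch.Tensor, threshold: float = 0.01):
    """Per-image reconstruction MSE as an adversarial score; return flags and reconstructions."""
    recon = ae(x)
    score = F.mse_loss(recon, x, reduction="none").mean(dim=(1, 2, 3))
    return score > threshold, recon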

24 pages, 13773 KB  
Article
TSD-Net: A Traffic Sign Detection Network Addressing Insufficient Perception Resolution and Complex Background
by Chengcheng Ma, Chang Liu, Litao Deng and Pengfei Xu
Sensors 2025, 25(11), 3511; https://doi.org/10.3390/s25113511 - 2 Jun 2025
Viewed by 633
Abstract
With the rapid development of intelligent transportation systems, traffic sign detection plays a crucial role in ensuring driving safety and preventing accidents. However, detecting small traffic signs in complex road environments remains a significant challenge due to issues such as low resolution, dense distribution, and visually similar background interference. Existing methods face limitations including high computational cost, inconsistent feature alignment, and insufficient resolution in detection heads. To address these challenges, we propose the Traffic Sign Detection Network (TSD-Net), an improved framework designed to enhance the detection performance of small traffic signs in complex backgrounds. TSD-Net integrates a Feature Enhancement Module (FEM) to expand the network’s receptive field and enhance its capability to capture target features. Additionally, we introduce a high-resolution detection branch and an Adaptive Dynamic Feature Fusion (ADFF) detection head to optimize cross-scale feature fusion and preserve critical details of small objects. By incorporating the C3k2 module and dynamic convolution into the network, the framework achieves enhanced feature extraction flexibility while maintaining high computational efficiency. Extensive experiments on the TT100K benchmark dataset demonstrate that TSD-Net outperforms most existing methods in small object detection and complex background handling, achieving 91.4 mAP and 49.7 FPS on 640 × 640 low-resolution images, meeting the requirements of practical applications. Full article
(This article belongs to the Section Vehicular Sensing)
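
A generic receptive-field-enlarging block in the spirit of the Feature Enhancement Module described above: parallel 3x3 convolutions with growing dilation rates, fused by a 1x1 convolution. The branch count and dilation rates are assumptions; TSD-Net's actual FEM, high-resolution branch, C3k2 module, and ADFF head follow the paper.

import torch
import torch.nn as nn

class DilatedEnhancementBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(out)                     # residual connection preserves detail

if __name__ == "__main__":
    print(DilatedEnhancementBlock(128)(torch.randn(1, 128, 160, 160)).shape)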

24 pages, 1039 KB  
Article
A Method for Improving the Robustness of Intrusion Detection Systems Based on Auxiliary Adversarial Training Wasserstein Generative Adversarial Networks
by Guohua Wang and Qifan Yan
Electronics 2025, 14(11), 2171; https://doi.org/10.3390/electronics14112171 - 27 May 2025
Viewed by 645
Abstract
To improve the robustness of intrusion detection systems constructed using deep learning models, a method based on an auxiliary adversarial training WGAN (AuxAtWGAN) is proposed from the defender’s perspective. First, one-dimensional traffic data are downscaled and processed into two-dimensional image data via a stacked autoencoder (SAE), and mixed adversarial samples are generated using the fast gradient sign method (FGSM), Projected Gradient Descent (PGD), and Carlini and Wagner (C&W) adversarial attacks. Second, the improved WGAN with an integrated perceptual network module is trained on mixed training samples composed of adversarial and normal samples. Finally, the adversarially trained AuxAtWGAN model is attached to the original model for adversarial sample detection; the detected adversarial samples are removed, and the remaining samples are input into the original model, improving its robustness. The average attack success rate of multiple adversarial attacks against the original convolutional neural network (CNN) model is 75.17%, and after applying AuxAtWGAN it decreases to 27.56%; moreover, the detection accuracy of the original CNN model on normal samples remains 93.57%. The experiments show that AuxAtWGAN improves the robustness of the original model. In addition, validation experiments conducted by attaching the AuxAtWGAN model to Long Short-Term Memory (LSTM) and ResNet34 models demonstrate that the proposed method generalizes well. Full article
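
A sketch of the deployment step described above, under stated assumptions: the detector is an abstract callable standing in for the adversarially trained AuxAtWGAN component, flagged samples are discarded, and only the remainder reaches the original intrusion-detection model. Threshold and toy stand-ins are hypothetical.

from typing import Callable, Tuple
import torch

def filtered_inference(
    detector: Callable[[torch.Tensor], torch.Tensor],   # higher score = more likely adversarial
    classifier: Callable[[torch.Tensor], torch.Tensor],
    batch: torch.Tensor,
    threshold: float = 0.5,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Return (predictions for retained samples, boolean mask of retained samples)."""
    scores = detector(batch)
    keep = scores < threshold           # drop samples the detector flags as adversarial
    preds = classifier(batch[keep])
    return preds, keep

if __name__ == "__main__":
    # toy stand-ins: random scores and a dummy 5-class classifier over 2D "traffic images"
    detector = lambda x: torch.rand(x.shape[0])
    classifier = lambda x: torch.randn(x.shape[0], 5).argmax(dim=1)
    preds, keep = filtered_inference(detector, classifier, torch.randn(8, 1, 28, 28))
    print(f"kept {int(keep.sum())}/8 samples, predictions: {preds.tolist()}")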

9 pages, 3054 KB  
Proceeding Paper
Simulated Adversarial Attacks on Traffic Sign Recognition of Autonomous Vehicles
by Chu-Hsing Lin, Chao-Ting Yu, Yan-Ling Chen, Yo-Yu Lin and Hsin-Ta Chiao
Eng. Proc. 2025, 92(1), 15; https://doi.org/10.3390/engproc2025092015 - 25 Apr 2025
Viewed by 557
Abstract
With the development and application of artificial intelligence (AI) technology, autonomous driving systems are gradually being deployed on the road. However, people still have concerns about the safety and reliability of unmanned vehicles. The autonomous driving systems in today’s unmanned vehicles must also withstand information security attacks; if they cannot defend against such attacks, traffic accidents may result, exposing passengers to risk. Therefore, in this study we investigated adversarial attacks on the traffic sign recognition of autonomous vehicles. We used You Only Look Once (YOLO) to build a machine learning model for traffic sign recognition and simulated attacks on traffic signs. The simulated attacks included LED light strobes, color-light flashes, and Gaussian noise. For the LED strobe and color-light flash attacks, translucent images were overlaid on the original traffic sign images to simulate the corresponding attack scenarios. In the Gaussian noise attack, Python 3.11.10 was used to add noise to the original image. The different attack methods interfered with the original machine learning model to a certain extent, hindering autonomous vehicles from recognizing and accurately detecting traffic signs. Full article
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
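
A sketch of the two image-level attack simulations mentioned above, with illustrative parameters: alpha-blending a translucent "light" image over a sign to mimic the LED strobe / color-flash overlays, and adding Gaussian noise in Python. File names and intensities are placeholders.

import numpy as np
from PIL import Image

def overlay_translucent(sign_path: str, light_path: str, alpha: float = 0.5) -> Image.Image:
    """Blend a translucent light/flash image over the original traffic sign."""
    sign = Image.open(sign_path).convert("RGB")
    light = Image.open(light_path).convert("RGB").resize(sign.size)
    return Image.blend(sign, light, alpha)

def add_gaussian_noise(sign_path: str, sigma: float = 25.0) -> Image.Image:
    """Add zero-mean Gaussian noise with standard deviation sigma (in 0-255 units)."""
    img = np.asarray(Image.open(sign_path).convert("RGB"), dtype=np.float32)
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return Image.fromarray(noisy.clip(0, 255).astype(np.uint8))

if __name__ == "__main__":
    overlay_translucent("stop_sign.png", "led_strobe.png").save("stop_led.png")   # hypothetical files
    add_gaussian_noise("stop_sign.png").save("stop_noise.png")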

20 pages, 6073 KB  
Article
A Unified Denoising Framework for Restoring the LiDAR Point Cloud Geometry of Reflective Targets
by Tianpeng Xie, Jingguo Zhu, Chunxiao Wang, Feng Li and Zhe Meng
Appl. Sci. 2025, 15(7), 3904; https://doi.org/10.3390/app15073904 - 2 Apr 2025
Cited by 1 | Viewed by 1297
Abstract
LiDAR point clouds of reflective targets often contain significant noise, which severely impacts the feature extraction accuracy and performance of object detection algorithms. These challenges present substantial obstacles to point cloud processing and its applications. In this paper, we propose a Unified Denoising Framework (UDF) aimed at removing noise and restoring the geometry of reflective targets. The proposed method consists of three steps: veiling effect denoising using an improved pass-through filter, range anomalies correction through M-estimator Sample Consensus (MSAC) plane fitting and ray projection, and blooming effect denoising based on an adaptive error ellipse. The parameters of the error ellipse are automatically determined using the divergence angle of the laser beam, blooming factors, and the normal vector along the boundary of the point cloud. The proposed method was validated on a self-constructed traffic sign point cloud dataset. The experimental results showed that the method achieved a mean square error (MSE) of 0.15 cm2, a mean city-block distance (MCD) of 0.05 cm, and relative height and width errors of 1.92% and 1.91%, respectively. Compared to five representative algorithms, the proposed method demonstrated superior performance in both denoising accuracy and the restoration of target geometric features. Full article
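
A hedged sketch of the range-anomaly correction step: fit the sign plane with a RANSAC-style consensus loop (MSAC proper scores a truncated cost; a plain inlier count is used here), then project range-anomalous points onto that plane along their sensor rays, with the sensor assumed at the origin. Thresholds and the synthetic data are illustrative.

import numpy as np

def fit_plane_ransac(pts: np.ndarray, thresh: float = 0.02, iters: int = 500):
    """Return (unit normal n, offset d) with n.p + d = 0 maximizing the inlier count."""
    best_n, best_d, best_count = None, None, -1
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:
            continue                                   # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p1
        count = int(np.sum(np.abs(pts @ n + d) < thresh))
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d

def project_along_rays(pts: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Slide each point along its ray from the sensor origin until it lies on the plane."""
    t = -d / (pts @ n)                                 # scale factor per point
    return pts * t[:, None]

if __name__ == "__main__":
    plane_pts = np.random.rand(300, 3); plane_pts[:, 2] = 2.0      # synthetic sign at z = 2 m
    anomalies = plane_pts[:20] * np.array([1.0, 1.0, 1.15])        # range-stretched returns
    n, d = fit_plane_ransac(plane_pts)
    corrected = project_along_rays(anomalies, n, d)
    print(np.allclose(corrected[:, 2], 2.0, atol=1e-6))            # True: back on the plane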

40 pages, 11010 KB  
Review
PRISMA Review: Drones and AI in Inventory Creation of Signage
by Geovanny Satama-Bermeo, Jose Manuel Lopez-Guede, Javad Rahebi, Daniel Teso-Fz-Betoño, Ana Boyano and Ortzi Akizu-Gardoki
Drones 2025, 9(3), 221; https://doi.org/10.3390/drones9030221 - 19 Mar 2025
Viewed by 1152
Abstract
This systematic review explores the integration of unmanned aerial vehicles (UAVs) and artificial intelligence (AI) in automating road signage inventory creation, employing the preferred reporting items for systematic reviews and meta-analyses (PRISMA) methodology to analyze recent advancements. The study evaluates cutting-edge technologies, including UAVs equipped with deep learning algorithms and advanced sensors like light detection and ranging (LiDAR) and multispectral cameras, highlighting their roles in enhancing traffic sign detection and classification. Key challenges include detecting minor or partially obscured signs and adapting to diverse environmental conditions. The findings reveal significant progress in automation, with notable improvements in accuracy, efficiency, and real-time processing capabilities. However, limitations such as computational demands and environmental variability persist. By providing a comprehensive synthesis of current methodologies and performance metrics, this review establishes a robust foundation for future research to advance automated road infrastructure management to improve safety and operational efficiency in urban and rural settings. Full article

28 pages, 68080 KB  
Article
KRID: A Large-Scale Nationwide Korean Road Infrastructure Dataset for Comprehensive Road Facility Recognition
by Hyeongbok Kim, Eunbi Kim, Sanghoon Ahn, Beomjin Kim, Sung Jin Kim, Tae Kyung Sung, Lingling Zhao, Xiaohong Su and Gilmu Dong
Data 2025, 10(3), 36; https://doi.org/10.3390/data10030036 - 14 Mar 2025
Cited by 1 | Viewed by 1713
Abstract
Comprehensive datasets are crucial for developing advanced AI solutions in road infrastructure, yet most existing resources focus narrowly on vehicles or a limited set of object categories. To address this gap, we introduce the Korean Road Infrastructure Dataset (KRID), a large-scale dataset designed for real-world road maintenance and safety applications. Our dataset covers highways, national roads, and local roads in both city and non-city areas, comprising 34 distinct types of road infrastructure—from common elements (e.g., traffic signals, gaze-directed poles) to specialized structures (e.g., tunnels, guardrails). Each instance is annotated with either bounding boxes or polygon segmentation masks under stringent quality control and privacy protocols. To demonstrate the utility of this resource, we conducted object detection and segmentation experiments using YOLO-based models, focusing on guardrail damage detection and traffic sign recognition. Preliminary results confirm its suitability for complex, safety-critical scenarios in intelligent transportation systems. Our main contributions include: (1) a broader range of infrastructure classes than conventional “driving perception” datasets, (2) high-resolution, privacy-compliant annotations across diverse road conditions, and (3) open-access availability through AI Hub and GitHub. By highlighting critical yet often overlooked infrastructure elements, this dataset paves the way for AI-driven maintenance workflows, hazard detection, and further innovations in road safety. Full article
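
A hedged sketch of the kind of experiment described: fine-tuning an Ultralytics YOLO detector on a KRID-style dataset. The "krid.yaml" dataset config is hypothetical (class names plus train/val image paths in the standard Ultralytics format); the paper's actual models, splits, and hyperparameters may differ.

from ultralytics import YOLO

def main() -> None:
    model = YOLO("yolov8n.pt")                            # small pretrained detector
    model.train(data="krid.yaml", epochs=50, imgsz=640)   # hypothetical dataset config
    metrics = model.val()                                 # mAP on the validation split
    print(metrics.box.map50)

if __name__ == "__main__":
    main()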
