Search Results (212)

Search Parameters:
Keywords = traffic sign detection

14 pages, 18061 KB  
Article
Water Damage Assessment in Flexible Pavements Through GPR and MLS Integration
by Luca Bianchini Ciampoli, Alessandro Di Benedetto, Margherita Fiani, Luigi Petti and Andrea Benedetto
NDT 2026, 4(2), 13; https://doi.org/10.3390/ndt4020013 - 20 Apr 2026
Viewed by 190
Abstract
The fast drainage of surface water from road pavements is essential to ensure both driving safety and adequate infrastructure service life. For close-graded asphalt mixtures, surface runoff relies on sufficient longitudinal and transverse slopes that convey water toward hydraulic drainage devices. However, construction defects, surface distress, or inadequate placement of drainage systems may compromise this process and reduce pavement durability. When water infiltrates beneath the wearing course and saturates the underlying layers, heavy traffic loads can accelerate deterioration through erosion, pumping, interlayer delamination, and subgrade overstress. This work investigates the joint use of Ground Penetrating Radar (GPR) and Mobile Laser Scanning (MLS) to evaluate drainage deficiencies and detect signs of layer delamination in bituminous pavements. A highway section in Salerno (Italy) was selected as a case study due to known hydraulic-related issues. MLS data were used to reconstruct pavement geometry and model surface runoff patterns, while GPR surveys assessed the condition of the bonding between asphalt and base layers. The results revealed ineffective runoff management and identified multiple areas affected by delamination, confirming a relationship between surface drainage behaviour and subsurface damage. These findings highlight the broader potential of the integrated GPR–MLS framework as a scalable and transferable approach for proactive drainage assessment and structural monitoring in pavement management practices. Full article

26 pages, 619 KB  
Article
ARMv8/NEON Optimization of NCC-Sign for Mixed-Radix NTT: Cycle-Accurate Evaluation on Apple M1 Pro and Cortex-A72
by Minwoo Lee, Minjoo Sim, Siwoo Eum and Hwajeong Seo
Electronics 2026, 15(7), 1456; https://doi.org/10.3390/electronics15071456 - 31 Mar 2026
Viewed by 328
Abstract
This paper presents an ARMv8/NEON-oriented implementation of NCC-Sign targeting the NTT-friendly trinomial parameter sets (NCC-Sign-1/3/5), whose dominant cost arises from mixed-radix NTT computations with n = 2^a · 3^b. We design lane-local SIMD kernels—including a four-lane Montgomery multiply–reduce, a centered modular reduction pass, a fused stage-0 butterfly, and streamlined radix-2/radix-3 pipelines—and extend them with three further optimizations: (i) radix-2 multi-stage butterfly merging to halve intermediate load/store traffic, (ii) a stride-3 vectorization technique exploiting NEON structure load/store instructions (vld3q/vst3q) to fully vectorize small-len radix-3 stages that would otherwise fall back to scalar execution, and (iii) NEON-parallel pointwise Montgomery multiplication. Using cycle-accurate PMU measurements under identical toolchains for baseline and optimized builds on Apple M1 Pro, we observe geometric-mean speedups of 1.40× for key generation, 2.24× for signing, and 2.01× for verification across NCC-Sign-1/3/5, with per-kernel gains of up to 5–6× for NTT/INTT and 7.5× for pointwise multiplication. To contextualize these results, we provide a direct comparison with the NEON-optimized ML-DSA (Dilithium) implementation of Becker et al. on the same platform, a cross-platform evaluation on Arm Cortex-A72 (Raspberry Pi 4), a Montgomery-versus-Barrett microbenchmark supporting our design choice, and an empirical constant-time assessment via dudect confirming that no timing leakage is detected in any NEON kernel under 30 million measurements. Full article
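The dominant kernel in this abstract is Montgomery modular multiplication inside the NTT. As a rough scalar sketch of the multiply–reduce step (Python, with an illustrative NTT-friendly prime rather than the NCC-Sign moduli; the paper's NEON kernels run four such lanes in parallel):

```python
# Scalar sketch of Montgomery multiply-reduce. The modulus q below is an
# illustrative NTT-friendly prime (Dilithium's q), NOT an NCC-Sign parameter.
R = 1 << 32
q = 8380417
q_inv = pow(-q, -1, R)           # -q^{-1} mod R, precomputed once (Python 3.8+)

def to_mont(a: int) -> int:
    """Map a into the Montgomery domain: a*R mod q."""
    return (a * R) % q

def mont_mul(a: int, b: int) -> int:
    """Return a*b*R^{-1} mod q without dividing by q."""
    t = a * b
    m = (t * q_inv) % R          # m = t * (-q^{-1}) mod R
    u = (t + m * q) // R         # exact: the low 32 bits of t + m*q are zero
    return u - q if u >= q else u

# Round trip: a product of Montgomery-domain values, mapped back with
# mont_mul(., 1), equals the ordinary modular product.
x, y = 123456, 654321
res = mont_mul(to_mont(x), to_mont(y))
assert mont_mul(res, 1) == (x * y) % q
```

A butterfly in the NTT then uses `mont_mul` against precomputed twiddle factors already stored in the Montgomery domain, which is what makes the vectorized four-lane version profitable.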

26 pages, 2382 KB  
Article
Evaluating the Effectiveness of Explainable AI for Adversarial Attack Detection in Traffic Sign Recognition Systems
by Bill Deng Pan, Yupeng Yang, Richard Guo, Yongxin Liu, Hongyun Chen and Dahai Liu
Mathematics 2026, 14(6), 971; https://doi.org/10.3390/math14060971 - 12 Mar 2026
Viewed by 451
Abstract
Connected autonomous vehicles (CAVs) rely on deep neural network-based perception systems to operate safely in complex driving environments. However, these systems remain vulnerable to adversarial perturbations that can induce misclassification without perceptible changes to human observers. Explainable artificial intelligence (XAI) has been proposed as a potential adversarial detection mechanism by exposing inconsistencies in model attention. This study evaluated the effectiveness of NoiseCAM-based explanation-space detection on the German Traffic Sign Recognition Benchmark (GTSRB) using a single 32 × 32 CNN architecture. Adversarial examples were generated using FGSM under perturbation budgets ϵ = 0.01–0.10, and detection performance was evaluated using accuracy, precision, recall, F1-score, and ROC–AUC. Results show that NoiseCAM achieves detection accuracies between 51.8% and 52.9% with ROC–AUC values of 0.52–0.53, only marginally above random discrimination (0.5). Class-wise analysis further reveals substantial variability in detection reliability across traffic sign categories, with visually structured regulatory signs exhibiting higher separability than complex warning signs. These findings suggest that explanation-space inconsistencies alone provide limited adversarial detection capability in low-resolution, safety-critical perception pipelines. The study contributes to the understanding of the operational limits of explanation-based adversarial detection and highlights the need to integrate XAI signals with complementary robustness or uncertainty-aware mechanisms for reliable deployment in autonomous driving systems. Full article
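FGSM, the attack used to generate the adversarial examples above, perturbs the input by a signed gradient step of budget ε. A minimal NumPy sketch on a toy linear classifier (a stand-in for the paper's CNN; weights, dimensions, and ε are illustrative):

```python
import numpy as np

# FGSM on a toy linear classifier: loss = cross-entropy of softmax(W @ x),
# perturbation = eps * sign(dL/dx). Model and sizes are invented for
# illustration; the study applies this to a 32x32 CNN on GTSRB.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))          # 3 classes, 8 input features (assumption)
x = rng.normal(size=8)
y = 1                                # true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, eps):
    p = softmax(W @ x)
    onehot = np.eye(W.shape[0])[y]
    grad_x = W.T @ (p - onehot)      # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

x_adv = fgsm(x, y, eps=0.05)         # within the paper's 0.01-0.10 budget range
assert np.max(np.abs(x_adv - x)) <= 0.05 + 1e-12   # L-inf budget respected
```

Because cross-entropy of linear logits is convex in the input, this signed step never decreases the loss, which is exactly the property the attack exploits.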

33 pages, 6958 KB  
Article
Short-Term Performance of Visual Attention Prompt Methods Across Driver Proficiency in a Driving Simulator
by Jinwei Liang and Makio Ishihara
Multimodal Technol. Interact. 2026, 10(3), 28; https://doi.org/10.3390/mti10030028 - 11 Mar 2026
Viewed by 458
Abstract
In complex driving environments, drivers must continuously detect and respond to critical visual information such as traffic signs and pedestrians. However, important targets may sometimes be overlooked due to high cognitive load during driving. Therefore, visual attention prompt methods have been proposed to guide drivers’ gaze toward relevant targets. A visual attention prompt method is a visual cue presented in a key area in a user’s field of view to draw his/her visual attention. This study evaluates the short-term performance of five visual attention prompt methods (Point, Arrow, Blur, Dusk, and ModAF) in a driving simulator and compares their performance between novice and proficient drivers. Eye-tracking data and multiple analyses are used to examine whether the influence of these methods could be maintained after they are disabled and to clarify drivers’ response patterns across methods in consideration with their driving proficiency. The results indicate that visual attention prompt methods could induce a short-term transfer effect, as drivers still tend to fixate on target traffic signs earlier after the methods are disabled, and the elapsed-time analysis estimates that this effect lasts about 84.35 s. Overall, the Point, Arrow, and Dusk methods show relatively stronger performance with significant reductions in the elapsed time to fixate on the traffic sign. The clustering analysis further shows that drivers’ response patterns are not uniform, with two clusters for novice drivers and three clusters for proficient drivers. The results suggest that most novice drivers tend to benefit from explicit non-directional visual cues that enhance target salience, such as the Point method, whereas proficient drivers are more likely to benefit from explicit directional visual cues that provide clear directional guidance, such as the Arrow method. 
These findings suggest that visual attention prompt methods may be useful for developing driver training strategies tailored to different levels of driving proficiency, helping drivers maintain more effective visual attention allocation during driving and potentially contributing to improved driving safety. Full article

25 pages, 21909 KB  
Article
ADAS-TSR: A Deep Learning-Based Traffic Sign Recognition System with Voice Alerts for Andean Historic City Centers
by Eduardo J. Urbina-Dominguez, Hemerson Lizarbe-Alarcon, Yuri Galvez-Gastelu, Efrain E. Porras-Flores, Wilmer E. Moncada-Sosa, Jose E. Estrada-Cardenas, Edward Leon-Palacios and Diego O. Tenorio-Huarancca
Appl. Sci. 2026, 16(6), 2664; https://doi.org/10.3390/app16062664 - 11 Mar 2026
Cited by 1 | Viewed by 749
Abstract
Colonial historic city centers represent a paradigmatic challenge for modern road safety, as they are characterized by narrow streets originally designed for carriage and pedestrian traffic. This research presents ADAS-TSR, a deep learning-based advanced driver assistance system for vertical traffic sign detection with voice alerts, specifically designed for the Historic Center of Ayacucho, Peru, which is located at 2761 m a.s.l. An original dataset comprising 2250 images with 2450 instances corresponding to 14 sign classes according to Peruvian regulations was constructed. The dataset was captured under real operational conditions, including deteriorated, partially occluded, and vehicle impact-deformed signage. A comprehensive multi-model benchmark experiment was conducted, comparing four CNN-based detectors (YOLOv8m, YOLO11n, YOLO26n, YOLO26s) and one transformer-based detector (RT-DETR-l) spanning both classical and state-of-the-art architectures released through January 2026. YOLO26s achieved the best overall performance, with an mAP@0.5 of 0.994 and mAP@0.5:0.95 of 0.989 while using only 9.5 M parameters. YOLO11n matched the performance of YOLOv8m with 10× fewer parameters (2.6 M vs. 25.9 M). Uncertainty analysis revealed that modern architectures exhibit significantly higher prediction confidence (mean > 0.90) compared to YOLOv8m (0.82), and fairness analysis confirmed equitable detection across all 14 classes (Gini < 0.002). A voice alert system with five priority levels and rule-based temporal filtering for detection stabilization was implemented. Validation across five urban circuits spanning 14.11 km demonstrated a detection rate of 94.7% with a 73% reduction in redundant alerts. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

34 pages, 3357 KB  
Article
Sequence-Preserving Dual-FoV Defense for Traffic Sign and Light Recognition in Autonomous Vehicles
by Abhishek Joshi, Janhavi Krishna Koda and Abhishek Phadke
Sensors 2026, 26(5), 1737; https://doi.org/10.3390/s26051737 - 9 Mar 2026
Viewed by 513
Abstract
For Autonomous Vehicles (AVs), recognizing traffic lights and signs is critical for safety because perception errors directly affect navigation decisions. Real-world disturbances such as glare, rain, dirt, and graffiti, as well as digital adversarial attacks, can lead to dangerous misclassifications. Current research lacks (i) temporal continuity (stable detection across consecutive frames to prevent flickering misclassifications), (ii) multi-field-of-view (FoV) sensing, and (iii) integrated defenses against both digital and natural degradation. This paper presents two principal contributions: (1) a three-layer defense framework integrating feature squeezing, inference-time temperature scaling (softmax τ = 3 without distillation training), and entropy-based anomaly detection with sequence-level temporal voting; (2) a 500 sequence dual-FoV benchmark (30k base frames, 150k with perturbations) from aiMotive, Waymo, Udacity, and Texas sources across four operational design domains. The unified defense stack achieves 79.8% mAP on a 100-sequence test set (6k base frames, 30k with perturbations), reducing attack success rate from 37.4% to 18.2% (51% reduction) and high-risk misclassifications by 32%. Cross-FoV validation and temporal voting enhance stability under lighting changes (+3.5% mAP) and occlusions (+2.7% mAP). Defense improvements (+9.5–9.6% mAP) remain consistent across native 3D (aiMotive, Waymo) and projected 2D (Udacity, Texas) annotations. Preliminary recapture experiments (n = 15 scenarios) show 2.5% synthetic–physical ASR gap (p = 0.18), though larger validation is needed. Code, models, and dataset reconstruction tools are publicly available. Full article
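Two of the defense layers named above, inference-time temperature scaling (softmax τ = 3) and entropy-based anomaly detection, can be sketched in a few lines. The logits and the entropy threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch of two softmax-side defenses: temperature-scaled softmax applied at
# inference only (no distillation training), and an entropy flag that marks
# overly uncertain frames. Threshold and logits are invented for illustration.
def softmax(z, tau=1.0):
    z = np.asarray(z, dtype=float) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def flag_anomalous(logits, tau=3.0, threshold=1.0):
    """Flag a frame whose temperature-scaled prediction is too uncertain.
    threshold=1.0 nats is an assumption, not taken from the paper."""
    return entropy(softmax(logits, tau)) > threshold

confident = [8.0, 0.5, 0.2, 0.1]     # clean-looking logits -> not flagged
ambiguous = [1.2, 1.1, 1.0, 0.9]     # perturbed-looking logits -> flagged
```

Sequence-level temporal voting would then act on these per-frame flags, e.g. suppressing a detection only if several consecutive frames are flagged.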

23 pages, 37474 KB  
Article
Semi-Supervised Traffic Sign Detection with Dual Confidence Fusion Module and Structured Block-Regularized Neck
by Chenhui Xia, Yeqin Shao, Meiqin Che and Guoqing Yang
Sensors 2026, 26(5), 1601; https://doi.org/10.3390/s26051601 - 4 Mar 2026
Viewed by 252
Abstract
Reliable traffic sign detection is essential for the safety of autonomous driving systems. However, manually annotating large-scale datasets for this task is resource-intensive, making semi-supervised learning (SSL) a vital alternative. Despite their potential, current SSL methods often struggle with unreliable pseudo-label filtering and limited detection accuracy. To address these limitations, we propose a novel framework integrating a Dual Confidence Fusion (DC-Fusion) module and a Structured Block-Regularized Neck (SBR-Neck). The former improves pseudo-label reliability by combining classification and localization confidence scores, while the latter optimizes feature representation through multi-scale fusion and block-wise regularization. To preserve high-frequency spatial details, SBR-Neck incorporates Spatial-Context-Aware Upsampling (SCA-Upsampling), which utilizes multi-granularity feature decomposition. Experimental results on a proprietary traffic sign dataset demonstrate that our method achieves mAP50 scores of 10.4%, 17.8%, 23.7%, and 32.1% using 1%, 2%, 5%, and 10% labeled data, respectively. These results surpass the “Efficient Teacher” baseline by margins ranging from 3.07% to 11%, confirming the framework’s ability to provide robust detection in complex traffic scenarios. Full article
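The abstract says DC-Fusion combines classification and localization confidence to filter pseudo-labels, but does not give the fusion rule; a geometric-mean fusion is one plausible sketch (the rule, threshold, and scores below are all invented for illustration):

```python
import math

# Hypothetical pseudo-label filter in the spirit of the DC-Fusion module.
# The geometric-mean combination is an assumption; the paper's actual rule
# is not specified in the abstract.
def fused_confidence(cls_conf: float, loc_conf: float) -> float:
    return math.sqrt(cls_conf * loc_conf)

def filter_pseudo_labels(boxes, threshold=0.7):
    """Keep candidate boxes whose fused score clears the threshold.
    Each box is (label, cls_conf, loc_conf); threshold is illustrative."""
    return [b for b in boxes if fused_confidence(b[1], b[2]) >= threshold]

candidates = [
    ("stop", 0.95, 0.90),    # strong on both axes -> kept
    ("yield", 0.92, 0.30),   # well classified but poorly localized -> dropped
    ("limit", 0.40, 0.95),   # well localized but uncertain class -> dropped
]
kept = filter_pseudo_labels(candidates)
```

The point of any such fusion is that a high classification score alone cannot promote a badly localized box into the pseudo-label pool, and vice versa.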
(This article belongs to the Section Vehicular Sensing)

18 pages, 1735 KB  
Article
A High-Precision Time-Varying Survival Model for Early Prediction of Patient Deterioration: A Retrospective Cohort Study
by Nishchay Joshi, Brian Wood, David Chapman, Martin Farrier and Thomas Ingram
J. Clin. Med. 2026, 15(5), 1690; https://doi.org/10.3390/jcm15051690 - 24 Feb 2026
Viewed by 562
Abstract
Background: Clinicians rely on clinical judgement and vital sign monitoring to identify patient deterioration, commonly supported by systems such as the National Early Warning Score 2 (NEWS2). However, NEWS2 is associated with a high false-positive burden, contributing to alert fatigue in increasingly pressured clinical environments. Consequently, there is a growing need for early warning systems (EWS) that not only detect deterioration but do so with higher precision to prioritise clinically meaningful alerts. We aimed to develop and validate a prognostic EWS capable of predicting real-time clinical deterioration in hospitalised adult patients. Methods: We conducted a retrospective observational cohort study using routinely collected Electronic Patient Record (EPR) data. A Cox proportional hazards model with time-varying covariates was developed to estimate dynamic risk of deterioration. Deterioration was defined as unplanned transfer to intensive care, unplanned surgery, or in-hospital death. Data for model development comprised 37,989 adult inpatient episodes admitted between January 2022 and October 2024, and were initially split into training, temporal validation and test datasets. An extended evaluation period included 11,048 patients admitted through September 2025. Model performance was compared with NEWS2 at the emergency-response threshold (≥7). Results: The final model produced a tiered “traffic-light” risk profile and demonstrated substantially higher precision than NEWS2 while maintaining comparable recall in our test data. At the red alert threshold, precision was 60% compared with 16% for NEWS2 ≥7, with 82% versus 43% of alerts occurring within 24 h of deterioration. Performance remained consistent across the extended evaluation period. Conclusions: A survival-based EWS incorporating time-varying covariates achieved higher precision and improved temporal alignment with deterioration events compared with NEWS2. 
A tiered amber–red alert framework may support more targeted escalation, reduce alert fatigue, and enhance early identification of clinical deterioration. Full article
(This article belongs to the Section Intensive Care)

38 pages, 6725 KB  
Article
A BIM-Based Digital Twin Framework for Urban Roads: Integrating MMS and Municipal Geospatial Data for AI-Ready Urban Infrastructure Management
by Vittorio Scolamiero and Piero Boccardo
Sensors 2026, 26(3), 947; https://doi.org/10.3390/s26030947 - 2 Feb 2026
Viewed by 999
Abstract
Digital twins (DTs) are increasingly adopted to enhance the monitoring, management, and planning of urban infrastructure. While DT development for buildings is well established, applications to urban road networks remain limited, particularly in integrating heterogeneous geospatial datasets into semantically rich, multi-scale representations. This study presents a methodology for developing a BIM-based DT of urban roads by integrating geospatial data from Mobile Mapping System (MMS) surveys with semantic information from municipal geodatabases. The approach follows a multi-modal (point clouds, imagery, vector data), multi-scale and multi-level framework, where ‘multi-level’ refers to modeling at different scopes—from a city-wide level, offering a generalized representation of the entire road network, to asset-level detail, capturing parametric BIM elements for individual road segments or specific components such as road signs and road markings, lamp posts, and traffic lights. MMS-derived LiDAR point clouds allow accurate 3D reconstruction of road surfaces, curbs, and ancillary infrastructure, while municipal geodatabases enrich the model with thematic layers including pavement condition, road classification, and street furniture. The resulting DT framework supports multi-scale visualization, asset management, and predictive maintenance. By combining geometric precision with semantic richness, the proposed methodology delivers an interoperable and scalable framework for sustainable urban road management, providing a foundation for AI-ready applications such as automated defect detection, traffic simulation, and predictive maintenance planning. The resulting DT achieved a geometric accuracy of ±3 cm and integrated more than 45 km of urban road network, enabling multi-scale analyses and AI-ready data fusion. Full article
(This article belongs to the Special Issue Intelligent Sensors and Artificial Intelligence in Building)

25 pages, 7567 KB  
Article
Robust Traffic Sign Detection for Obstruction Scenarios in Autonomous Driving
by Xinhao Wang, Limin Zheng, Yuze Song and Jie Li
Symmetry 2026, 18(2), 226; https://doi.org/10.3390/sym18020226 - 27 Jan 2026
Viewed by 470
Abstract
With the rapid advancement of autonomous driving technology, Traffic Sign Detection and Recognition (TSDR) has become a critical component for ensuring vehicle safety. However, existing TSDR systems still face significant challenges in accurately detecting partially occluded traffic signs, which poses a substantial risk in real-world applications. To address this issue, this study proposes a comprehensive solution from three perspectives: data augmentation, model architecture enhancement, and dataset construction. We propose an innovative network framework tailored for occluded traffic sign detection. The framework enhances feature representation through a dual-path convolutional mechanism (DualConv) that preserves information flow even when parts of the sign are blocked, and employs a spatial attention module (SEAM) that helps the model focus on visible sign regions while ignoring occluded areas. Finally, we construct the Jinzhou Traffic Sign (JZTS) occlusion dataset to provide targeted training and evaluation samples. Extensive experiments on the public Tsinghua-Tencent 100K (TT-100K) dataset and our JZTS dataset demonstrate the superior performance and strong generalisation capability of our model under occlusion conditions. This work not only advances the robustness of TSDR systems for autonomous driving but also provides a valuable benchmark for future research. Full article
(This article belongs to the Section Computer)

32 pages, 2129 KB  
Article
Artificial Intelligence-Based Depression Detection
by Gabor Kiss and Patrik Viktor
Sensors 2026, 26(2), 748; https://doi.org/10.3390/s26020748 - 22 Jan 2026
Viewed by 832
Abstract
Decisions made by pilots and drivers suffering from depression can endanger the lives of hundreds of people, as demonstrated by the tragedies of Germanwings flight 9525 and Air India flight 171. Since the detection of depression is currently based largely on subjective self-reporting, there is an urgent need for fast, objective, and reliable detection methods. In our study, we present an artificial intelligence-based system that combines iris-based identification with the analysis of pupillometric and eye movement biomarkers, enabling the real-time detection of physiological signs of depression before driving or flying. The two-module model was evaluated based on data from 242 participants: the iris identification module operated with an Equal Error Rate of less than 0.5%, while the depression-detecting CNN-LSTM network achieved 89% accuracy and an AUC value of 0.94. Compared to the neutral state, depressed individuals responded to negative news with significantly greater pupil dilation (+27.9% vs. +18.4%), while showing a reduced or minimal response to positive stimuli (−1.3% vs. +6.2%). This was complemented by slower saccadic movement and longer fixation time, which is consistent with the cognitive distortions characteristic of depression. Our results indicate that pupillometric deviations relative to individual baselines can be reliably detected and used with high accuracy for depression screening. The presented system offers a preventive safety solution that could reduce the number of accidents caused by human error related to depression in road and air traffic in the future. Full article

23 pages, 41532 KB  
Article
CW-DETR: An Efficient Detection Transformer for Traffic Signs in Complex Weather
by Tianpeng Wang, Qiaoshuang Teng, Shangyu Sun, Weidong Song, Jinhe Zhang and Yuxuan Li
Sensors 2026, 26(1), 325; https://doi.org/10.3390/s26010325 - 4 Jan 2026
Viewed by 744
Abstract
Traffic sign detection under adverse weather conditions remains challenging due to severe feature degradation caused by rain, fog, and snow, which significantly impairs the performance of existing detection systems. This study presents the CW-DETR (Complex Weather Detection Transformer), an end-to-end detection framework designed to address weather-induced feature deterioration in real-time applications. Building upon the RT-DETR, our approach integrates four key innovations: a multipath feature enhancement network (FPFENet) for preserving fine-grained textures, a Multiscale Edge Enhancement Module (MEEM) for combating boundary degradation, an adaptive dual-stream bidirectional feature pyramid network (ADBF-FPN) for cross-scale feature compensation, and a multiscale convolutional gating module (MCGM) for suppressing semantic–spatial confusion. Extensive experiments on the CCTSDB2021 dataset demonstrate that the CW-DETR achieves 69.0% AP and 94.4% AP50, outperforming state-of-the-art real-time detectors by 2.3–5.7 percentage points while maintaining computational efficiency (56.8 GFLOPs). A cross-dataset evaluation on TT100K, the TSRD, CNTSSS, and real-world snow conditions (LNTU-TSD) confirms the robust generalization capabilities of the proposed model. These results establish CW-DETR as an effective solution for all-weather traffic sign detection in intelligent transportation systems. Full article
(This article belongs to the Section Remote Sensors)

27 pages, 26736 KB  
Article
A Lightweight Traffic Sign Small Target Detection Network Suitable for Complex Environments
by Zonghong Feng, Liangchang Li, Kai Xu and Yong Wang
Appl. Sci. 2026, 16(1), 326; https://doi.org/10.3390/app16010326 - 28 Dec 2025
Cited by 1 | Viewed by 748
Abstract
With the increasing frequency of traffic safety issues and the rapid development of autonomous driving technology, traffic sign detection is highly susceptible to adverse weather conditions such as changes in light intensity, fog, rain, snow, and partial occlusion, which places higher demands on the accurate recognition of traffic signs. This paper proposes an improved DAYOLO model based on YOLOv8n, aiming to balance detection accuracy and model complexity. First, the Bottleneck in the C2f module of the YOLOv8n backbone network is replaced with Bottleneck DAttention. Introducing DAttention allows for more effective feature extraction, thereby improving model performance. Second, an ultra-lightweight and efficient upsampler, Dysample, is introduced into the neck network to further improve performance and reduce computational overhead. Finally, a Task-Aligned Dynamic Detection Head (TADDH) is introduced. TADDH enhances task interaction through a dynamic mechanism and utilizes shared convolutional modules to reduce parameters and improve efficiency. Simultaneously, an additional Layer2 detection head is added to the model to strengthen the extraction and fusion of features at different scales, thereby improving the detection accuracy of small traffic signs. Furthermore, replacing SlideLoss with NWDLoss can better handle prediction results with more complex distributions and more accurately measure the distance between predicted and ground truth boxes in the feature space during object detection. Experimental results show that DAYOLO achieves 97.2% mAP on the SDCCVP dataset, 5.3 percentage points higher than the baseline model YOLOv8n; the frame rate reaches 120 FPS, 37.8% higher than YOLOv8n; and the number of parameters is reduced by 6.2%, outperforming models such as YOLOv3, YOLOv5, YOLOv6, and YOLOv7. In addition, DAYOLO achieves 80.8% mAP on the TT100K dataset, which is 9.2% higher than the baseline model YOLOv8n. The proposed method achieves a balance between model size and detection accuracy, meets the needs of traffic sign detection, and provides new ideas and methods for future research in the field of traffic sign detection. Full article

8 pages, 817 KB  
Proceeding Paper
Comparison of Attacks on Traffic Sign Detection Models for Autonomous Vehicles
by Chu-Hsing Lin and Guan-Wei Chen
Eng. Proc. 2025, 120(1), 7; https://doi.org/10.3390/engproc2025120007 - 25 Dec 2025
Viewed by 573
Abstract
In recent years, artificial intelligence technology has developed rapidly, and the automobile industry has launched autonomous driving systems. However, the cybersecurity of the autonomous driving systems installed in unmanned vehicles still needs strengthening: many potential attacks may lead to traffic accidents and endanger passengers. We explored two potential attacks against autonomous driving systems, stroboscopic attacks and colored-light illumination attacks, and analyzed their impact on the accuracy of traffic sign recognition by deep learning models such as convolutional neural networks (CNNs) and You Only Look Once (YOLO)v5. We trained a CNN and YOLOv5 on the German Traffic Sign Recognition Benchmark dataset and then subjected traffic signs to various attacks, including LED strobing and illumination with LED light of various colors. In an experimental setup, we tested how LED lights with different flashing frequencies and colors affect the recognition accuracy of each model. The experimental results show that the CNN is more resilient to these attacks than YOLOv5. In addition, each attack method interferes with the original model to some extent, degrading a self-driving car's ability to recognize traffic signs: the system may fail to detect a sign altogether or make incorrect decisions about the recognition result.
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
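The colored-light illumination attack described above can be approximated offline by blending a uniform color layer over test images before running inference, which lets one probe a classifier's sensitivity to light color without physical hardware. A minimal sketch, assuming uint8 RGB images; the `color` and `strength` parameters are illustrative knobs, not values from the paper:

```python
import numpy as np

def tint_attack(image, color=(255, 0, 0), strength=0.4):
    """Simulate a colored-light illumination attack on an HxWx3 uint8 image
    by alpha-blending a uniform color layer over it.

    strength=0 returns the original image; strength=1 returns pure color.
    """
    img = image.astype(np.float32)
    # Uniform layer of the attack color, same shape as the image.
    layer = np.ones_like(img) * np.array(color, dtype=np.float32)
    out = (1.0 - strength) * img + strength * layer
    return np.clip(out, 0, 255).astype(np.uint8)
```

Sweeping `color` and `strength` over a labeled test set and re-measuring accuracy gives a rough software-only analogue of the paper's physical LED experiments.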
36 pages, 9216 KB  
Article
LSTM-CA-YOLOv11: A Road Sign Detection Model Integrating LSTM Temporal Modeling and Multi-Scale Attention Mechanism
by Tianlei Ye, Yajie Pang, Yihong Li, Enming Liang, Yunfei Wang and Tong Zhou
Appl. Sci. 2026, 16(1), 116; https://doi.org/10.3390/app16010116 - 22 Dec 2025
Viewed by 694
Abstract
Traffic sign detection is crucial for intelligent transportation and autonomous driving, yet it faces challenges such as illumination variations, occlusions, and scale changes that impact accuracy. To address these issues, this paper proposes the LSTM-CA-YOLOv11 model. The approach pioneers the integration of a Bi-LSTM (Bi-directional Long Short-Term Memory) into the YOLOv11 backbone network to model spatial-sequence dependencies, enhancing structured feature extraction. The lightweight CA (Coordinate Attention) module encodes precise positional information by capturing horizontal and vertical features. The MSEF (Multi-Scale Enhancement Fusion) module addresses scale variations through parallel convolutional and pooling branches with adaptive fusion. We further introduce the SPP-Plus (Spatial Pyramid Pooling-Plus) module to expand the receptive field while preserving fine details, and employ a focus IoU (Intersection over Union) loss to prioritise challenging samples, improving regression accuracy. On a private dataset of 10,231 images, experiments show that the model achieves a mAP@0.5 of 93.4% and a mAP@0.5:0.95 of 79.5%, improvements of 5.3% and 4.7% over the baseline, respectively. Furthermore, on the public TT100K (Tsinghua-Tencent 100K) dataset the model surpasses the latest YOLOv13n by 5.3% in mAP@0.5 and 3.9% in mAP@0.5:0.95, demonstrating robust cross-dataset generalisation and strong practical deployment feasibility.
(This article belongs to the Special Issue AI in Object Detection)
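The key step of the Coordinate Attention module mentioned above is pooling the feature map along each spatial axis separately, so that per-row and per-column positional information survives rather than being collapsed into a single global vector. A simplified NumPy sketch of just that directional pooling (the full module also applies shared 1×1 convolutions and sigmoid gating, omitted here):

```python
import numpy as np

def coordinate_pool(feat):
    """Directional average pooling from Coordinate Attention.

    feat: (C, H, W) feature map.
    Returns (C, H, 1) row-wise averages and (C, 1, W) column-wise averages,
    which the full CA module would turn into attention weights along
    height and width.
    """
    pooled_h = feat.mean(axis=2, keepdims=True)  # (C, H, 1): one value per row
    pooled_w = feat.mean(axis=1, keepdims=True)  # (C, 1, W): one value per column
    return pooled_h, pooled_w
```

Because each pooled tensor keeps one spatial axis intact, the resulting attention can localise where along that axis a sign-like feature lies, unlike plain global average pooling.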