Search Results (63)

Search Parameters:
Keywords = roadside LiDAR

21 pages, 3931 KB  
Article
Vehicle Speed Estimation Using Infrastructure-Mounted LiDAR via Rectangle Edge Matching
by Injun Hong and Manbok Park
Appl. Sci. 2026, 16(5), 2513; https://doi.org/10.3390/app16052513 - 5 Mar 2026
Viewed by 277
Abstract
Smart transportation infrastructure is increasingly deployed, and cooperative perception using stationary Light Detection and Ranging (LiDAR) sensors installed at intersections and along roadsides is becoming more important. However, infrastructure LiDAR often suffers from sparse point-cloud data (PCD) at long ranges and frequent occlusions, which can degrade the stability of inter-frame displacement and speed estimation. This paper proposes a real-time vehicle speed estimation method that operates robustly under sparse and partially observed conditions. The proposed approach extracts boundary points from clustered vehicle PCD and removes outliers, and then fits a 2D rectangle to the vehicle contour via Gauss–Newton optimization by minimizing distance-based residuals between boundary points and rectangle edges. To further improve robustness, we incorporate Hessian augmentation terms that account for boundary states and size variations, thereby alleviating excessive boundary violations and abnormal deformation of the width and height parameters during iterations. Next, from the fitted rectangles in consecutive frames, we construct a nearest corner with respect to the LiDAR origin and an auxiliary point, and perform 2D SVD-based alignment using only these two representative points. This enables efficient computation of inter-frame displacement and speed without full point-cloud registration (e.g., iterative closest point (ICP)). Experiments conducted at an intersection in K-City (Hwaseong, Republic of Korea) using a 40-channel LiDAR, a test vehicle (Genesis G70), and a real-time kinematic (RTK) system (MRP-2000) show that the proposed method stably preserves representative points and fits rectangles, even in sparse regions where only about two LiDAR rings are observed. 
Using CAN-based vehicle speed as the reference, the proposed method achieves an MAE of 0.76–1.37 kph and an RMSE of 0.90–1.58 kph over the tested speed settings (30, 50, and 70 kph, as well as high speed (~90 kph)) and trajectory scenarios. Furthermore, per-object processing-time measurements confirm the real-time feasibility of the proposed algorithm. Full article
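The two-point, SVD-based alignment this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, example points, and frame interval are hypothetical, and only the corner/auxiliary point pair per frame is used, as the paper indicates.

```python
import numpy as np

def two_point_alignment(p_prev, p_curr):
    """Kabsch/SVD rigid alignment of the two representative points
    (nearest corner + auxiliary point) between consecutive frames."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_curr - R @ c_prev
    return R, t

def speed_kph(p_prev, p_curr, dt):
    """Speed from the centroid displacement of the two points over dt seconds."""
    disp = np.linalg.norm(p_curr.mean(axis=0) - p_prev.mean(axis=0))
    return disp / dt * 3.6
```

With only two correspondences the rotation is still well defined in 2D, which is what makes this far cheaper than full point-cloud registration such as ICP.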

14 pages, 5168 KB  
Article
The Concept of a Digital Twin in the Arctic Environment
by Ari Pikkarainen, Timo Sukuvaara, Kari Mäenpää, Hannu Honkanen and Pyry Myllymäki
Electronics 2026, 15(5), 1001; https://doi.org/10.3390/electronics15051001 - 28 Feb 2026
Viewed by 268
Abstract
A Digital Twin is a virtual environment that simulates, predicts, and optimizes the performance of its physical counterpart. Digital Twin models hold great potential in wireless networking testing and development. This paper aims to envision our concept of simulating the operation of different sensors in vehicle test-track conditions. Vehicle parameters are embedded into the edge computing entity, which uses them to generate a test configuration for the Digital Twin. This configuration is then applied in simulated sensor-output prediction, ultimately producing event data for the vehicle entity. The sensor suite—comprising radar, cameras, GPS and LiDAR—is modeled to provide the multi-modal input required for generating simulated perception data in the Digital Twin. To ensure realistic perception behavior, the physical vehicle is represented within a digital environment that reproduces the actual test track. This allows LiDAR occlusions to be attributed to genuine environmental structures (e.g., trees, buildings, other vehicles) rather than simulation artifacts. Within the Digital Twin, the objective is to evaluate how sensor signals—such as radar waves and LiDAR light pulses—propagate through the environment and how real-world obstacles may weaken or distort them. Historical datasets are used to calibrate and validate the Digital Twin, ensuring that the simulated sensor behavior aligns with real-world observations; the data collected during previous test runs can be used for visualization and analysis. Weather conditions are modeled to evaluate how rain, fog and snow impact sensor performance within the Digital Twin environment, to learn about the effects and predict sensor operation in different weather conditions. In this article, we examine the Digital Twin of our test track as a development environment for designing, deploying and testing ITS-enhanced road-weather services and warnings. 
These services integrate real-world road-weather observations, forecast data, roadside sensors and on-board vehicle measurements to support safe driving and optimize vehicle trajectories for both passenger and autonomous vehicles. This research is expected to benefit stakeholders involved in automotive testing, simulation and road-weather service development. Full article

20 pages, 3202 KB  
Article
Robust LiDAR-Based Train Detection via Point Cloud Segmentation for Railway Safety
by Yuxing Yang, Siyue Yu and Jimin Xiao
Sensors 2026, 26(5), 1514; https://doi.org/10.3390/s26051514 - 27 Feb 2026
Viewed by 294
Abstract
Ensuring railway safety requires reliable monitoring of trains in critical safety areas, such as station throat zones and railway crossings. Compared with cameras, roadside LiDAR can more reliably capture the geometry of trains under low-light, high-speed, and adverse weather conditions. However, industrial LiDAR solutions still primarily use the background comparison technique, which compares each sample against a pre-recorded clean map and then applies a size-based filter. Such approaches are highly sensitive to point cloud background changes arising from varying LiDAR installation distances, train speeds, and surface materials, often resulting in fragmented clustering and missed detections. In this paper, train detection is reformulated as a point-level semantic segmentation problem. A lightweight 3D segmentation network that directly predicts train points from raw data is designed, and clustering-based post-processing is applied to generate train-level events in real time. Experiments on real railway data under various operating conditions show that the proposed method achieves higher detection accuracy and greater robustness than traditional compare-based methods and representative deep learning benchmark methods, and is therefore suitable for practical railway safety monitoring. Full article
(This article belongs to the Section Intelligent Sensors)
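The clustering-based post-processing step — grouping points the segmentation network labelled as "train" into train-level objects — might look like the sketch below. This is a simplified single-linkage clustering, not the paper's code; `radius` and `min_points` are assumed parameters.

```python
import numpy as np
from collections import deque

def cluster_train_points(points, radius=1.5, min_points=5):
    """BFS single-linkage clustering of predicted train points.
    Clusters smaller than min_points are rejected as noise (-2)."""
    labels = np.full(len(points), -1)
    next_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_id
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d <= radius) & (labels == -1))[0]:
                labels[j] = next_id
                members.append(j)
                queue.append(j)
        if len(members) < min_points:
            labels[members] = -2      # undersized: segmentation noise
        else:
            next_id += 1
    return labels
```

Each surviving cluster would then be reported as one train-level event, which is how point-level predictions become real-time detections.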

28 pages, 9300 KB  
Article
Multi-Target Tracking with Collaborative Roadside Units Under Foggy Conditions
by Tao Shi, Xuan Wang, Wei Jiang, Xiansheng Huang, Ming Cen, Shuai Cao and Hao Zhou
Sensors 2026, 26(3), 998; https://doi.org/10.3390/s26030998 - 3 Feb 2026
Viewed by 368
Abstract
The Intelligent Road Side Unit (RSU) is a crucial component of Intelligent Transportation Systems (ITSs), where roadside LiDARs are widely utilized for their high precision and resolution. However, water droplets and atmospheric particles in fog significantly attenuate and scatter LiDAR beams, posing a challenge to multi-target tracking and ITS safety. To enhance the accuracy and reliability of RSU-based tracking, a collaborative RSU method that integrates denoising and tracking for multi-target tracking is proposed. The proposed approach first dynamically adjusts the filtering kernel scale based on local noise levels to effectively remove noisy point clouds using a modified bilateral filter. Subsequently, a multi-RSU cooperative tracking framework is designed, which employs a particle Probability Hypothesis Density (PHD) filter to estimate target states via measurement fusion. A multi-target tracking system for intelligent RSUs in foggy scenarios was designed and implemented. Extensive experiments were conducted using an intelligent roadside platform in real-world fog-affected traffic environments to validate the accuracy and real-time performance of the proposed algorithm. Experimental results demonstrate that, after fog-noise removal, the proposed method improves target detection accuracy by 8% and 29% under thin and thick fog conditions, respectively, compared to statistical filtering methods. At the same time, the method tracks multi-class targets well, surpassing existing state-of-the-art methods, especially on high-order evaluation indicators such as HOTA, MOTA, and IDs. Full article
(This article belongs to the Section Vehicular Sensing)
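As a rough illustration of noise-adaptive filtering — a deliberately simplified stand-in, not the paper's modified bilateral filter — one can scale an outlier threshold by the local point spacing, since fog returns tend to be sparse and isolated; `k` and `scale` are assumed parameters.

```python
import numpy as np

def adaptive_noise_filter(points, k=4, scale=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds `scale` times the median spacing (isolated fog returns)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    local = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    return points[local <= scale * np.median(local)]
```

A true bilateral filter would additionally weight neighbours by a range (intensity) term; this sketch captures only the "adjust the scale to local noise" idea from the abstract.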

19 pages, 5725 KB  
Article
Real-Time 3D Scene Understanding for Road Safety: Depth Estimation and Object Detection for Autonomous Vehicle Awareness
by Marcel Simeonov, Andrei Kurdiumov and Milan Dado
Vehicles 2026, 8(2), 28; https://doi.org/10.3390/vehicles8020028 - 2 Feb 2026
Viewed by 822
Abstract
Accurate depth perception is vital for autonomous driving and roadside monitoring. Traditional stereo vision methods are cost-effective but often fail under challenging conditions such as low texture, reflections, or complex lighting. This work presents a perception pipeline built around FoundationStereo, a Transformer-based stereo depth estimation model. At low resolutions, FoundationStereo achieves real-time performance (up to 26 FPS) on embedded platforms like NVIDIA Jetson AGX Orin with TensorRT acceleration and power-of-two input sizes, enabling deployment in roadside cameras and in-vehicle systems. For Full HD stereo pairs, the same model delivers dense and precise environmental scans, complementing LiDAR while maintaining a high level of accuracy. YOLO11 object detection and segmentation is deployed in parallel for object extraction. Detected objects are removed from depth maps generated by FoundationStereo prior to point cloud generation, producing cleaner 3D reconstructions of the environment. This approach demonstrates that advanced stereo networks can operate efficiently on embedded hardware. Rather than replacing LiDAR or radar, it complements existing sensors by providing dense depth maps in situations where other sensors may be limited. By improving depth completeness, robustness, and enabling filtered point clouds, the proposed system supports safer navigation, collision avoidance, and scalable roadside infrastructure scanning for autonomous mobility. Full article
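Removing detected objects from the depth map before point-cloud generation amounts to masking pixels prior to pinhole back-projection. A minimal sketch, assuming boolean instance masks from the segmentation model; the intrinsics and function name are illustrative, not from the paper.

```python
import numpy as np

def backproject_excluding_objects(depth, masks, fx, fy, cx, cy):
    """Back-project a depth map (pinhole model) to a 3D point cloud,
    skipping invalid pixels and pixels covered by any instance mask."""
    keep = depth > 0
    for m in masks:                 # boolean HxW masks, one per detection
        keep &= ~m
    v, u = np.nonzero(keep)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

The result is the "cleaner 3D reconstruction" the abstract mentions: static scene geometry without the dynamic objects.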

19 pages, 4337 KB  
Article
Automatic Real-Time Queue Length Detection Method of Multiple Lanes at Intersections Based on Roadside LiDAR
by Qian Chen, Jianying Zheng, Ennian Du, Xiang Wang, Wenjuan E, Xingxing Jiang, Yang Xiao, Yuxin Zhang and Tieshan Li
Electronics 2026, 15(3), 585; https://doi.org/10.3390/electronics15030585 - 29 Jan 2026
Viewed by 390
Abstract
Signal intersections are key nodes in urban road traffic networks, and real-time queue length information serves as a core performance indicator for formulating effective signal management schemes in modern adaptive traffic signal control systems, thereby enhancing traffic efficiency. In this study, a roadside Light Detection and Ranging (LiDAR) sensor is employed to acquire 3D point cloud data of vehicles in the road space, which serves as the basis for queue length detection. However, during queue-length detection, vehicles in different lanes are prone to occlusion because of the straight-line propagation of laser beams. This paper proposes a queue-length detection method based on variations in vehicle point cloud features to address the occlusion of queue-end vehicles during detection. This method first preprocesses LiDAR point cloud data (including region-of-interest extraction, ground-point filtering, point cloud clustering, object association, and lane recognition) to detect real-time queue lengths across multiple lanes. Subsequently, the occlusion problem is categorized into complete occlusion and partial occlusion, and corresponding processing is performed to correct the detection results. The performance of the proposed queue length detection method was validated through experiments that collected real-world data from three urban road intersections in Suzhou. The results indicate that this method’s average accuracy can reach 99.3%. Furthermore, the effectiveness of the proposed occlusion handling method has been validated through experiments. Full article
(This article belongs to the Section Computer Science & Engineering)
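At its core, per-lane queue length reduces to finding the farthest slow-moving vehicle upstream of the stop line. A simplified sketch under assumed inputs — the paper's clustering and occlusion correction are not reproduced here, and the speed threshold is illustrative.

```python
def lane_queue_length(vehicles, stop_line_x=0.0, speed_threshold=1.0):
    """Queue length of one lane: distance from the stop line to the rear
    of the farthest queued vehicle. `vehicles` holds (rear_x, speed_mps)
    tuples, with x measured along the lane away from the stop line."""
    queued = [rear_x for rear_x, speed in vehicles if speed < speed_threshold]
    return max(queued) - stop_line_x if queued else 0.0
```

The paper's occlusion handling would, in effect, correct the `vehicles` list before this final measurement, restoring queue-end vehicles hidden from the LiDAR.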

20 pages, 3382 KB  
Article
CFFCNet: Center-Guided Feature Fusion Completion for Accurate Vehicle Localization and Dimension Estimation from Lidar Point Clouds
by Xiaoyi Chen, Xiao Feng, Shichen Zhang, Wen Xiao, Miao Tang and Kun Sun
Remote Sens. 2026, 18(1), 39; https://doi.org/10.3390/rs18010039 - 23 Dec 2025
Viewed by 644
Abstract
Accurate scene understanding from 3D point cloud data is fundamental to intelligent transportation systems and geospatial digital twins. However, point clouds acquired from lidar sensors in urban environments suffer from incompleteness due to occlusions and limited sensor resolution, presenting significant challenges for precise object localization and geometric reconstruction—critical requirements for traffic safety monitoring and autonomous navigation. To address these point cloud processing challenges, we propose a Center-guided Feature Fusion Completion Network (CFFCNet) that enhances vehicle representation through geometry-aware point cloud completion. The network incorporates a Branch-assisted Center Perception (BCP) module that learns to predict geometric centers while extracting multi-scale spatial features, generating initial coarse completions that account for the misalignment between detection centers and true geometric centers in real-world data. Subsequently, a Multi-scale Feature Blending Upsampling (MFBU) module progressively refines these completions by fusing hierarchical features across multiple stages, producing accurate and complete vehicle point clouds. Comprehensive evaluations on the KITTI dataset demonstrate substantial improvements in geometric accuracy, with localization mean absolute error (MAE) reduced to 0.0928 m and length MAE to 0.085 m. The method’s generalization capability is further validated on a real-world roadside lidar dataset (CUG-Roadside) without fine-tuning, achieving localization MAE of 0.051 m and length MAE of 0.051 m. These results demonstrate the effectiveness of geometry-guided completion for point cloud scene understanding in infrastructure-based traffic monitoring applications, contributing to the development of robust 3D perception systems for urban geospatial environments. Full article
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)

22 pages, 1441 KB  
Review
Use of Plant Growth Regulators for Sustainable Management of Vegetation in Highway
by Caio Lucas Alhadas de Paula Velloso, Job Teixeira de Oliveira, Fábio Henrique Rojo Baio, Fernando França da Cunha and Jaime Teixeira de Oliveira
Eng 2025, 6(12), 350; https://doi.org/10.3390/eng6120350 - 4 Dec 2025
Cited by 1 | Viewed by 786
Abstract
Plant growth regulators (PGRs) are natural or synthetic substances that control and manipulate plant physiological processes, controlling branching and vegetative growth. Maintaining roadside vegetation through frequent mowing is costly, dangerous, and unsustainable. This narrative literature review proposes a revolution in this management by conducting a systematic literature review on the strategic application of PGRs on roadsides. Practices such as the application of plant growth regulators, the use of native cover crops, and bioengineering techniques with stabilizing species were analyzed. Previous studies have shown that the use of regulators such as mepiquat chloride and paclobutrazol reduces plant height and aboveground biomass, favoring growth control and compacting the plant architecture. The environmental and operational impacts related to vegetation control on roadside strips were also considered. Integrated with LiDAR technology for precise monitoring, this model establishes a new paradigm: smart, safe, and sustainable. Therefore, it is hoped that this compendium will fill a gap in national guidelines by offering an evidence-based protocol guideline for the use of PGR as an alternative to traditional management methods, thus reducing the number of mowing and weeding operations in highway right-of-way areas. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

35 pages, 6608 KB  
Article
BcDKM: Blockchain-Based Dynamic Key Management Scheme for Crowd Sensing in Vehicular Sensor Networks
by Mingrui Zhang, Ru Meng and Lei Zhang
Sensors 2025, 25(18), 5699; https://doi.org/10.3390/s25185699 - 12 Sep 2025
Cited by 1 | Viewed by 800
Abstract
Vehicular sensor networks (VSNs) consist of vehicles equipped with various sensing devices, such as LiDAR. In a VSN, vehicles and/or roadside units (RSUs) can be organized into a vehicular cloud (VC) to enable the sharing of sensing and computational resources among participants, thereby supporting crowd-sensing applications. However, the highly dynamic nature of vehicular mobility poses significant challenges in terms of establishing secure and scalable group communication within the VC. To address these challenges, we first introduce a lightweight extension of the continuous group key agreement (CGKA) scheme by incorporating an administrator mechanism. The resulting scheme, referred to as CGKAwAM, supports the designation of multiple administrators within a single group for flexible member management. Building upon CGKAwAM, we propose a blockchain-based dynamic key management scheme, termed BcDKM. This scheme supports asynchronous join and leave operations while achieving communication round optimality. Furthermore, RSUs are leveraged as blockchain nodes to enable decentralized VC discovery and management, ensuring scalability without relying on a centralized server. We formally analyze the security of both CGKAwAM and BcDKM. The results demonstrate that the proposed scheme satisfies several critical security properties, including known-key security, forward secrecy, post-compromise security, and vehicle privacy. Experimental evaluations further confirm that BcDKM is practical and achieves a well-balanced tradeoff between security and performance. Full article
(This article belongs to the Special Issue Advanced Vehicular Ad Hoc Networks: 2nd Edition)

28 pages, 12681 KB  
Article
MM-VSM: Multi-Modal Vehicle Semantic Mesh and Trajectory Reconstruction for Image-Based Cooperative Perception
by Márton Cserni, András Rövid and Zsolt Szalay
Appl. Sci. 2025, 15(12), 6930; https://doi.org/10.3390/app15126930 - 19 Jun 2025
Cited by 3 | Viewed by 1399
Abstract
Recent advancements in cooperative 3D object detection have demonstrated significant potential for enhancing autonomous driving by integrating roadside infrastructure data. However, deploying comprehensive LiDAR-based cooperative perception systems remains prohibitively expensive and requires precisely annotated 3D data to function robustly. This paper proposes an improved multi-modal method integrating LiDAR-based shape references into a previously mono-camera-based semantic vertex reconstruction framework to enable robust and cost-effective monocular and cooperative pose estimation after the reconstruction. A novel camera–LiDAR loss function that combines re-projection loss from a multi-view camera system alongside LiDAR shape constraints is proposed. Experimental evaluations conducted on the Argoverse dataset and real-world experiments demonstrate significantly improved shape reconstruction robustness and accuracy, thereby improving pose estimation performance. The effectiveness of the algorithm is proven through a real-world smart valet parking application, which is evaluated in our university parking area with real vehicles. Our approach allows accurate 6DOF pose estimation using an inexpensive IP camera without requiring context-specific training, thereby advancing the state of the art in monocular and cooperative image-based vehicle localization. Full article
(This article belongs to the Special Issue Advances in Autonomous Driving and Smart Transportation)

26 pages, 24577 KB  
Article
Infra-3DRC-FusionNet: Deep Fusion of Roadside Mounted RGB Mono Camera and Three-Dimensional Automotive Radar for Traffic User Detection
by Shiva Agrawal, Savankumar Bhanderi and Gordon Elger
Sensors 2025, 25(11), 3422; https://doi.org/10.3390/s25113422 - 29 May 2025
Cited by 7 | Viewed by 2627
Abstract
Mono RGB cameras and automotive radar sensors provide a complementary information set that makes them excellent candidates for sensor data fusion to obtain robust traffic user detection. This has been widely used in the vehicle domain and recently introduced in roadside-mounted smart infrastructure-based road user detection. However, the performance of the most commonly used late fusion methods often degrades when the camera fails to detect road users in adverse environmental conditions. The solution is to fuse the data using deep neural networks at the early stage of the fusion pipeline to use the complete data provided by both sensors. Research has been carried out in this area, but is limited to vehicle-based sensor setups. Hence, this work proposes a novel deep neural network to jointly fuse RGB mono-camera images and 3D automotive radar point cloud data to obtain enhanced traffic user detection for the roadside-mounted smart infrastructure setup. Projected radar points are first used to generate anchors in image regions with a high likelihood of road users, including areas not visible to the camera. These anchors guide the prediction of 2D bounding boxes, object categories, and confidence scores. Valid detections are then used to segment radar points by instance, and the results are post-processed to produce final road user detections in the ground plane. The trained model is evaluated for different light and weather conditions using ground truth data from a lidar sensor. It provides a precision of 92%, recall of 78%, and F1-score of 85%. The proposed deep fusion methodology has 33%, 6%, and 21% absolute improvement in precision, recall, and F1-score, respectively, compared to object-level spatial fusion output. Full article
(This article belongs to the Special Issue Multi-sensor Integration for Navigation and Environmental Sensing)
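The anchor-generation step — projecting radar points into the image to seed candidate boxes — can be illustrated as below. This is a hedged sketch: the intrinsics, extrinsics, and the fixed anchor size are assumptions, not the network's learned behavior.

```python
import numpy as np

def radar_points_to_anchors(points, K, T, image_size, box=(64, 64)):
    """Project 3D radar points through extrinsics T (4x4, radar->camera)
    and intrinsics K, then place one fixed-size anchor per in-frame point."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T @ pts_h.T).T[:, :3]        # radar frame -> camera frame
    cam = cam[cam[:, 2] > 0]            # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]         # perspective divide
    w, h = image_size
    bw, bh = box
    return [(u - bw / 2, v - bh / 2, u + bw / 2, v + bh / 2)
            for u, v in uv if 0 <= u < w and 0 <= v < h]
```

Because the radar still returns points in regions the camera cannot see, anchors seeded this way can cover road users invisible to the image branch, which is the key benefit the abstract claims over late fusion.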

32 pages, 2107 KB  
Review
Vulnerable Road User Detection for Roadside-Assisted Safety Protection: A Comprehensive Survey
by Ziyan Zhang, Chuheng Wei, Guoyuan Wu and Matthew J. Barth
Appl. Sci. 2025, 15(7), 3797; https://doi.org/10.3390/app15073797 - 30 Mar 2025
Cited by 4 | Viewed by 3823
Abstract
In recent years, the safety of vulnerable road users (VRUs), such as pedestrians, cyclists, and micro-mobility users, has become an increasingly significant concern in urban transportation systems worldwide. Reliable and accurate detection of VRUs is essential for effective safety protection. This survey explores the techniques and methodologies used to detect VRUs, ranging from conventional methods to state-of-the-art (SOTA) approaches, with a primary focus on infrastructure-based detection. This study synthesizes findings from recent research papers and technical reports, emphasizing sensor modalities such as cameras, LiDAR, and RADAR. Furthermore, the survey examines benchmark datasets used to train and evaluate VRU detection models. Alongside innovative detection models and sufficient datasets, key challenges and emerging trends in algorithm development and dataset collection are also discussed. This comprehensive overview aims to provide insights into current advancements and inform the development of robust and reliable roadside detection systems to enhance the safety and efficiency of VRUs in modern transportation systems. Full article
(This article belongs to the Special Issue Computer Vision of Edge AI on Automobile)

14 pages, 4564 KB  
Article
Exploring Climate and Air Pollution Mitigating Benefits of Urban Parks in Sao Paulo Through a Pollution Sensor Network
by Patrick Connerton, Thiago Nogueira, Prashant Kumar, Maria de Fatima Andrade and Helena Ribeiro
Int. J. Environ. Res. Public Health 2025, 22(2), 306; https://doi.org/10.3390/ijerph22020306 - 18 Feb 2025
Cited by 4 | Viewed by 2195
Abstract
Ambient air pollution is the most important environmental factor impacting human health. Urban landscapes present unique air quality challenges, which are compounded by climate change adaptation challenges, as air pollutants can also be affected by the urban heat island effect, amplifying the deleterious effects on health. Nature-based solutions have shown potential for alleviating environmental stressors, including air pollution and heat wave abatement. However, such solutions must be designed in order to maximize mitigation and not inadvertently increase pollutant exposure. This study aims to demonstrate potential applications of nature-based solutions in urban environments for climate stressors and air pollution mitigation by analyzing two distinct scenarios with and without green infrastructure. Utilizing low-cost sensors, we examine the relationship between green infrastructure and a series of environmental parameters. While previous studies have investigated green infrastructure and air quality mitigation, our study employs low-cost sensors in tropical urban environments. Through this novel approach, we are able to obtain highly localized data that demonstrates this mitigating relationship. In this study, as a part of the NERC-FAPESP-funded GreenCities project, four low-cost sensors were validated through laboratory testing and then deployed in two locations in São Paulo, Brazil: one large, heavily forested park (CIENTEC) and one small park surrounded by densely built areas (FSP). At each site, one sensor was located in a vegetated area (Park sensor) and one near the roadside (Road sensor). The locations selected allow for a comparison of built versus green and blue areas. Lidar data were used to characterize the profile of each site based on surrounding vegetation and building area. Distance and class of the closest roadways were also measured for each sensor location. 
These profiles are analyzed against the data obtained through the low-cost sensors, considering both meteorological (temperature, humidity and pressure) and particulate matter (PM1, PM2.5 and PM10) parameters. Particulate matter concentrations were lower for the sensors located within the forest site. At both sites, the road sensors showed higher concentrations during the daytime period. These results further reinforce the capabilities of green–blue–gray infrastructure (GBGI) tools to reduce exposure to air pollution and climate stressors, while also showing the importance of their design to ensure maximum benefits. The findings can inform decision-makers in designing more resilient cities, especially in low-and middle-income settings. Full article

17 pages, 5789 KB  
Article
Vehicle Trajectory Repair Under Full Occlusion and Limited Datapoints with Roadside LiDAR
by Qiyang Luo, Zhenyu Xu, Yibin Zhang, Morris Igene, Tamer Bataineh, Mohammad Soltanirad, Keshav Jimee and Hongchao Liu
Sensors 2025, 25(4), 1114; https://doi.org/10.3390/s25041114 - 12 Feb 2025
Cited by 5 | Viewed by 1852
Abstract
Object occlusion is a common challenge in roadside LiDAR-based vehicle tracking. This issue introduces errors into vehicle location and speed calculations. This paper proposes a vehicle tracking post-processing method designed to handle full occlusion and limited-datapoint conditions. The first part of the method focuses on linking the disconnected trajectories of the same vehicle caused by full occlusion. The second part refines the vehicle representative point to enhance tracking accuracy. Performance evaluation demonstrates that the proposed method can detect and reconnect the trajectories of the same vehicle, even under prolonged full occlusion. Moreover, the refined vehicle representative point provides more stable speed estimates, even with sparse datapoints. This significantly increases the effective detection range of roadside LiDAR. This approach lays a strong foundation for the application of roadside LiDAR in emission analysis and near-crash studies. Full article
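The trajectory-linking step described above, reconnecting tracks of the same vehicle separated by full occlusion, is commonly done by extrapolating the last known motion state across the gap and gating candidate track starts. The sketch below assumes a constant-velocity model and illustrative thresholds (`max_gap`, `gate`); it is not the paper's exact matching criterion.

```python
import math

def predict(pos, vel, dt):
    """Constant-velocity extrapolation of a 2D position across an occlusion gap."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def link_score(end, start, max_gap=3.0, gate=2.5):
    """Decide whether a new track 'start' plausibly continues a lost track 'end'.

    end/start are dicts with 't' (s), 'pos' (m), 'vel' (m/s). Hypothetical
    gating rule for illustration only.
    """
    dt = start["t"] - end["t"]
    if not (0 < dt <= max_gap):          # gap too long (or non-causal): no link
        return False
    px, py = predict(end["pos"], end["vel"], dt)
    dist = math.hypot(start["pos"][0] - px, start["pos"][1] - py)
    return dist <= gate                  # link if reappearance is near prediction

end = {"t": 10.0, "pos": (0.0, 0.0), "vel": (10.0, 0.0)}        # track lost here
start_near = {"t": 11.5, "pos": (15.2, 0.3), "vel": (9.8, 0.1)}  # reappears nearby
start_far = {"t": 11.5, "pos": (40.0, 5.0), "vel": (9.8, 0.1)}   # different vehicle
```

Here `start_near` falls within the gate of the predicted position (15, 0) and would be linked to `end`, while `start_far` would not.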
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)

22 pages, 3523 KB  
Article
Evaluation of Semantic Segmentation Performance for a Multimodal Roadside Vehicle Detection System on the Edge
by Lauren Ervin, Max Eastepp, Mason McVicker and Kenneth Ricks
Sensors 2025, 25(2), 370; https://doi.org/10.3390/s25020370 - 10 Jan 2025
Viewed by 2157
Abstract
Discreetly monitoring traffic systems and tracking payloads on vehicle targets can be challenging when traversal occurs off main roads where overhead traffic cameras are not present. This work proposes a portable roadside vehicle detection system as part of a solution for tracking traffic along any path. Training semantic segmentation networks to automatically detect specific types of vehicles while ignoring others allows the user to track payloads present only on certain vehicles of interest, such as train cars or semi-trucks. Different vision sensors offer varying advantages for detecting targets in changing environments and weather conditions. To analyze the benefits of both sensor types, corresponding LiDAR and camera data were collected at multiple roadside sites and used to train separate semantic segmentation networks for object detection. A custom CNN architecture was built to handle highly asymmetric LiDAR data, and a network inspired by DeepLabV3+ was used for camera data. Both networks were evaluated and showed comparable accuracy. Inference on embedded platforms matched the real-time performance of the training hardware, enabling edge deployment anywhere. Both camera and LiDAR semantic segmentation networks successfully identified vehicles of interest from the proposed viewpoint. These highly accurate vehicle detection networks can be paired with a tracking mechanism to establish a non-intrusive roadside detection system. Full article
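Segmentation accuracy comparisons like the one above are typically reported with per-class intersection-over-union (IoU) over predicted and ground-truth label maps. The following is a generic sketch of that metric, not the authors' evaluation code; the toy label arrays are purely illustrative.

```python
import numpy as np

def iou_per_class(pred, target, num_classes):
    """Per-class intersection-over-union from flattened label arrays."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))   # pixels both call class c
        union = np.sum((pred == c) | (target == c))   # pixels either calls class c
        ious.append(float(inter / union) if union else float("nan"))
    return ious

# Toy 6-pixel example with 3 classes (e.g., background / vehicle / other).
pred = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 1, 1, 1, 2, 0])
ious = iou_per_class(pred, target, num_classes=3)
```

Averaging the per-class values (mean IoU) gives a single number for comparing the LiDAR and camera networks.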
(This article belongs to the Special Issue LiDAR Sensors Applied in Intelligent Transportation Systems)
