Search Results (63)

Search Parameters:
Keywords = LiDAR-to-LiDAR translation

15 pages, 3554 KB  
Article
Unsupervised Optical-Sensor Extrinsic Calibration via Dual-Transformer Alignment
by Yuhao Wang, Yong Zuo, Yi Tang, Xiaobin Hong, Jian Wu and Ziyu Bian
Sensors 2025, 25(22), 6944; https://doi.org/10.3390/s25226944 - 13 Nov 2025
Abstract
Accurate extrinsic calibration between optical sensors, such as camera and LiDAR, is crucial for multimodal perception. Traditional methods based on specific calibration targets exhibit poor robustness in complex optical environments such as glare, reflections, or low light, and they rely on cumbersome manual operations. To address this, we propose a fully unsupervised, end-to-end calibration framework. Our approach adopts a dual-Transformer architecture: a Vision Transformer extracts semantic features from the image stream, while a Point Transformer captures the geometric structure of the 3D LiDAR point cloud. These cross-modal representations are aligned and fused through a neural network, and a regression algorithm is used to obtain the 6-DoF extrinsic transformation matrix. A multi-constraint loss function is designed to enhance structural consistency between modalities, thereby improving calibration stability and accuracy. On the KITTI benchmark, our method achieves a mean rotation error of 0.21° and a translation error of 3.31 cm; on a self-collected dataset, it attains an average reprojection error of 1.52 pixels. These results demonstrate a generalizable and robust solution for optical-sensor extrinsic calibration, enabling precise and self-sufficient perception in real-world applications.
(This article belongs to the Section Optical Sensors)

22 pages, 3921 KB  
Article
Tightly Coupled LiDAR-Inertial Odometry for Autonomous Driving via Self-Adaptive Filtering and Factor Graph Optimization
by Weiwei Lyu, Haoting Li, Shuanggen Jin, Haocai Huang, Xiaojuan Tian, Yunlong Zhang, Zheyuan Du and Jinling Wang
Machines 2025, 13(11), 977; https://doi.org/10.3390/machines13110977 - 23 Oct 2025
Abstract
Simultaneous Localization and Mapping (SLAM) has become a critical tool for fully autonomous driving. However, current methods suffer from inefficient data utilization and degraded navigation performance in complex and unknown environments. In this paper, an accurate and tightly coupled method of LiDAR-inertial odometry is proposed. First, a self-adaptive voxel grid filter is developed to dynamically downsample the original point clouds based on environmental feature richness, aiming to balance navigation accuracy and real-time performance. Second, keyframe factors are selected based on thresholds of translation distance, rotation angle, and time interval and then introduced into the factor graph to improve global consistency. Additionally, high-quality Global Navigation Satellite System (GNSS) factors are selected and incorporated into the factor graph through linear interpolation, thereby improving the navigation accuracy in complex and unknown environments. The proposed method is evaluated on the KITTI dataset over various scales and environments. Results show that it outperforms other methods such as ALOAM, LIO-SAM, and SC-LeGO-LOAM. In urban scenes in particular, trajectory accuracy is improved by 33.13%, 57.56%, and 58.4% over these three methods, respectively, demonstrating excellent navigation and positioning capabilities.
(This article belongs to the Section Vehicle Engineering)
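The self-adaptive voxel grid filter in the abstract above builds on standard voxel-grid downsampling. A minimal fixed-resolution sketch in NumPy (the paper's adaptation of the voxel size to local feature richness is not reproduced; the function name is illustrative):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: replace all points in a voxel by their centroid.

    points: (N, 3) array; voxel_size: edge length of the cubic voxels.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)  # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)  # group points by voxel
    n_vox = inv.max() + 1
    sums = np.zeros((n_vox, points.shape[1]))
    np.add.at(sums, inv, points)                           # sum coordinates per voxel
    counts = np.bincount(inv, minlength=n_vox)[:, None]
    return sums / counts                                   # centroid per occupied voxel
```

An adaptive variant would pick `voxel_size` per region before calling this, which is the part the paper contributes.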

20 pages, 11855 KB  
Article
High-Precision Extrinsic Calibration for Multi-LiDAR Systems with Narrow FoV via Synergistic Planar and Circular Features
by Xinbao Sun, Zhi Zhang, Shuo Xu and Jinyue Liu
Sensors 2025, 25(20), 6432; https://doi.org/10.3390/s25206432 - 17 Oct 2025
Abstract
Precise extrinsic calibration is a fundamental prerequisite for data fusion in multi-LiDAR systems. However, conventional methods are often encumbered by dependencies on initial estimates, auxiliary sensors, or manual feature selection, which renders them complex, time-consuming, and limited in adaptability across diverse environments. To address these limitations, this paper proposes a novel, high-precision extrinsic calibration method for multi-LiDAR systems with a narrow Field of View (FoV), achieved through the synergistic use of circular and planar features. Our approach commences with the automatic segmentation of the calibration target’s point cloud using an improved VoxelNet. Subsequently, a denoising step, combining RANSAC and a Gaussian Mean Intensity Filter (GMIF), is applied to ensure high-quality feature extraction. From the refined point cloud, planar and circular features are robustly extracted via Principal Component Analysis (PCA) and least-squares fitting, respectively. Finally, the extrinsic parameters are optimized by minimizing a nonlinear objective function formulated with joint constraints from both geometric features. Simulation results validate the high precision of our method, with rotational and translational errors contained within 0.08° and 0.8 cm. Furthermore, real-world experiments confirm its effectiveness and superiority, outperforming conventional point-cloud registration techniques.
(This article belongs to the Section Sensors and Robotics)

18 pages, 14342 KB  
Article
A Multi-LiDAR Self-Calibration System Based on Natural Environments and Motion Constraints
by Yuxuan Tang, Jie Hu, Zhiyong Yang, Wencai Xu, Shuaidi He and Bolun Hu
Mathematics 2025, 13(19), 3181; https://doi.org/10.3390/math13193181 - 4 Oct 2025
Abstract
Autonomous commercial vehicles often mount multiple LiDARs to enlarge their field of view, but conventional calibration is labor-intensive and prone to drift during long-term operation. We present an online self-calibration method that combines a ground plane motion constraint with a virtual RGB–D projection, mapping 3D point clouds to 2D feature/depth images to reduce feature extraction cost while preserving 3D structure. Motion consistency across consecutive frames enables a reduced-dimension hand–eye formulation. Within this formulation, the estimation integrates geometric constraints on SE(3) using Lagrange multiplier aggregation and quasi-Newton refinement. This approach highlights key aspects of identifiability, conditioning, and convergence. An online monitor evaluates plane alignment and LiDAR–INS odometry consistency to detect degradation and trigger recalibration. Tests on a commercial vehicle with six LiDARs and on nuScenes demonstrate accuracy comparable to offline, target-based methods while supporting practical online use. On the vehicle, maximum errors are 6.058 cm (translation) and 4.768° (rotation); on nuScenes, 2.916 cm and 5.386°. The approach streamlines calibration, enables online monitoring, and remains robust in real-world settings.
(This article belongs to the Section A: Algebra and Logic)

22 pages, 11387 KB  
Article
Adaptive Resolution VGICP Algorithm for Robust and Efficient Point-Cloud Registration
by Yuanping Xia, Zhibo Liu and Hua Liu
Remote Sens. 2025, 17(17), 3056; https://doi.org/10.3390/rs17173056 - 2 Sep 2025
Abstract
To address the problem of point-cloud registration accuracy degradation or even failure in traditional Voxelized GICP (VGICP) under poor initial poses caused by improper voxel resolution settings, this paper proposes an Adaptive Resolution VGICP (AR-VGICP) algorithm. The algorithm first automatically estimates the initial voxel resolution from the absolute deviations between source points lying outside the target voxel grid and their nearest neighbors in the target cloud, using the Median Absolute Deviation (MAD) method, and performs an initial registration. Subsequently, the voxel resolution is dynamically updated according to the average nearest-neighbor distance between the transformed source points and the target points, enabling progressively refined registration. The update process terminates when the resolution change rate falls below a predefined threshold or the updated resolution no longer exceeds the density-adaptive resolution. Experimental results on both simulated and real-world datasets demonstrate that AR-VGICP achieves a 100% registration success rate, while VGICP fails in some cases due to overly small voxel resolutions. On the KITTI dataset, AR-VGICP reduces translation error by 9.4% and rotation error by 14.8% compared to VGICP with a fixed 1 m voxel resolution, while increasing computation time by only 3%. UAV LiDAR experiments show that, on residential-area data, AR-VGICP achieves up to a 33.4% reduction in translation error and a 21.4% reduction in rotation error compared to VGICP (1.0 m). These results demonstrate that AR-VGICP attains a higher registration success rate under poor initial poses and delivers superior registration accuracy in urban scenarios compared to VGICP.
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
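The MAD-based initial-resolution idea in the abstract above can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' exact rule: which source points count as "outside the target voxel grid" and the scale factor `k` are assumptions here.

```python
import numpy as np

def mad_initial_resolution(source, target, k=3.0):
    """Robust voxel-resolution estimate from nearest-neighbor residuals.

    source, target: (N, 3) and (M, 3) point arrays. Returns a scale on the
    order of the typical source-to-target distance, tolerant of outliers.
    """
    # Brute-force nearest-neighbor distances (a k-d tree would be used at scale).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
    dists = np.sqrt(d2.min(axis=1))
    med = np.median(dists)
    mad = np.median(np.abs(dists - med))  # Median Absolute Deviation
    return med + k * mad                  # outlier-tolerant resolution estimate
```

Because the median and MAD ignore a minority of large residuals, a few badly misaligned points do not inflate the estimated resolution the way a mean-based rule would.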

24 pages, 5906 KB  
Article
Design and Framework of Non-Intrusive Spatial System for Child Behavior Support in Domestic Environments
by Da-Un Yoo, Jeannie Kang and Sung-Min Park
Sensors 2025, 25(17), 5257; https://doi.org/10.3390/s25175257 - 23 Aug 2025
Abstract
This paper proposes a structured design framework and system architecture for a non-intrusive spatial system aimed at supporting child behavior in everyday domestic environments. Rooted in ethical considerations, our approach defines four core behavior-guided design strategies: routine recovery, emotion-responsive adjustment, behavioral transition induction, and external linkage. Each strategy is meticulously translated into a detailed system logic that outlines input conditions, trigger thresholds, and feedback outputs, designed for implementability with ambient sensing technologies. Through a comparative conceptual analysis of three sensing configurations—low-resolution LiDARs, mmWave radars, and environmental sensors—we evaluate their suitability based on technical feasibility, spatial integration, operationalized privacy metrics, and ethical alignment. Supported by preliminary technical observations from lab-based sensor tests, low-resolution LiDAR emerges as the most balanced option for its ability to offer sufficient behavioral insight while enabling edge-based local processing, robustly protecting privacy, and maintaining compatibility with compact residential settings. Based on this, we present a working three-layered system architecture emphasizing edge processing and minimal-intrusion feedback mechanisms. While this paper primarily focuses on the framework and design aspects, we also outline a concrete pilot implementation plan tailored for small-scale home environments, detailing future empirical validation steps for system effectiveness and user acceptance. This structured design logic and pilot framework lays a crucial foundation for future applications in diverse residential and care contexts, facilitating longitudinal observation of behavioral patterns and iterative refinement through lived feedback. Ultimately, this work contributes to the broader discourse on how technology can ethically and developmentally support children’s autonomy and well-being, moving beyond surveillance to enable subtle, ambient, and socially responsible spatial interactions attuned to children’s everyday lives.
(This article belongs to the Special Issue Progress in LiDAR Technologies and Applications)

46 pages, 125285 KB  
Article
ROS-Based Autonomous Driving System with Enhanced Path Planning Node Validated in Chicane Scenarios
by Mohamed Reda, Ahmed Onsy, Amira Y. Haikal and Ali Ghanbari
Actuators 2025, 14(8), 375; https://doi.org/10.3390/act14080375 - 27 Jul 2025
Abstract
In modern vehicles, Autonomous Driving Systems (ADSs) are designed to operate partially or fully without human intervention. The ADS pipeline comprises multiple layers, including sensors, perception, localization, mapping, path planning, and control. The Robot Operating System (ROS) is a widely adopted framework that supports the modular development and integration of these layers. Among them, the path-planning and control layers remain particularly challenging due to several limitations. Classical path planners often struggle with non-smooth trajectories and high computational demands. Meta-heuristic optimization algorithms have demonstrated strong theoretical potential in path planning; however, they are rarely implemented in real-time ROS-based systems due to integration challenges. Similarly, traditional PID controllers require manual tuning and are unable to adapt to system disturbances. This paper proposes a ROS-based ADS architecture composed of eight integrated nodes, designed to address these limitations. The path-planning node leverages a meta-heuristic optimization framework with a cost function that evaluates path feasibility using occupancy grids from the Hector SLAM and obstacle clusters detected through the DBSCAN algorithm. A dynamic goal-allocation strategy is introduced based on the LiDAR range and spatial boundaries to enhance planning flexibility. In the control layer, a modified Pure Pursuit algorithm is employed to translate target positions into velocity commands based on the drift angle. Additionally, an adaptive PID controller is tuned in real time using the Differential Evolution (DE) algorithm, ensuring robust speed regulation in the presence of external disturbances. The proposed system is practically validated on a four-wheel differential drive robot across six scenarios. Experimental results demonstrate that the proposed planner significantly outperforms state-of-the-art methods, ranking first in the Friedman test with a significance level less than 0.05, confirming the effectiveness of the proposed architecture.
(This article belongs to the Section Control Systems)
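The Pure Pursuit step referenced in the abstract above has a compact textbook form. The sketch below is the classic formulation only; the paper's modified variant additionally uses the drift angle, which is not reproduced, and the function name and default speed are illustrative.

```python
def pure_pursuit_cmd(target_xy, linear_speed=0.5):
    """Textbook pure pursuit: velocity command toward a lookahead point.

    target_xy: lookahead point in the robot body frame (x forward, y left).
    Returns (linear velocity, angular velocity) for a differential drive.
    """
    x, y = target_xy
    L2 = x * x + y * y                  # squared distance to lookahead point
    if L2 == 0.0:
        return 0.0, 0.0                 # already at the target: stop
    curvature = 2.0 * y / L2            # kappa = 2*y / L^2 (arc through both points)
    omega = linear_speed * curvature    # turn rate = v * kappa
    return linear_speed, omega
```

A point straight ahead yields zero turn rate; a point to the left (positive y) yields a positive (counter-clockwise) turn rate.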

15 pages, 2993 KB  
Article
A Joint LiDAR and Camera Calibration Algorithm Based on an Original 3D Calibration Plate
by Ziyang Cui, Yi Wang, Xiaodong Chen and Huaiyu Cai
Sensors 2025, 25(15), 4558; https://doi.org/10.3390/s25154558 - 23 Jul 2025
Abstract
An accurate extrinsic calibration between LiDAR and cameras is essential for effective sensor fusion, directly impacting the perception capabilities of autonomous driving systems. Although prior calibration approaches using planar and point features have yielded some success, they suffer from inherent limitations. Specifically, methods that rely on fitting planar contours using depth-discontinuous points are prone to systematic errors, which hinder the precise extraction of the 3D positions of feature points. This, in turn, compromises the accuracy and robustness of the calibration. To overcome these challenges, this paper introduces a novel 3D calibration plate incorporating the gradient depth, localization markers, and corner features. At the point cloud level, the gradient depth enables the accurate estimation of the 3D coordinates of feature points. At the image level, corner features and localization markers facilitate the rapid and precise acquisition of 2D pixel coordinates, with minimal interference from environmental noise. This method establishes a rigorous and systematic framework to enhance the accuracy of LiDAR–camera extrinsic calibrations. In a simulated environment, experimental results demonstrate that the proposed algorithm achieves a rotation error below 0.002 radians and a translation error below 0.005 m.
(This article belongs to the Section Sensing and Imaging)

12 pages, 3214 KB  
Article
Singular Value Decomposition (SVD) Method for LiDAR and Camera Sensor Fusion and Pattern Matching Algorithm
by Kaiqiao Tian, Meiqi Song, Ka C. Cheok, Micho Radovnikovich, Kazuyuki Kobayashi and Changqing Cai
Sensors 2025, 25(13), 3876; https://doi.org/10.3390/s25133876 - 21 Jun 2025
Abstract
LiDAR and camera sensors are widely utilized in autonomous vehicles (AVs) and robotics due to their complementary sensing capabilities—LiDAR provides precise depth information, while cameras capture rich visual context. However, effective multi-sensor fusion remains challenging due to discrepancies in resolution, data format, and viewpoint. In this paper, we propose a robust pattern matching algorithm that leverages singular value decomposition (SVD) and gradient descent (GD) to align geometric features—such as object contours and convex hulls—across LiDAR and camera modalities. Unlike traditional calibration methods that require manual targets, our approach is targetless, extracting matched patterns from projected LiDAR point clouds and 2D image segments. The algorithm computes the optimal transformation matrix between sensors, correcting misalignments in rotation, translation, and scale. Experimental results on a vehicle-mounted sensing platform demonstrate an alignment accuracy improvement of up to 85%, with the final projection error reduced to less than 1 pixel. This pattern-based SVD-GD framework offers a practical solution for maintaining reliable cross-sensor alignment under long-term calibration drift, enabling real-time perception systems in autonomous driving to operate robustly without recalibration.
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)
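The SVD step such a pipeline relies on is the classic Kabsch alignment of matched point sets. The sketch below is that generic building block under the assumption of known correspondences, not the authors' full SVD-GD pipeline (which also corrects scale and extracts the matched patterns).

```python
import numpy as np

def svd_rigid_align(src, dst):
    """Kabsch-style rigid alignment of matched 3D point sets via SVD.

    Finds R, t minimizing sum ||R @ src_i + t - dst_i||^2 given known
    correspondences between the (N, 3) arrays src and dst.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With clean correspondences this recovers the transform in closed form; a gradient-descent stage, as in the abstract, would then handle scale and residual mismatches.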

18 pages, 4309 KB  
Article
OMRoadNet: A Self-Training-Based UDA Framework for Open-Pit Mine Haul Road Extraction from VHR Imagery
by Suchuan Tian, Zili Ren, Xingliang Xu, Zhengxiang He, Wanan Lai, Zihan Li and Yuhang Shi
Appl. Sci. 2025, 15(12), 6823; https://doi.org/10.3390/app15126823 - 17 Jun 2025
Abstract
Accurate extraction of dynamically evolving haul roads in open-pit mines from very-high-resolution (VHR) satellite imagery remains a critical challenge due to domain gaps between urban and mining environments, prohibitive annotation costs, and morphological irregularities. This paper introduces OMRoadNet, an unsupervised domain adaptation (UDA) framework for open-pit mine road extraction, which synergizes self-training, attention-based feature disentanglement, and morphology-aware augmentation to address these challenges. The framework employs a cyclic GAN (generative adversarial network) architecture with bidirectional translation pathways, integrating pseudo-label refinement through confidence thresholds and geometric rules (eight-neighborhood connectivity and adaptive kernel resizing) to resolve domain shifts. A novel exponential moving average unit (EMAU) enhances feature robustness by adaptively weighting historical states, while morphology-aware augmentation simulates variable road widths and spectral noise. Evaluations on cross-domain datasets demonstrate state-of-the-art performance with 92.16% precision, 80.77% F1-score, and 67.75% IoU (intersection over union), outperforming baseline models by 4.3% in precision and reducing annotation dependency by 94.6%. By reducing per-kilometer operational costs by 78% relative to LiDAR (Light Detection and Ranging) alternatives, OMRoadNet establishes a practical solution for intelligent mining infrastructure mapping, bridging the critical gap between structured urban datasets and unstructured mining environments.
(This article belongs to the Special Issue Novel Technologies in Intelligent Coal Mining)

16 pages, 14380 KB  
Article
Online Calibration Method of LiDAR and Camera Based on Fusion of Multi-Scale Cost Volume
by Xiaobo Han, Jie Luo, Xiaoxu Wei and Yongsheng Wang
Information 2025, 16(3), 223; https://doi.org/10.3390/info16030223 - 13 Mar 2025
Abstract
The online calibration algorithm for camera and LiDAR helps solve the problem of multi-sensor fusion and is of great significance in autonomous driving perception. Existing online calibration algorithms fail to balance real-time performance and accuracy: high-precision algorithms impose heavy hardware requirements, while lightweight algorithms struggle to meet accuracy requirements. In addition, sensor noise, vibration, and changes in environmental conditions may reduce calibration accuracy, and because of large domain differences between public datasets, existing online calibration algorithms are unstable across datasets and lack robustness. To solve these problems, we propose an online calibration algorithm based on multi-scale cost volume fusion. First, a multi-layer convolutional network downsamples and concatenates the camera RGB data and LiDAR point cloud data to obtain feature maps at three scales. These are then subjected to feature concatenation and group-wise correlation to generate three sets of cost volumes of different scales. All the cost volumes are then spliced and sent to the pose estimation module; after post-processing, the translation and rotation matrix between the camera and LiDAR coordinate systems is obtained. We tested and verified this method on the KITTI odometry dataset, measuring an average translation error of 0.278 cm and an average rotation error of 0.020°, with a single frame taking 23 ms, which is competitive with the state of the art.

19 pages, 12501 KB  
Article
VS-SLAM: Robust SLAM Based on LiDAR Loop Closure Detection with Virtual Descriptors and Selective Memory Storage in Challenging Environments
by Zhixing Song, Xuebo Zhang, Shiyong Zhang, Songyang Wu and Youwei Wang
Actuators 2025, 14(3), 132; https://doi.org/10.3390/act14030132 - 8 Mar 2025
Abstract
LiDAR loop closure detection is a key technology to mitigate localization drift in LiDAR SLAM, but it remains challenging in structurally similar environments and memory-constrained platforms. This paper proposes VS-SLAM, a novel and robust SLAM system that leverages virtual descriptors and selective memory storage to enhance LiDAR loop closure detection in challenging environments. Firstly, to mitigate the sensitivity of existing descriptors to translational changes, we propose a novel virtual descriptor technique that enhances translational invariance and improves loop closure detection accuracy. Then, to further improve the accuracy of loop closure detection in structurally similar environments, we propose an efficient and reliable selective memory storage technique based on scene recognition and key descriptor evaluation, which also reduces the memory consumption of the loop closure database. Next, based on the two proposed techniques, we develop a LiDAR SLAM system with loop closure detection capability, which maintains high accuracy and robustness even in challenging environments with structural similarity. Finally, extensive experiments in self-built simulation, real-world environments, and public datasets demonstrate that VS-SLAM outperforms state-of-the-art methods in terms of memory efficiency, accuracy, and robustness. Specifically, the memory consumption of the loop closure database is reduced by an average of 92.86% compared with SC-LVI-SAM and VS-SLAM-w/o-st, and the localization accuracy in structurally similar challenging environments is improved by an average of 66.41% compared with LVI-SAM.

18 pages, 23425 KB  
Article
Enhanced GIS Methodology for Building-Integrated Photovoltaic Façade Potential Based on Free and Open-Source Tools and Information
by Ana Marcos-Castro, Nuria Martín-Chivelet and Jesús Polo
Remote Sens. 2025, 17(6), 954; https://doi.org/10.3390/rs17060954 - 7 Mar 2025
Abstract
This paper provides a methodology for improving the modelling and design of BIPV façades through in-depth solar irradiation calculations using free and open-source software, mainly GIS, in addition to free data, such as LiDAR, cadastres and meteorological databases. The objective is to help BIPV design with a universal and easy-to-replicate procedure. The methodology is validated with the case study of Building 42 in the CIEMAT campus in Madrid, which was renovated in 2017 to integrate photovoltaic arrays in the east, south and west façades, with monitoring data of the main electrical and meteorological conditions. The main novelty is the development of a methodology where LiDAR data are combined with building vector information to create an enhanced high-definition DSM, which is used to develop precise yearly, monthly and daily façade irradiation estimations. The simulation takes into account terrain elevation and surrounding buildings and can optionally include existing vegetation. Gridded heatmap layouts for each façade area are provided at a spatial resolution of 1 metre, which can translate to PV potential. This methodology can contribute to the decision-making process for the implementation of BIPV in building façades by aiding in the selection of the areas that are more suitable for PV generation.

17 pages, 2790 KB  
Article
Development of Visualization Tools for Sharing Climate Cooling Strategies with Impacted Urban Communities
by Linda Powers Tomasso, Kachina Studer, David Bloniarz, Dillon Escandon and John D. Spengler
Atmosphere 2025, 16(3), 258; https://doi.org/10.3390/atmos16030258 - 24 Feb 2025
Abstract
Intensifying heat from warming climates regularly concentrates in urban areas lacking green infrastructure in the form of green space, vegetation, and ample tree canopy cover. Nature-based interventions in older U.S. city cores can help minimize the urban heat island effect, yet neighborhoods targeted for cooling interventions may remain outside the decisional processes through which change affects their communities. This translational research seeks to address health disparities originating from the absence of neighborhood-level vegetation in core urban areas, with a focus on tree canopy cover to mitigate human susceptibility to extreme heat exposure. The development of LiDAR-based imagery enables communities to visualize the proposed greening over time and across seasons of actual neighborhood streets, thus becoming an effective communications tool in community-engaged research. These tools serve as an example of how visualization strategies can initiate unbiased discussion of proposed interventions, serve as an educational vehicle around the health impacts of climate change, and invite distributional and participatory equity for residents of low-income, nature-poor neighborhoods.

17 pages, 6512 KB  
Article
Rutting Caused by Grouser Wheel of Planetary Rover in Single-Wheel Testbed: LiDAR Topographic Scanning and Analysis
by Keisuke Takehana, Vinicius Emanoel Ares, Shreya Santra, Kentaro Uno, Eric Rohmer and Kazuya Yoshida
Aerospace 2025, 12(1), 71; https://doi.org/10.3390/aerospace12010071 - 20 Jan 2025
Abstract
This paper presents datasets and analyses of 3D LiDAR scans capturing the rutting behavior of a rover wheel in a single-wheel terramechanics testbed. The data were acquired using a LiDAR sensor to record the terrain deformation caused by the wheel’s passage through a Toyoura sandbed, which mimics lunar regolith. Vertical loads of 25 N, 40 N, and 65 N were applied to study how rutting patterns change, focusing on rut amplitude, height, and inclination. This study emphasizes the extraction and processing of terrain profiles from noisy point cloud data, using methods like curve fitting and moving averages to capture the ruts’ geometric characteristics. A sine wave model, adjusted for translation, scaling, and inclination, was fitted to describe the wheel-induced wave-like patterns. It was found that the mean height of the terrain increases after the grouser wheel passes over it, forming ruts that slope downward, likely due to the transition from static to dynamic sinkage. Both the rut depth at the end of the wheel’s path and the incline increased with larger loads. These findings contribute to understanding wheel–terrain interactions and provide a reference for validating and calibrating models and simulations. The dataset from this study is made available to the scientific community.
(This article belongs to the Special Issue Planetary Exploration)
