Search Results (148)

Search Parameters:
Keywords = virtual-reality drive

19 pages, 5986 KiB  
Article
Gaussian-UDSR: Real-Time Unbounded Dynamic Scene Reconstruction with 3D Gaussian Splatting
by Yang Sun, Yue Zhou, Bin Tian, Haiyang Wang, Yongchao Zhao and Songdi Wu
Appl. Sci. 2025, 15(11), 6262; https://doi.org/10.3390/app15116262 - 2 Jun 2025
Viewed by 230
Abstract
Unbounded dynamic scene reconstruction is crucial for applications such as autonomous driving, robotics, and virtual reality. However, existing methods struggle to reconstruct dynamic scenes in unbounded outdoor environments due to challenges such as lighting variation, object motion, and sensor limitations, leading to inaccurate geometry and low rendering fidelity. In this paper, we propose Gaussian-UDSR, a novel 3D Gaussian-based representation that efficiently reconstructs and renders high-quality, unbounded dynamic scenes in real time. Our approach fuses LiDAR point clouds and Structure-from-Motion (SfM) point clouds obtained from an RGB camera, significantly improving depth estimation and geometric accuracy. To address dynamic appearance variations, we introduce a Gaussian color feature prediction network, which adaptively captures global and local feature information, enabling robust rendering under changing lighting conditions. Additionally, a pose-tracking mechanism ensures precise motion estimation for dynamic objects, enhancing realism and consistency. We evaluate Gaussian-UDSR on the Waymo and KITTI datasets, demonstrating state-of-the-art rendering quality with an 8.8% improvement in PSNR, a 75% reduction in LPIPS, and a fourfold speed improvement over existing methods. Our approach enables efficient, high-fidelity 3D reconstruction and fast real-time rendering of large-scale dynamic environments, while significantly reducing model storage overhead.
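The LiDAR + SfM point-cloud fusion that seeds a 3D Gaussian representation can be sketched roughly as follows. This is a minimal illustration, not the authors' code: concatenating the two clouds, setting each Gaussian's scale from its nearest-neighbor distance, and starting with low opacity are all assumptions.

```python
import numpy as np

def init_gaussians(lidar_pts, sfm_pts, base_scale=0.5):
    """Fuse two point clouds and seed one isotropic 3D Gaussian per point.
    The scale heuristic (nearest-neighbor distance) and the low starting
    opacity are illustrative assumptions, not the paper's method."""
    pts = np.vstack([lidar_pts, sfm_pts])                 # (N, 3) fused cloud
    # Pairwise distances; each Gaussian's scale tracks local point density.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                           # ignore self-distance
    scales = base_scale * d.min(axis=1, keepdims=True)    # (N, 1) isotropic
    opacity = np.full((len(pts), 1), 0.1)                 # refined by training
    return pts, np.repeat(scales, 3, axis=1), opacity

rng = np.random.default_rng(0)
lidar = rng.random((100, 3)) * 10                         # stand-in LiDAR cloud
sfm = rng.random((50, 3)) * 10                            # stand-in SfM cloud
means, scales, opac = init_gaussians(lidar, sfm)
```

In a real pipeline the two clouds would first be registered into a common frame; the sketch assumes that has already happened.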

22 pages, 1935 KiB  
Article
Blockage Prediction of an Urban Wireless Channel Characterization Using Classification Artificial Intelligence
by Saud Alhajaj Aldossari
Electronics 2025, 14(10), 2007; https://doi.org/10.3390/electronics14102007 - 15 May 2025
Viewed by 236
Abstract
The global deployment of 5G wireless networks has introduced significant advancements in data rates, latency, and energy efficiency. However, the rising demand for immersive applications (e.g., virtual and augmented reality) necessitates even higher data rates and lower latency, driving research toward sixth-generation (6G) wireless networks. This study addresses a major challenge in post-5G communication: mitigating signal blockage in high-frequency millimeter-wave (mmWave) bands. This paper proposes a novel framework for blockage prediction using AI-based classification techniques to enhance signal reliability and optimize connectivity. The proposed framework is evaluated comprehensively using performance metrics such as accuracy, precision, recall, and F1-score. Notably, the NN Model 4 achieves a classification accuracy of 99.8%. Comprehensive visualizations (learning curves, confusion matrices, ROC curves, and precision-recall plots) highlight the model’s performance. This study contributes to the development of AI-driven techniques that enhance reliability and efficiency in future wireless communication systems.
(This article belongs to the Special Issue Wireless Communications Channel)
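A minimal sketch of the blockage-classification setup the abstract describes, using synthetic features as a stand-in for real mmWave channel measurements; the feature set, network size, and data are all assumptions, and the small MLP merely stands in for the paper's "NN Model 4".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for mmWave channel features (e.g., path gains, angles);
# label 1 = link blocked, 0 = line-of-sight clear.
X, y = make_classification(n_samples=2000, n_features=16, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A small fully connected classifier, evaluated with the same four metrics
# the study reports.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

metrics = {"accuracy": accuracy_score(y_te, pred),
           "precision": precision_score(y_te, pred),
           "recall": recall_score(y_te, pred),
           "f1": f1_score(y_te, pred)}
```

On real channel data the features would come from measured or ray-traced link states rather than `make_classification`.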

12 pages, 1995 KiB  
Communication
Design and Implementation of a Virtual Reality (VR) Urban Highway Driving Simulator for Exposure Therapy: An Interdisciplinary Project and Pilot Study
by Francisca Melis, Ricardo Sánchez, Luz María González, Pablo Pellegrini, Jorge Fuentes and Rodrigo Nieto
Psychiatry Int. 2025, 6(2), 58; https://doi.org/10.3390/psychiatryint6020058 - 15 May 2025
Viewed by 280
Abstract
Exposure therapy approaches are recognized as effective treatments for specific phobias; however, certain phobias, such as fear of driving on urban highways, present unique challenges in exposing the patient to the triggering stimuli in a safe, accessible, and controlled manner. In this context, we developed a virtual reality (VR) computerized driving simulator based on a local urban highway, along with an accompanying clinical protocol, to provide exposure therapy for patients with fear of driving on urban highways. We recruited eleven patients for this pilot study, in which safety and tolerability as well as clinical and functional improvement were explored. We found that the driving simulator was safe and well tolerated: a notable 82% of patients successfully completed in vivo exposure, and standardized testing showed a consistent trend toward reduced anxiety scores. Nine patients successfully engaged in live exposures on a real freeway after participating in this VR-based exposure therapy protocol. The creation of an immersive and realistic VR environment based on a local urban highway for treating this phobia proved feasible and well tolerated by participants. The intervention’s ability to engage patients who might otherwise have avoided traditional exposure therapies is noteworthy. Future research should aim to replicate this study with a larger and more diverse sample to enhance the generalizability of the findings.

16 pages, 7057 KiB  
Article
VRBiom: A New Periocular Dataset for Biometric Applications of Head-Mounted Display
by Ketan Kotwal, Ibrahim Ulucan, Gökhan Özbulak, Janani Selliah and Sébastien Marcel
Electronics 2025, 14(9), 1835; https://doi.org/10.3390/electronics14091835 - 30 Apr 2025
Viewed by 428
Abstract
With advancements in hardware, high-quality head-mounted display (HMD) devices are being developed by numerous companies, driving increased consumer interest in AR, VR, and MR applications. This proliferation of HMD devices opens up possibilities for a wide range of applications beyond entertainment. Most commercially available HMD devices are equipped with internal inward-facing cameras to record the periocular areas. Given the nature of these devices and the captured data, many applications such as biometric authentication and gaze analysis become feasible. To effectively explore the potential of HMDs for these diverse use-cases and to enhance the corresponding techniques, it is essential to have an HMD dataset that captures realistic scenarios. In this work, we present VRBiom, a new dataset of periocular videos acquired using a virtual reality headset. VRBiom, targeted at biometric applications, consists of 900 short videos acquired from 25 individuals recorded in the NIR spectrum. These 10-second videos were captured using the internal tracking cameras of a Meta Quest Pro at 72 FPS. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. We have also ensured an equal split of recordings with and without glasses to facilitate the analysis of eye-wear. These videos, characterized by non-frontal views of the eye and relatively low spatial resolution (400×400), can be instrumental in advancing state-of-the-art research across various biometric applications. The VRBiom dataset can be utilized to evaluate, train, or adapt models for biometric use-cases such as iris and/or periocular recognition and associated sub-tasks such as detection and semantic segmentation. In addition to data from real individuals, we have included around 1100 presentation attacks constructed from 92 presentation attack instruments (PAIs). These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins. These PA videos, combined with genuine (bona fide) data, can be utilized to address concerns related to spoofing, which is a significant threat if these devices are to be used for authentication. The VRBiom dataset is publicly available for research purposes related to biometric applications only.

22 pages, 1272 KiB  
Article
Innovative Virtual Reality Teaching for the Sustainable Development of Vocational High School Students: A Case Study of Hair Braiding
by Sumei Chiang, Daihua Chiang, Shao-Hsun Chang and Kai-Chao Yao
Sustainability 2025, 17(9), 3945; https://doi.org/10.3390/su17093945 - 27 Apr 2025
Viewed by 500
Abstract
This study combines the “flow theory” and the “extended technology acceptance model” (ETAM) to explore the perceived utility and sustainable development impact of virtual reality (VR) immersive learning in the hairdressing course of vocational schools. The research subjects were 1200 students from three vocational schools in Chiayi and Tainan, Taiwan. Data analysis was performed using SPSS 22.0 and Smart PLS 3. The main findings are as follows: (1) Model validation shows that vocational school students’ acceptance of VR learning is significantly affected by perceived usefulness (PU) and perceived ease of use (PE), and both positively affect attitude towards use (ATU). (2) Flow theory (FLOW) not only directly improves students’ usage attitude and behavioral intention (BI), but also partially mediates the relationship between PU/PE and ATU, indicating that immersion is the core factor driving learning motivation. (3) VR technology reduces the consumption of physical resources (such as wig models), meets the United Nations SDG 4 (quality education), SDG 9 (industrial innovation), and SDG 12 (responsible consumption) goals, and is cost-effective. (4) Students’ feedback pointed out that VR teaching stimulates creativity and independent learning, but it needs to be combined with traditional demonstration teaching to strengthen technical details.

22 pages, 7958 KiB  
Article
Depth Upsampling with Local and Nonlocal Models Using Adaptive Bandwidth
by Niloufar Salehi Dastjerdi and M. Omair Ahmad
Electronics 2025, 14(8), 1671; https://doi.org/10.3390/electronics14081671 - 20 Apr 2025
Viewed by 1397
Abstract
The rapid advancement of 3D imaging technology and depth cameras has made depth data more accessible for applications such as virtual reality and autonomous driving. However, depth maps typically suffer from lower resolution and quality compared to color images due to sensor limitations. This paper introduces an improved approach to guided depth map super-resolution (GDSR) that effectively addresses key challenges, including the suppression of texture copying artifacts and the preservation of depth discontinuities. The proposed method integrates both local and nonlocal models within a structured framework, incorporating an adaptive bandwidth mechanism that dynamically adjusts guidance weights. Instead of relying on fixed parameters, this mechanism utilizes a distance map to evaluate patch similarity, leading to enhanced depth recovery. The local model ensures spatial smoothness by leveraging neighboring depth information, preserving fine details within small regions. On the other hand, the nonlocal model identifies similarities across distant areas, improving the handling of repetitive patterns and maintaining depth discontinuities. By combining these models, the proposed approach achieves more accurate depth upsampling with high-quality depth reconstruction. Experimental results, conducted on several datasets and evaluated using various objective metrics, demonstrate the effectiveness of the proposed method through both quantitative and qualitative assessments. The approach consistently delivers improved performance over existing techniques, particularly in preserving structural details and visual clarity. An ablation study further confirms the individual contributions of key components within the framework. These results collectively support the conclusion that the method is not only robust and accurate but also adaptable to a range of real-world scenarios, offering a practical advancement over current state-of-the-art solutions.
(This article belongs to the Special Issue Image and Video Processing for Emerging Multimedia Technology)
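The adaptive-bandwidth idea, a guidance-weighted filter whose range bandwidth adapts to local patch statistics instead of a fixed parameter, can be illustrated with a simplified joint bilateral upsampler. The window size, bandwidth rule, and nearest-neighbor initialization below are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def guided_upsample(depth_lr, guide, scale, win=3, sigma_s=1.5):
    """Joint bilateral upsampling: low-res depth is upsampled under a
    high-res guidance image, with the range bandwidth adapted per pixel
    to local guidance statistics (a simplified version of the idea)."""
    H, W = guide.shape
    depth_nn = np.kron(depth_lr, np.ones((scale, scale)))[:H, :W]  # nearest init
    pad = win // 2
    g = np.pad(guide, pad, mode='edge')
    d = np.pad(depth_nn, pad, mode='edge')
    ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    w_spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(depth_nn)
    for i in range(H):
        for j in range(W):
            gp = g[i:i + win, j:j + win]
            # Adaptive bandwidth: measure guidance differences relative to
            # the local contrast rather than using one fixed sigma.
            sigma_r = max(gp.std(), 1e-3)
            w = w_spatial * np.exp(-(gp - guide[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (w * d[i:i + win, j:j + win]).sum() / w.sum()
    return out

depth_lr = np.arange(1.0, 5.0).reshape(2, 2)   # toy 2x2 low-res depth map
guide = np.random.rand(4, 4)                   # toy high-res guidance image
up = guided_upsample(depth_lr, guide, scale=2)
```

Because every output pixel is a convex combination of input depths, the result stays within the range of the low-res map.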

19 pages, 1827 KiB  
Systematic Review
Advancing Gait Analysis: Integrating Multimodal Neuroimaging and Extended Reality Technologies
by Vera Gramigna, Arrigo Palumbo and Giovanni Perri
Bioengineering 2025, 12(3), 313; https://doi.org/10.3390/bioengineering12030313 - 19 Mar 2025
Viewed by 855
Abstract
The analysis of human gait is a cornerstone in diagnosing and monitoring a variety of neuromuscular and orthopedic conditions. Recent technological advancements have paved the way for innovative methodologies that combine multimodal neuroimaging and eXtended Reality (XR) technologies to enhance the precision and applicability of gait analysis. This review explores the state-of-the-art solutions of an advanced gait analysis approach, a multidisciplinary concept that integrates neuroimaging, extended reality technologies, and sensor-based methods to study human locomotion. Several wearable neuroimaging modalities such as functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG), commonly used to monitor and analyze brain activity during walking and to explore the neural mechanisms underlying motor control, balance, and gait adaptation, were considered. XR technologies, including virtual, augmented, and mixed reality, enable the creation of immersive environments for gait analysis, real-time simulation, and movement visualization, facilitating a comprehensive assessment of locomotion and its neural and biomechanical dynamics. This advanced gait analysis approach enhances the understanding of gait by examining both cerebral and biomechanical aspects, offering insights into brain–musculoskeletal coordination. We highlight its potential to provide real-time, high-resolution data and immersive visualization, facilitating improved clinical decision-making and rehabilitation strategies. Additionally, we address the challenges of integrating these technologies, such as data fusion, computational demands, and scalability. The review concludes by proposing future research directions that leverage artificial intelligence to further optimize multimodal imaging and XR applications in gait analysis, ultimately driving their translation from laboratory settings to clinical practice. This synthesis underscores the transformative potential of these approaches for personalized medicine and patient outcomes.

42 pages, 3555 KiB  
Review
Reviewing 6D Pose Estimation: Model Strengths, Limitations, and Application Fields
by Kostas Ordoumpozanis and George A Papakostas
Appl. Sci. 2025, 15(6), 3284; https://doi.org/10.3390/app15063284 - 17 Mar 2025
Viewed by 1978
Abstract
Three-dimensional object recognition is crucial in modern applications, including robotics in manufacturing, household items, augmented and virtual reality, and autonomous driving. Extensive research and numerous surveys have been conducted in this field. This study aims to create a model selection guide by addressing the key questions that arise when selecting a 6D pose estimation model: inputs, modalities, real-time capabilities, hardware requirements, evaluation datasets, performance metrics, strengths, limitations, and special attributes such as symmetry or occlusion handling. By analyzing 84 models, including 62 new ones beyond previous surveys, and identifying 25 datasets, 14 of which are newly introduced, we organized the results into comparison tables and standardized summarization templates. This structured approach facilitates easy model comparison and selection based on practical application needs. The focus of this study is on the practical aspects of utilizing 6D pose estimation models, providing a valuable resource for researchers and practitioners.
(This article belongs to the Section Computing and Artificial Intelligence)

26 pages, 3007 KiB  
Article
EDRNet: Edge-Enhanced Dynamic Routing Adaptive for Depth Completion
by Fuyun Sun, Baoquan Li and Qiaomei Zhang
Mathematics 2025, 13(6), 953; https://doi.org/10.3390/math13060953 - 13 Mar 2025
Viewed by 494
Abstract
Depth completion is a technique to densify the sparse depth maps acquired by depth sensors (e.g., RGB-D cameras, LiDAR) to generate complete and accurate depth maps. This technique has important application value in autonomous driving, robot navigation, and virtual reality. Deep learning has become a mainstream approach to depth completion. We therefore propose EDRNet, an edge-enhanced, dynamically routed, adaptive depth completion network that achieves efficient and accurate depth completion through lightweight design and boundary optimisation. Firstly, we introduce the Canny operator (a classical image processing technique) to explicitly extract object contour information, fusing the acquired edge maps with the RGB image and sparse depth inputs to provide the network with clear edge-structure information. Secondly, we design a Sparse Adaptive Dynamic Routing Transformer block, SADRT, which effectively combines the global modelling capability of the Transformer with the local feature extraction capability of a CNN. The dynamic routing mechanism introduced in this block dynamically selects key regions for efficient feature extraction, significantly reducing redundant computation compared with the traditional Transformer. In addition, we design a loss function with additional penalties for the depth error at object edges, which further strengthens the constraints on edges. The experimental results demonstrate that the proposed method achieves significant performance improvements on the public datasets KITTI DC and NYU Depth v2, especially in depth prediction accuracy at edge regions and in computational efficiency.
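The edge-guided input construction can be sketched as follows. To keep the sketch dependency-free, a plain gradient-magnitude edge map stands in for the Canny operator the paper uses, and the five-channel layout is an assumption for illustration.

```python
import numpy as np

def edge_map(gray, thresh=0.25):
    """Gradient-magnitude edge map; a dependency-free stand-in for the
    Canny operator used in the paper."""
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(np.float32)

def build_input(rgb, sparse_depth):
    """Stack normalized RGB (3), sparse depth (1), and the edge map (1)
    into a single (H, W, 5) network input; the layout is an assumption."""
    gray = rgb.astype(np.float32).mean(axis=-1) / 255.0
    edges = edge_map(gray)
    return np.concatenate([rgb.astype(np.float32) / 255.0,
                           sparse_depth[..., None],
                           edges[..., None]], axis=-1)

rgb = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
sparse = np.zeros((64, 64), dtype=np.float32)
sparse[::8, ::8] = 5.0                     # LiDAR-like sparse depth samples
x = build_input(rgb, sparse)
```

Giving the network the contour map as an explicit channel is what lets the loss penalize depth errors specifically at object edges.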

26 pages, 4969 KiB  
Review
A Review of Recent Advances in High-Dynamic-Range CMOS Image Sensors
by Jingyang Chen, Nanbo Chen, Zhe Wang, Runjiang Dou, Jian Liu, Nanjian Wu, Liyuan Liu, Peng Feng and Gang Wang
Chips 2025, 4(1), 8; https://doi.org/10.3390/chips4010008 - 3 Mar 2025
Viewed by 2159
Abstract
High-dynamic-range (HDR) technology enhances the capture of luminance beyond the limits of traditional images, facilitating the capture of more nuanced and lifelike visual effects. This advancement has profound implications across various sectors, such as medical imaging, augmented reality (AR), virtual reality (VR), and autonomous driving systems. The evolution of complementary metal-oxide semiconductor (CMOS) image sensor (CIS) manufacturing techniques, particularly through backside illumination (BSI) and advancements in three-dimensional (3D) stacking architectures, is driving progress in HDR’s capabilities. This paper provides a review of the technologies developed over the past six years that augment the dynamic range (DR) of CIS. It systematically introduces and summarizes the implementation methodologies and distinguishing features of each technology.
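As a back-of-the-envelope illustration of what augmenting the dynamic range means for a CIS: DR is commonly quoted in dB as the ratio of full-well capacity to the noise floor. The pixel numbers below are illustrative assumptions, not figures from the review.

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range of an image sensor in dB: the ratio of the largest
    capturable signal (full-well capacity) to the noise floor, in electrons."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# Illustrative numbers (not from the review): a conventional pixel vs. an
# HDR scheme that effectively extends full-well capacity 64x, e.g. via
# multi-exposure capture.
conventional = dynamic_range_db(10_000, 2.0)        # roughly 74 dB
hdr_extended = dynamic_range_db(10_000 * 64, 2.0)   # roughly 110 dB
```

The same formula shows why both larger effective full wells (stacking, dual gain) and lower noise floors (BSI) raise DR.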

17 pages, 2797 KiB  
Article
Multi-Environment Vehicle Trajectory Automatic Driving Scene Generation Method Based on Simulation and Real Vehicle Testing
by Yicheng Cao, Haiming Sun, Guisheng Li, Chuan Sun, Haoran Li, Junru Yang, Liangyu Tian and Fei Li
Electronics 2025, 14(5), 1000; https://doi.org/10.3390/electronics14051000 - 1 Mar 2025
Viewed by 712
Abstract
As autonomous vehicles increasingly populate roads, robust testing is essential to ensure their safety and reliability. Because traditional testing methodologies (real-world and simulation testing) struggle to cover a wide range of scenarios and ensure repeatability, this study proposes a novel virtual-real fusion testing approach for autonomous vehicles that integrates Graph Theory and Artificial Potential Fields (APF). Simulation experiments of strategic lane changes and speed adjustments, conducted in SUMO, demonstrate that our approach handles vehicle dynamics and environmental interactions more efficiently than traditional Rapidly-exploring Random Tree (RRT) methods. The proposed method shows a significant reduction in maneuver completion times: up to 41% faster in simulations and 55% faster in real-world tests. Field experiments at the Vehicle-Road-Cloud Integrated Platform in Suzhou High-Speed Railway New Town confirmed the method’s practical viability and robustness under real traffic conditions. The results indicate that our integrated approach enhances the authenticity and efficiency of testing, thereby advancing the development of dependable autonomous driving systems. This research not only contributes to the theoretical framework but also has practical implications for improving autonomous vehicle testing processes.
(This article belongs to the Section Electrical and Autonomous Vehicles)
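One of the two ingredients the abstract names, the Artificial Potential Field, can be sketched with a textbook gradient step: an attractive pull toward the goal plus a repulsive push from nearby obstacles. The gains, influence radius, and step size below are arbitrary illustrative choices, not the paper's tuning.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0, lr=0.05):
    """One gradient step on an Artificial Potential Field: attraction to
    the goal plus repulsion from each obstacle inside the influence
    radius d0 (textbook APF; all gains are illustrative)."""
    force = k_att * (goal - pos)                  # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                         # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
    return pos + lr * force

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]                # single obstacle off-axis
path = [pos]
for _ in range(400):                              # descend toward the goal
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
```

The trajectory bends around the obstacle and settles at the goal; in the paper this local field would be combined with graph-level scenario structure.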

20 pages, 2465 KiB  
Article
The Ecology of Climate Change: Using Virtual Reality to Share, Experience, and Cultivate Local and Global Perspectives
by Victor Daniel Carmona-Galindo, Maryory Andrea Velado-Cano and Anna Maria Groat-Carmona
Educ. Sci. 2025, 15(3), 290; https://doi.org/10.3390/educsci15030290 - 26 Feb 2025
Cited by 1 | Viewed by 898
Abstract
The global challenge of climate change demands innovative, inclusive, and experiential education that fosters ecological literacy, behavioral change, and climate advocacy. This study explores a cross-cultural collaboration between two undergraduate ecology courses, one at the University of La Verne (ULV) in California and the other at the Universidad Centroamericana José Simeón Cañas (UCA) in El Salvador, that employed 360° virtual reality (VR) photosphere photographs to investigate climate change impacts. Students documented local ecological phenomena, such as drought and habitat loss, and shared insights with international peers, facilitating a rich exchange of perspectives across biomes. Generative AI tools like ChatGPT were utilized to overcome language barriers, enabling equitable participation and enhancing cross-cultural communication. The findings highlight VR’s transformative role in helping students visualize and communicate complex ecological concepts while fostering empathy, emotional engagement, and agency as climate advocates. Institutional and curricular factors shaping the integration of VR-based approaches are discussed, along with their potential to drive behavioral shifts and promote global engagement. This study demonstrates that immersive technologies, combined with collaborative learning, provide a powerful framework for bridging geographic and cultural divides, equipping students with the tools and perspectives needed to address the critical global challenges posed by climate change.
20 pages, 1619 KiB  
Systematic Review
A Breakthrough in Producing Personalized Solutions for Rehabilitation and Physiotherapy Thanks to the Introduction of AI to Additive Manufacturing
by Emilia Mikołajewska, Dariusz Mikołajewski, Tadeusz Mikołajczyk and Tomasz Paczkowski
Appl. Sci. 2025, 15(4), 2219; https://doi.org/10.3390/app15042219 - 19 Feb 2025
Viewed by 1896
Abstract
The integration of artificial intelligence (AI) with additive manufacturing (AM) is driving breakthroughs in personalized rehabilitation and physical therapy solutions, enabling precise customization to individual patient needs. This article presents the current state of knowledge and perspectives on personalized solutions for rehabilitation and physiotherapy made possible by the introduction of AI to AM. Advanced AI algorithms analyze patient-specific data such as body scans, movement patterns, and medical history to design customized assistive devices, orthoses, and prosthetics. This synergy enables the rapid prototyping and production of highly optimized solutions, improving comfort, functionality, and therapeutic outcomes. Machine learning (ML) models further streamline the process by anticipating biomechanical needs and adapting designs based on feedback, providing iterative refinement. Cutting-edge techniques leverage generative design and topology optimization to create lightweight yet durable structures that are ideally suited to the patient’s anatomy and rehabilitation goals. AI-based AM also facilitates the production of multi-material devices that combine flexibility, strength, and sensory capabilities, enabling improved monitoring and support during physical therapy. New perspectives include integrating smart sensors with printed devices, enabling real-time data collection and feedback loops for adaptive therapy. Additionally, these solutions are becoming increasingly accessible as AM technology improves and costs fall, democratizing personalized healthcare. Future advances could lead to the widespread use of digital twins for the real-time simulation and customization of rehabilitation devices before production. AI-based virtual reality (VR) and augmented reality (AR) tools are also expected to combine with AM to provide immersive, patient-specific training environments along with physical aids. Collaborative platforms based on federated learning can enable healthcare providers and researchers to securely share AI insights, accelerating innovation. However, challenges such as regulatory approval, data security, and ensuring equity in access to these technologies must be addressed to fully realize their potential. One of the major gaps is the lack of large, diverse datasets to train AI models, which limits their ability to design solutions that span different demographics and conditions. Integration of AI–AM systems into personalized rehabilitation and physical therapy should focus on improving data collection and processing techniques.
(This article belongs to the Special Issue Additive Manufacturing in Material Processing)

29 pages, 4682 KiB  
Article
LSAF-LSTM-Based Self-Adaptive Multi-Sensor Fusion for Robust UAV State Estimation in Challenging Environments
by Mahammad Irfan, Sagar Dalai, Petar Trslic, James Riordan and Gerard Dooly
Machines 2025, 13(2), 130; https://doi.org/10.3390/machines13020130 - 9 Feb 2025
Viewed by 1430
Abstract
Unmanned aerial vehicle (UAV) state estimation is fundamental across applications like robot navigation, autonomous driving, virtual reality (VR), and augmented reality (AR). This research highlights the critical role of robust state estimation in ensuring safe and efficient autonomous UAV navigation, particularly in challenging environments. We propose a deep learning-based adaptive sensor fusion framework for UAV state estimation, integrating multi-sensor data from stereo cameras, an IMU, two 3D LiDARs, and GPS. The framework dynamically adjusts fusion weights in real time using a long short-term memory (LSTM) model, enhancing robustness under diverse conditions such as illumination changes, structureless environments, and degraded or completely lost GPS signals, where traditional single-sensor SLAM methods often fail. Validated on an in-house integrated UAV platform and evaluated against high-precision RTK ground truth, the algorithm incorporates deep learning-predicted fusion weights into an optimization-based odometry pipeline. The system delivers robust, consistent, and accurate state estimation, outperforming state-of-the-art techniques. Experimental results demonstrate its adaptability and effectiveness across challenging scenarios, showcasing significant advancements in UAV autonomy and reliability through the synergistic integration of deep learning and sensor fusion.
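The fusion step that the predicted weights feed into can be illustrated as below. The LSTM itself is omitted: its output logits are supplied by hand, and the sensor estimates and numbers are purely illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_states(estimates, confidence_logits):
    """Weighted fusion of per-sensor state estimates. In the paper the
    weights come from an LSTM over recent sensor data; here the logits
    are given directly to keep the sketch self-contained."""
    w = softmax(confidence_logits)          # fusion weights, sum to 1
    return (w[:, None] * estimates).sum(axis=0), w

# Position estimates (metres) from stereo VO, LiDAR odometry, and GPS:
est = np.array([[1.02, 2.01, 0.98],
                [1.00, 2.00, 1.00],
                [3.50, 2.40, 1.10]])        # GPS degraded -> outlier row
logits = np.array([2.0, 2.5, -4.0])         # an LSTM would down-weight GPS here
fused, w = fuse_states(est, logits)
```

Because the GPS logit is strongly negative, its outlier estimate contributes almost nothing to the fused state; this is the behavior an adaptive weight predictor is trained to produce.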

24 pages, 4502 KiB  
Article
Quality Comparison of Dynamic Auditory Virtual-Reality Simulation Approaches of Approaching Vehicles Regarding Perceptual Behavior and Psychoacoustic Values
by Jonas Krautwurm, Daniel Oberfeld-Twistel, Thirsa Huisman, Maria Mareen Maravich and Ercan Altinsoy
Acoustics 2025, 7(1), 7; https://doi.org/10.3390/acoustics7010007 - 8 Feb 2025
Viewed by 1081
Abstract
Traffic safety experiments are often conducted in virtual environments in order to avoid dangerous situations and to run the experiments more cost-efficiently. Attention must therefore be paid to the fidelity of the traffic scenario reproduction, because the pedestrians’ judgments have to be close to reality. To better understand behavior under the prevailing audio rendering systems, a listening test was conducted that focused on perceptual differences between simulation and playback methods. Six vehicle drive-by scenes were presented using two different simulation methods and three different playback methods; binaural recordings from the test track, acquired while recording the vehicle sound sources for the simulation, were additionally incorporated. Each drive-by scene was characterized by a different vehicle type and speed. Participants rated six attributes across the perceptual dimensions “timbral balance”, “naturalness”, “room-related”, “source localization”, “loudness”, and “speed perception”. The ratings of the sound attributes were highly similar across the reproduction systems, with minor differences in the speed and loudness estimations; the differing perceptions of brightness stood out. A comparison of the loudness ratings in the scenes featuring electric and combustion-engine vehicles highlights the issue of reduced detection ability for the former.
