Search Results (114)

Search Parameters:
Keywords = distance in VR

23 pages, 7524 KB  
Article
Analyzing Visual Attention in Virtual Crime Scene Investigations Using Eye-Tracking and VR: Insights for Cognitive Modeling
by Wen-Chao Yang, Chih-Hung Shih, Jiajun Jiang, Sergio Pallas Enguita and Chung-Hao Chen
Electronics 2025, 14(16), 3265; https://doi.org/10.3390/electronics14163265 - 17 Aug 2025
Viewed by 307
Abstract
Understanding human perceptual strategies in high-stakes environments, such as crime scene investigations, is essential for developing cognitive models that reflect expert decision-making. This study presents an immersive experimental framework that utilizes virtual reality (VR) and eye-tracking technologies to capture and analyze visual attention during simulated forensic tasks. A 360° panoramic crime scene, constructed using the Nikon KeyMission 360 camera, was integrated into a VR system with HTC Vive and Tobii Pro eye-tracking components. A total of 46 undergraduate students aged 19 to 24 (23 from the National University of Singapore and 23 from the Central Police University in Taiwan) participated in the study, generating over 2.6 million gaze samples (IRB No. 23-095-B). The collected eye-tracking data were analyzed using statistical summarization, temporal alignment techniques (Earth Mover's Distance and Needleman-Wunsch algorithms), and machine learning models, including K-means clustering, random forest regression, and support vector machines (SVMs). Clustering achieved a classification accuracy of 78.26%, revealing distinct visual behavior patterns across participant groups. Proficiency prediction models reached optimal performance with random forest regression (R² = 0.7034), highlighting scan-path variability and fixation regularity as key predictive features. These findings demonstrate that eye-tracking metrics—particularly sequence-alignment-based features—can effectively capture differences linked to both experiential training and cultural context. Beyond its immediate forensic relevance, the study contributes a structured methodology for encoding visual attention strategies into analyzable formats, offering valuable insights for cognitive modeling, training systems, and human-centered design in future perceptual intelligence applications. Furthermore, our work advances the development of autonomous vehicles by modeling how humans visually interpret complex and potentially hazardous environments. By examining expert and novice gaze patterns during simulated forensic investigations, we provide insights that can inform the design of autonomous systems required to make rapid, safety-critical decisions in similarly unstructured settings. The extraction of human-like visual attention strategies not only enhances scene understanding, anomaly detection, and risk assessment in autonomous driving scenarios, but also supports accelerated learning of response patterns for rare, dangerous, or otherwise exceptional conditions—enabling autonomous driving systems to better anticipate and manage unexpected real-world challenges.
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
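
For readers unfamiliar with the sequence-alignment features mentioned in this abstract, the sketch below shows a minimal Needleman-Wunsch global alignment over gaze scan-paths that have been discretized into strings of area-of-interest (AOI) labels. The labels and the match/mismatch/gap scores are illustrative assumptions, not the parameters used by the authors.

```python
# Minimal Needleman-Wunsch global alignment over AOI label sequences.
# Scoring values (match/mismatch/gap) are illustrative only.
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    n, m = len(seq_a), len(seq_b)
    # score[i][j] = best alignment score of seq_a[:i] against seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Example: compare two hypothetical scan-paths whose AOIs are encoded as letters.
print(needleman_wunsch("ABCCD", "ABDCD"))  # alignment score of the two sequences
```

Higher scores indicate more similar visiting orders of AOIs, which is one way such alignment-based features can separate participant groups.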

10 pages, 815 KB  
Article
Virtual Reality-Based Screening Tool for Distance Horizontal Fusional Vergence in Orthotropic Young Subjects: A Prospective Pilot Study
by Jhih-Yi Lu, Yin-Cheng Liu, Jui-Bang Lu, Ming-Han Tsai, Wen-Ling Liao, I-Ming Wang, Hui-Ju Lin and Yu-Te Huang
Life 2025, 15(8), 1286; https://doi.org/10.3390/life15081286 - 13 Aug 2025
Viewed by 320
Abstract
This prospective pilot study aimed to develop and evaluate a VR-based screening tool for assessing distance fusional vergence amplitude in healthy orthotropic young adults aged 18 to 30 years. A VR-based balloon-hitting game was used to measure hitting deviation angles and total vergence amplitudes under five conditions: control (0 prism diopter [PD]), inward image rotation for 10 and 20 PD (negative fusional vergence [NFV] 10/20 groups), and outward image rotation for 10 and 20 PD (positive fusional vergence [PFV] 10/20 groups). Of the 20 subjects recruited, one was excluded due to esotropia, leaving 19 participants (mean age: 22.2 ± 2.2 years; 13 wore glasses and 3 were female). In the control group, the mean hitting deviation was 0.65 ± 0.25 PD. The PFV 10 PD group showed similar deviation (0.67 ± 0.25 PD, p = 0.67), while the PFV 20 PD group had a significant increase (1.71 ± 2.0 PD, p = 0.04). NFV groups demonstrated greater deviations (NFV 10 PD: 3.40 ± 2.05 PD; NFV 20 PD: 9.9 ± 2.40 PD, both p < 0.01). Total vergence amplitudes were 8.65, 16.48, 6.60, and 10.05 PD for PFV 10, PFV 20, NFV 10, and NFV 20 PD, respectively. The VR-based tool enables standardized, efficient assessment of fusional vergence and shows promise for large-scale screening.
(This article belongs to the Section Medical Research)
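
As context for the prism-diopter (PD) values reported above, the snippet below converts between deviation angles and prism diopters using the standard definition (1 PD deflects a ray by 1 cm at a distance of 1 m). It is a generic illustration, not the conversion implemented in the study's VR game.

```python
import math

# 1 prism diopter (PD) corresponds to a 1 cm displacement at 1 m,
# i.e. PD = 100 * tan(angle). Example values are illustrative.
def degrees_to_prism_diopters(angle_deg: float) -> float:
    return 100.0 * math.tan(math.radians(angle_deg))

def prism_diopters_to_degrees(pd: float) -> float:
    return math.degrees(math.atan(pd / 100.0))

print(round(degrees_to_prism_diopters(1.0), 2))   # ~1.75 PD
print(round(prism_diopters_to_degrees(20.0), 2))  # ~11.31 degrees of image rotation
```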

21 pages, 9379 KB  
Article
UDirEar: Heading Direction Tracking with Commercial UWB Earbud by Interaural Distance Calibration
by Minseok Kim, Younho Nam, Jinyou Kim and Young-Joo Suh
Electronics 2025, 14(15), 2940; https://doi.org/10.3390/electronics14152940 - 23 Jul 2025
Viewed by 387
Abstract
Accurate heading direction tracking is essential for immersive VR/AR, spatial audio rendering, and robotic navigation. Existing IMU-based methods suffer from drift and vibration artifacts, vision-based approaches require LoS and raise privacy concerns, and RF techniques often need dedicated infrastructure. We propose UDirEar, a COTS UWB device-based system that estimates user heading using solely high-level UWB information like distance and unit direction. By initializing an EKF with each user's constant interaural distance, UDirEar compensates for the earbuds' roto-translational motion without additional sensors. We evaluate UDirEar on a step-motor-driven dummy head against an IMU-only baseline (MAE 30.8°), examining robustness across dummy head–initiator distances, elapsed time, EKF calibration conditions, and NLoS scenarios. UDirEar achieves a mean absolute error of 3.84° and maintains stable performance under all tested conditions.
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)
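
To illustrate the geometric idea behind the interaural-distance calibration, here is a simplified 2D sketch: if the two earbud positions are known (in the paper they are estimated from UWB range and unit-direction measurements and smoothed by an EKF), the heading follows from rotating the interaural axis by 90 degrees. The function name, coordinate convention, and 15 cm interaural distance are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical illustration: recover head yaw from the 2D positions of the two
# earbuds. The interaural axis (left ear -> right ear) is perpendicular to the
# facing direction, so rotating it by +90 degrees gives the heading.
def heading_from_earbuds(left_xy: np.ndarray, right_xy: np.ndarray) -> float:
    axis = right_xy - left_xy                    # interaural axis, left -> right
    facing = np.array([-axis[1], axis[0]])       # rotate +90 deg (counterclockwise)
    return float(np.degrees(np.arctan2(facing[1], facing[0])))

# Facing along +x: left ear toward +y, right ear toward -y (assumed 15 cm apart).
left = np.array([0.0, 0.075])
right = np.array([0.0, -0.075])
print(heading_from_earbuds(left, right))  # 0.0 degrees
```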

12 pages, 8520 KB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Viewed by 744
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full-glove device and identify their possible improvements for a fundamental gesture such as pinching and fine moving of objects using the Microsoft HoloLens 2. Because the pinch action is an essential fine motor skill, augmented reality integrated with haptic feedback can notify the user that a gesture has been recognized and compensate for misaligned visual perception between the tracked fingertip and virtual objects, leading to better performance in terms of spatial precision. In our experiments, the participants' median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). While both haptic devices improved participants' precision with respect to the bare-hands case, participants achieved with the full glove median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s), and with the haptic rings they achieved even better performance, with median errors of 2.0 mm (IQR = 2.0 mm) in an even shorter median time of only 6.0 s (IQR = 5.0 s). Our outcomes suggest that simple devices like the described haptic rings can be better than glove-like devices, offering better performance in terms of accuracy, execution time, and wearability. The haptic glove probably compromises hand and finger tracking with the Microsoft HoloLens 2.

16 pages, 2032 KB  
Article
Auto-Segmentation and Auto-Planning in Automated Radiotherapy for Prostate Cancer
by Sijuan Huang, Jingheng Wu, Xi Lin, Guangyu Wang, Ting Song, Li Chen, Lecheng Jia, Qian Cao, Ruiqi Liu, Yang Liu, Xin Yang, Xiaoyan Huang and Liru He
Bioengineering 2025, 12(6), 620; https://doi.org/10.3390/bioengineering12060620 - 6 Jun 2025
Viewed by 856
Abstract
Objective: The objective of this study was to develop and assess the clinical feasibility of auto-segmentation and auto-planning methodologies for automated radiotherapy in prostate cancer. Methods: A total of 166 patients were used to train a 3D Unet model for segmentation of the gross tumor volume (GTV), clinical tumor volume (CTV), nodal CTV (CTVnd), and organs at risk (OARs). Performance was assessed by the Dice similarity coefficient (DSC), Recall, Precision, Volume Ratio (VR), the 95% Hausdorff distance (HD95%), and the volumetric revision degree (VRD). An auto-planning network based on a 3D Unet was trained on 77 treatment plans derived from the 166 patients. Dosimetric differences and clinical acceptability of the auto-plans were studied. The effect of OAR editing on dosimetry was also evaluated. Results: On an independent set of 50 cases, the auto-segmentation process took 1 min 20 s per case. The DSCs for GTV, CTV, and CTVnd were 0.87, 0.88, and 0.82, respectively, with VRDs ranging from 0.09 to 0.14. The segmentation of OARs demonstrated high accuracy (DSC ≥ 0.83, Recall/Precision ≈ 1.0). The auto-planning process required 1–3 optimization iterations for 50%, 40%, and 10% of cases, respectively, and exhibited significantly better conformity (p ≤ 0.01) and OAR sparing (p ≤ 0.03) while maintaining comparable target coverage. Only 6.7% of auto-plans were deemed unacceptable compared to 20% of manual plans, with 75% of auto-plans considered superior. Notably, the editing of OARs had no significant impact on doses. Conclusions: The accuracy of auto-segmentation is comparable to that of manual segmentation, and the auto-planning offers equivalent or better OAR protection, meeting the requirements of online automated radiotherapy and facilitating its clinical application.
(This article belongs to the Special Issue Novel Imaging Techniques in Radiotherapy)
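
For reference, the snippet below gives illustrative implementations of two of the segmentation metrics reported above, the Dice similarity coefficient (DSC) and the Volume Ratio (VR), applied to toy binary masks; it is not the evaluation code used in the study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def volume_ratio(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volume Ratio (VR): predicted volume divided by reference volume."""
    return pred.astype(bool).sum() / truth.astype(bool).sum()

# Toy 2D masks standing in for 3D segmentations.
truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:8, 2:8] = True
print(round(dice_coefficient(pred, truth), 3))  # 0.909
print(round(volume_ratio(pred, truth), 3))      # 0.833
```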

18 pages, 2855 KB  
Article
Visual Environment Effects on Wayfinding in Underground Spaces
by Jupeng Wu and Soobeen Park
Buildings 2025, 15(11), 1918; https://doi.org/10.3390/buildings15111918 - 2 Jun 2025
Viewed by 566
Abstract
This study investigates how visual environmental factors influence wayfinding behavior in underground spaces, with a particular focus on cultural differences between Korean and Chinese college students. A virtual reality (VR) environment was developed using Unity3D to simulate an underground space, incorporating five key visual variables: passage width, brightness, color temperature (warm vs. cool), the presence or absence of obstacles, and the configuration of sign systems. Participants were divided into two groups—Korean (Group K) and Chinese (Group C)—and engaged in a VR-based wayfinding experiment followed by an emotional vocabulary evaluation. The results indicate significant cultural differences in spatial perception and navigation preferences. Chinese participants preferred narrower, brighter, and cool-colored passages, associating them with an improved sense of direction, lower stress, and enhanced attention. In contrast, Korean participants favored wider, darker, and warm-colored passages, valuing accessibility, stability, and distance perception. Both groups showed a strong preference for environments with floor signage and combined sign systems, though Korean participants were more tolerant of obstacles. These findings provide practical insights for designing more inclusive and navigable underground public spaces across different cultural contexts.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

14 pages, 1136 KB  
Article
The Potential Effects of Sensor-Based Virtual Reality Telerehabilitation on Lower Limb Function in Patients with Chronic Stroke Facing the COVID-19 Pandemic: A Retrospective Case-Control Study
by Mirjam Bonanno, Maria Grazia Maggio, Paolo De Pasquale, Laura Ciatto, Antonino Lombardo Facciale, Morena De Francesco, Giuseppe Andronaco, Rosaria De Luca, Angelo Quartarone and Rocco Salvatore Calabrò
Med. Sci. 2025, 13(2), 65; https://doi.org/10.3390/medsci13020065 - 23 May 2025
Viewed by 1433
Abstract
Background/Objectives: Individuals with chronic stroke often experience various impairments, including poor balance, reduced mobility, limited physical activity, and difficulty performing daily tasks. In the context of the COVID-19 pandemic, telerehabilitation (TR) can overcome the barriers of geographical and physical distancing, time, costs, and travel, as well as the anxiety about contracting COVID-19. In this retrospective case-control study, we aim to evaluate the motor and cognitive effects of balance TR training carried out with a sensor-based non-immersive virtual reality system compared to conventional rehabilitation in chronic stroke patients. Methods: Twenty chronic post-stroke patients underwent evaluation for inclusion in the analysis through an electronic recovery data system. The patients included in the study were divided into two groups with similar medical characteristics and duration of rehabilitation training. However, the groups differed in the type of rehabilitation approach used. The experimental group (EG) received TR with a sensor-based VR device called VRRS-HomeKit (n = 10). In contrast, the control group (CG) underwent conventional home-based rehabilitation (n = 10). Results: At the end of the training, we observed significant improvements in the EG in the 10-m walking test (10MWT) (p = 0.01), Timed-Up-Go Left (TUG L) (p = 0.01), and Montreal Cognitive Assessment (MoCA) (p = 0.005). Conclusions: In our study, we highlighted the potential role of sensor-based virtual reality TR in chronic stroke patients for improving lower limb function, suggesting that this approach is feasible and not inferior to conventional home-based rehabilitation.

13 pages, 1014 KB  
Article
The Impact of Deep Core Muscle System Training Through Virtual Reality on Selected Posturographic Parameters
by Jakub Čuj, Denisa Lenková, Miloslav Gajdoš, Eva Lukáčová, Michal Macej, Katarína Hnátová, Pavol Nechvátal and Lucia Demjanovič Kendrová
J. Funct. Morphol. Kinesiol. 2025, 10(2), 185; https://doi.org/10.3390/jfmk10020185 - 21 May 2025
Viewed by 559
Abstract
Objective: The aim of this study was to investigate the immediate effects of deep core muscle training in the plank position, using the Icaros® system integrated with virtual reality (VR), on selected posturographic parameters. Methods: To meet the stated objective, we utilized the Icaros® therapeutic system (Icaros GmbH, Martinsried, Germany) for VR-based exercise. The posturographic parameters were measured using the FootScan® force platform (Materialise Motion, Paal, Belgium). A representative sample of 30 healthy participants, 13 females and 17 males (age: 22.5 ± 2.1 years; weight: 65 ± 2.9 kg; height: 1.68 ± 0.4 m; BMI: 23.04 ± 1.75), was included in the study. All participants had no prior experience with VR. The selected posturographic parameters were the ellipse area (mm²) and traveled distance (mm), assessed four times at five-minute intervals following a 15 min VR-based training session on the Icaros® system. Results: The results revealed that the participants experienced a sense of instability after completing the 15 min VR session, as objectively demonstrated by changes in the measured parameters. Both the ellipse area and traveled distance showed a worsening trend during the first three measurements: immediately post-exercise, at 5 min, and at 10 min post-exercise. A downward trend was observed in the fourth measurement, taken 15 min after exercise. Statistically significant differences were found for both parameters: ellipse area (p = 0.000) and traveled distance (p = 0.000). Post hoc analysis further confirmed significant differences between the time points. Conclusions: Based on the findings, it is recommended that trainers and physiotherapists supervising athletes or patients using the Icaros® VR system allow for a minimum rest period of 15 min in a seated or lying position following exercise. This recovery period appears essential to mitigate the sensation of instability and to reduce the risk of complications or injury due to potential falls.
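
The two posturographic parameters above can be illustrated with a short sketch on a synthetic centre-of-pressure (CoP) trace: traveled distance as the summed path length, and ellipse area via a common 95% chi-square prediction-ellipse formula. The exact formula used by the FootScan® software may differ, so treat this as a generic approximation.

```python
import numpy as np

def traveled_distance(cop_xy: np.ndarray) -> float:
    """Total CoP path length (mm): sum of distances between consecutive samples."""
    return float(np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1)))

def ellipse_area_95(cop_xy: np.ndarray) -> float:
    """Area (mm^2) of a 95% prediction ellipse fitted to the CoP cloud."""
    eigvals = np.linalg.eigvalsh(np.cov(cop_xy.T))
    return float(np.pi * 5.991 * np.sqrt(np.prod(eigvals)))  # chi2(0.95, df=2) ~= 5.991

rng = np.random.default_rng(0)
cop = rng.normal(scale=[3.0, 5.0], size=(1000, 2))  # synthetic CoP trace in mm
print(round(traveled_distance(cop), 1), round(ellipse_area_95(cop), 1))
```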

13 pages, 3019 KB  
Article
QTL Identification and Candidate Gene Prediction for Spike-Related Traits in Barley
by Xiaofang Wang, Junpeng Chen, Qingyu Cao, Chengyang Wang, Genlou Sun and Xifeng Ren
Agronomy 2025, 15(5), 1185; https://doi.org/10.3390/agronomy15051185 - 14 May 2025
Viewed by 559
Abstract
Barley (Hordeum vulgare L.) is one of the most important cereal crops in the world, and its production is important to humans. Barley spike morphology is highly correlated with yield and is also a complex, multigene-controlled quantitative trait. To date, a considerable number of spike-related quantitative trait loci (QTLs) have been reported in barley, but the large physical distances between most of them and the lack of follow-up studies have made it difficult to use them in molecular-assisted breeding in barley. To explore more novel and yield-enhancing spike QTLs, in this study, a high-density genetic linkage map was developed based on a population of 172 F2:12 recombinant inbred lines (RILs) derived from a cross between the barley varieties Yongjiabaidamai (YJ) and Hua 30 (H30), and used to map spike length (SL), rachis node number (SRN), and spike density (SD). A total of 50 additive QTLs (LOD > 3) were mapped in four environments, four of them being stable and major QTLs. The qSL2-5 overlaps with the zeo1 gene; by comparing the gene sequences of both parents and drawing on previous studies, zeo1 was determined to be the SL regulatory gene underlying qSL2-5. The qSRN2-1 overlaps with vrs1, but it has not been previously reported that vrs1 affects SRN. Notably, two novel QTLs, one each on chromosomes 2H (qSL2-1) and 5H (qSL5-1), were identified for the first time in this study. The qSL2-1 interval spans only 0.06 Mb and contains three high-confidence genes. In addition, this study explored the relationships among the three spike traits and found that SL was affected by both SRN and SD, while there was almost no relationship between SRN and SD. We also explored the effect of these QTLs on grain weight per spike (GWPS) to assess their effect on yield and found that qSRN2-1 and qSL5-1 had a greater effect on GWPS, suggesting that they are potential loci to increase yield.
(This article belongs to the Section Crop Breeding and Genetics)

29 pages, 16039 KB  
Article
PRIVocular: Enhancing User Privacy Through Air-Gapped Communication Channels
by Anastasios N. Bikos
Cryptography 2025, 9(2), 29; https://doi.org/10.3390/cryptography9020029 - 1 May 2025
Viewed by 1733
Abstract
Virtual reality (VR)/the metaverse is transforming into a ubiquitous technology by leveraging smart devices to provide highly immersive experiences at an affordable price. Cryptographically securing such augmented reality schemes is of paramount importance. Securely transferring the same secret key, in obfuscated form, between several parties is the main issue with symmetric cryptography, the workhorse of modern cryptography thanks to its ease of use and speed. Typically, asymmetric cryptography establishes a shared secret between parties, after which the switch to symmetric encryption can be made. However, several state-of-the-art (SoTA) security research schemes lack flexibility and scalability for industrial Internet-of-Things (IoT)-sized applications. In this paper, we present the full architecture of the PRIVocular framework. PRIVocular (i.e., PRIV(acy)-ocular) is a VR-ready hardware–software integrated system that is capable of visually transmitting user data over three versatile modes of encapsulation, encrypted—without loss of generality—using an asymmetric-key cryptosystem. These operation modes can be optical character-based or QR-tag-based. Encryption and decryption primarily depend on each mode's success ratio of correct encoding and decoding. We investigate the most efficient means of ocular (encrypted) data transfer by considering several designs and contributing to each framework component. Our pre-prototyped framework can provide such privacy preservation (namely, virtual proof of privacy (VPP)) and visually secure data transfer promptly (<1000 ms) and at the physical distance of the smart glasses (∼50 cm).
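
The hybrid pattern the abstract alludes to, using asymmetric cryptography to establish a shared secret and then switching to symmetric encryption for the bulk data, can be sketched with the Python cryptography package as below. This is a generic illustration of that pattern under assumed key sizes and algorithms (RSA-OAEP wrapping an AES-GCM session key), not the PRIVocular implementation.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver generates an asymmetric key pair; only the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: create a fresh symmetric session key and wrap it with the public key.
session_key = AESGCM.generate_key(bit_length=128)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Sender: encrypt the payload (e.g. data to be rendered as a QR tag) symmetrically.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"user data to display as a QR tag", None)

# Receiver: unwrap the session key and decrypt the payload.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
print(plaintext)
```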

22 pages, 4437 KB  
Article
Study of Visualization Modalities on Industrial Robot Teleoperation for Inspection in a Virtual Co-Existence Space
by Damien Mazeas and Bernadin Namoano
Virtual Worlds 2025, 4(2), 17; https://doi.org/10.3390/virtualworlds4020017 - 28 Apr 2025
Cited by 1 | Viewed by 816
Abstract
Effective teleoperation visualization is crucial but challenging for tasks like remote inspection. This study proposes a VR-based teleoperation framework featuring a 'Virtual Co-Existence Space' and systematically investigates visualization modalities within it. We compared four interfaces (2D camera feed, 3D point cloud, combined 2D3D, and Augmented Virtuality (AV)) for controlling an industrial robot. Twenty-four participants performed inspection tasks while performance (time, collisions, accuracy, photos) and cognitive load (NASA-TLX, pupillometry) were measured. Results revealed distinct trade-offs: 3D imposed the highest cognitive load but enabled precise navigation (low collisions); 2D3D offered the lowest load and highest user comfort but slightly reduced distance accuracy; AV suffered significantly higher collision rates, and participant feedback indicated usability issues; 2D showed low physiological load but high subjective effort. No significant differences were found for completion time, distance accuracy, or photo quality. In conclusion, no visualization modality proved universally superior within the proposed framework. The optimal choice depends on balancing task priorities such as navigation safety versus user workload. Hybrid 2D3D shows promise for minimizing load, while AV requires substantial usability refinement for safe deployment.

22 pages, 2964 KB  
Article
Energy-Efficient Dynamic Street Lighting Optimization: Balancing Pedestrian Safety and Energy Conservation
by Zhide Wang, Qing Fan, Zhuoyuan Du and Mingyu Zhang
Buildings 2025, 15(8), 1377; https://doi.org/10.3390/buildings15081377 - 21 Apr 2025
Viewed by 1091
Abstract
Residential street lighting plays a crucial role in enhancing the reassurance of pedestrians returning home late at night. However, street lighting is sometimes recommended or required to be kept at lower levels at night due to problems such as light pollution, energy consumption, and unfavorable economics. To address these problems, this study designed a new Dynamic tracking lighting control mode capable of greater interactivity. Our study aimed to determine whether this new interactive lighting mode can balance pedestrian safety with energy savings compared with other lighting approaches used in low-light environments. In this experiment, 30 participants explored four lighting conditions in a simulated nighttime street environment through virtual reality (VR) and completed an assessment of each lighting mode. Statistical analysis of the results using the Friedman ANOVA test revealed that the Dynamic tracking lighting mode had advantages in improving pedestrians' reassurance compared with the other three lighting modes. Moreover, an additional recognition test recorded the distance between the participant and a stranger agent at the moment the participant recognized the agent. The experimental results showed that the Dynamic tracking lighting mode can improve pedestrians' ability to recognize others in low-light environments. These findings provide new strategies and ideas for urban energy conservation and environmental protection.
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
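
As an illustration of the Friedman test used above, the snippet below runs SciPy's implementation on hypothetical reassurance ratings for four lighting modes rated by the same participants; the numbers are made up for demonstration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings: rows = participants, columns = four lighting modes.
ratings = np.array([
    [4, 2, 3, 1],
    [5, 3, 3, 2],
    [4, 2, 4, 1],
    [5, 3, 2, 2],
    [4, 1, 3, 2],
])

# Friedman test for repeated measures across the four lighting conditions:
# each column (one condition's ratings across participants) is one sample.
stat, p_value = stats.friedmanchisquare(*ratings.T)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```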

22 pages, 6106 KB  
Article
Variable Rate Seeding and Accuracy of Within-Field Hybrid Switching in Maize (Zea mays L.)
by Károly Bűdi, Annamária Bűdi, Ádám Tarcsi and Gábor Milics
Agronomy 2025, 15(3), 718; https://doi.org/10.3390/agronomy15030718 - 16 Mar 2025
Viewed by 1192
Abstract
Precision agriculture techniques, such as variable rate seeding (VRS) and hybrid switching, play an important role in optimizing crop yield and reducing input costs. This study evaluates the effectiveness of hybrid switching and the application of VRS technology in maize production, focusing on the accuracy of seeding rate and hybrid placement under varying field conditions. Conducted over two years, the research compares the performance of a precision planting system in flat (2023) and hilly (2024) terrain in north-eastern Hungary. The study examines seed placement uniformity, furrow quality, and seed drop rates, with a focus on how terrain affects the success of these operations. Data analysis shows that hybrid switching and VRS result in better seed placement and more uniform furrows in downhill operations, with lower seed drop rates compared to uphill operations. In addition, the paper discusses the importance of accurate seeding equipment calibration and data cleaning. The findings highlight the critical need for accuracy and reliability in precision agriculture and provide insights to improve future crop management strategies and ensure sustainable farming practices. The study evaluates the accuracy of hybrid switching in maize across different terrain types and its impact on operational efficiency. The results show variation in hybrid switching distances, with an average transition length of 5.1 m on flat terrain, 5.80 m uphill, and 4.22 m downhill. The longest transitions occurred on uphill terrain due to increased mechanical adjustment delays, while the shortest transitions were observed on downhill slopes, where seed flow remained more stable. The results highlight the importance of terrain-adaptive control mechanisms in precision planting systems to minimize transition delays, improve seed placement accuracy, and increase overall yield potential.
(This article belongs to the Section Precision and Digital Agriculture)

11 pages, 1718 KB  
Article
Obstacle Circumvention Strategies During Omnidirectional Treadmill Walking in Virtual Reality
by Marco A. Bühler and Anouk Lamontagne
Sensors 2025, 25(6), 1667; https://doi.org/10.3390/s25061667 - 8 Mar 2025
Viewed by 988
Abstract
Obstacle circumvention is an important task for community ambulation that is challenging to replicate in research and clinical environments. Omnidirectional treadmills combined with virtual reality (ODT-VR) offer a promising solution, allowing users to change walking direction and speed while walking in large, simulated environments. However, the extent to which such a setup yields circumvention strategies representative of overground walking in the real world (OVG-RW) remains to be determined. This study examined obstacle circumvention strategies in ODT-VR versus OVG-RW and measured how they changed with practice. Fifteen healthy young individuals walked while avoiding an interferer, performing four consecutive blocks of trials per condition. Distance at onset of trajectory deviation, minimum distance from the interferer, and walking speed were compared across conditions and blocks. In ODT-VR, larger clearances and slower walking speeds were observed. In contrast, onset distances and proportions of right-side circumvention were similar between conditions. Walking speed increased from the first to the second block exclusively. Results suggest the use of a cautious locomotor behavior while using the ODT-VR setup, with some key features of circumvention strategies being preserved. Although ODT-VR setups offer exciting prospects for research and clinical applications, consideration should be given to the generalizability of findings to the real world.
(This article belongs to the Special Issue Advanced Sensors in Biomechanics and Rehabilitation)
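
Two of the circumvention measures above (minimum distance from the interferer and walking speed) can be computed from synchronized 2D trajectories as in the sketch below; the trajectories and sampling rate are synthetic stand-ins, not the study's data.

```python
import numpy as np

def minimum_distance(walker_xy: np.ndarray, interferer_xy: np.ndarray) -> float:
    """Smallest clearance (m) between walker and interferer, sample by sample."""
    return float(np.min(np.linalg.norm(walker_xy - interferer_xy, axis=1)))

def mean_walking_speed(walker_xy: np.ndarray, dt: float) -> float:
    """Mean speed (m/s) from the walker's path length divided by elapsed time."""
    steps = np.linalg.norm(np.diff(walker_xy, axis=0), axis=1)
    return float(steps.sum() / (dt * len(steps)))

# Synthetic example: walker deviates laterally while an interferer approaches head-on.
t = np.linspace(0, 10, 501)
walker = np.column_stack([1.2 * t, 0.3 * np.sin(t)])
interferer = np.column_stack([12 - 1.0 * t, np.zeros_like(t)])
print(round(minimum_distance(walker, interferer), 2),
      round(mean_walking_speed(walker, t[1] - t[0]), 2))
```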

24 pages, 7093 KB  
Article
Comparison of Manual, Automatic, and Voice Control in Wheelchair Navigation Simulation in Virtual Environments: Performance Evaluation of User and Motion Sickness
by Enrique Antonio Pedroza-Santiago, José Emilio Quiroz-Ibarra, Erik René Bojorges-Valdez and Miguel Ángel Padilla-Castañeda
Sensors 2025, 25(2), 530; https://doi.org/10.3390/s25020530 - 17 Jan 2025
Cited by 1 | Viewed by 1572
Abstract
Mobility is essential for individuals with physical disabilities, and wheelchairs significantly enhance their quality of life. Recent advancements focus on developing sophisticated control systems for effective and efficient interaction. This study evaluates the usability and performance of three wheelchair control modes (manual, automatic, and voice-controlled) using a virtual reality (VR) simulation tool. VR provides a controlled and repeatable environment to assess navigation performance and motion sickness across three scenarios: supermarket, museum, and city. Twenty participants completed nine tests each, resulting in 180 trials. Findings revealed significant differences in navigation efficiency, distance, and collision rates across control modes and scenarios. Automatic control consistently achieved faster navigation times and fewer collisions, particularly in the supermarket. Manual control offered precision but required greater user effort. Voice control, while intuitive, resulted in longer distances traveled and higher collision rates in complex scenarios like the city. Motion sickness levels varied across scenarios, with higher discomfort reported in the city during voice and automatic control. Participant feedback, gathered via a Likert-scale questionnaire, highlighted the potential of VR simulation for evaluating user comfort and performance. This research underscores the advantages of VR-based testing for rapid prototyping and user-centered design, offering valuable insights into improving wheelchair control systems. Future work will explore adaptive algorithms to enhance usability and accessibility in real-world applications.