Search Results (45)

Search Parameters:
Keywords = VR-based training and testing system

20 pages, 4222 KB  
Article
Development and Usability Evaluation of a Leap Motion-Based Controller-Free VR Training System for Inferior Alveolar Nerve Block
by Jun-Seong Kim, Kun-Woo Kim, Hyo-Joon Kim and Seong-Yong Moon
Appl. Sci. 2026, 16(3), 1325; https://doi.org/10.3390/app16031325 - 28 Jan 2026
Viewed by 463
Abstract
This study developed a virtual reality (VR) simulator for training the inferior alveolar nerve block (IANB) procedure using Leap Motion-based hand tracking and the Unity engine, and evaluated its interaction performance, task-level outcomes within the simulator, and usability. Built on a 3D anatomical model, the system provides a pre-clinical practice environment for realistic syringe manipulation and visually guided needle insertion, enabling repeated rehearsal of the procedural workflow. Interaction stability was assessed using participant-level gesture recognition rates and input latency. Usability was evaluated via a questionnaire addressing ease of use, cognitive load, and perceived educational usefulness. The results indicated participant-level mean gesture recognition rates of 88.8–90.5% and mean response latencies of approximately 64–66 ms. In usability testing (n = 40), the item related to perceived procedural skill improvement received the highest score (4.25/5.0). Because this study did not include controlled comparisons with conventional training or objective measures of clinical competency transfer, the findings should be interpreted as preliminary evidence of technical feasibility and learner-perceived usefulness within a simulated setting. Controlled comparative studies using objective learning outcomes are warranted.
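The interaction metrics reported above (per-participant gesture recognition rate and mean response latency) reduce to simple aggregates over logged gesture events. A minimal Python sketch with a hypothetical log format (the abstract does not specify how events were recorded):

```python
from statistics import mean

def interaction_metrics(events):
    """Summarize hand-tracking interaction logs for one participant.

    `events` is a hypothetical list of (recognized, latency_ms) pairs;
    latency is only logged when the gesture was recognized.
    Returns (recognition_rate_percent, mean_latency_ms).
    """
    recognized = [lat for ok, lat in events if ok]
    rate = 100.0 * len(recognized) / len(events)
    return rate, mean(recognized)

# Illustrative log: 9 of 10 gestures recognized, latencies near 65 ms.
log = [(True, 64.0)] * 5 + [(True, 66.0)] * 4 + [(False, None)]
rate, latency = interaction_metrics(log)
print(rate, round(latency, 1))  # 90.0 64.9
```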

22 pages, 1145 KB  
Article
TSMTFN: Two-Stream Temporal Shift Module Network for Efficient Egocentric Gesture Recognition in Virtual Reality
by Muhammad Abrar Hussain, Chanjun Chun and SeongKi Kim
Virtual Worlds 2025, 4(4), 58; https://doi.org/10.3390/virtualworlds4040058 - 4 Dec 2025
Cited by 1 | Viewed by 781
Abstract
Egocentric hand gesture recognition is vital for natural human–computer interaction in augmented and virtual reality (AR/VR) systems. However, most deep learning models struggle to balance accuracy and efficiency, limiting real-time use on wearable devices. This paper introduces a Two-Stream Temporal Shift Module Transformer Fusion Network (TSMTFN) that achieves high recognition accuracy with low computational cost. The model integrates Temporal Shift Modules (TSMs) for efficient motion modeling and a Transformer-based fusion mechanism for long-range temporal understanding, operating on dual RGB-D streams to capture complementary visual and depth cues. Training stability and generalization are enhanced through full-layer training from epoch 1 and MixUp/CutMix augmentations. Evaluated on the EgoGesture dataset, TSMTFN attained 96.18% top-1 accuracy and 99.61% top-5 accuracy on the independent test set with only 16 GFLOPs and 21.3M parameters, offering a 2.4–4.7× reduction in computation compared to recent state-of-the-art methods. The model runs at 15.10 samples/s, achieving real-time performance. The results demonstrate robust recognition across over 95% of gesture classes and minimal inter-class confusion, establishing TSMTFN as an efficient, accurate, and deployable solution for next-generation wearable AR/VR gesture interfaces.
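The core of a Temporal Shift Module is a zero-parameter channel shift along the time axis. A minimal pure-Python sketch; the shift fraction and zero-padding follow the original TSM idea and are not details confirmed by this abstract:

```python
def temporal_shift(clip, shift_div=4):
    """Temporal Shift Module on a clip shaped [T][C] (time x channels).

    The first C//shift_div channels take their features from the next
    frame (backward shift), the next C//shift_div from the previous
    frame (forward shift), and the rest stay put. Vacated positions
    are zero-filled.
    """
    T, C = len(clip), len(clip[0])
    fold = C // shift_div
    out = [[0.0] * C for _ in range(T)]
    for t in range(T):
        for c in range(C):
            if c < fold:            # backward shift: out[t] <- clip[t+1]
                if t + 1 < T:
                    out[t][c] = clip[t + 1][c]
            elif c < 2 * fold:      # forward shift: out[t] <- clip[t-1]
                if t - 1 >= 0:
                    out[t][c] = clip[t - 1][c]
            else:                   # untouched channels
                out[t][c] = clip[t][c]
    return out

# 3 frames x 4 channels: frame t holds the constant value t + 1.
clip = [[float(t + 1)] * 4 for t in range(3)]
print(temporal_shift(clip))
```

Because the shift only re-indexes existing features, it adds temporal mixing at essentially no computational cost, which is how TSM-style models keep GFLOPs low.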

22 pages, 2191 KB  
Systematic Review
Virtual Reality-Based Cognitive and Physical Interventions in Cognitive Impairment: A Network Meta-Analysis of Immersion Level Effects
by Wanyi Li, Wei Gao and Xiangyang Lin
Behav. Sci. 2025, 15(12), 1610; https://doi.org/10.3390/bs15121610 - 22 Nov 2025
Viewed by 1419
Abstract
Virtual reality (VR) has emerged as an innovative platform for delivering cognitive and physical training to individuals with cognitive impairment. However, the differential effectiveness of fully immersive versus partially immersive VR interventions remains unclear. This network meta-analysis aimed to evaluate how immersion level influences cognitive, motor, and functional outcomes in neurodegenerative populations. A systematic search of PubMed, Embase, Cochrane Library, and Web of Science up to October 2025 identified 20 randomized controlled trials involving 1382 participants with mild cognitive impairment (MCI) or dementia. Interventions were categorized into four groups: (1) fully immersive VR (head-mounted displays), (2) partially immersive VR (screen-based or motion-capture systems), (3) active control (traditional cognitive or physical training), and (4) passive control (usual care or health education). Outcomes included the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), Trail Making Test (TMT), Digit Span Test (DST), Timed Up and Go (TUG), and Instrumental Activities of Daily Living (IADL). Standardized mean differences (SMDs) and surface under the cumulative ranking curve (SUCRA) values were calculated using RevMan 5.4 and Stata 18.0. Fully immersive VR significantly improved global cognition compared to passive control (MMSE: SMD = 0.51, 95% CI [0.06, 0.96]), while partially immersive VR showed superior effects on executive function versus active control (TMT-B: SMD = −1.29, 95% CI [−2.62, −0.93]) and on motor function (TUG: SMD = −0.59, 95% CI [−1.11, −0.08]). In MoCA performance, both VR modalities outperformed traditional interventions (SUCRA: fully immersive = 76.0%; partially immersive = 84.8%). SUCRA rankings suggest that fully immersive VR is optimal for memory and foundational cognition (81.7%), whereas partially immersive VR performs best for executive function (98.9%). 
These findings indicate that the efficacy of VR-based cognitive or physical–cognitive interventions is modulated by immersion level. Tailoring VR modality to specific cognitive domains may optimize rehabilitation outcomes in MCI and dementia care.
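The standardized mean differences (SMDs) reported above can be illustrated with a small worked example. The sketch below computes Hedges' g (Cohen's d with a pooled SD and a small-sample correction, as meta-analysis tools such as RevMan commonly report); the MMSE scores are invented for illustration:

```python
from math import sqrt

def smd(group_a, group_b):
    """Standardized mean difference between two groups.

    Cohen's d with pooled SD, times Hedges' small-sample correction.
    A positive value favors group_a.
    """
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    sd_pooled = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    d = (ma - mb) / sd_pooled
    j = 1 - 3 / (4 * (na + nb) - 9)   # Hedges' correction factor
    return d * j

# Hypothetical MMSE scores: VR group vs. passive control.
vr      = [24, 26, 25, 27, 26, 25]
control = [23, 24, 24, 25, 23, 24]
print(round(smd(vr, control), 2))
```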

24 pages, 8957 KB  
Article
Utilizing VR Technology in Foundational Welding Skill Development
by Nuri Furkan Koçak, Ali Saygın and Fuat Türk
Appl. Sci. 2025, 15(22), 12331; https://doi.org/10.3390/app152212331 - 20 Nov 2025
Cited by 1 | Viewed by 1465
Abstract
Traditional approaches to welder training demand substantial investments in equipment, consumable materials, and workshop facilities, while also exposing novice learners to considerable safety risks. This study investigates the effectiveness of a virtual reality (VR)-based welding training system developed with Unity for the Meta Quest 2 platform, designed to deliver safe and immersive instruction in fundamental welding techniques. A total of twenty participants with no prior welding experience completed structured VR training sessions over two weeks. The program focused on developing competencies in welding machine operation (including start-up procedures and parameter adjustments), controlling shielding gas flow, and accurately regulating torch-to-workpiece distance, torch angle, and travel speed. Real-time feedback was integrated into the system to support accurate control and positioning of the welding torch. Quantitative assessments demonstrated significant improvements in technical proficiency, together with increased trainee confidence and reduced anxiety. Knowledge test scores increased from 45.3 to 85.1, while machine adjustment accuracy rose from 28.7 to 92.3. In parallel, participant confidence levels increased substantially, and anxiety scores decreased from 4.0–4.5 to 1.1–1.5 on standardized scales. These findings provide experimental evidence that VR-based training can enhance fundamental welding education by offering a safe, repeatable, and effective practice environment that simultaneously improves technical performance, strengthens learner confidence, and reduces training-related anxiety.
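One way to put the pre/post score changes above on a common scale is Hake's normalized gain, the fraction of the possible improvement actually achieved. This is not a metric the study reports; the sketch assumes both scores are out of 100:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized learning gain: (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

# Knowledge test: 45.3 -> 85.1; machine adjustment accuracy: 28.7 -> 92.3.
print(round(normalized_gain(45.3, 85.1), 2))  # 0.73
print(round(normalized_gain(28.7, 92.3), 2))  # 0.89
```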
(This article belongs to the Special Issue Recent Advances and Application of Virtual Reality)

16 pages, 4628 KB  
Article
The Design and Assessment of a Virtual Reality System for Driver Psychomotor Evaluation
by Jorge Luis Veloz, Andrea Alcívar-Cedeño, Tony Michael Cedeño-Zambrano, Deiter Miguel Zamora-Plaza, Pablo Fernández-Arias, Diego Vergara and Antonio del Bosque
Eng 2025, 6(11), 301; https://doi.org/10.3390/eng6110301 - 1 Nov 2025
Viewed by 843
Abstract
Traffic safety continues to be a pressing worldwide issue, with young drivers especially exposed to accidents because of limited experience, reckless behaviors, and risky practices such as driving under the influence of alcohol or other substances. In this scenario, reliable methods to evaluate psychomotor and sensory abilities essential for safe driving are highly needed. This study presents the development of a Virtual Reality (VR) prototype aimed at enhancing psychometric testing. The platform incorporates immersive environments to assess peripheral vision, reaction time, and motor accuracy, implemented with Oculus Quest 2, Blender, and Unity. The VR-based system was validated through black-box testing and user satisfaction surveys with a sample of 80 licensed drivers in single-session evaluations. The findings demonstrate that VR increases both precision and realism in psychomotor evaluations: 81.25% of participants perceived the scenarios as realistic, and 85% agreed that the system effectively measured critical driving skills. While a few users experienced minor discomfort, 97.5% recommended its application in practical assessments. This study highlights VR as a robust alternative to conventional psychometric/psychotechnical tests, capable of improving measurement reliability and user engagement and paving the way for more efficient and inclusive driver training initiatives.

20 pages, 3686 KB  
Article
Comparative Analysis of Correction Methods for Multi-Camera 3D Image Processing System and Its Application Design in Safety Improvement on Hot-Working Production Line
by Joanna Gąbka
Appl. Sci. 2025, 15(16), 9136; https://doi.org/10.3390/app15169136 - 19 Aug 2025
Viewed by 1279
Abstract
The paper presents the results of research focused on configuring a system for stereoscopic view capturing and processing. The system is being developed for use in staff training scenarios based on Virtual Reality (VR), where high-quality, distortion-free imagery is essential. This research addresses key challenges in image distortion, including the fish-eye effect and other aberrations. In addition, it considers the computational and bandwidth efficiency required for effective and economical streaming and real-time display of recorded content. Measurements and calculations were performed using a selected set of cameras, adapters, and lenses, chosen based on predefined criteria. A comparative analysis was conducted between the nearest-neighbour linear interpolation method and a third-order polynomial interpolation (ABCD polynomial). These methods were tested and evaluated using three different computational approaches, each aimed at optimizing data processing efficiency critical for real-time image correction. Images captured during real-time video transmission—processed using the developed correction techniques—are presented. In the final sections, the paper describes the configuration of an innovative VR-based training system incorporating an edge computing device. A case study involving a factory producing wheel rims is also presented to demonstrate the practical application of the system.
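A third-order ("ABCD") polynomial radial correction of the kind compared above can be sketched as a per-pixel radius remap about the image centre (resampling the corrected coordinate, e.g. by nearest neighbour, would follow). The coefficients below are illustrative placeholders, not values from the paper:

```python
from math import hypot

def correct_radius(r, coeffs):
    """ABCD polynomial radial mapping: r -> A*r^3 + B*r^2 + C*r + D."""
    a, b, c, d = coeffs
    return a * r**3 + b * r**2 + c * r + d

def correct_pixel(x, y, cx, cy, coeffs):
    """Undistort one pixel coordinate about the image centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = hypot(dx, dy)
    if r == 0:
        return x, y
    scale = correct_radius(r, coeffs) / r
    return cx + dx * scale, cy + dy * scale

# With the identity polynomial (A = B = D = 0, C = 1), pixels are untouched.
print(correct_pixel(120.0, 80.0, 100.0, 100.0, (0.0, 0.0, 1.0, 0.0)))  # (120.0, 80.0)
```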

14 pages, 1136 KB  
Article
The Potential Effects of Sensor-Based Virtual Reality Telerehabilitation on Lower Limb Function in Patients with Chronic Stroke Facing the COVID-19 Pandemic: A Retrospective Case-Control Study
by Mirjam Bonanno, Maria Grazia Maggio, Paolo De Pasquale, Laura Ciatto, Antonino Lombardo Facciale, Morena De Francesco, Giuseppe Andronaco, Rosaria De Luca, Angelo Quartarone and Rocco Salvatore Calabrò
Med. Sci. 2025, 13(2), 65; https://doi.org/10.3390/medsci13020065 - 23 May 2025
Cited by 1 | Viewed by 2691
Abstract
Background/Objectives: Individuals with chronic stroke often experience various impairments, including poor balance, reduced mobility, limited physical activity, and difficulty performing daily tasks. In the context of the COVID-19 pandemic, telerehabilitation (TR) can overcome the barriers of geographical and physical distancing, time, costs, and travel, as well as the anxiety about contracting COVID-19. In this retrospective case-control study, we aim to evaluate the motor and cognitive effects of balance TR training carried out with a sensor-based non-immersive virtual reality system compared to conventional rehabilitation in chronic stroke patients. Methods: Twenty chronic post-stroke patients underwent evaluation for inclusion in the analysis through an electronic recovery data system. The patients included in the study were divided into two groups with similar medical characteristics and duration of rehabilitation training. However, the groups differed in the type of rehabilitation approach used. The experimental group (EG) received TR with a sensor-based VR device, called VRRS—HomeKit (n = 10). In contrast, the control group (CG) underwent conventional home-based rehabilitation (n = 10). Results: At the end of the training, we observed significant improvements in the EG in the 10-m walking test (10MWT) (p = 0.01), Timed-Up-Go Left (TUG L) (p = 0.01), and Montreal Cognitive Assessment (MoCA) (p = 0.005). Conclusions: In our study, we highlighted the potential role of sensor-based virtual reality TR in chronic stroke patients for improving lower limb function, suggesting that this approach is feasible and not inferior to conventional home-based rehabilitation.

38 pages, 7211 KB  
Article
Cross-Context Stress Detection: Evaluating Machine Learning Models on Heterogeneous Stress Scenarios Using EEG Signals
by Omneya Attallah, Mona Mamdouh and Ahmad Al-Kabbany
AI 2025, 6(4), 79; https://doi.org/10.3390/ai6040079 - 14 Apr 2025
Cited by 5 | Viewed by 3626
Abstract
Background/Objectives: This article addresses the challenge of stress detection across diverse contexts. Mental stress is a worldwide concern that substantially affects human health and productivity, rendering it a critical research challenge. Although numerous studies have investigated stress detection through machine learning (ML) techniques, there has been limited research on assessing ML models trained in one context and utilized in another. The objective of ML-based stress detection systems is to create models that generalize across various contexts. Methods: This study examines the generalizability of ML models employing EEG recordings from two stress-inducing contexts: mental arithmetic evaluation (MAE) and virtual reality (VR) gaming. We present a data collection workflow and publicly release a portion of the dataset. Furthermore, we evaluate classical ML models and their generalizability, offering insights into the influence of training data on model performance, data efficiency, and related expenses. EEG data were acquired leveraging MUSE-STM hardware during stressful MAE and VR gaming scenarios. The methodology entailed preprocessing EEG signals using wavelet denoising with different mother wavelets, assessing individual and aggregated sensor data, and employing three ML models—linear discriminant analysis (LDA), support vector machine (SVM), and K-nearest neighbors (KNN)—for classification purposes. Results: In Scenario 1, where MAE was employed for training and VR for testing, the TP10 electrode attained an average accuracy of 91.42% across all classifiers and participants, whereas the SVM classifier achieved the highest average accuracy of 95.76% across all participants. In Scenario 2, adopting VR data as the training data and MAE data as the testing data, the maximum average accuracy achieved was 88.05% with the combination of TP10, AF8, and TP9 electrodes across all classifiers and participants, whereas the LDA model attained the peak average accuracy of 90.27% among all participants. The optimal performance was achieved with Symlets 4 and Daubechies-2 for Scenarios 1 and 2, respectively. Conclusions: The results demonstrate that although ML models exhibit generalization capabilities across stressors, their performance is significantly influenced by the alignment between training and testing contexts, as evidenced by systematic cross-context evaluations using an 80/20 train–test split per participant and quantitative metrics (accuracy, precision, recall, and F1-score) averaged across participants. The observed variations in performance across stress scenarios, classifiers, and EEG sensors provide empirical support for this claim.
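The cross-context protocol (train on one stress context, test on the other) can be sketched with a stand-in classifier. The nearest-centroid model and toy band-power features below are illustrative only, not the LDA/SVM/KNN pipelines used in the study:

```python
def nearest_centroid_fit(X, y):
    """Per-class mean of feature vectors (a toy stand-in classifier)."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)]
            for c, rows in groups.items()}

def nearest_centroid_predict(centroids, x):
    def sq_dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda c: sq_dist(centroids[c], x))

def cross_context_accuracy(train, test):
    """Fit on one context's (X, y), score accuracy on the other's."""
    centroids = nearest_centroid_fit(*train)
    X_test, y_test = test
    hits = sum(nearest_centroid_predict(centroids, x) == y
               for x, y in zip(X_test, y_test))
    return hits / len(y_test)

# Toy single-electrode band-power features -> stress labels.
mae = ([(0.2,), (0.3,), (0.8,), (0.9,)], ["rest", "rest", "stress", "stress"])
vr  = ([(0.25,), (0.85,)],               ["rest", "stress"])
print(cross_context_accuracy(mae, vr))  # 1.0
```

Swapping the `mae` and `vr` arguments gives the Scenario 2 direction (VR for training, MAE for testing).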

14 pages, 1291 KB  
Article
The Effects of Virtual Reality-Based Task-Oriented Movement on Upper Extremity Function in Healthy Individuals: A Crossover Study
by Tuba Maden, Halil İbrahim Ergen, Zarife Pancar, Antonio Buglione, Johnny Padulo, Gian Mario Migliaccio and Luca Russo
Medicina 2025, 61(4), 668; https://doi.org/10.3390/medicina61040668 - 4 Apr 2025
Cited by 3 | Viewed by 2133
Abstract
Background and Objectives: Although virtual reality (VR) has been shown to be effective in rehabilitation through motor learning principles, its impact on upper extremity function, particularly in the context of console use, remains unclear. Materials and Methods: This study aimed to investigate the effects of VR-based task-oriented movement on the upper extremity of healthy individuals. A total of 26 healthy individuals performed task-oriented movements in both real and virtual environments in a randomized order. All participants completed a single session of task-oriented movements using a VR Goggle system in a virtual setting. Physiotherapists designed immersive VR-based experiences and 3D screen-based exergames for this study. Upper extremity function was assessed using several measures: joint position sense (JPS) of the wrist and shoulder was evaluated using a universal goniometer, reaction time was measured via a mobile application, and gross manual dexterity was assessed using the box-and-block test (BBT). Evaluations were conducted before and after the interventions. Results: The results showed that JPS remained similar between conditions, while BBT performance improved in both groups. However, the reaction time increased significantly only after VR intervention (p < 0.05). No significant period or carryover effects were observed across the parameters. These findings suggest that VR-based task-oriented training positively influences reaction time and supports hand function. Moreover, VR systems that simulate joint position sense similar to real-world conditions may be beneficial for individuals with musculoskeletal motor deficits. Conclusions: These results highlight the potential for integrating VR technology into rehabilitation programs for patients with neurological or orthopedic impairments, providing a novel tool for enhancing upper extremity function and injury prevention strategies.
(This article belongs to the Special Issue Advancement in Upper Limb Rehabilitation and Injury Prevention)

29 pages, 40685 KB  
Article
Evaluating the Benefits and Drawbacks of Visualizing Systems Modeling Language (SysML) Diagrams in the 3D Virtual Reality Environment
by Mostafa Lutfi and Ricardo Valerdi
Systems 2025, 13(4), 221; https://doi.org/10.3390/systems13040221 - 23 Mar 2025
Cited by 4 | Viewed by 3374
Abstract
Model-Based Systems Engineering (MBSE) prioritizes system design through models rather than documents, and it is implemented with the Systems Modeling Language (SysML), which is the state-of-the-art language in academia and industry. Virtual Reality (VR), an immersive visualization technology, can simulate reality in virtual environments with varying degrees of fidelity. In recent years, the technology industry has invested substantially in the development of head-mounted displays (HMDs) and related virtual reality (VR) technologies. Various research has suggested that VR-based immersive design reviews enhance system issue/fault identification, collaboration, focus, and presence compared to non-immersive approaches. Additionally, several research efforts have demonstrated that the VR environment provides higher understanding and knowledge retention levels than traditional approaches. In recent years, multiple attempts have been made to visualize conventional 2D SysML diagrams in a virtual reality environment. To the best of the authors' knowledge, no empirical evaluation has been performed to analyze the benefits and drawbacks of visualizing SysML diagrams in a VR environment. Hence, the authors aimed to evaluate four key benefit types and drawbacks through experiments with human subjects. The authors chose four benefit types (Systems Understanding, Information Sharing, Modeling and Training Experience, and Digital Twin) based on the MBSE value and benefits review performed by researchers and benefits claimed by the evaluations for similar visual formalism languages. Experiments were conducted to compare the understanding, interaction, and knowledge retention for 3D VR and conventional 2D SysML diagrams. The authors chose a ground-based telescope system as the system of interest (SOI) for system modeling. The authors utilized a standalone wireless HMD unit for a virtual reality experience, which enabled experiments to be conducted irrespective of location. Students and experts from multiple disciplines, including systems engineering, participated in the experiment and provided their opinions on the VR SysML implementation. The knowledge test, perceived evaluation results, and post-completion surveys were analyzed to determine whether the 3D VR SysML implementation improved these benefits and identified potential drawbacks. The authors utilized a few VR scenario efficacy measures, namely the Simulation Sickness Questionnaire (SSQ) and System Usability Scale (SUS), to avoid evaluation design-related anomalies.

32 pages, 13506 KB  
Article
VR Co-Lab: A Virtual Reality Platform for Human–Robot Disassembly Training and Synthetic Data Generation
by Yashwanth Maddipatla, Sibo Tian, Xiao Liang, Minghui Zheng and Beiwen Li
Machines 2025, 13(3), 239; https://doi.org/10.3390/machines13030239 - 17 Mar 2025
Cited by 5 | Viewed by 4599
Abstract
This research introduces a virtual reality (VR) training system for improving human–robot collaboration (HRC) in industrial disassembly tasks, particularly for e-waste recycling. Conventional training approaches frequently fail to provide sufficient adaptability, immediate feedback, or scalable solutions for complex industrial workflows. The implementation leverages Quest Pro’s body-tracking capabilities to enable ergonomic, immersive interactions with planned eye-tracking integration for improved interactivity and accuracy. The Niryo One robot aids users in hands-on disassembly while generating synthetic data to refine robot motion planning models. A Robot Operating System (ROS) bridge enables the seamless simulation and control of various robotic platforms using Unified Robotics Description Format (URDF) files, bridging virtual and physical training environments. A Long Short-Term Memory (LSTM) model predicts user interactions and robotic motions, optimizing trajectory planning and minimizing errors. Monte Carlo dropout-based uncertainty estimation enhances prediction reliability, ensuring adaptability to dynamic user behavior. Initial technical validation demonstrates the platform’s potential, with preliminary testing showing promising results in task execution efficiency and human–robot motion alignment, though comprehensive user studies remain for future work. Limitations include the lack of multi-user scenarios, potential tracking inaccuracies, and the need for further real-world validation. This system establishes a sandbox training framework for HRC in disassembly, leveraging VR and AI-driven feedback to improve skill acquisition, task efficiency, and training scalability across industrial applications.
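Monte Carlo dropout, mentioned above, keeps dropout active at inference time and reads uncertainty off the spread of repeated stochastic predictions. A sketch with a toy stochastic predictor standing in for the paper's LSTM (names and dropout rate are illustrative assumptions):

```python
import random

def mc_dropout_estimate(stochastic_predict, x, n_samples=100, seed=0):
    """Run a dropout-enabled model repeatedly on the same input and
    return the sample mean (prediction) and variance (uncertainty).
    """
    rng = random.Random(seed)
    samples = [stochastic_predict(x, rng) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, var

# Toy predictor: a single unit dropped 20% of the time, with
# inverted-dropout scaling (an illustrative stand-in network).
def toy_predict(x, rng):
    keep = 1.0 if rng.random() > 0.2 else 0.0
    return 0.5 * x * keep / 0.8

mean, var = mc_dropout_estimate(toy_predict, 2.0, n_samples=1000)
print(round(mean, 2), var > 0.0)
```

A downstream planner could treat high variance as "the user's motion is hard to predict here" and fall back to a conservative trajectory.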

14 pages, 6544 KB  
Article
Multi-Mode Hand Gesture-Based VR Locomotion Technique for Intuitive Telemanipulation Viewpoint Control in Tightly Arranged Logistic Environments
by Jaehoon Jeong, Haegyeom Choi and Donghun Lee
Sensors 2025, 25(4), 1181; https://doi.org/10.3390/s25041181 - 14 Feb 2025
Cited by 2 | Viewed by 1972
Abstract
Telemanipulation-based object-side picking with a suction gripper often faces challenges such as occlusion of the target object or the gripper and the need for precise alignment between the suction cup and the object’s surface. These issues can significantly affect task success rates in logistics environments. To address these problems, this study proposes a multi-mode hand gesture-based virtual reality (VR) locomotion method to enable intuitive and precise viewpoint control. The system utilizes a head-mounted display (HMD) camera to capture hand skeleton data, which a multi-layer perceptron (MLP) model processes. The model classifies gestures into three modes: translation, rotation, and fixed, corresponding to fist, pointing, and unknown gestures, respectively. Translation mode moves the viewpoint based on the wrist’s displacement, rotation mode adjusts the viewpoint’s angle based on the wrist’s angular displacement, and fixed mode stabilizes the viewpoint when gestures are ambiguous. A dataset of 4312 frames was used for training and validation, with 666 frames for testing. The MLP model achieved a classification accuracy of 98.4%, with precision, recall, and F1-score exceeding 0.98. These results demonstrate the system’s ability to address the challenges of telemanipulation tasks by enabling accurate gesture recognition and seamless mode transitions.
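The three-mode gesture-to-viewpoint mapping described above (fist for translation, pointing for rotation, anything else held fixed) can be sketched as a small per-frame state update; the function signature and pose representation are hypothetical simplifications:

```python
def locomotion_update(gesture, wrist_delta, wrist_angle_delta, pose):
    """Map a classified hand gesture to a viewpoint update.

    `pose` is (x, y, z, yaw); `wrist_delta` is the per-frame wrist
    displacement and `wrist_angle_delta` the angular displacement.
    """
    x, y, z, yaw = pose
    if gesture == "fist":          # translation mode
        dx, dy, dz = wrist_delta
        return (x + dx, y + dy, z + dz, yaw)
    if gesture == "pointing":      # rotation mode
        return (x, y, z, yaw + wrist_angle_delta)
    return pose                    # fixed mode: ambiguous gesture

pose = (0.0, 0.0, 0.0, 0.0)
pose = locomotion_update("fist", (0.1, 0.0, 0.2), 0.0, pose)
pose = locomotion_update("pointing", (0.0, 0.0, 0.0), 15.0, pose)
pose = locomotion_update("unknown", (9.9, 9.9, 9.9), 99.0, pose)
print(pose)  # (0.1, 0.0, 0.2, 15.0)
```

Routing ambiguous gestures to the fixed branch is what keeps the viewpoint stable when the classifier is unsure, mirroring the paper's design intent.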
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)

25 pages, 4084 KB  
Article
Effectiveness of Virtual Reality-Based Multi-Therapy Systems for Physio-Psychological Rehabilitation: A Clinical Study
by Iulia-Cristina Stanica, Simona Magdalena Hainagiu, Alberta Milicu, Maria-Iuliana Dascalu and Giovanni-Paul Portelli
Appl. Sci. 2024, 14(19), 9093; https://doi.org/10.3390/app14199093 - 8 Oct 2024
Cited by 6 | Viewed by 5908
Abstract
The worldwide increase in the number of disorders requiring rehabilitation is weighing more and more on healthcare systems, seriously affecting the quality of life of patients. Emergent technologies and techniques should be used more and more in both physical and psychological rehabilitation, after a thorough study of their potential and effects. Our paper presents an original virtual reality-based system including gamified immersive physio-psychological exercises, which was tested in a clinical setting with 25 patients suffering from various musculoskeletal, neuromotor, or mental disorders. A thorough testing protocol was followed during a two-week period, including repeated trials, progress tracking, and objective and subjective instruments used for data collection. A statistical analysis helped us identify interesting correlations between complex virtual reality games and people’s performance, the high level of relaxation and stress relief (4.57 out of 5 across all games) which can be offered by VR-based psychotherapy exercises, and the increased ease of use (4.26 out of 5 perceived across all games) of properly designed training exercises regardless of patients’ level of VR experience (84% of patients had no or low experience, and none had high experience).
17 pages, 14390 KB  
Article
Scan-to-HBIM-to-VR: An Integrated Approach for the Documentation of an Industrial Archaeology Building
by Maria Alessandra Tini, Anna Forte, Valentina Alena Girelli, Alessandro Lambertini, Domenico Simone Roggio, Gabriele Bitelli and Luca Vittuari
Remote Sens. 2024, 16(15), 2859; https://doi.org/10.3390/rs16152859 - 5 Aug 2024
Cited by 15 | Viewed by 3956
Abstract
In this paper, we propose a comprehensive and optimised workflow for the documentation and the future maintenance and management of a historical building, integrating the state of the art of different techniques, in the challenging context of industrial archaeology. This approach has been applied to the hydraulic structure of the “Sostegno del Battiferro” in Bologna, Italy, an example of built industrial heritage whose construction began in 1439 and which is still in active use today to control the water flow rate of the Navile canal. The initial step was the definition of a 3D topographic frame, including geodetic measurements, which served as a reference for the complete 3D survey integrating Terrestrial Laser Scanning (TLS), Structured Light Projection scanning, and the photogrammetric processing of Unmanned Aircraft System (UAS) imagery through a Structure from Motion (SfM) approach. The resulting 3D point cloud supported as-built parametric modelling (Scan-to-BIM), with the consequent extraction of plans and sections. Finally, the generated Heritage/Historic Building Information Modelling (HBIM) model was rendered and tested for a VR-based immersive experience. Building Information Modelling (BIM) and virtual reality (VR) applications were tested as a support for the management of the building, the maintenance of the hydraulic system, and the training of qualified technicians. In addition, considering the historical value of the surveyed building, the methodology was also applied for dissemination purposes. Full article

26 pages, 4468 KB  
Article
Virtual Reality Application for the Safety Improvement of Intralogistics Systems
by Konrad Lewczuk and Patryk Żuchowicz
Sustainability 2024, 16(14), 6024; https://doi.org/10.3390/su16146024 - 15 Jul 2024
Cited by 12 | Viewed by 3752
Abstract
Immersive technologies from the spectrum of Industry 4.0, such as Virtual Reality (VR), are increasingly used in research and safety analysis in industrial and intralogistics systems, including distribution warehouses and production plants. Safety in intralogistics systems is influenced by design and management processes, human behavior, and device performance. In all these areas, VR can serve as a supportive technology for visualization, testing, and employee training. However, this requires the development of principles for integrating VR into standard procedures for the design, modernization, and analysis of intralogistics and production systems. This article discusses the use of VR to analyze the occupational and functional safety of intralogistics systems. It reviews the literature and VR implementations aimed at examining and improving safety in industrial systems. The article explores the integration of VR into the design and analysis procedures for intralogistics and production systems. The authors present a five-dimensional decision space for assessing the use of VR, including identifying subjects of safety analysis, threats and hazards specific to intralogistics, countermeasures for these threats, factors affecting safety, and mechanisms by which VR can improve safety in intralogistics systems. As a subsequent step, the authors discuss using universal simulation environments that support VR technology to study and enhance safety in intralogistics systems, providing a framework example based on the FlexSim (2023 update 2) environment. Finally, this article addresses the threats and limitations of VR technology, along with the challenges and future prospects of VR in the context of Industry 4.0. The article concludes that VR can be an essential tool for increasing safety in the future, albeit with some reservations about certain features of this technology. Full article
(This article belongs to the Section Sustainable Engineering and Science)
