Search Results (2,443)

Search Parameters:
Keywords = virtual human

22 pages, 2016 KB  
Review
Human-Centred Design (HCD) in Enhancing Dementia Care Through Assistive Technologies: A Scoping Review
by Fanke Peng, Kate Little and Lin Liu
Digital 2025, 5(4), 51; https://doi.org/10.3390/digital5040051 - 2 Oct 2025
Abstract
Background: Dementia is a progressive neurodegenerative condition that impairs cognitive functions such as memory, language comprehension, and problem-solving. Assistive technologies can provide vital support at various stages of dementia, significantly improving the quality of life by aiding daily activities and care. However, for these technologies to be effective and widely adopted, a human-centred design (HCD) approach is essential to both their development and evaluation. Objectives: This scoping review aims to explore how HCD principles have been applied in the design of assistive technologies for people with dementia and to identify the extent and nature of their involvement in the design process. Eligibility Criteria: Studies published between 2017 and 2025 were included if they applied HCD methods in the design of assistive technologies for individuals at any stage of dementia. Priority was given to studies that directly involved people with dementia in the design or evaluation process. Sources of Evidence: A systematic search was conducted across the Web of Science, JSTOR, Scopus, and ProQuest databases. Charting Methods: Articles were screened in two stages: title/abstract screening (n = 350) and full-text review (n = 89). Data from eligible studies (n = 49) were extracted and thematically analysed to identify design approaches, types of technologies, and user involvement. Results: The 49 included studies covered a variety of assistive technologies, such as robotic systems, augmented and virtual reality tools, mobile applications, and Internet of Things (IoT) devices. A wide range of HCD approaches were employed, with varying degrees of user involvement. Conclusions: HCD plays a critical role in enhancing the development and effectiveness of assistive technologies for dementia care.
The review underscores the importance of involving people with dementia and their carers in the design process to ensure that solutions are practical, meaningful, and capable of improving quality of life. However, several key gaps remain. There is no standardised HCD framework for healthcare, stakeholder involvement is often inconsistent, and evidence on real-world impact is limited. Addressing these gaps is crucial to advancing the field and delivering scalable, sustainable innovations. Full article

33 pages, 3660 KB  
Review
Converging Extended Reality and Robotics for Innovation in the Food Industry
by Seongju Woo, Youngjin Kim and Sangoh Kim
AgriEngineering 2025, 7(10), 322; https://doi.org/10.3390/agriengineering7100322 - 1 Oct 2025
Abstract
Extended Reality (XR) technologies—including Virtual Reality, Augmented Reality, and Mixed Reality—are increasingly applied in the food industry to simulate sensory environments, support education, and influence consumer behavior, while robotics addresses labor shortages, hygiene, and efficiency in production. This review uniquely synthesizes their convergence through digital twin frameworks, combining XR’s immersive simulations with robotics’ precision and scalability. A systematic literature review and keyword co-occurrence analysis of over 800 titles revealed research clusters around consumer behavior, nutrition education, sensory experience, and system design. In parallel, robotics has expanded beyond traditional pick-and-place tasks into areas such as precision cleaning, chaotic mixing, and digital gastronomy. The integration of XR and robotics offers synergies including risk-free training, predictive task validation, and enhanced human–robot interaction but faces hurdles such as high hardware costs, motion sickness, and usability constraints. Future research should prioritize interoperability, ergonomic design, and cross-disciplinary collaboration to ensure that XR–robotics systems evolve not merely as tools, but as a paradigm shift in redefining the human–food–environment relationship. Full article

26 pages, 6503 KB  
Article
Acai Berry Extracts Can Mitigate the L-Glutamate-Induced Neurotoxicity Mediated by N-Methyl-D-Aspartate Receptors
by Maryam N. ALNasser, Nirmal Malik, Abrar Ahmed, Amy Newman, Ian R. Mellor and Wayne G. Carter
Brain Sci. 2025, 15(10), 1073; https://doi.org/10.3390/brainsci15101073 - 1 Oct 2025
Abstract
Background/Objectives: Stroke is the second leading cause of death worldwide. There is an unmet need to manage stroke pathophysiology, including L-glutamate (L-Glu)-mediated neurotoxicity. The acai berry (Euterpe sp.) contains phytochemicals with potential nutraceutical value. The aim of this study was to assess the ability of acai berry extracts to counter L-Glu neurotoxicity using human differentiated TE671 cells. Methods: The cytotoxicity of L-Glu and acai berry extracts was quantified using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lactate dehydrogenase (LDH) assays. Mitochondrial function was examined by a quantitation of cellular ATP levels, the maintenance of the mitochondrial membrane potential (MMP), and the production of reactive oxygen species (ROS). Whole-cell patch-clamp recordings monitored the activation of N-methyl-D-aspartate receptors (NMDARs). Candidate phytochemicals from acai berry extracts were modeled in silico for NMDAR binding. Results: L-Glu significantly reduced cell viability, ATP levels, and the MMP, and increased cellular ROS. Generally, acai berry extracts alone were not cytotoxic, although high concentrations impaired ATP production and maintenance of the MMP and elevated ROS levels. Whole-cell patch-clamp recordings revealed that the combined addition of 300 µM L-Glu and 10 µM glycine activated currents in differentiated TE671 cells, consistent with triggering NMDAR activity. Acai berry extracts ameliorated the L-Glu-induced cytotoxicity, mitochondrial dysfunction, and elevated ROS levels, and limited the NMDAR-mediated excitotoxicity (p < 0.001–0.0001). Several virtual ligands from acai berry extracts exhibited high-affinity NMDAR binding (arginine, 2,5-dihydroxybenzoic acid, threonine, protocatechuic acid, and histidine) as possible candidate receptor antagonists.
Conclusions: Acai berry phytochemicals could be exploited to reduce the L-Glu-induced neurotoxicity often observed in stroke and other neurodegenerative diseases. Full article
(This article belongs to the Section Neuropharmacology and Neuropathology)

27 pages, 10581 KB  
Article
Maintaining Dynamic Symmetry in VR Locomotion: A Novel Control Architecture for a Dual Cooperative Five-Bar Mechanism-Based ODT
by Halit Hülako
Symmetry 2025, 17(10), 1620; https://doi.org/10.3390/sym17101620 - 1 Oct 2025
Abstract
Natural and unconstrained locomotion remains a fundamental challenge in creating truly immersive virtual reality (VR) experiences. This paper presents the design and control of a novel robotic omnidirectional treadmill (ODT) based on the bilateral symmetry of two cooperative five-bar planar mechanisms designed to replicate realistic walking mechanics. The central contribution is a human-in-the-loop control strategy designed to achieve stable walking in place. This framework actively repositions the footplates along a dynamically defined ‘Line of Movement’ (LoM), compensating for the user’s motion to ensure the midpoint between the feet remains stabilized and symmetrical at the platform’s geometric center. A comprehensive dynamic model of both the ODT and a coupled humanoid robot was developed to validate the system. Numerical simulations demonstrate robust performance across various gaits, including turning and catwalks, maintaining the user’s locomotion center with a maximum resultant drift error of 11.65 cm, a peak value that occurred momentarily during a turning motion and remained well within the ODT’s safe operational boundaries, with peak errors along any single axis remaining below 9 cm. The system operated with notable efficiency, requiring RMS torques below 22 Nm for the primary actuators. This work establishes a viable dynamic and control architecture for foot-tracking ODTs, paving the way for future enhancements such as haptic terrain feedback and elevation simulation. Full article
(This article belongs to the Special Issue Applications Based on Symmetry/Asymmetry in Control Engineering)
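The recentring principle described in this abstract (drive the midpoint between the two footplates back to the platform's geometric center) can be sketched as a toy proportional compensation step. This is illustrative only: the paper's actual LoM-based controller is far more involved, and the gain, coordinates, and step structure below are assumptions, not the authors' design.

```python
def recenter_step(left, right, center=(0.0, 0.0), gain=0.5):
    """One proportional compensation step: shift both footplates by a
    fraction of the midpoint's drift from the platform center, so the
    midpoint converges toward the center over repeated steps."""
    mid = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    dx = gain * (center[0] - mid[0])
    dy = gain * (center[1] - mid[1])
    return (left[0] + dx, left[1] + dy), (right[0] + dx, right[1] + dy)

# Hypothetical footplate positions (m); the midpoint starts off-center
# and is pulled back toward (0, 0) step by step.
left, right = (0.0, 0.1), (0.2, 0.0)
for _ in range(5):
    left, right = recenter_step(left, right)
mid = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
print(mid)  # residual drift shrinks by a factor of 2**5
```

With gain = 0.5, each step halves the remaining drift; a real controller would act on footplate velocities and respect actuator limits.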

20 pages, 1951 KB  
Article
Virtual Prototyping of the Human–Robot Ecosystem for Multiphysics Simulation of Upper Limb Motion Assistance
by Rocco Adduci, Francesca Alvaro, Michele Perrelli and Domenico Mundo
Machines 2025, 13(10), 895; https://doi.org/10.3390/machines13100895 - 1 Oct 2025
Abstract
As stroke is becoming more frequent nowadays, cutting-edge rehabilitation approaches are required to recover upper limb functionalities and to support patients during daily activities. Recently, focus has moved to robotic rehabilitation; however, therapeutic devices are still highly expensive, making rehabilitation not easily affordable. Moreover, devices are not easily accepted by patients, who can refuse to use them due to not feeling comfortable. The presented work proposes the exploitation of a virtual prototype of the human–robot ecosystem for the study and analysis of patient–robot interactions, enabling their simulation-based investigation in multiple scenarios. To accomplish this task, the Dynamics of Multi-physical Systems platform, previously presented by the authors, is further developed to enable the integration of biomechanical models of the human body with mechatronic models of robotic devices for motion assistance, as well as with PID-based control strategies. The work begins with (1) a description of the background, i.e., the current state of the art and the purpose of the study; (2) the platform is then presented and the system is formalized, first in general terms and then (3) in the application-specific scenario. (4) The use case is described, presenting a controlled gym weightlifting exercise supported by an exoskeleton, and (5) the results are analyzed in a final section. Full article

17 pages, 1475 KB  
Systematic Review
Exploring Neuroscientific Approaches to Architecture: Design Strategies of the Built Environment for Improving Human Performance
by Erminia Attaianese, Morena Barilà and Mariangela Perillo
Buildings 2025, 15(19), 3524; https://doi.org/10.3390/buildings15193524 - 1 Oct 2025
Abstract
Since the 1960s, theories on the relationship between people and their environment have explored how elements of the built environment may directly or indirectly influence human behavior. In this context, neuroarchitecture is emerging as an interdisciplinary field that integrates neuroscience, architecture, environmental psychology, and cognitive science, with the aim of providing empirical evidence on how architectural spaces affect the human brain. This study investigates the potential of neuroarchitecture to inform environmental design by clarifying its current conceptual framework, examining its practical applications, and identifying the context in which it is being implemented. Beginning with an in-depth analysis of the definition of neuroarchitecture, its theoretical foundations, and the range of interpretations within the academic community, the study then offers a critical review of its practical applications across various design fields. By presenting a comprehensive overview of this emerging discipline, the study also summarizes the measurement techniques commonly employed in related research and critically evaluates design criteria based on observed human responses. Ultimately, neuroarchitecture represents a promising avenue for creating environments that deliberately enhance psychological and physiological well-being, paving the way toward truly human-centered design. Nevertheless, neuroarchitecture is still an emerging experimental field, which entails significant limitations. The experiments conducted are still limited to virtual reality and controlled experimental contexts. In addition, small and heterogeneous population samples have been tested, without considering human variability. Full article

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design is to decouple the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images that provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets—front, side, and back—and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the issue of blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirements, significantly lowering the adoption barrier for 3D VTON technology.
Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)

16 pages, 1756 KB  
Article
The Effects of Vibrotactile Stimulation of the Upper Extremity on Sensation and Perception: A Study for Enhanced Ergonomic Design
by Abeer Abdel Khaleq, Yash More, Brody Skaufel and Mazen Al Borno
Theor. Appl. Ergon. 2025, 1(2), 8; https://doi.org/10.3390/tae1020008 - 29 Sep 2025
Abstract
Vibrotactile stimulation has applications in a variety of fields, including medicine, virtual reality, and human–computer interaction. Eccentric Rotating Mass (ERM) vibrating motors are widely used in wearable haptic devices owing to their small size, low cost, and low energy consumption. User experience with vibrotactile stimulation is an important factor in ergonomic design for these applications. The effects of ERM motor vibrations on upper-extremity sensation and perception, which are important in the design of better wearable haptic devices, have not been thoroughly studied previously. Our study focuses on the relationship between user sensation and perception and different vibration parameters, including frequency, location, and number of motors. We conducted experiments with vibrotactile stimulation on 15 healthy participants while the subjects were both at rest and in motion to capture different use cases of haptic devices. Eight motors were placed on a consistent set of muscles in the subjects’ upper extremities, and one motor was placed on their index fingers. We found a significant correlation between voltage and sensation intensity (r = 0.39). This finding is important in the design and safety of customized haptic devices. However, we did not find a significant aggregate-level correlation with the perceived pleasantness of the stimulation. The sensation intensity varied based on the location of the vibration on the upper extremities (with the lowest intensities on the triceps brachii and brachialis) and slightly decreased (5.9 ± 2.9%) when the participants performed reaching movements. When a single motor was vibrating, the participants’ accuracy in identifying the motor without visual feedback increased as the voltage increased, reaching up to 81.4 ± 14.2%. When we stimulated three muscles simultaneously, we found that most participants were able to identify only two out of three vibrating motors (41.7 ± 32.3%).
Our findings can help identify stimulation parameters for the ergonomic design of haptic devices. Full article
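The voltage–sensation relationship in this abstract is summarized by a Pearson correlation coefficient (r = 0.39). As a minimal illustration of what that statistic measures, with made-up trial data rather than the study's, r can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-trial data: drive voltage (V) and a 0-10 sensation rating.
voltage = [1.5, 2.0, 2.5, 3.0, 3.5]
sensation = [2.0, 3.0, 3.0, 5.0, 6.0]
print(round(pearson_r(voltage, sensation), 2))  # 0.96
```

A value near +1 means sensation ratings rise almost linearly with voltage; the study's aggregate r = 0.39 indicates a weaker, but still significant, positive association.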

25 pages, 1278 KB  
Review
Eye-Tracking Advancements in Architecture: A Review of Recent Studies
by Mário Bruno Cruz, Francisco Rebelo and Jorge Cruz Pinto
Buildings 2025, 15(19), 3496; https://doi.org/10.3390/buildings15193496 - 28 Sep 2025
Abstract
This Scoping Review (ScR) synthesizes advances in architectural eye-tracking (ET) research published between 2010 and 2024. Drawing on 75 peer-reviewed studies that met clear inclusion criteria, it monitors the field’s rapid expansion, from only 20 experiments before 2018 to more than 45 new investigations in the three years thereafter, situating these developments within the longer historical evolution of ET hardware and analytical paradigms. The review maps 13 recurrent areas of application, focusing on design evaluation, wayfinding and spatial navigation, end-user experience, and architectural education. Across these domains, ET reliably reveals where occupants focus, for how long, and in what sequence, providing objective evidence that complements designer intuition and conventional post-occupancy surveys. Experts and novices might display distinct gaze signatures; for example, architects spend longer fixating on contextual and structural cues, whereas lay users dwell on decorative details, highlighting possible pedagogical opportunities. Despite these benefits, persistent challenges include data loss in dynamic or outdoor settings, calibration drift, single-user hardware constraints, and the need to triangulate gaze metrics with cognitive or affective measures. Future research directions emphasize integrating ET with virtual or augmented reality (VR/AR) to validate designs interactively, improving mobile tracking accuracy, and establishing shared datasets to enable replication and meta-analysis. Overall, the study demonstrates that ET is maturing into an indispensable, evidence-based lens for creating more intuitive, legible, and human-centered architecture. Full article
(This article belongs to the Special Issue Emerging Trends in Architecture, Urbanization, and Design)

19 pages, 2387 KB  
Article
A Detailed Review of the Design and Evaluation of XR Applications in STEM Education and Training
by Magesh Chandramouli, Aleeha Zafar and Ashayla Williams
Electronics 2025, 14(19), 3818; https://doi.org/10.3390/electronics14193818 - 26 Sep 2025
Abstract
Extended reality (XR) technologies—including augmented reality (AR), virtual reality (VR), mixed reality (MR), and desktop virtual reality (dVR)—are rapidly advancing STEM education by providing immersive and interactive learning experiences. Despite their potential, many XR applications lack consistent design grounded in human–computer interaction (HCI), leading to challenges in usability, engagement, and learning outcomes. Through a comprehensive analysis of 50 peer-reviewed studies, this paper reveals both strengths and limitations in current implementations and suggests improvements for reducing cognitive load and enhancing engagement. To support this analysis, we draw briefly on a dual-phase learning model (L1–L2), which distinguishes between interface learning (L1) and conceptual or procedural learning (L2). By aligning theoretical insights with practical HCI strategies, the discussions from this study are intended to offer actionable insights for educators and developers on XR design for STEM education. Based on a detailed analysis of the articles, this paper finally makes recommendations to educators and developers on important considerations and limitations concerning the optimal use of XR technologies in STEM education. The guidelines for design proposed by this review offer directions for developers intending to build XR frameworks that effectively improve presence, interaction, and immersion whilst considering affordability and accessibility. Full article

23 pages, 1708 KB  
Review
Grasping in Shared Virtual Environments: Toward Realistic Human–Object Interaction Through Review-Based Modeling
by Nicole Christoff, Nikolay N. Neshov, Radostina Petkova, Krasimir Tonchev and Agata Manolova
Electronics 2025, 14(19), 3809; https://doi.org/10.3390/electronics14193809 - 26 Sep 2025
Abstract
Virtual communication, involving the transmission of all human senses, is the next step in the development of telecommunications. Achieving this vision requires real-time data exchange with low latency, which in turn necessitates the implementation of the Tactile Internet (TI). TI will ensure the transmission of high-quality tactile data, especially when combined with audio and video signals, thus enabling more realistic interactions in virtual environments. In this context, advances in realism increasingly depend on the accurate simulation of the grasping process and hand–object interactions. To address this, in this paper, we methodically present the challenges of human–object interaction in virtual environments, together with a detailed review of the datasets used in grasping modeling and the integration of physics-based and machine learning approaches. Based on this review, we propose a multi-step framework that simulates grasping as a series of biomechanical, perceptual, and control processes. The proposed model aims to support realistic human interaction with virtual objects in immersive settings and to enable integration into applications such as remote manipulation, rehabilitation, and virtual learning. Full article
(This article belongs to the Section Computer Science & Engineering)

12 pages, 219 KB  
Article
The Future of Nostalgia: Loss and Absence in the Age of Algorithmic Temporality
by Silvia Pierosara
Humanities 2025, 14(10), 187; https://doi.org/10.3390/h14100187 - 25 Sep 2025
Abstract
For human beings, accepting loss and absence is a constant effort, particularly when it comes to accepting their own finitude, which becomes apparent as time passes and people leave us. This is closely linked to nostalgia and the processes of remembrance. While there are many nuances, we can distinguish between destructive and constructive nostalgia. The former cannot accept absence or the passage of time and deludes itself into thinking that it can recover what has been lost. The latter recognizes the temptation to recover everything, but knows that this is impossible, and accepts that the past can only be preserved by transforming it into something else. Contemporary technologies that use algorithms can exacerbate the former tendency by manipulating memory processes and distorting the meaning of the virtual. The aim of this contribution is to shed light on the dynamics and implications of nostalgia as it is influenced by algorithms. To this end, it is divided into three stages. In the first stage, nostalgia is examined for its “restraining” power in relation to deterministically progressive philosophies of history, also through a reference to the original philosophical meaning of the term ‘virtual’. In the second stage, the relation to progress is thematized through a reflection on technologies and artificial intelligence, which use algorithms and devour our data. In the third stage, it is shown how thinking about nostalgia and artificial and algorithmic ‘intelligence(s)’ can be a valuable test case for distinguishing between the uses and abuses of nostalgia, between constructive nostalgia and destructive nostalgia. Full article
15 pages, 2559 KB  
Article
Quasi-Static and Dynamic Measurement Capabilities Provided by an Electromagnetic Field-Based Sensory Glove
by Giovanni Saggio, Luca Pietrosanti, I-Jung Lee and Bor-Shing Lin
Biosensors 2025, 15(10), 640; https://doi.org/10.3390/bios15100640 - 25 Sep 2025
Abstract
The sensory glove (also known as a data glove or instrumented glove) plays a key role in measuring and tracking hand dexterity. It has been adopted in a variety of domains, including medicine, robotics, virtual reality, and human–computer interaction, to assess hand motor skills and to improve control accuracy. However, no particular technology has been established as the most suitable for all domains, so different sensory gloves have been developed, adopting different sensors based mainly on optic, electric, magnetic, or mechanical properties. This work investigates the performance of the MANUS Quantum sensory glove, which generates an electromagnetic field and measures its variation at the fingertips during finger flexion. Its performance is determined in terms of measurement repeatability, reproducibility, and reliability during both quasi-static and dynamic hand motor tests. Full article

15 pages, 5189 KB  
Article
Assembly Complexity Index (ACI) for Modular Robotic Systems: Validation and Conceptual Framework for AR/VR-Assisted Assembly
by Kartikeya Walia and Philip Breedon
Machines 2025, 13(10), 882; https://doi.org/10.3390/machines13100882 - 24 Sep 2025
Abstract
The growing adoption of modular robotic systems presents new challenges in ensuring ease of assembly, deployment, and reconfiguration, especially for end-users with varying technical expertise. This study proposes and validates an Assembly Complexity Index (ACI) framework, combining subjective workload (NASA Task Load Index) and task complexity (Task Complexity Index) into a unified metric to quantify assembly difficulty. Twelve participants performed modular manipulator assembly tasks under supervised and unsupervised conditions, enabling evaluation of learning effects and assembly complexity dynamics. Statistical analyses, including Cronbach’s alpha, correlation studies, and paired t-tests, demonstrated the framework’s internal consistency, sensitivity to user learning, and ability to capture workload-performance trade-offs. Additionally, we propose an augmented reality (AR) and virtual reality (VR) integration workflow to further mitigate assembly complexity, offering real-time guidance and adaptive assistance. The proposed framework not only supports design iteration and operator training but also provides a human-centered evaluation methodology applicable to modular robotics deployment in Industry 4.0 environments. The AR/VR-assisted workflow presented here is proposed as a conceptual extension and will be validated in future work. Full article
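The abstract does not state how the NASA Task Load Index and the Task Complexity Index are merged into the ACI. Purely as a sketch of one plausible scheme, and not the paper's definition, each instrument can be min-max normalized and combined as a weighted average; the weights and scales below are assumptions.

```python
def normalize(value, lo, hi):
    """Min-max normalize a score onto [0, 1]."""
    return (value - lo) / (hi - lo)

def assembly_complexity_index(tlx, tci, w_tlx=0.5, w_tci=0.5):
    """Hypothetical ACI: weighted average of a normalized NASA-TLX score
    (assumed 0-100 scale) and a task complexity index (assumed 0-10 scale).
    The actual combination rule in the paper may differ."""
    return w_tlx * normalize(tlx, 0, 100) + w_tci * normalize(tci, 0, 10)

print(round(assembly_complexity_index(tlx=62.0, tci=7.5), 3))  # 0.685
```

A unified [0, 1] index of this kind is what makes paired comparisons (e.g. supervised vs. unsupervised assembly) and learning-effect analyses straightforward.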

22 pages, 8860 KB  
Article
Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction
by Hyunsu Kim and Yunsik Son
Appl. Sci. 2025, 15(19), 10372; https://doi.org/10.3390/app151910372 - 24 Sep 2025
Abstract
Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models. Full article
(This article belongs to the Special Issue Advanced Technologies Applied for Object Detection and Tracking)
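The MPJPE figures quoted in this abstract follow standard pose-estimation usage: Mean Per Joint Position Error is the mean Euclidean distance between corresponding predicted and ground-truth joints. A minimal sketch with hypothetical joint coordinates, not the authors' code:

```python
import math

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: average Euclidean distance
    between corresponding predicted and ground-truth joints."""
    assert len(pred) == len(gt)
    dists = [
        math.dist(p, g)  # Euclidean distance for one joint (2D or 3D)
        for p, g in zip(pred, gt)
    ]
    return sum(dists) / len(dists)

# Hypothetical 3-joint example (3D coordinates).
pred = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 2.0, 0.0)]
gt = [(0.0, 0.0, 0.1), (1.0, 1.0, 0.0), (0.0, 2.0, 0.2)]
print(round(mpjpe(pred, gt), 3))  # 0.1
```

In practice the metric is averaged over all frames of a sequence, and joints are often root-aligned first; the units here match whatever units the joint coordinates use.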