Search Results (662)

Search Parameters:
Keywords = 3D augmented reality

21 pages, 1796 KB  
Systematic Review
Effects of Telerehabilitation Platforms on Quality of Life in People with Multiple Sclerosis: A Systematic Review of Randomized Clinical Trials
by Alejandro Herrera-Rojas, Andrés Moreno-Molina, Elena García-García, Naiara Molina-Rodríguez and Roberto Cano-de-la-Cuerda
NeuroSci 2025, 6(4), 103; https://doi.org/10.3390/neurosci6040103 - 13 Oct 2025
Abstract
Introduction: Multiple sclerosis (MS) is a chronic neurodegenerative disease that entails high costs, progressive disability, and reduced quality of life (QoL). Telerehabilitation (TR), supported by new technologies, is emerging as an alternative or complement to in-person rehabilitation, potentially lowering socioeconomic impact and improving QoL. Aim: The objective of this study was to evaluate the effect of TR on the QoL of people with MS compared with in-person rehabilitation or no intervention. Materials and methods: A systematic review of randomized clinical trials was conducted (March–May 2025) following PRISMA guidelines. Searches were run in the PubMed-Medline, EMBASE, PEDro, Web of Science, and Dialnet databases. Methodological quality was assessed with the CASP scale, risk of bias with the Risk of Bias 2 tool, and evidence level and grade of recommendation with the Oxford Classification. The protocol was registered in PROSPERO (CRD420251110353). Results: Of the 151 articles initially found, 12 RCTs (598 total patients) met the inclusion criteria. Interventions included (a) four studies employing video-controlled exercise (one involving Pilates to improve fitness, another involving exercise to improve fatigue and general health, and two using exercises focused on the pelvic floor muscles); (b) three studies using a monitoring app to improve manual dexterity, symptom control, and increased physical activity; (c) two studies implementing an augmented reality system to treat cognitive deficits and sexual disorders, respectively; (d) one platform with a virtual reality headset for motor and cognitive training; (e) one study focusing on video-controlled motor imagery, along with the use of a pain management app; (f) a final study addressing cognitive training and pain reduction. Studies used eight different scales to assess QoL, finding similar improvements between groups in eight of the trials and statistically significant improvements in favor of TR in four. The included trials were of good methodological quality, with a moderate-to-low risk of bias and good levels of evidence and grades of recommendation. Conclusions: TR was more effective in improving the QoL of people with MS than no intervention, was as effective as in-person treatment in patients with EDSS ≤ 6, and appeared to be more effective than in-person intervention in patients with EDSS between 5.5 and 7.5 in terms of QoL. It may also eliminate some common barriers to accessing such treatments. Full article

29 pages, 3369 KB  
Article
Longitudinal Usability and UX Analysis of a Multiplatform House Design Pipeline: Insights from Extended Use Across Web, VR, and Mobile AR
by Mirko Sužnjević, Sara Srebot, Mirta Moslavac, Katarina Mišura, Lovro Boban and Ana Jović
Appl. Sci. 2025, 15(19), 10765; https://doi.org/10.3390/app151910765 - 6 Oct 2025
Abstract
Computer-Aided Design (CAD) software has long served as a foundation for planning and modeling in Architecture, Engineering, and Construction (AEC). In recent years, the introduction of Augmented Reality (AR) and Virtual Reality (VR) has significantly reshaped the CAD landscape, offering novel interaction paradigms that bridge the gap between digital prototypes and real-world spatial understanding. These technologies have enabled users to engage with 3D architectural content in more immersive and intuitive ways, facilitating improved decision making and communication throughout design workflows. As digital design services grow more complex and span multiple media platforms—from desktop-based modeling to immersive AR/VR environments—evaluating usability and User Experience (UX) becomes increasingly challenging. This paper presents a longitudinal usability and UX study of a multiplatform house design pipeline (i.e., structured workflow for creating, adapting, and delivering house designs so they can be used seamlessly across multiple platforms) comprising a web-based application for initial house creation, a mobile AR tool for contextual exterior visualization, and VR applications that allow full-scale interior exploration and configuration. Together, these components form a unified yet heterogeneous service experience across different devices and modalities. We describe the iterative design and development of this system over three distinct phases (lasting two years), each followed by user studies which evaluated UX and usability and targeted different participant profiles and design maturity levels. The paper outlines our approach to cross-platform UX evaluation, including methods such as the Think-Aloud Protocol (TAP), standardized usability metrics, and structured interviews. The results from the studies provide insight into user preferences, interaction patterns, and system coherence across platforms. From both participant and evaluator perspectives, the iterative methodology contributed to improvements in system usability and a clearer mental model of the design process. The main research question we address is how iterative design and development affects the UX of the heterogeneous service. Our findings highlight important considerations for future research and practice in the design of integrated, multiplatform XR services for AEC, with potential relevance to other domains. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)

20 pages, 1980 KB  
Review
Augmented Reality in Engineering Education: A Bibliometric Review
by Georgios Lampropoulos, Antonio del Bosque, Pablo Fernández-Arias and Diego Vergara
Information 2025, 16(10), 859; https://doi.org/10.3390/info16100859 - 4 Oct 2025
Abstract
The aim of this study is to examine the role and use of augmented reality in engineering education by examining the existing literature. A total of 235 studies from Scopus and Web of Science published during 2011–2025 were examined. The study focused on analyzing the main characteristics of the studies, identifying the main topics, and exploring the use of augmented reality in engineering education. The study also highlighted current challenges and limitations and suggested future research directions. Based on the results, 7 main topics arose which were related to (i) Immersive technologies in engineering education, (ii) Gamified learning experiences, (iii) Remote and virtual laboratories, (iv) Visualization and 3D modeling, (v) Student motivation, (vi) Collaborative and interactive learning environments, and (vii) User-centered design and user experience. Augmented reality emerged as an effective educational tool that can positively impact engineering education and support both students and teachers. Specifically, physical, remote, and virtual laboratories that can improve students’ learning performance, motivation, creativity, engagement, and satisfaction can be created through augmented reality. Using augmented reality, students can develop their practical skills and knowledge within low-risk and secure learning environments. Additionally, via the realistic and interactive visualization, students’ knowledge acquisition and understanding can be enhanced. Finally, its ability to effectively support collaborative learning and experiential learning arose. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

17 pages, 10210 KB  
Article
Feature-Driven Joint Source–Channel Coding for Robust 3D Image Transmission
by Yinuo Liu, Hao Xu, Adrian Bowman and Weichao Chen
Electronics 2025, 14(19), 3907; https://doi.org/10.3390/electronics14193907 - 30 Sep 2025
Abstract
Emerging applications like augmented reality (AR) demand efficient wireless transmission of high-resolution three-dimensional (3D) images, yet conventional systems struggle with the high data volume and vulnerability to noise. This paper proposes a novel feature-driven framework that integrates semantic source coding with deep learning-based Joint Source–Channel Coding (JSCC) for robust and efficient transmission. Instead of processing dense meshes, the method first extracts a compact set of geometric features—specifically, the ridge and valley curves that define the object’s fundamental structure. This feature representation, derived from the anatomical curves, is then processed by an end-to-end trained JSCC encoder, mapping the semantic information directly to channel symbols. This synergistic approach drastically reduces bandwidth requirements while leveraging the inherent resilience of JSCC for graceful degradation in noisy channels. The framework demonstrates superior reconstruction fidelity and robustness compared to traditional schemes, especially in low signal-to-noise ratio (SNR) regimes, enabling practical and efficient 3D semantic communications. Full article
(This article belongs to the Special Issue AI-Empowered Communications: Towards a Wireless Metaverse)
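The deep-JSCC idea described in this abstract (mapping a compact feature representation straight to channel symbols and training encoder and decoder end to end through a noisy channel) can be illustrated with a minimal sketch. This is not the paper's architecture; the feature dimension, layer sizes, and training step below are assumptions chosen only to show the mechanism of power-normalized symbols passing through an AWGN channel inside the forward pass.

```python
# Minimal deep-JSCC-style sketch (not the paper's network): encode a feature
# vector into power-normalized channel symbols, add AWGN, decode, and train
# end to end on a reconstruction loss. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class JSCC(nn.Module):
    def __init__(self, feat_dim=256, n_symbols=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_symbols))
        self.decoder = nn.Sequential(nn.Linear(n_symbols, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))

    def forward(self, x, snr_db=10.0):
        z = self.encoder(x)
        # Normalize to unit average power per symbol before "transmission".
        z = z / z.pow(2).mean(dim=1, keepdim=True).sqrt().clamp_min(1e-8)
        noise_std = (10 ** (-snr_db / 10)) ** 0.5      # AWGN variance = 1/SNR
        z_noisy = z + noise_std * torch.randn_like(z)
        return self.decoder(z_noisy)

model = JSCC()
feats = torch.randn(8, 256)                            # hypothetical curve features
loss = nn.functional.mse_loss(model(feats, snr_db=5.0), feats)
loss.backward()                                        # one end-to-end training step
```

Because the channel noise sits inside the computation graph, the encoder learns symbol mappings that degrade gracefully as the SNR drops, which is the property the abstract highlights.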

22 pages, 1783 KB  
Review
Effects of Virtual Reality on Motor Function and Balance in Incomplete Spinal Cord Injury: A Systematic Review and Meta-Analysis of Controlled Trials
by Yamil Liscano, Florencio Arias Coronel and Darly Martínez
Brain Sci. 2025, 15(10), 1071; https://doi.org/10.3390/brainsci15101071 - 30 Sep 2025
Abstract
Background/Objectives: Incomplete spinal cord injury (iSCI) represents a significant challenge in neurorehabilitation, where conventional rehabilitation is limited by recovery plateaus and declining patient motivation. Virtual reality (VR) and augmented reality (AR) have emerged as promising technologies to supplement traditional therapy through gamification and multisensory feedback. This systematic review and meta-analysis evaluates the effectiveness of VR and AR interventions for improving balance and locomotor function in patients with incomplete spinal cord injury. Methods: A systematic review was conducted following PRISMA guidelines, with searches in PubMed, Scopus, Web of Science, ScienceDirect, and Google Scholar. Randomized controlled trials and high-quality controlled studies evaluating VR/AR interventions in patients with iSCI (American Spinal Injury Association Impairment Scale [AIS] classifications B, C, or D) for a minimum of 3 weeks were included. A random-effects meta-analysis (Standardized Mean Difference, SMD; 95% Confidence Interval, CI) was conducted for the balance outcome. Results: Eight studies were included (n = 142 participants). The meta-analysis for balance (k = 5 studies) revealed a statistically significant improvement with a large effect size (SMD = 1.21, 95% CI: 0.04–2.38, p = 0.046). For locomotor function, a quantitative meta-analysis was not feasible due to a limited number of methodologically homogeneous studies; a qualitative synthesis of this evidence remained inconclusive. Substantial heterogeneity was observed in the balance analysis (I² = 81.5%). No serious adverse events related to VR/AR interventions were reported. Conclusions: VR/AR interventions show potential as an effective adjunctive therapy for improving balance in patients with iSCI, though the benefit should be interpreted with caution due to considerable variability between studies. The current evidence for locomotor function improvements is insufficient to draw conclusions, highlighting a critical need for more focused research. Substantial heterogeneity indicates that effectiveness may vary according to specific intervention characteristics, populations, and methodologies. Larger multicenter studies with standardized protocols are required to establish evidence-based clinical guidelines. Full article
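For readers unfamiliar with how a pooled effect such as the reported SMD of 1.21 (95% CI 0.04–2.38) and the I² statistic are obtained, the sketch below runs a standard DerSimonian–Laird random-effects calculation. The five per-study effect sizes and variances are hypothetical placeholders, not the data extracted from the included trials.

```python
# Illustrative DerSimonian-Laird random-effects pooling of standardized mean
# differences; the per-study values are hypothetical, not this meta-analysis's data.
import numpy as np

smd = np.array([0.4, 1.0, 2.1, 0.3, 1.9])       # hypothetical study effect sizes
var = np.array([0.20, 0.15, 0.30, 0.10, 0.25])  # hypothetical within-study variances

w_fixed = 1.0 / var
q = np.sum(w_fixed * (smd - np.sum(w_fixed * smd) / np.sum(w_fixed)) ** 2)
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                   # between-study variance estimate
i2 = max(0.0, (q - df) / q) * 100               # heterogeneity as a percentage

w = 1.0 / (var + tau2)                          # random-effects weights
pooled = np.sum(w * smd) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"SMD = {pooled:.2f}, "
      f"95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}, "
      f"I^2 = {i2:.1f}%")
```

The wide confidence interval and large I² in the published result follow directly from this machinery: heterogeneous study effects inflate tau², which widens the pooled interval.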

9 pages, 660 KB  
Article
Mixed-Reality Visualization of Impacted Teeth: A Survey of Undergraduate Dental Students
by Agnieszka Garlicka, Małgorzata Bilińska, Karolina Kramarczyk, Kuba Chrobociński, Przemysław Korzeniowski and Piotr S. Fudalej
J. Clin. Med. 2025, 14(19), 6930; https://doi.org/10.3390/jcm14196930 - 30 Sep 2025
Abstract
Background/Objectives: Integrating 3D visualization technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), into dental education may enhance students’ understanding of facial anatomy and clinical procedures. This study aimed to assess dental students’ perceptions of using MR for three-dimensional visualizations of impacted teeth. Methods: Cone-beam computed tomography (CBCT) scans of patients with impacted teeth were retrospectively selected from a university clinic database. The CBCT images were processed to adjust contrast for optimal visualization before being uploaded to MR goggles (HoloLens 2). A total of 114 final-year dental students participated, each manipulating the 3D images in space using the goggles. Following this, they completed a seven-question survey on a five-point Likert scale (1 = strongly agree, 5 = strongly disagree), evaluating image quality and the usefulness of 3D visualization. Results: The study group consisted of 29 males and 85 females (mean age = 24.11 years, SD = 1.48). The most favorable responses were for enhanced visualization of the impacted tooth’s position relative to adjacent structures and the inclusion of 3D image visualization as a teaching aid, which benefited students while learning and allowed them to better understand the course of the procedure for exposure/extraction of the impacted tooth, with median scores of 1, indicating a highly favorable opinion. A statistically significant relationship was found between the responses of females and males regarding the quality of the presented image using HoloLens 2 goggles. No significant correlation was found between participants with and without prior experience using VR/MR/AR. No significant correlation was found between age and responses. Conclusions: Students reported an improved understanding of the relationships between impacted teeth and adjacent structures, as well as potential benefits for clinical training. These findings demonstrate a high level of acceptance of MR technology among students; however, further research is required to objectively assess its effectiveness in enhancing learning outcomes. Full article
(This article belongs to the Special Issue Orthodontics: Current Advances and Future Options)

14 pages, 2921 KB  
Article
Design and Validation of an Augmented Reality Training Platform for Patient Setup in Radiation Therapy Using Multimodal 3D Modeling
by Jinyue Wu, Donghee Han and Toshioh Fujibuchi
Appl. Sci. 2025, 15(19), 10488; https://doi.org/10.3390/app151910488 - 28 Sep 2025
Abstract
This study presents the development and evaluation of an Augmented Reality (AR)-based training system aimed at improving patient setup accuracy in radiation therapy. Leveraging Microsoft HoloLens 2, the system provides an immersive environment for medical staff to enhance their understanding of patient setup procedures. High-resolution 3D anatomical models were reconstructed from CT scans using 3D Slicer, while Luma AI was employed to rapidly capture complete body surface models. Due to limitations in each method—such as missing extremities or back surfaces—Blender was used to merge the models, improving completeness and anatomical fidelity. The AR application was developed in Unity, employing spatial anchors and 125 × 125 mm2 QR code markers to stabilize and align virtual models in real space. System accuracy testing demonstrated that QR code tracking achieved millimeter-level variation, with an expanded uncertainty of ±2.74 mm. Training trials for setup showed larger deviations in the X (left–right), Y (up-down), and Z (front-back) axes at the centimeter scale. This meant that we were able to quantify the user’s patient setup skills. While QR code positioning was relatively stable, manual placement of markers and the absence of real-time verification contributed to these errors. The system offers a radiation-free and interactive platform for training, enhancing spatial awareness and procedural skills. Future work will focus on improving tracking stability, optimizing the workflow, and integrating real-time feedback to move toward clinical applicability. Full article
(This article belongs to the Special Issue Novel Technologies in Radiology: Diagnosis, Prediction and Treatment)
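As a side note on the reported ±2.74 mm figure: an expanded uncertainty is conventionally built by combining the standard uncertainties of the individual error sources in quadrature and multiplying by a coverage factor k (typically 2 for roughly 95% coverage). The component values and k in the sketch below are illustrative assumptions, not the authors' actual uncertainty budget.

```python
# Illustrative GUM-style expanded-uncertainty calculation; the component
# standard uncertainties and coverage factor are assumptions, not the paper's.
import math

components_mm = {
    "QR-code pose jitter": 1.0,      # hypothetical standard uncertainties (mm)
    "marker placement": 0.7,
    "hologram registration": 0.4,
}
u_combined = math.sqrt(sum(u ** 2 for u in components_mm.values()))
k = 2.0                              # coverage factor for ~95% confidence
print(f"expanded uncertainty = ±{k * u_combined:.2f} mm")
```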

21 pages, 4655 KB  
Article
A Geometric Distortion Correction Method for UAV Projection in Non-Planar Scenarios
by Hao Yi, Sichen Li, Feifan Yu, Mao Xu and Xinmin Chen
Aerospace 2025, 12(10), 870; https://doi.org/10.3390/aerospace12100870 - 27 Sep 2025
Abstract
Conventional projection systems typically require a fixed spatial configuration relative to the projection surface, with strict control over distance and angle. In contrast, UAV-mounted projectors overcome these constraints, enabling dynamic, large-scale projections onto non-planar and complex environments. However, such flexible scenarios introduce a key challenge: severe geometric distortions caused by intricate surface geometry and continuous camera–projector motion. To address this, we propose a novel image registration method based on global dense matching, which estimates the real-time optical flow field between the input projection image and the target surface. The estimated flow is used to pre-warp the image, ensuring that the projected content appears geometrically consistent across arbitrary, deformable surfaces. The core idea of our method lies in reformulating the geometric distortion correction task as a global feature matching problem, effectively reducing 3D spatial deformation into a 2D dense correspondence learning process. To support learning and evaluation, we construct a hybrid dataset that covers a wide range of projection scenarios, including diverse lighting conditions, object geometries, and projection contents. Extensive simulation and real-world experiments show that our method achieves superior accuracy and robustness in correcting geometric distortions in dynamic UAV projection, significantly enhancing visual fidelity in complex environments. This approach provides a practical solution for real-time, high-quality projection in UAV-based augmented reality, outdoor display, and aerial information delivery systems. Full article
(This article belongs to the Section Aeronautics)
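The correction step this abstract describes, estimating a dense correspondence field and using it to pre-warp the content before projection, can be sketched with off-the-shelf tools. The example below substitutes OpenCV's Farnebäck optical flow for the paper's learned global dense matcher, and the file names and flow-direction convention are assumptions; it only illustrates that the pre-warp itself reduces to a dense per-pixel remap.

```python
# Minimal sketch (not the paper's method): estimate a dense flow field between
# the desired content and the camera's view of the projection surface, then
# resample the content with that field so the projected result is less distorted.
# Farneback flow stands in for the learned global matcher; file names are hypothetical.
import cv2
import numpy as np

content = cv2.imread("content.png")                       # image we want to project
observed = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)
target = cv2.cvtColor(content, cv2.COLOR_BGR2GRAY)
observed = cv2.resize(observed, (target.shape[1], target.shape[0]))

# Dense per-pixel displacements from the content to its observed (distorted) position.
flow = cv2.calcOpticalFlowFarneback(target, observed, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

h, w = flow.shape[:2]
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
# Pre-warp by sampling each output pixel from where the flow says it will land;
# depending on the camera-projector geometry the flow may need to be negated instead.
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
prewarped = cv2.remap(content, map_x, map_y, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("prewarped.png", prewarped)                   # this frame goes to the projector
```

In the paper's setting the correspondence field comes from a trained global matcher and is updated continuously as the UAV moves, but the final compensation is the same kind of dense remap shown here.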

19 pages, 2387 KB  
Article
A Detailed Review of the Design and Evaluation of XR Applications in STEM Education and Training
by Magesh Chandramouli, Aleeha Zafar and Ashayla Williams
Electronics 2025, 14(19), 3818; https://doi.org/10.3390/electronics14193818 - 26 Sep 2025
Abstract
Extended reality (XR) technologies—including augmented reality (AR), virtual reality (VR), mixed reality (MR), and desktop virtual reality (dVR)—are rapidly advancing STEM education by providing immersive and interactive learning experiences. Despite their potential, many XR applications lack consistent design grounded in human–computer interaction (HCI), leading to challenges in usability, engagement, and learning outcomes. Through a comprehensive analysis of 50 peer-reviewed studies, this paper reveals both strengths and limitations in current implementations and suggests improvements for reducing cognitive load and enhancing engagement. To support this analysis, we draw briefly on a dual-phase learning model (L1–L2), which distinguishes between interface learning (L1) and conceptual or procedural learning (L2). By aligning theoretical insights with practical HCI strategies, the discussions from this study are intended to offer potentially actionable insights for educators and developers on XR design for STEM education. Based on a detailed analysis of the articles, this paper finally makes recommendations to educators and developers on important considerations and limitations concerning the optimal use of XR technologies in STEM education. The guidelines for design proposed by this review offer directions for developers intending to build XR frameworks that effectively improve presence, interaction, and immersion whilst considering affordability and accessibility. Full article

12 pages, 2022 KB  
Case Report
Implementation of Medicalholodeck® for Augmented Reality Surgical Navigation in Microsurgical Mandibular Reconstruction: Enhanced Vessel Identification
by Norman Alejandro Rendón Mejía, Hansel Gómez Arámbula, José Humberto Baeza Ramos, Yidam Villa Martínez, Francisco Hernández Ávila, Mónica Quiñonez Pérez, Carolina Caraveo Aguilar, Rogelio Mariñelarena Hernández, Claudio Reyes Montero, Claudio Ramírez Espinoza and Armando Isaac Reyes Carrillo
Healthcare 2025, 13(19), 2406; https://doi.org/10.3390/healthcare13192406 - 24 Sep 2025
Abstract
Mandibular reconstruction with the fibula free flap is the gold standard for large defects, with virtual surgical planning becoming integral to the process. The localization and dissection of critical vessels, such as the recipient vessels in the neck and the perforating vessels of the fibula flap, are demanding steps that directly impact surgical success. Augmented reality (AR) offers a solution by overlaying three-dimensional virtual models directly onto the surgeon’s view of the operative field. We report the first case in Latin America utilizing a low-cost, commercially available holographic navigation system for complex microsurgical mandibular reconstruction. A 26-year-old female presented with a large, destructive osteoblastoma of the left mandible, requiring wide resection and reconstruction. Preoperative surgical planning was conducted using DICOM data from the patient’s CT scans to generate 3D holographic models with the Medicalholodeck® software. Intraoperatively, the primary surgeon used the AR system to superimpose the holographic models onto the patient. The system provided real-time, immersive guidance for identifying the facial artery, which was anatomically displaced by the tumor mass, as well as for localizing the peroneal artery perforators for donor flap harvest. A free fibula flap was harvested and transferred. During the early postoperative course and at 3 months of follow-up, the patient remained free of clinical complications. This case demonstrates the successful application and feasibility of using a low-cost, consumer-grade holographic navigation system. Full article
(This article belongs to the Special Issue Virtual Reality Technologies in Health Care)

25 pages, 1458 KB  
Review
Research on Frontier Technology of Risk Management for Conservation of Cultural Heritage Based on Bibliometric Analysis
by Dandan Li, Laiming Wu, He Huang, Hao Zhou, Lankun Cai and Fangyuan Xu
Heritage 2025, 8(9), 392; https://doi.org/10.3390/heritage8090392 - 19 Sep 2025
Abstract
In the contemporary international context, the preventive conservation of cultural relics has become a widespread consensus. “Risk management” has emerged as a pivotal research focus at the present stage. However, the preventive protection of cultural relics is confronted with deficiencies in risk assessment and prediction. There is an urgent requirement for research to present a comprehensive and in-depth overview of the frontier technologies applicable to the preventive protection of cultural relics, with a particular emphasis on risk prevention and control. Additionally, it is essential to delineate the prospects for future investigations and developments in this domain. Consequently, this study employs bibliometric methods, applying CiteSpace (6.3.R1) and Biblioshiny (4.3.0) to perform comprehensive visual and analytical examinations of 392 publications sourced from the Web of Science (WoS) database covering the period 2010 to 2024. The results obtained from the research are summarized as follows: First, it is evident that scholars originating from China, Italy, and Spain have exhibited preponderant publication frequencies, contributing the largest quantity of articles. Second, augmented reality, digital technology, and risk-based analysis have been identified as the cardinal research frontiers. These areas have attracted significant scholarly attention and are at the forefront of innovation and exploration within the discipline. Third, the “Journal of Cultural Heritage” and “Heritage Science” have been empirically determined to be the most frequently cited periodicals within this particular field of study. Moreover, over the past decade, under the impetus and influence of the concept of Intangible Cultural Heritage, virtual reality, digital protection, and 3D models have progressively evolved into the central and crucial topics that have pervaded and shaped the research agenda. Finally, with respect to future research trajectories, there will be a pronounced focus on interdisciplinary design. This will be accompanied by an escalation in the requisites and standards for preventive conservation. Specifically, the spotlight will be cast upon aspects such as the air quality within the preservation environment of cultural relics held in collections, the implementation and efficacy of environmental real-time monitoring systems, the utilization and interpretation of big data analysis and early warning mechanisms, as well as the comprehensive and in-depth risk analysis of cultural relics. These multifaceted investigations will be essential for advancing the understanding and safeguarding of cultural heritage. These findings deepen our grasp of how risk management in cultural heritage conservation has progressed and transformed between 2010 and 2024. Furthermore, the study provides novel insights and directions for subsequent investigations into risk assessment methodologies for heritage collections. Full article

2 pages, 124 KB  
Editorial
A Decade of Impressive Advances in Processes
by Giancarlo Cravotto
Processes 2025, 13(9), 2989; https://doi.org/10.3390/pr13092989 - 19 Sep 2025
Abstract
In the past decade, technologies such as artificial intelligence (AI), augmented reality, 3D printing, and 5G smartphones have become commonplace, driving fundamental innovations in industrial production through the development of smart, highly efficient, and sustainable processes [...] Full article
(This article belongs to the Section Chemical Processes and Systems)
25 pages, 1596 KB  
Review
A Survey of 3D Reconstruction: The Evolution from Multi-View Geometry to NeRF and 3DGS
by Shuai Liu, Mengmeng Yang, Tingyan Xing and Ran Yang
Sensors 2025, 25(18), 5748; https://doi.org/10.3390/s25185748 - 15 Sep 2025
Abstract
Three-dimensional (3D) reconstruction technology is not only a core and key technology in computer vision and graphics, but also a key force driving the flourishing development of many cutting-edge applications such as virtual reality (VR), augmented reality (AR), autonomous driving, and digital earth. With the rise of novel view synthesis technologies such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), 3D reconstruction is facing unprecedented development opportunities. This article introduces the basic principles of traditional 3D reconstruction methods, including Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques, and analyzes the limitations of these methods in dealing with complex scenes and dynamic environments. Focusing on implicit 3D scene reconstruction techniques related to NeRF, this paper explores the advantages and challenges of using deep neural networks to learn and generate high-quality 3D scene rendering from limited perspectives. Based on the principles and characteristics of 3DGS-related technologies that have emerged in recent years, the latest progress and innovations in rendering quality, rendering efficiency, sparse view input support, and dynamic 3D reconstruction are analyzed. Finally, the main challenges and opportunities faced by current 3D reconstruction and novel view synthesis technologies are discussed in depth, along with possible future technological breakthroughs and development directions. This article aims to provide a comprehensive perspective for researchers in 3D reconstruction technology in fields such as digital twins and smart cities, while opening up new ideas and paths for future technological innovation and widespread application. Full article
(This article belongs to the Section Sensing and Imaging)

13 pages, 1096 KB  
Article
Effect of the Virtual Reality-Infused Movement and Activity Program (V-MAP) on Physical Activity and Cognition in Head Start Preschoolers
by Xiangli Gu, Samantha Moss, Xiaoxia Zhang, Tao Zhang and Tracy L. Greer
Children 2025, 12(9), 1228; https://doi.org/10.3390/children12091228 - 14 Sep 2025
Abstract
Background/Objectives: This study examined the efficacy of a physical activity (PA) intervention augmented by a non-immersive Virtual Reality (VR) gaming system (i.e., Virtual Reality-infused Movement and Activity Program; V-MAP) on physical activity (i.e., sedentary behavior, moderate-to-vigorous PA [MVPA], vigorous PA [VPA]) and cognitive skills (i.e., response error, movement latency and reaction time) in Head Start preschoolers. Methods: Using a repeated-measures design with a 1-month follow-up, a sample of 13 Head Start preschoolers (mean age = 67.08 ± 4.32 months; 36.2% boys) engaged in a 6-week V-MAP intervention (30-min session; 8 sessions) that focused on non-immersive VR-based movement integration. The Cambridge Neuropsychological Test Automated Battery (CANTAB) was used to measure cognition; school-based PA and sedentary behavior were assessed with ActiGraph accelerometers. Pedometers were used to monitor real-time engagement and implementation over eight intervention sessions. Results: On average, children accumulated 1105 steps during the 30-min intervention (36.85 steps/min). There was a significant increase in VPA after the V-MAP intervention, whereas no significant changes in MVPA or sedentary behavior were observed (ps > 0.05). Although we did not observe significant improvement in studied cognitive function variables (ps > 0.05) after the V-MAP intervention, some delayed effects were observed in the follow-up test (Cohen’s d ranging from −0.41 to −0.73). Conclusions: This efficacy trial provides preliminary support that implementing V-MAP in recess may help Head Start preschoolers meet the recommended daily 60-min MVPA guideline during preschool years. The findings also provide insights that VR-based PA for as little as 30 min per day may benefit cognitive capability. Full article
(This article belongs to the Section Global Pediatric Health)

34 pages, 9482 KB  
Review
Methodologies for Remote Bridge Inspection—Review
by Diogo Ribeiro, Anna M. Rakoczy, Rafael Cabral, Vedhus Hoskere, Yasutaka Narazaki, Ricardo Santos, Gledson Tondo, Luis Gonzalez, José Campos Matos, Marcos Massao Futai, Yanlin Guo, Adriana Trias, Joaquim Tinoco, Vanja Samec, Tran Quang Minh, Fernando Moreu, Cosmin Popescu, Ali Mirzazade, Tomás Jorge, Jorge Magalhães, Franziska Schmidt, João Ventura and João Fonseca
Sensors 2025, 25(18), 5708; https://doi.org/10.3390/s25185708 - 12 Sep 2025
Abstract
This article addresses the state of the art of methodologies for bridge inspection with potential for inclusion in Bridge Management Systems (BMS) and within the scope of the IABSE Task Group 5.9 on Remote Inspection of Bridges. The document covers computer vision approaches, including 3D geometric reconstitution (photogrammetry, LiDAR, and hybrid fusion strategies), damage and component identification (based on heuristics and Artificial Intelligence), and non-contact measurement of key structural parameters (displacements, strains, and modal parameters). Additionally, it addresses techniques for handling the large volumes of data generated by bridge inspections (Big Data), the use of Digital Twins for asset maintenance, and dedicated applications of Augmented Reality based on immersive environments for bridge inspection. These methodologies will contribute to safe, automated, and intelligent assessment and maintenance of bridges, enhancing resilience and lifespan of transportation infrastructure under changing climate. Full article
(This article belongs to the Special Issue Feature Review Papers in Fault Diagnosis & Sensors)
