Search Results (46)

Search Parameters:
Keywords = virtual–real fusion interaction

30 pages, 9435 KB  
Article
Intelligent Fault Warning Method for Wind Turbine Gear Transmission System Driven by Digital Twin and Multi-Source Data Fusion
by Tiantian Xu, Xuedong Zhang and Wenlei Sun
Appl. Sci. 2025, 15(15), 8655; https://doi.org/10.3390/app15158655 - 5 Aug 2025
Viewed by 620
Abstract
To meet the demand for real-time, accurate fault warning in wind turbine gear transmission systems, this study proposes an intelligent warning method that integrates digital twin technology with multi-source data fusion. A digital twin system architecture is developed, comprising a high-precision geometric model and a dynamic mechanism model, enabling real-time interaction and data fusion between the physical transmission system and its virtual model. At the algorithmic level, a CNN-LSTM-Attention fault prediction model is proposed, integrating the spatial feature extraction capability of a convolutional neural network (CNN), the temporal modeling strengths of long short-term memory (LSTM), and the key-information focusing of an attention mechanism. Experimental validation shows that this model outperforms traditional methods in prediction accuracy, achieving average improvements of 0.3945, 0.546, and 0.061 in Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and R-squared (R²), respectively. Building on these findings, a monitoring and early warning platform for the wind turbine transmission system was developed, integrating digital twin visualization with intelligent prediction. The platform covers the full process from data acquisition and status evaluation to fault warning, providing a practical solution for the predictive maintenance of wind turbines.
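As a rough illustration of the architecture named in this abstract, the sketch below shows one plausible CNN-LSTM-Attention arrangement in PyTorch; the layer sizes, channel counts, and the single-output health-indicator head are assumptions, not the authors' configuration.

```python
# A minimal sketch (not the authors' code) of a CNN-LSTM-Attention regressor
# for multivariate condition-monitoring windows of shape (batch, timesteps, channels).
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, in_channels=8, hidden=64):
        super().__init__()
        # CNN extracts local spatial patterns across sensor channels
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM models temporal dependencies in the convolved features
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # Attention scores each timestep; the weighted sum focuses on key steps
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)  # assumed scalar health-indicator output

    def forward(self, x):                                  # x: (B, T, C)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (B, T, 32)
        h, _ = self.lstm(h)                                # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)             # (B, T, 1) weights
        ctx = (w * h).sum(dim=1)                           # (B, hidden) context
        return self.head(ctx).squeeze(-1)

model = CNNLSTMAttention()
y = model(torch.randn(4, 128, 8))  # 4 windows, 128 timesteps, 8 channels
```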

16 pages, 4481 KB  
Article
Construction and Validation of a Digital Twin-Driven Virtual-Reality Fusion Control Platform for Industrial Robots
by Wenxuan Chang, Wenlei Sun, Pinghui Chen and Huangshuai Xu
Sensors 2025, 25(13), 4153; https://doi.org/10.3390/s25134153 - 3 Jul 2025
Cited by 1 | Viewed by 1141
Abstract
Traditional industrial robot programming methods often impose high barriers to use due to their inherent complexity and lack of standardization. Manufacturers typically employ proprietary programming languages or user interfaces, resulting in steep learning curves and limited interoperability. Moreover, conventional systems generally lack capabilities for remote control and real-time status monitoring. In this study, a novel approach is proposed that integrates digital twin technology with traditional robot control methodologies to establish a virtual–real mapping architecture. A high-precision, efficient digital twin-based control platform for industrial robots is developed using the Unity3D (2022.3.53f1c1) engine, offering enhanced visualization, interaction, and system adaptability. The high-precision twin environment is constructed across three layers: the physical layer, the digital layer, and the information fusion layer. The system adopts a socket communication mechanism over TCP/IP to acquire robot state information in real time and to issue control commands synchronously, establishing a bidirectional virtual–real mapping. A visual human–computer interaction interface is developed on the Unity3D platform; its user-oriented graphical interface and modular command system substantially lower the barrier to robot use. A welding experiment on a spatially curved part verifies the adaptability and control accuracy of the system in complex trajectory tracking and flexible welding tasks, and the experimental results show that the system achieves high accuracy as well as good interactivity and stability.
(This article belongs to the Section Sensors and Robotics)
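The abstract describes real-time state acquisition and command issuance over TCP/IP sockets. The following is a minimal sketch of that pattern in Python; the endpoint address and the JSON message format are hypothetical, since the paper's wire protocol is not specified here.

```python
# A minimal sketch, with hypothetical message formats, of the TCP/IP socket
# pattern described above: poll robot joint state, then send a control command.
import json
import socket

ROBOT_ADDR = ("192.168.1.10", 30002)  # hypothetical controller endpoint

with socket.create_connection(ROBOT_ADDR, timeout=2.0) as sock:
    # Request current state; the framing and fields are illustrative only.
    sock.sendall(b'{"cmd": "get_state"}\n')
    state = json.loads(sock.recv(4096).decode())
    print("joint angles:", state.get("joints"))

    # Issue a synchronous motion command back to the physical robot.
    move = {"cmd": "move_joints", "target": [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]}
    sock.sendall(json.dumps(move).encode() + b"\n")
```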

17 pages, 4622 KB  
Article
Dual Focus-3D: A Hybrid Deep Learning Approach for Robust 3D Gaze Estimation
by Abderrahmen Bendimered, Rabah Iguernaissi, Mohamad Motasem Nawaf, Rim Cherif, Séverine Dubuisson and Djamal Merad
Sensors 2025, 25(13), 4086; https://doi.org/10.3390/s25134086 - 30 Jun 2025
Viewed by 831
Abstract
Estimating gaze direction is a key task in computer vision, especially for understanding where a person is focusing their attention. It is essential for applications in assistive technology, medical diagnostics, virtual environments, and human–computer interaction. In this work, we introduce Dual Focus-3D, a novel hybrid deep learning architecture that combines appearance-based features from eye images with 3D head orientation data. This fusion enhances the model's prediction accuracy and robustness, particularly in challenging natural environments. To support training and evaluation, we present EyeLis, a new dataset containing 5206 annotated samples with corresponding 3D gaze and head pose information. Our model achieves state-of-the-art performance, with an MAE of 1.64° on EyeLis, demonstrating its ability to generalize effectively across both synthetic and real datasets. Key innovations include a multimodal feature fusion strategy, an angular loss function optimized for 3D gaze prediction, and regularization techniques to mitigate overfitting. Our results show that including 3D spatial information directly in the learning process significantly improves accuracy.
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
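The abstract mentions an angular loss function optimized for 3D gaze prediction. A common formulation of such a loss (the mean angle between predicted and ground-truth gaze vectors) is sketched below; the paper's exact loss may differ.

```python
# A minimal sketch of an angular loss for 3D gaze vectors: the mean angle
# between predicted and ground-truth unit direction vectors.
import torch
import torch.nn.functional as F

def angular_loss(pred, target, eps=1e-7):
    """pred, target: (B, 3) gaze direction vectors."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    cos = (pred * target).sum(dim=-1).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean()  # radians; multiply by 180/pi for degrees

err = angular_loss(torch.randn(8, 3), torch.randn(8, 3))
```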

18 pages, 1498 KB  
Article
Speech Emotion Recognition on MELD and RAVDESS Datasets Using CNN
by Gheed T. Waleed and Shaimaa H. Shaker
Information 2025, 16(7), 518; https://doi.org/10.3390/info16070518 - 21 Jun 2025
Cited by 1 | Viewed by 2995
Abstract
Speech emotion recognition (SER) plays a vital role in enhancing human–computer interaction (HCI) and can be applied in affective computing, virtual support, and healthcare. This research presents a high-performance SER framework based on a lightweight 1D Convolutional Neural Network (1D-CNN) and a multi-feature fusion technique. Rather than employing spectrograms as image-based input, frame-level features (Mel-Frequency Cepstral Coefficients, Mel-Spectrograms, and Chroma vectors) are computed across the sequence to preserve temporal information and reduce computational cost. The model attained classification accuracies of 94.0% on MELD (multi-party conversations) and 91.9% on RAVDESS (acted speech). Ablation experiments demonstrate that integrating complementary features significantly outperforms any single feature used as a baseline. Data augmentation techniques, including Gaussian noise and time shifting, enhance model generalisation. The proposed method demonstrates significant potential for real-time, audio-only emotion recognition on embedded or resource-constrained devices.
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)
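As a sketch of the frame-level multi-feature fusion the abstract describes, the snippet below stacks MFCC, Mel-spectrogram, and Chroma features per frame using librosa; the feature dimensions and the example audio clip are assumptions.

```python
# A minimal sketch of frame-level multi-feature fusion: MFCC, Mel-spectrogram,
# and Chroma vectors stacked per frame (exact parameters are assumptions).
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))  # example clip stands in for speech
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)                    # (13, frames)
mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40))
chroma = librosa.feature.chroma_stft(y=y, sr=sr)                      # (12, frames)

# Fuse per frame: each row is one time step, preserving temporal order
# for a 1D-CNN over the frame sequence.
features = np.vstack([mfcc, mel, chroma]).T  # (frames, 13 + 40 + 12)
print(features.shape)
```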

23 pages, 9051 KB  
Article
Predicting User Attention States from Multimodal Eye–Hand Data in VR Selection Tasks
by Xiaoxi Du, Jinchun Wu, Xinyi Tang, Xiaolei Lv, Lesong Jia and Chengqi Xue
Electronics 2025, 14(10), 2052; https://doi.org/10.3390/electronics14102052 - 19 May 2025
Viewed by 1247
Abstract
Virtual reality (VR) devices that integrate eye-tracking and hand-tracking technologies can capture users' natural eye–hand data in real time within a three-dimensional virtual space, providing new opportunities to explore users' attentional states during natural 3D interactions. This study aims to develop an attention-state prediction model based on the multimodal fusion of eye and hand features, distinguishing whether users primarily employ goal-directed attention or stimulus-driven attention while executing their intentions. In our experiment, we collected three types of data (eye movements, hand movements, and pupil changes) while participants completed a virtual button selection task. This setup allowed us to establish a binary ground-truth label for attentional state during the execution of selection intentions for model training. To investigate the impact of different time windows on prediction performance, we designed eight time windows ranging from 0 to 4.0 s (in increments of 0.5 s) and compared the performance of eleven algorithms: logistic regression, support vector machine, naïve Bayes, k-nearest neighbors, decision tree, linear discriminant analysis, random forest, AdaBoost, gradient boosting, XGBoost, and neural networks. The results indicate that, within the 3 s window, the gradient boosting model performed best, achieving a weighted F1-score of 0.8835 and an accuracy of 0.8860. Furthermore, the analysis of feature importance demonstrated that multimodal eye–hand features play a critical role in the prediction. Overall, this study introduces an approach that integrates three types of multimodal eye–hand behavioral and physiological data within a VR interaction context. The framework provides both theoretical and methodological support for predicting users' attentional states within short time windows and offers practical guidance for the design of attention-adaptive 3D interfaces. The proposed multimodal eye–hand data fusion framework also shows potential applicability in other three-dimensional interaction domains, such as game experience optimization, rehabilitation training, and driver attention monitoring.
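For illustration, a minimal reproduction of the reported best setup (a gradient boosting classifier scored with a weighted F1) might look as follows in scikit-learn; the feature matrix here is synthetic stand-in data, not the study's eye-hand features.

```python
# A minimal sketch, with synthetic stand-in data, of a gradient boosting
# classifier on windowed eye-hand features, scored with a weighted F1.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))  # stand-ins for gaze, hand kinematics, pupil stats
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)  # 1 = goal-directed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("weighted F1:", f1_score(y_te, pred, average="weighted"))
print("accuracy:", accuracy_score(y_te, pred))
```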

40 pages, 24863 KB  
Article
Digital Twin-Based Technical Research on Comprehensive Gear Fault Diagnosis and Structural Performance Evaluation
by Qiang Zhang, Zhe Wu, Boshuo An, Ruitian Sun and Yanping Cui
Sensors 2025, 25(9), 2775; https://doi.org/10.3390/s25092775 - 27 Apr 2025
Cited by 5 | Viewed by 1478
Abstract
In modern industrial equipment, the gearbox is a core transmission component whose operating state directly affects the overall performance and service life of the machine. However, current gearbox operation still suffers from limited monitoring, single detection indices, and low data utilization, leading to incomplete evaluation results. To address these challenges, this paper proposes a gearbox monitoring system that integrates structural form and performance, based on digital twin technology and artificial intelligence; through virtual–real mapping and data interaction it realizes real-time fault diagnosis, performance prediction, and dynamic gear visualization, laying the foundation for subsequent predictive maintenance applications. Taking the QPZZ-ii gearbox test bed as the physical entity, the research establishes a five-layer architecture (functional service layer, software support layer, model integration layer, data-driven layer, and digital twin layer), forming a closed-loop feedback mechanism. Technically, the system combines refined mesh generation in HyperMesh 2023 with ABAQUS 2023 simulation of gear stress distribution under thermal–fluid–solid coupling conditions, a Gaussian process regression (GPR) stress prediction model, and a fault diagnosis algorithm based on the wavelet transform and a deep residual shrinkage network (DRSN), analyzing gear vibration signals and stress distributions under normal, broken-tooth, wear, and pitting conditions. Experimental verification shows that the system's fault diagnosis accuracy exceeds 99%, the stress prediction model achieves average coefficients of determination (R²) of 0.9339 (driving gear) and 0.9497 (driven gear), and the platform supports real-time display of three-dimensional stress contour maps. The strength of this work lies in multi-source data fusion, interaction, and visualization, though it is limited by the accuracy of the finite element simulation and the difficulty of obtaining measured stress data. This achievement provides a new method for the intelligent monitoring of industrial equipment and promotes the application of digital twin technology in predictive maintenance.
(This article belongs to the Section Fault Diagnosis & Sensors)
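The stress prediction component is based on Gaussian process regression (GPR). A minimal scikit-learn sketch of GPR for stress prediction is shown below; the inputs, kernel, and data are illustrative placeholders rather than the paper's setup.

```python
# A minimal sketch of Gaussian process regression for stress prediction from
# operating conditions; inputs and outputs are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(80, 3))  # e.g., load, speed, temperature (assumed)
y = X @ np.array([3.0, 1.5, -2.0]) + 0.05 * rng.normal(size=80)  # stand-in stress

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-3)
gpr.fit(X, y)
mean, std = gpr.predict(X[:5], return_std=True)  # prediction with uncertainty
print("R^2 on training data:", gpr.score(X, y))
```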

39 pages, 1564 KB  
Article
Future Outdoor Safety Monitoring: Integrating Human Activity Recognition with the Internet of Physical–Virtual Things
by Yu Chen, Jia Li, Erik Blasch and Qian Qu
Appl. Sci. 2025, 15(7), 3434; https://doi.org/10.3390/app15073434 - 21 Mar 2025
Cited by 3 | Viewed by 1643
Abstract
The convergence of the Internet of Physical–Virtual Things (IoPVT) and the Metaverse presents a transformative opportunity for safety and health monitoring in outdoor environments. This concept paper explores how integrating human activity recognition (HAR) with the IoPVT within the Metaverse can revolutionize public health and safety, particularly in urban settings with challenging climates and architectures. By seamlessly blending physical sensor networks with immersive virtual environments, the paper highlights a future where real-time data collection, digital twin modeling, advanced analytics, and predictive planning proactively enhance safety and well-being. Specifically, three dimensions (humans, technology, and the environment) interact in measuring safety, health, and climate. Three outdoor scenarios showcase opportunities to deploy HAR–IoPVT sensing for urban external staircases, rural health and climate, and coastal infrastructure. Advanced HAR–IoPVT algorithms and predictive analytics would identify potential hazards, enabling timely interventions and reducing accidents. The paper also explores societal benefits such as proactive health monitoring, enhanced emergency response, and contributions to smart city initiatives. Additionally, we address the challenges and research directions necessary to realize this future, emphasizing the technical scalability of AI, ethical considerations, and the importance of interdisciplinary collaboration on designs and policies. By articulating an AI-driven HAR vision, along with required advances in edge-based sensor data fusion, city responsiveness through fog computing, and social planning through cloud analytics, we aim to inspire the academic community, industry stakeholders, and policymakers to collaborate in shaping a future where technology profoundly improves outdoor health monitoring, enhances public safety, and enriches the quality of urban life.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)

22 pages, 3652 KB  
Article
Named Entity Recognition in Online Medical Consultation Using Deep Learning
by Ze Hu, Wenjun Li and Hongyu Yang
Appl. Sci. 2025, 15(6), 3033; https://doi.org/10.3390/app15063033 - 11 Mar 2025
Viewed by 1246
Abstract
Named entity recognition in online medical consultation aims to address the challenge of identifying various types of medical entities within complex, unstructured social text in the context of online medical consultations. This can provide important data support for constructing more powerful online medical consultation knowledge graphs and improving virtual intelligent health assistants. A dataset covering 26 medical entity types for named entity recognition in online medical consultations is first constructed. Then, a novel approach to deep named entity recognition in the medical field, based on a fusion context mechanism, is proposed. This approach captures enhanced local and global contextual semantic representations of online medical consultation text while simultaneously modeling high- and low-order feature interactions between local and global contexts, thereby effectively improving sequence labeling performance. The experimental results show that the proposed approach can effectively identify the 26 medical entity types with an average F1 score of 85.47%, outperforming the state-of-the-art (SOTA) method. The practical significance of this study lies in improving the efficiency and performance of domain-specific knowledge extraction in online medical consultation, supporting the development of virtual intelligent health assistants based on large language models, and enabling real-time intelligent medical decision-making, thereby helping patients and their caregivers access common medical information more promptly.
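The fusion context mechanism is not specified in detail in this abstract; the sketch below is one plausible interpretation in PyTorch, concatenating convolutional (local) and self-attention (global) context per token before tag classification. The vocabulary size, dimensions, and BIO tag count are assumptions.

```python
# A minimal sketch (an interpretation, not the authors' architecture) of fusing
# local and global context for sequence labeling.
import torch
import torch.nn as nn

class ContextFusionTagger(nn.Module):
    def __init__(self, vocab=5000, dim=64, n_tags=53):  # 26 types in BIO ~ 53 tags
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.local = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # local context
        self.glob = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * dim, n_tags)

    def forward(self, tokens):                       # tokens: (B, T) ids
        e = self.emb(tokens)                         # (B, T, dim)
        loc = self.local(e.transpose(1, 2)).transpose(1, 2)
        glo, _ = self.glob(e, e, e)                  # global self-attention
        return self.out(torch.cat([loc, glo], dim=-1))  # (B, T, n_tags) logits

logits = ContextFusionTagger()(torch.randint(0, 5000, (2, 30)))
```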

17 pages, 2797 KB  
Article
Multi-Environment Vehicle Trajectory Automatic Driving Scene Generation Method Based on Simulation and Real Vehicle Testing
by Yicheng Cao, Haiming Sun, Guisheng Li, Chuan Sun, Haoran Li, Junru Yang, Liangyu Tian and Fei Li
Electronics 2025, 14(5), 1000; https://doi.org/10.3390/electronics14051000 - 1 Mar 2025
Cited by 1 | Viewed by 1072
Abstract
As autonomous vehicles increasingly populate roads, robust testing is essential to ensure their safety and reliability. Because traditional testing methodologies (real-world and simulation testing) struggle to cover a wide range of scenarios and to ensure repeatability, this study proposes a novel virtual-real fusion testing approach that integrates graph theory and Artificial Potential Fields (APF) for autonomous vehicle testing. Simulation experiments of strategic lane changes and speed adjustments, conducted in SUMO, demonstrate that our approach handles vehicle dynamics and environmental interactions more efficiently than traditional Rapidly-exploring Random Tree (RRT) methods. The proposed method shows a significant reduction in maneuver completion times: up to 41% faster in simulations and 55% faster in real-world tests. Field experiments at the Vehicle-Road-Cloud Integrated Platform in Suzhou High-Speed Railway New Town confirmed the method's practical viability and robustness under real traffic conditions. The results indicate that our integrated approach enhances the authenticity and efficiency of testing, thereby advancing the development of dependable autonomous driving systems. This research contributes not only to the theoretical framework but also to practical improvements in autonomous vehicle testing processes.
(This article belongs to the Section Electrical and Autonomous Vehicles)
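As background on the Artificial Potential Field component, the sketch below computes the classic attractive/repulsive force used to steer a trajectory; the gains and influence radius are illustrative assumptions, not the paper's parameters.

```python
# A minimal sketch of the Artificial Potential Field idea: the ego vehicle is
# attracted to its goal and repelled by nearby vehicles/obstacles.
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=2.0, radius=10.0):
    force = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < radius:                            # repulsion inside radius
            force += k_rep * (1.0 / d - 1.0 / radius) / d**2 * (diff / d)
    return force

pos = np.array([0.0, 0.0])
step = apf_force(pos, goal=np.array([50.0, 3.5]),
                 obstacles=[np.array([10.0, 0.0])])
print(step)  # direction for the next trajectory increment
```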

25 pages, 9799 KB  
Article
A Diamond Approach to Develop Virtual Object Interaction: Fusing Augmented Reality and Kinesthetic Haptics
by Alma Rodriguez-Ramirez, Osslan Osiris Vergara Villegas, Manuel Nandayapa, Francesco Garcia-Luna and María Cristina Guevara Neri
Multimodal Technol. Interact. 2025, 9(2), 15; https://doi.org/10.3390/mti9020015 - 13 Feb 2025
Viewed by 1068
Abstract
Using the senses is essential to interacting with objects in real-world environments. However, not all of the senses are available when interacting with virtual objects in virtual environments. This paper presents a diamond methodology for fusing two technologies to represent the senses of sight and touch when interacting with a virtual object. The sense of sight is represented through augmented reality, and the sense of touch through kinesthetic haptics. The diamond methodology is centered on the user experience and comprises five general stages: (i) experience design, (ii) sensory representation, (iii) development, (iv) display, and (v) fusion. The first stage defines the expected, proposed, or needed user experience. Each technology then proceeds through its homologous activities from the second to the fourth stage, diverging from the other during development. Finally, the technologies converge in the fifth stage, fusing in the user experience. The diamond methodology was tested by generating a user's dual sensation when interacting with the elasticity of a virtual tension spring: the user simultaneously perceives the visual and tactile change of the spring during the interaction, representing the object's deformation. The experimental results demonstrate that, following the diamond methodology, an interactive experience can be both felt and seen in augmented reality.
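A minimal sketch of the kinesthetic side of this spring interaction follows: a Hooke's-law tension force computed each haptic frame while the AR view renders the matching deformation. The stiffness and rest length are illustrative values.

```python
# A minimal sketch of rendering a virtual tension spring kinesthetically.
import numpy as np

K = 120.0        # spring stiffness (N/m), assumed
REST_LEN = 0.10  # rest length (m), assumed

def spring_force(tool_pos, anchor):
    """Force on the haptic tool from a virtual tension spring."""
    vec = tool_pos - anchor
    stretch = np.linalg.norm(vec) - REST_LEN
    direction = vec / (np.linalg.norm(vec) + 1e-9)
    return -K * max(stretch, 0.0) * direction  # pulls back only when stretched

f = spring_force(np.array([0.0, 0.18, 0.0]), np.array([0.0, 0.0, 0.0]))
print(f)  # sent to the device each haptic frame (typically ~1 kHz)
```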

14 pages, 6597 KB  
Article
A Virtual Assembly Technology for Virtual–Real Fusion Interaction of Ship Structure Based on Three-Level Collision Detection
by Ze Jiang, Pengyu Wei, Yuntong Du, Jiayi Peng and Qingbo Zeng
J. Mar. Sci. Eng. 2024, 12(11), 1910; https://doi.org/10.3390/jmse12111910 - 25 Oct 2024
Cited by 1 | Viewed by 1085
Abstract
With the rapid advancement of new-generation information technology, virtual–real fusion interaction has increasingly become a crucial technique in structural analysis for determining the strength envelope of hulls, and high-precision assembly of the experimental devices in a virtual environment is vital. This paper proposes a virtual assembly method for structural virtual–real fusion tests based on the oriented bounding box (OBB) algorithm, the Devillers and Guigue algorithm, and a differential triangle-facet algorithm. An experiment on the connector of a typical offshore floating platform is performed as a case study, indicating that the proposed virtual assembly method enables assembly to be assessed virtually prior to the actual experiment, with an assembly accuracy of up to 0.01 mm. This advances the digitalization and virtual–real fusion interaction of mechanical experiments on hulls, ensuring efficiency, safety, and economy.
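For orientation, the broad-phase OBB stage of such a pipeline is commonly implemented with the separating axis theorem; a minimal sketch is given below. The Devillers and Guigue triangle-triangle test would then run only on pairs whose boxes overlap. This is a generic illustration, not the paper's code.

```python
# A minimal sketch of an OBB overlap test via the separating axis theorem
# (15 candidate axes for two boxes).
import numpy as np

def obb_overlap(c1, axes1, ext1, c2, axes2, ext2, eps=1e-9):
    """Centers c, 3x3 row-wise unit axes, and half-extents ext for each box."""
    candidates = list(axes1) + list(axes2) + [
        np.cross(a, b) for a in axes1 for b in axes2]
    t = c2 - c1
    for axis in candidates:
        n = np.linalg.norm(axis)
        if n < eps:            # parallel edge pair: degenerate axis, skip
            continue
        axis = axis / n
        r1 = sum(e * abs(axis @ a) for a, e in zip(axes1, ext1))
        r2 = sum(e * abs(axis @ a) for a, e in zip(axes2, ext2))
        if abs(t @ axis) > r1 + r2:
            return False       # found a separating axis
    return True                # no separating axis: boxes overlap

I = np.eye(3)
print(obb_overlap(np.zeros(3), I, [1, 1, 1], np.array([1.5, 0, 0]), I, [1, 1, 1]))
```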

26 pages, 72430 KB  
Article
Interactive Mesh Sculpting with Arbitrary Topologies in Head-Mounted VR Environments
by Xiaoqiang Zhu and Yifei Yang
Mathematics 2024, 12(15), 2428; https://doi.org/10.3390/math12152428 - 5 Aug 2024
Cited by 2 | Viewed by 2350
Abstract
Shape modeling is a dynamic area of computer graphics with significant applications in computer-aided design, animation, architecture, and entertainment. Virtual sculpting, a key paradigm in free-form modeling, has traditionally been performed on desktop computers, where users manipulate meshes with controllers and view the models on two-dimensional displays. However, the advent of Extended Reality (XR) technology has ushered in immersive interactive experiences, expanding the possibilities for virtual sculpting across various environments. This paper introduces a real-time virtual sculpting system implemented in a Virtual Reality (VR) setting, using quasi-uniform meshes as the foundational structure. The system is built on an integrated framework encompassing a surface selection algorithm, a mesh optimization technique, a mesh deformation strategy, and a topology fusion methodology, all tailored to the needs of the sculpting process. It offers universal, user-friendly sculpting tools that support free-form topology while ensuring that meshes remain watertight, manifold, and free from self-intersections throughout the sculpting process. The models produced are versatile and suitable for use in fields as diverse as gaming, art, and education. Experimental results confirm the system's real-time performance and universality, highlighting its user-centric design.
(This article belongs to the Section E1: Mathematics and Computer Science)
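To make the mesh deformation idea concrete, the sketch below implements one common sculpting primitive: displacing vertices near a brush with a smooth falloff. The paper's actual deformation and topology-maintenance steps are more involved; this is only an assumed illustration.

```python
# A minimal sketch of a "pull" sculpting brush: vertices within a radius are
# displaced along the brush direction with a smooth Gaussian falloff.
import numpy as np

def sculpt_pull(vertices, brush_pos, direction, radius=0.2, strength=0.05):
    """vertices: (N, 3) array; returns a displaced copy."""
    d = np.linalg.norm(vertices - brush_pos, axis=1)
    falloff = np.exp(-(d / radius) ** 2) * (d < radius)  # smooth, local
    return vertices + strength * falloff[:, None] * direction

verts = np.random.rand(1000, 3)
out = sculpt_pull(verts, brush_pos=np.array([0.5, 0.5, 0.5]),
                  direction=np.array([0.0, 0.0, 1.0]))
```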

19 pages, 47096 KB  
Article
Digital Twin System of Pest Management Driven by Data and Model Fusion
by Min Dai, Yutian Shen, Xiaoyin Li, Jingjing Liu, Shanwen Zhang and Hong Miao
Agriculture 2024, 14(7), 1099; https://doi.org/10.3390/agriculture14071099 - 9 Jul 2024
Cited by 5 | Viewed by 3309
Abstract
Protecting crops from pests is a major issue in current agricultural production. The agricultural digital twin system, an emerging product of modern agricultural development, can effectively enable intelligent control of pest management. In response to the heavy use of pesticides in pest management and the over-reliance on managers' personal experience with pepper plants, this paper proposes a digital twin system, driven by data and model fusion, that monitors changes in aphid populations and enables timely, effective pest control interventions. First, a digital twin framework is presented for managing insect pests across the whole crop growth process. Then, a digital twin model is established to predict pest numbers using a random forest algorithm optimized by a genetic algorithm; a pest control intervention based on a twin-data search strategy is designed, and decision optimization for pest management is conducted. Finally, a case study verifies the feasibility of the system for the growth state of pepper and pepper pests. The experimental results show that virtual-real interactive feedback is achieved in the pepper aphid management system: the model attains a prediction accuracy of 88.01% on the training set and 85.73% on the test set, and applying the prediction model to the decision-making objective function can improve economic efficiency by more than 20%. In addition, the proposed approach outperforms manual regulation in pest management. The system prioritizes detecting population trends over precise species identification, providing a practical tool for integrated pest management (IPM).
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
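As a toy illustration of tuning a random forest with a genetic algorithm, the sketch below evolves (n_estimators, max_depth) pairs against cross-validated R²; the data, gene encoding, and GA operators are simplified assumptions, not the paper's implementation.

```python
# A minimal sketch of GA-tuned random forest regression on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))                    # stand-in weather/crop features
y = 2 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=200)  # pest-count proxy

def fitness(genes):
    n_est, depth = genes
    model = RandomForestRegressor(n_estimators=n_est, max_depth=depth,
                                  random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Tiny GA loop: keep the fittest half, refill with mutated copies.
pop = [(int(rng.integers(20, 120)), int(rng.integers(2, 12))) for _ in range(8)]
for _ in range(5):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]
    pop = parents + [(max(10, n + int(rng.integers(-20, 21))),
                      max(2, d + int(rng.integers(-2, 3))))
                     for n, d in parents]
print("best (n_estimators, max_depth):", max(pop, key=fitness))
```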

35 pages, 13690 KB  
Article
An Audio-Based SLAM for Indoor Environments: A Robotic Mixed Reality Presentation
by Elfituri S. F. Lahemer and Ahmad Rad
Sensors 2024, 24(9), 2796; https://doi.org/10.3390/s24092796 - 27 Apr 2024
Cited by 2 | Viewed by 3108
Abstract
In this paper, we present a novel approach referred to as audio-based virtual-landmark HoloSLAM. The method leverages a single sound source and microphone arrays to estimate the voice-printed speaker's direction, allowing an autonomous robot equipped with a single microphone array to navigate indoor environments, interact with specific sound sources, and simultaneously determine its own location while mapping the environment. The proposed method requires neither multiple audio sources in the environment nor sensor fusion to extract pertinent information and make accurate sound source estimations. Furthermore, the approach incorporates Robotic Mixed Reality using the Microsoft HoloLens to superimpose landmarks, effectively mitigating the audio-landmark issues of conventional audio-based landmark SLAM, particularly in situations where audio landmarks cannot be discerned, are limited in number, or are missing entirely. The paper also evaluates an active speaker detection method, demonstrating high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of the method, emphasizing its precision and comprehensive mapping capabilities. The results showcase the accuracy and efficiency of the proposed system, surpassing the constraints of traditional audio-based SLAM techniques and ultimately yielding a more detailed and precise map of the robot's surroundings.
(This article belongs to the Section Navigation and Positioning)
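One standard building block for microphone-array direction estimation is GCC-PHAT time-delay estimation between two channels; a minimal sketch follows. The paper's full pipeline (voice-printing, landmark handling, HoloLens integration) is far more elaborate.

```python
# A minimal sketch of GCC-PHAT time-delay estimation between two microphones;
# the delay maps to a direction of arrival given the array geometry.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)  # phase transform weighting
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs  # delay in seconds

fs = 16000
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 440 * t)
delayed = np.roll(src, 8)          # simulate an 8-sample inter-mic delay
print(gcc_phat(delayed, src, fs))  # ~ 8 / 16000 = 0.0005 s
```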

19 pages, 4954 KB  
Article
Virtual Reality and Internet of Things Based Digital Twin for Smart City Cross-Domain Interoperability
by Guillermo del Campo, Edgar Saavedra, Luca Piovano, Francisco Luque and Asuncion Santamaria
Appl. Sci. 2024, 14(7), 2747; https://doi.org/10.3390/app14072747 - 25 Mar 2024
Cited by 23 | Viewed by 5049
Abstract
The fusion of Internet of Things (IoT), Digital Twin, and Virtual Reality (VR) technologies marks a pivotal advancement in urban development, offering new services to citizens and municipalities in urban environments. This integration promises enhanced urban planning, management, and engagement by providing a comprehensive, real-time digital reflection of the city, enriched with immersive experiences and interactive capabilities. It enables smarter decision-making, efficient resource management, and personalized citizen services, transforming the urban landscape into a more sustainable, livable, and responsive environment. The research presented here focuses on the practical implementation of a digital twin (DT) concept for managing cross-domain smart city services, leveraging VR technology to create a virtual replica of the urban environment and its IoT devices. Interoperability is imperative for cross-domain city services: it is crucial not only for the seamless operation of these advanced tools but also for unlocking the potential of cross-service applications. Through the deployment of our model at the IoTMADLab facilities, we showcase the integration of IoT devices within varied urban infrastructures. The outcomes demonstrate the efficacy of VR interfaces in simplifying complex interactions, offering pivotal insights into device functionality, and enabling informed decision-making processes.
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)
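As a sketch of the kind of publish/subscribe exchange that could keep a VR twin synchronized with city IoT devices, the snippet below uses MQTT (paho-mqtt, 1.x-style constructor); the broker, topics, and payloads are hypothetical, and the paper does not prescribe this protocol.

```python
# A minimal sketch, with hypothetical topics and payloads, of IoT state updates
# flowing into a VR digital twin and commands flowing back out.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"              # hypothetical city IoT broker

def on_message(client, userdata, msg):
    state = json.loads(msg.payload)        # e.g., {"lamp_id": 7, "level": 80}
    print(f"update VR replica from {msg.topic}: {state}")

client = mqtt.Client()                     # paho-mqtt 1.x-style constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("city/lighting/+/state")  # lighting domain feeds the twin
# A VR interaction can publish back, closing the loop across domains:
client.publish("city/lighting/7/cmd", json.dumps({"level": 50}))
client.loop_forever()
```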
