Search Results (7)

Search Parameters:
Keywords = haptic–visual synchronization

32 pages, 7175 KB  
Article
VisFactory: Adaptive Multimodal Digital Twin with Integrated Visual-Haptic-Auditory Analytics for Industry 4.0 Engineering Education
by Tsung-Ching Lin, Cheng-Nan Chiu, Po-Tong Wang and Li-Der Fang
Multimedia 2025, 1(1), 3; https://doi.org/10.3390/multimedia1010003 - 18 Aug 2025
Viewed by 744
Abstract
Industry 4.0 has intensified the skills gap in industrial automation education, with graduates requiring extended onboarding periods and supplementary training investments averaging USD 11,500 per engineer. This paper introduces VisFactory, a multimedia learning system that extends the cognitive theory of multimedia learning by incorporating haptic feedback as a third processing channel alongside visual and auditory modalities. The system integrates a digital twin architecture with ultra-low-latency synchronization (12.3 ms) across all sensory channels, a dynamic feedback orchestration algorithm that distributes information optimally across modalities, and a tripartite student model that continuously calibrates instruction parameters. We evaluated the system through a controlled experiment with 127 engineering students randomly assigned to experimental and control groups, with assessments conducted immediately and at three-month and six-month intervals. VisFactory significantly enhanced learning outcomes across multiple dimensions: a 37% reduction in time to mastery (t(125) = 11.83, p < 0.001, d = 2.11), skill acquisition increased from 28% to 85% (η_p² = 0.54), and 28% higher knowledge retention after six months. The multimodal approach demonstrated differential effectiveness across learning tasks, with haptic feedback providing the greatest benefit for procedural skills (52% error reduction) and visual–auditory integration proving most effective for conceptual understanding (49% improvement). The adaptive modality orchestration reduced cognitive load by 43% compared to unimodal interfaces. This research advances multimedia learning theory by validating tri-modal integration effectiveness and establishing quantitative benchmarks for sensory channel synchronization. The findings provide a theoretical framework and implementation guidelines for optimizing multimedia learning environments for complex skill development in technical domains.
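As a quick sanity check on the reported statistics, the effect size d = 2.11 can be recovered approximately from the t statistic, assuming the 127 students were split into two roughly equal groups (the abstract does not state the exact split):

```python
import math

# Reported in the abstract: t(125) = 11.83 for the time-to-mastery comparison.
t = 11.83
n1, n2 = 63, 64  # assumed near-equal split of the 127 participants

# Standard conversion from an independent-samples t statistic to Cohen's d.
d = t * math.sqrt(1 / n1 + 1 / n2)
print(f"Cohen's d ≈ {d:.2f}")  # ≈ 2.10, consistent with the reported d = 2.11
```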

17 pages, 852 KB  
Review
A Review of Multimodal Interaction in Remote Education: Technologies, Applications, and Challenges
by Yangmei Xie, Liuyi Yang, Miao Zhang, Sinan Chen and Jialong Li
Appl. Sci. 2025, 15(7), 3937; https://doi.org/10.3390/app15073937 - 3 Apr 2025
Cited by 3 | Viewed by 2877
Abstract
Multimodal interaction technology has become a key component of remote education, enriching student engagement and learning outcomes by using speech, gesture, and visual feedback as complementary sensory channels. This paper reviews the latest advances in multimodal interaction and its use in remote learning environments, offering a multi-layered discussion that addresses different levels of learning and understanding. It surveys the main enabling technologies, such as speech recognition, computer vision, and haptic feedback, that allow learners and learning platforms to exchange data fluidly. In addition, we examine the role of multimodal learning analytics in measuring students' cognitive and emotional states to personalize feedback and refine instructional strategies. Although multimodal interaction can substantially improve online education, it still faces many challenges, including media synchronization, higher computational demands, physical adaptability, and privacy concerns. These problems call for further research on algorithm optimization, accessible technology guidelines, and the ethical use of big data. This paper presents a systematic review of the application of multimodal interaction in remote education. Through the analysis of 25 selected research papers, this review explores key technologies, applications, and challenges in the field. By synthesizing existing findings, this study highlights the role of multimodal learning analytics, speech recognition, gesture-based interaction, and haptic feedback in enhancing remote learning.
(This article belongs to the Special Issue Current Status and Perspectives in Human–Computer Interaction)
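The media-synchronization challenge the review raises can be made concrete with a small sketch: buffer timestamped events from each modality stream and release them together only when their timestamps agree within a tolerance. This is an illustrative pattern under assumed names (ModalityAligner, tolerance_ms); the review does not prescribe an implementation.

```python
import heapq
import itertools

class ModalityAligner:
    """Buffer events from several modality streams and emit them in
    aligned groups (an illustrative sketch, not from the review)."""

    def __init__(self, streams, tolerance_ms=50):
        self.tolerance = tolerance_ms
        self.queues = {s: [] for s in streams}  # min-heaps keyed by timestamp
        self._seq = itertools.count()           # tie-breaker for equal timestamps

    def push(self, stream, timestamp_ms, payload):
        heapq.heappush(self.queues[stream], (timestamp_ms, next(self._seq), payload))

    def pop_aligned(self):
        """Return one event per stream if all heads agree within tolerance,
        else drop the most stale head so the lagging stream can catch up."""
        if any(not q for q in self.queues.values()):
            return None  # some stream has no pending event yet
        heads = {s: q[0][0] for s, q in self.queues.items()}
        if max(heads.values()) - min(heads.values()) <= self.tolerance:
            return {s: heapq.heappop(q)[2] for s, q in self.queues.items()}
        heapq.heappop(self.queues[min(heads, key=heads.get)])
        return None
```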

29 pages, 6970 KB  
Review
Advancements in Smart Wearable Mobility Aids for Visual Impairments: A Bibliometric Narrative Review
by Xiaochen Zhang, Xiaoyu Huang, Yiran Ding, Liumei Long, Wujing Li and Xing Xu
Sensors 2024, 24(24), 7986; https://doi.org/10.3390/s24247986 - 14 Dec 2024
Cited by 6 | Viewed by 6564
Abstract
Research into new solutions for wearable assistive devices for the visually impaired is an important area of assistive technology (AT). Such devices play a crucial role in improving the functionality and independence of the visually impaired, helping them participate fully in their daily lives and in various community activities. This study presents a bibliometric analysis of the literature published over the last decade on wearable assistive devices for the visually impaired, retrieved from the Web of Science Core Collection (WoSCC) using CiteSpace, to provide an overview of the current state of research, trends, and hotspots in the field. The narrative focuses on prominent recent innovations in wearable assistive devices for the visually impaired based on sensory substitution technology, describing the latest achievements in haptic and auditory feedback devices, the application of smart materials, and the growing tension between individual interests and societal needs. It also summarises the opportunities and challenges currently facing the field and discusses the following insights and trends: (1) optimizing the transmission of haptic and auditory information during multitasking; (2) advancing research on smart materials and fostering cross-disciplinary collaboration among experts; and (3) balancing the interests of individuals and society. Two essential directions emerge: the low-cost, stand-alone pursuit of efficiency, and the high-cost pursuit of high-quality services closely integrated with accessible infrastructure. Along both, the latest advances will gradually allow more freedom in ambient assisted living through robotics and automated machines, with sensors and human–machine interaction serving as bridges that synchronize machine intelligence with human cognition.
(This article belongs to the Section Wearables)
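As an illustration of the sensory-substitution principle these devices share, one common pattern maps obstacle distance from a rangefinder to vibration intensity. The sketch below is a generic example with assumed parameters (max_range_m, pwm_max), not a device from the review.

```python
def distance_to_vibration(distance_m, max_range_m=3.0, pwm_max=255):
    """Map obstacle distance to a vibration-motor PWM duty value:
    closer obstacles vibrate harder, and beyond max_range_m the
    motor stays off. A generic sensory-substitution mapping."""
    if distance_m >= max_range_m:
        return 0
    proximity = 1.0 - max(distance_m, 0.0) / max_range_m
    return round(pwm_max * proximity)

# An obstacle 0.5 m away drives the motor at ~83% duty.
print(distance_to_vibration(0.5))  # -> 212
```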

14 pages, 3795 KB  
Article
A Timestamp-Independent Haptic–Visual Synchronization Method for Haptic-Based Interaction System
by Yiwen Xu, Liangtao Huang, Tiesong Zhao, Ying Fang and Liqun Lin
Sensors 2022, 22(15), 5502; https://doi.org/10.3390/s22155502 - 23 Jul 2022
Cited by 4 | Viewed by 2738
Abstract
The rapid growth of haptic data has significantly improved users' immersion in multimedia interaction. As a result, the study of haptic-based interaction systems has attracted the attention of the multimedia community. In constructing such a system, a challenging task is the synchronization of multiple sensory signals, which is critical to the user experience. Despite audio-visual synchronization efforts, there is still a lack of a haptic-aware multimedia synchronization model. In this work, we propose a timestamp-independent synchronization method for haptic–visual signal transmission. First, we exploit the sequential correlations during delivery and playback in a haptic–visual communication system. Second, we develop key-sample extraction for haptic signals based on force-feedback characteristics and key-frame extraction for visual signals based on deep object detection. Third, we match the key samples and key frames to synchronize the corresponding haptic–visual signals. Even without timestamps in the signal flow, the proposed method remains effective and is more robust under complicated network conditions. Subjective evaluation also shows a significant improvement in user experience with the proposed method.
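The three-step pipeline suggests the following shape, sketched here with a simple peak detector standing in for the paper's force-feedback criterion and with key-frame times assumed to come from an object detector; function names and thresholds are illustrative.

```python
import numpy as np

def key_samples_from_force(force, threshold_ratio=0.6):
    """Pick local maxima of |force| above a threshold as key samples
    (a simplified stand-in for the paper's extraction rule)."""
    mag = np.abs(np.asarray(force, dtype=float))
    thresh = threshold_ratio * mag.max()
    peaks = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]) & (mag[1:-1] > thresh)
    return np.where(peaks)[0] + 1  # indices into the haptic stream

def estimate_offset(key_sample_times, key_frame_times):
    """Estimate the haptic-visual offset as the median gap between each
    key sample and its nearest key frame; no timestamps are carried in
    the streams themselves, only indices converted to local time."""
    gaps = [min(key_frame_times, key=lambda t: abs(t - ks)) - ks
            for ks in key_sample_times]
    return float(np.median(gaps))
```

Playback would then delay whichever stream leads by the estimated offset.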

27 pages, 6298 KB  
Article
Haptic Glove TV Device for People with Visual Impairment
by Diego Villamarín and José Manuel Menéndez
Sensors 2021, 21(7), 2325; https://doi.org/10.3390/s21072325 - 26 Mar 2021
Cited by 8 | Viewed by 5780
Abstract
Immersive video is changing the way we enjoy TV. It is no longer just about receiving sequential images with audio, but also about engaging the other human senses through smells, motion vibrations, 3D audio, and sensations such as water, wind, and heat. This work aims to validate the usefulness of an immersive and interactive solution for people with severe visual impairment by developing a haptic glove that receives signals and generates vibrations in the hand, informing the wearer about what happens in a scene. The case study presented here shows how the haptic device can take information about the ball's location on the playing field, synchronized with the video reception, and deliver it to the user as vibrations during the retransmission of a soccer match. In this way, we offer visually impaired people a new sensory experience, enabling digital and social inclusion and access to audiovisual technologies they could not enjoy before. This work presents the methodology used for the design and implementation, and the evaluation of the results. Usability tests were carried out with fifteen visually impaired people who used the haptic device to follow a soccer match synchronized with the glove's vibrations.
(This article belongs to the Collection Computer Vision Based Smart Sensing)
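One plausible way to turn the synchronized ball position into glove vibrations is to map pitch coordinates onto a small grid of motors over the palm. The 4×3 motor layout and pitch dimensions below are assumptions for illustration; the abstract does not specify them.

```python
def field_to_motor(ball_x, ball_y, field_w=105.0, field_h=68.0,
                   grid_cols=4, grid_rows=3):
    """Map a ball position on the pitch (meters) to the index of one
    vibration motor in a row-major grid on the palm. Layout assumed."""
    col = min(int(ball_x / field_w * grid_cols), grid_cols - 1)
    row = min(int(ball_y / field_h * grid_rows), grid_rows - 1)
    return row * grid_cols + col

# Ball at the center of the pitch -> a central motor.
print(field_to_motor(52.5, 34.0))  # -> 6
```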

18 pages, 1188 KB  
Article
A Pneumatically-Actuated Mouse for Delivering Multimodal Haptic Feedback
by Waseem Hassan, Hwangil Kim, Aishwari Talhan and Seokhee Jeon
Appl. Sci. 2020, 10(16), 5611; https://doi.org/10.3390/app10165611 - 13 Aug 2020
Cited by 4 | Viewed by 3993
Abstract
Most of the information a user obtains through a computer is visual and/or auditory. Providing synchronized haptic information in addition to visual and/or auditory information can significantly enhance user experience and the perception of virtual objects. In this paper, we propose a pneumatically controlled haptic mouse that can replace a conventional computer mouse and deliver multimodal haptic feedback using a single end-effector. The haptic mouse can deliver distinct kinds of haptic feedback: static pressure, high-frequency vibrations, and impact responses. It has a dual-layered silicone housing with two air chambers. The outer layer is stretchable and, when pumped with air, changes in size and delivers feedback directly to the hand. The inner layer is non-stretchable and holds the form of the haptic mouse. Various experiments were conducted to quantify the characteristics of the haptic mouse: it can generate a static pressure of up to 0.6 Gs, vibrations up to 250 Hz, and has a maximum actuation delay of 23 ms. Based on these characteristics, haptic geometry and texture rendering algorithms were developed. These algorithms were used to render virtual shapes and textures and were evaluated in a psychophysical experiment. The results show that participants were able to successfully identify the geometries and textures in most cases.
(This article belongs to the Special Issue Haptics: Technology and Applications)
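The reported characteristics (static pressure, vibrations up to 250 Hz, 23 ms actuation delay) suggest a rendering loop that composes a slow pressure term for geometry with a vibration term for texture. This is a hedged sketch with made-up gains, not the authors' algorithm.

```python
import math

def render_pressure(surface_height, texture_freq_hz, t,
                    pressure_gain=0.6, vib_amp=0.15, max_freq_hz=250.0):
    """Compose a chamber-pressure command from a geometry term (static
    pressure proportional to the surface height under the cursor) and a
    texture term (a sinusoid capped at the device's 250 Hz limit).
    Gains and the additive composition are illustrative assumptions."""
    freq = min(texture_freq_hz, max_freq_hz)
    static = pressure_gain * surface_height                  # geometry
    vibration = vib_amp * math.sin(2 * math.pi * freq * t)   # texture
    return static + vibration
```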

15 pages, 8085 KB  
Article
Visuo-Haptic Mixed Reality Simulation Using Unbound Handheld Tools
by Mehmet Murat Aygün, Yusuf Çağrı Öğüt, Hulusi Baysal and Yiğit Taşcıoğlu
Appl. Sci. 2020, 10(15), 5344; https://doi.org/10.3390/app10155344 - 3 Aug 2020
Cited by 8 | Viewed by 5930
Abstract
Visuo-haptic mixed reality (VHMR) adds virtual objects to a real scene and enables users to see and also touch them via a see-through display and a haptic device. Most studies with kinesthetic feedback use general-purpose haptic devices, which require the user to continuously hold an attached stylus. This approach constrains users to the mechanical limits of the device even when haptic feedback is not needed. In this paper, we propose a novel VHMR concept with an encountered-type haptic display (ETHD), which consists of a precision hexapod positioner and a six-axis force/torque transducer. The main contribution is that users work with unbound, real-life tools fitted with tracking markers. The ETHD's end-effector remains inside the virtual object and follows the tooltip, engaging only during an interaction. We have developed a simulation setup and experimentally evaluated the relative accuracy and synchronization of the three major processes, namely tool tracking, haptic rendering, and visual rendering. The experiments build up to a simple simulation scenario in which a tennis ball with a fixed center is deformed by the user.
(This article belongs to the Special Issue Haptics: Technology and Applications)
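The encountered-type behavior described, where the end-effector shadows the tooltip inside the virtual object and presents a surface only on contact, can be sketched as follows. The sphere geometry matches the tennis-ball scenario, but the margin and pose logic are assumptions.

```python
import numpy as np

def ethd_target(tool_tip, center, radius, approach_margin=0.01):
    """Return (target point, engaged) for an encountered-type display.

    While the tracked tooltip is outside the virtual sphere, the
    end-effector waits just beneath the nearest surface point; once
    the tool penetrates, it holds the surface so the user feels
    contact. Sphere geometry and margin are illustrative."""
    tool_tip = np.asarray(tool_tip, dtype=float)
    center = np.asarray(center, dtype=float)
    offset = tool_tip - center
    dist = max(np.linalg.norm(offset), 1e-9)  # guard the degenerate case
    normal = offset / dist
    surface_point = center + normal * radius
    if dist >= radius:                         # not yet in contact
        return surface_point - normal * approach_margin, False
    return surface_point, True                 # engaged: present the surface
```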
