3D User Interfaces and Virtual Reality

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (20 April 2024) | Viewed by 36701

Special Issue Editors


Guest Editor
School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70503, USA
Interests: human–computer interaction (HCI); 3D user interfaces; 3D interfaces for video games; virtual reality

Guest Editor
School of Computing, University of North Florida, Jacksonville, FL 32224, USA
Interests: human–computer interaction (HCI); virtual reality; telepresence; perception of virtual environments

Special Issue Information

Dear Colleagues,

This Special Issue explores methods, technologies, and studies of 3D user interfaces (3DUIs) and virtual reality (VR) in the broad area of human–computer interaction (HCI). HCI is a multidisciplinary field in which researchers study the interface between people and computers, including how people interact with computers, how people interact with each other using computer-mediated communication, and to what extent an interface promotes successful interaction based on user needs. Modern 3D user interfaces can involve motion-tracked input devices, interactions in 3D, or other interfaces in which a 3D input or environment is a characteristic. Like HCI, 3DUI and VR research lies at the intersection of computer science, behavioral science, design, media studies, and several other fields of study. This Special Issue invites contributions on the technological, creative, perceptual, cognitive, social, and health aspects of 3DUI and VR.

We encourage authors to submit original research articles, novel case studies, insightful reviews, theoretical and critical perspectives, and well-argued viewpoint articles on 3D user interfaces and virtual reality, including but not limited to the following:

  • 3D input and sensing technologies;
  • 3D interaction and metaphors;
  • 3D computer-mediated communication;
  • 3D gestural input;
  • 3D interaction techniques;
  • Empirical studies of 3DUIs;
  • Novel software architectures for 3DUI;
  • Interfaces for VR, AR, or other 3D computer environments;
  • Evaluation methods for 3DUIs;
  • Human perception of 3D interaction;
  • Novel applications of 3DUIs: games, entertainment, CAD, education, etc.;
  • Mobile and desktop 3DUIs;
  • Hybrid 3DUIs;
  • VR navigation;
  • AR/VR selection and manipulation.

Of particular interest are articles that critically explore 3D user interface methods in contexts such as games, education, and virtual reality.

Dr. Arun K. Kulshreshth
Dr. Kevin Pfeil
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D user interfaces (3DUIs)
  • interfaces
  • 3D interactions
  • spatial user interaction
  • virtual reality (VR)
  • augmented reality (AR)
  • mixed reality (MR)
  • human–computer interaction (HCI)
  • user studies

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research


15 pages, 13291 KiB  
Article
3D Hand Motion Generation for VR Interactions Using a Haptic Data Glove
by Sang-Woo Seo, Woo-Sug Jung and Yejin Kim
Multimodal Technol. Interact. 2024, 8(7), 62; https://doi.org/10.3390/mti8070062 - 15 Jul 2024
Cited by 1 | Viewed by 1316
Abstract
Recently, VR-based training applications have become popular and promising, as they can simulate real-world situations in a safe, repeatable, and cost-effective way. For immersive simulations, various input devices have been designed and proposed to increase the effectiveness of training. In this study, we developed a novel device that generates 3D hand motion data and provides haptic force feedback for VR interactions. The proposed device can track 3D hand positions using a combination of the global position estimation of ultrasonic sensors and the hand pose estimation of inertial sensors in real time. For haptic feedback, shape–memory alloy (SMA) actuators were designed to provide kinesthetic forces and an efficient power control without an overheat problem. Our device improves upon the shortcomings of existing commercial devices in tracking and haptic capabilities such that it can track global 3D positions and estimate hand poses in a VR space without using an external suit or tracker. For better flexibility in handling and feeling physical objects compared to exoskeleton-based devices, we introduced an SMA-based actuator to control haptic forces. Overall, our device was designed and implemented as a lighter and less bulky glove which provides comparable accuracy and performance in generating 3D hand motion data for a VR training application (i.e., the use of a fire extinguisher), as demonstrated in the experimental results. Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)

24 pages, 13651 KiB  
Article
Sound of the Police—Virtual Reality Training for Police Communication for High-Stress Operations
by Markus Murtinger, Jakob Carl Uhl, Lisa Maria Atzmüller, Georg Regal and Michael Roither
Multimodal Technol. Interact. 2024, 8(6), 46; https://doi.org/10.3390/mti8060046 - 4 Jun 2024
Viewed by 4423
Abstract
Police communication is a field with unique challenges and specific requirements. Police officers depend on effective communication, particularly in high-stress operations, but current training methods are not focused on communication and provide only limited evaluation methods. This work explores the potential of virtual reality (VR) for enhancing police communication training. The rise of VR training, especially in specific application areas like policing, provides benefits. We conducted a field study during police training to assess VR approaches for training communication. The results show that VR is suitable for communication training if factors such as realism, reflection and repetition are given in the VR system. Trainer feedback shows that assistive systems for evaluation and visualization of communication are highly needed. We present ideas and approaches for evaluation in communication training and concepts for visualization and exploration of the data. This research contributes to improving VR police training and has implications for communication training in VR in challenging contexts. Full article

22 pages, 1360 KiB  
Article
How New Developers Approach Augmented Reality Development Using Simplified Creation Tools: An Observational Study
by Narges Ashtari and Parmit K. Chilana
Multimodal Technol. Interact. 2024, 8(4), 35; https://doi.org/10.3390/mti8040035 - 22 Apr 2024
Viewed by 1788
Abstract
Software developers new to creating Augmented Reality (AR) experiences often gravitate towards simplified development environments, such as 3D game engines. While popular game engines such as Unity and Unreal have evolved to offer extensive support and functionalities for AR creation, many developers still find it difficult to realize their immersive development projects. We ran an observational study with 12 software developers to assess how they approach the initial AR creation processes using a simplified development framework, the information resources they seek, and how their learning experience compares to the more mainstream 2D development. We observed that developers often started by looking for code examples rather than breaking down complex problems, leading to challenges in visualizing the AR experience. They encountered vocabulary issues and found trial-and-error methods ineffective due to a lack of familiarity with 3D environments, physics, and motion. These observations highlight the distinct needs of emerging AR developers and suggest that conventional code reuse strategies in mainstream development may be less effective in AR. We discuss the importance of developing more intuitive training and learning methods to foster diversity in developing interactive systems and support self-taught learners. Full article

15 pages, 2890 KiB  
Article
Design and Evaluation of a Memory-Recalling Virtual Reality Application for Elderly Users
by Zoe Anastasiadou, Eleni Dimitriadou and Andreas Lanitis
Multimodal Technol. Interact. 2024, 8(3), 24; https://doi.org/10.3390/mti8030024 - 21 Mar 2024
Viewed by 1739
Abstract
Virtual reality (VR) can be useful in efforts that aim to improve the well-being of older members of society. Within this context, the work presented in this paper aims to provide the elderly with a user-friendly and enjoyable virtual reality application incorporating memory recall and storytelling activities that could promote mental awareness. An important aspect of the proposed VR application is the presence of a virtual audience that listens to the stories presented by elderly users and interacts with them. In an effort to maximize the impact of the VR application, research was conducted to study whether the elderly are willing to use the VR application and whether they believe it can help to improve well-being and reduce the effects of loneliness and social isolation. Self-reported results related to the experience of the users show that elderly users are positive towards the use of such an application in everyday life as a means of improving their overall well-being. Full article

18 pages, 2780 KiB  
Article
Enhancing Calculus Learning through Interactive VR and AR Technologies: A Study on Immersive Educational Tools
by Logan Pinter and Mohammad Faridul Haque Siddiqui
Multimodal Technol. Interact. 2024, 8(3), 19; https://doi.org/10.3390/mti8030019 - 1 Mar 2024
Cited by 2 | Viewed by 2102
Abstract
In the realm of collegiate education, calculus can be quite challenging for students. Many students struggle to visualize abstract concepts, as mathematics often moves into strict arithmetic rather than geometric understanding. Our study presents an innovative solution to this problem: an immersive, interactive VR graphing tool capable of standard 2D graphs, solids of revolution, and a series of visualizations deemed potentially useful to struggling students. This tool was developed within the Unity 3D engine, and while interaction and expression parsing rely on existing libraries, core functionalities were developed independently. As a pilot study, it includes qualitative information from a survey of students currently or previously enrolled in Calculus II/III courses, revealing its potential effectiveness. This survey primarily aims to determine the tool’s viability in future endeavors. The positive response suggests the tool’s immediate usefulness and its promising future in educational settings, prompting further exploration and consideration for adaptation into an Augmented Reality (AR) environment. Full article

21 pages, 25063 KiB  
Article
Substitute Buttons: Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies
by Bram van Deurzen, Gustavo Alberto Rovelo Ruiz, Daniël M. Bot, Davy Vanacken and Kris Luyten
Multimodal Technol. Interact. 2024, 8(3), 15; https://doi.org/10.3390/mti8030015 - 20 Feb 2024
Viewed by 2137
Abstract
Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and their transferability to virtual counterparts. This research investigates tactile perception concerning button attributes such as shape, size, and roundness and their potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and provided a response to which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating tactile experiences across various buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting the potential use of the substitute buttons as haptic proxies for applications such as encountered-type haptics. Full article

28 pages, 800 KiB  
Article
Asymmetric VR Game Subgenres: Implications for Analysis and Design
by Miah Dawes, Katherine Rackliffe, Amanda Lee Hughes and Derek L. Hansen
Multimodal Technol. Interact. 2024, 8(2), 12; https://doi.org/10.3390/mti8020012 - 11 Feb 2024
Cited by 1 | Viewed by 2289
Abstract
This paper identifies subgenres of asymmetric virtual reality (AVR) games and proposes the AVR Game Genre (AVRGG) framework for developing AVR games. We examined 66 games “in the wild” to develop the AVRGG and used it to identify 5 subgenres of AVR games including David(s) vs. Goliath, Hide and Seek, Perspective Puzzle, Order Simulation, and Lifeline. We describe these genres, which account for nearly half of the 66 games reviewed, in terms of the AVRGG framework that highlights salient asymmetries in the mechanics, dynamics, and aesthetics categories. To evaluate the usefulness of the AVRGG framework, we conducted four workshops (two with the AVRGG framework and two without) with novice game designers who generated 16 original AVR game concepts. Comparisons between the workshop groups, observations of the design sessions, focus groups, and surveys showed the promise and limitations of the AVRGG framework as a design tool. We found that novice designers were able to understand and apply the AVRGG framework after only a brief introduction. The observations indicated two primary challenges that AVR designers face: balancing the game between VR and non-VR player(s) and generating original game concepts. The AVRGG framework helped overcome the balancing concerns due to its ability to inspire novice game designers with example subgenres and draw attention to the asymmetric mechanics and competitive/cooperative nature of games. While half of those who designed with the AVRGG framework created games that fit directly into existing subgenres, the other half viewed the subgenres as “creative constraints” useful in jumpstarting novel game designs that combined, modified, or purposefully avoided existing subgenres. Additional benefits and limitations of the AVRGG framework are outlined in the paper. Full article

39 pages, 2355 KiB  
Article
A Comparison of One- and Two-Handed Gesture User Interfaces in Virtual Reality—A Task-Based Approach
by Taneli Nyyssönen, Seppo Helle, Teijo Lehtonen and Jouni Smed
Multimodal Technol. Interact. 2024, 8(2), 10; https://doi.org/10.3390/mti8020010 - 2 Feb 2024
Cited by 1 | Viewed by 2504
Abstract
This paper presents two gesture-based user interfaces which were designed for a 3D design review in virtual reality (VR) with inspiration drawn from the shipbuilding industry’s need to streamline and make their processes more sustainable. The user interfaces, one focusing on single-hand (unimanual) gestures and the other focusing on dual-handed (bimanual) usage, are tested as a case study using 13 tasks. The unimanual approach attempts to provide a higher degree of flexibility, while the bimanual approach seeks to provide more control over the interaction. The interfaces were developed for the Meta Quest 2 VR headset using the Unity game engine. Hand-tracking (HT) is utilized due to potential usability benefits in comparison to standard controller-based user interfaces, which lack intuitiveness regarding the controls and can cause more strain. The user interfaces were tested with 25 test users, and the results indicate a preference toward the one-handed user interface with little variation in test user categories. Additionally, the testing order, which was counterbalanced, had a statistically significant impact on the preference and performance, indicating that learning novel interaction mechanisms requires an adjustment period for reliable results. VR sickness was also strongly experienced by a few users, and there were no signs that gesture controls would significantly alleviate it. Full article

23 pages, 2722 KiB  
Article
Virtual Reality Assessment of Attention Deficits in Traumatic Brain Injury: Effectiveness and Ecological Validity
by Amaryllis-Chryssi Malegiannaki, Evangelia Garefalaki, Nikolaos Pellas and Mary H. Kosmidis
Multimodal Technol. Interact. 2024, 8(1), 3; https://doi.org/10.3390/mti8010003 - 3 Jan 2024
Viewed by 2988
Abstract
Early detection is crucial for addressing attention deficits commonly associated with traumatic brain injury (TBI), informing effective rehabilitation planning and intervention. While traditional neuropsychological assessments have been conventionally used to evaluate attention deficits, their limited ecological validity presents notable challenges. This study explores the efficacy and validity of a novel virtual reality test, the Computerized Battery for the Assessment of Attention Disorders (CBAAD), among a cohort of TBI survivors (n = 20), in comparison to a healthy control group (n = 20). Participants, ranging in age from 21 to 62 years, were administered a comprehensive neuropsychological assessment, including the CBAAD and the Attention Related Cognitive Errors Scale. While variations in attentional performance were observed across age cohorts, the study found no statistically significant age-related effects within either group. Regression analyses demonstrated the CBAAD’s effectiveness in predicting real-life attentional errors reported by TBI patients. In summary, the CBAAD demonstrates sensitivity to attentional dysfunction in TBI patients and the ability to predict real-world attentional errors, establishing its value as a comprehensive test battery for assessing attention in this specific population. Its implementation holds promise for enhancing the early identification of attentional impairments and facilitating tailored rehabilitation strategies for TBI patients. Full article

23 pages, 13982 KiB  
Article
Authoring Moving Parts of Objects in AR, VR and the Desktop
by Andrés N. Vargas González, Brian Williamson and Joseph J. LaViola, Jr.
Multimodal Technol. Interact. 2023, 7(12), 117; https://doi.org/10.3390/mti7120117 - 13 Dec 2023
Viewed by 2125
Abstract
Creating digital twins of real objects is becoming more popular, with smartphones providing 3D scanning capabilities. Adding semantics to the reconstructed virtual objects is important for reproducing training scenarios that otherwise could demand significant resources or, in some cases, take place in dangerous situations. The aim of this work is to evaluate the usability of authoring object component behaviors in immersive and non-immersive approaches. Therefore, we present an evaluation of the perceived ease of use to author moving parts of objects under three different conditions: desktop, augmented reality (AR) and virtual reality (VR). This paper provides insights into the perceived benefits and issues that domain experts might encounter when authoring geometrical component behaviors across each interface. A within-subject study is the major contribution of this work, from which an analysis is presented based on the usability, workload, and user interface preferences of participants in the study. To reduce confounding variables in the study, we ensured that the virtual objects and the environment used for the evaluation were digital twins of the real objects and the environment that the experiment took place in. Results show that the desktop interface was perceived as more efficient and easier to use based on usability and workload measures. The desktop was preferred for performing component selection but no difference was found in the preference for defining a behavior and visualizing it. Based on these results, a set of recommendations and future directions are provided to achieve a more usable, immersive authoring experience. Full article

29 pages, 35261 KiB  
Article
Identifying Strategies to Mitigate Cybersickness in Virtual Reality Induced by Flying with an Interactive Travel Interface
by Daniel Page, Robert W. Lindeman and Stephan Lukosch
Multimodal Technol. Interact. 2023, 7(5), 47; https://doi.org/10.3390/mti7050047 - 28 Apr 2023
Cited by 3 | Viewed by 3869
Abstract
As Virtual Reality (VR) technology has improved in hardware, development accessibility, and application availability, interest in it has increased. However, the problem of Cybersickness (CS) remains, causing uncomfortable symptoms in users. Therefore, this research seeks to identify and understand new CS mitigation strategies that can contribute to developer guidelines. Three hypotheses for strategies were devised and tested in an experiment. This involved a physical travel interface for flying through a Virtual Environment (VE) as a Control (CT) condition. On top of this, three manipulation conditions referred to as Gaze-tracking Vignette (GV), First-person Perspective with members representation (FP) and Fans and Vibration (FV) were applied. The experiment was between subjects, with 37 participants randomly allocated across conditions. According to the Simulator Sickness Questionnaire (SSQ) scores, significant evidence was found that GV and FP made CS worse. Evidence was also found that FV did not have an effect on CS. However, from the physiological data recorded, an overall lowering of heart rate for FV indicated that it might have some effect on the experience, but cannot be strongly linked with CS. Additionally, comments from some participants identified that they experienced symptoms consistent with CS. Amongst these, dizziness was the most common, with a few having issues with the usability of the travel interface. Despite some CS symptoms, most participants reported little negative impact of CS on the overall experience and feelings of immersion. Full article

16 pages, 3288 KiB  
Article
Velocity-Oriented Dynamic Control–Display Gain for Kinesthetic Interaction with a Grounded Force-Feedback Device
by Zhenxing Li, Jari Kangas and Roope Raisamo
Multimodal Technol. Interact. 2023, 7(2), 12; https://doi.org/10.3390/mti7020012 - 28 Jan 2023
Viewed by 1706
Abstract
Kinesthetic interaction is an important interaction method for virtual reality. Current kinesthetic interaction using a grounded force-feedback device, however, is still considered difficult and time-consuming because of the interaction difficulty in a three-dimensional space. Velocity-oriented dynamic control–display (CD) gain has been used to improve user task performance with pointing devices, such as the mouse. In this study, we extended the application of this technique to kinesthetic interaction and examined its effects on interaction speed, positioning accuracy and touch perception. The results showed that using this technique could improve interaction speed without affecting positioning accuracy in kinesthetic interaction. Velocity-oriented dynamic CD gain could negatively affect touch perception in softness while using large gains. However, it is promising and particularly suitable for kinesthetic tasks that do not require high accuracy in touch perception. Full article

14 pages, 2931 KiB  
Article
Effect of Arm Pivot Joints on Stiffness Discrimination in Haptic Environments
by Khandaker Nusaiba Hafiz and Ernur Karadoğan
Multimodal Technol. Interact. 2022, 6(11), 98; https://doi.org/10.3390/mti6110098 - 10 Nov 2022
Viewed by 1648
Abstract
We investigated the effect of arm pivot joints that are typically used during haptic exploration by evaluating four joints of the human arm (metacarpophalangeal joint of the index finger, wrist, elbow, and shoulder joints). Using a virtual stiffness discrimination task, a four-session psychophysical experiment was conducted with 38 participants (25 male and 13 female); each session was conducted with one of the four joints as the pivot joint during haptic exploration. The participants were asked to judge the stiffness of the top surface of two computer-generated cylinders by determining the stiffer one while using their dominant hand’s index finger. A two-alternative forced-choice procedure was employed by assigning one cylinder a constant stiffness value of 1.0 N/mm (standard side) and the remaining cylinder a variable stiffness value (comparison side). Using a custom-made stylus for the Geomagic TouchTM (3D Systems, Inc., Rock Hill, SC, USA) haptic interface, the participants were able to feel the stiffness of these virtual surfaces only with their index fingers. It was observed that the average Weber fraction monotonically decreased as the pivot joint shifted toward the torso (i.e., a shift from the metacarpophalangeal joint to the shoulder joint); this decrease was not statistically significant, which suggests that the selection of the pivot joint was not a determining factor for the sensitivity to discriminate stiffness. In general, the palpation speed and force exerted by the participants on the standard side during the haptic exploration showed a tendency to increase when the pivot joint shifted toward the torso; the difference in average palpation speed and force across the pivot joints was not statistically significant. Full article

Review


24 pages, 582 KiB  
Review
The Good News, the Bad News, and the Ugly Truth: A Review on the 3D Interaction of Light Field Displays
by Peter A. Kara and Aniko Simon
Multimodal Technol. Interact. 2023, 7(5), 45; https://doi.org/10.3390/mti7050045 - 27 Apr 2023
Cited by 6 | Viewed by 3508
Abstract
Light field displays offer glasses-free 3D visualization, which means that multiple individuals may observe the same content simultaneously from a virtually infinite number of perspectives without the need for viewing devices. The practical utilization of such visualization systems includes various passive and active use cases. In the case of the latter, users often engage with the utilized system via human–computer interaction. Beyond conventional controls and interfaces, it is also possible to use advanced solutions such as motion tracking, which may seem seamless and highly convenient when paired with glasses-free 3D visualization. However, such solutions may not necessarily outperform conventional controls, and their true potential may fundamentally depend on the use case in which they are deployed. In this paper, we provide a review on the 3D interaction of light field displays. Our work takes into consideration the different requirements posed by passive and active use cases, discusses the numerous challenges, limitations and potentials, and proposes research initiatives that could progress the investigated field of science. Full article
