3D Human–Computer Interaction

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (30 December 2020) | Viewed by 43106

Special Issue Editors


Dr. Christoph W. Borst
Guest Editor
Center for Advanced Computer Studies, University of Louisiana at Lafayette, P.O. Box 44330, Lafayette, LA 70504, USA
Interests: 3D interaction; virtual reality; graphics; haptics; scientific visualization

Dr. Arun K. Kulshreshth
Guest Editor
School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70503, USA
Interests: Human–Computer Interaction (HCI); 3D user interfaces; 3D interfaces for video games; virtual reality

Special Issue Information

Dear Colleagues,

This Special Issue explores methods, technologies, and studies of 3D interaction in the broad area of Human–Computer Interaction (HCI). HCI studies the interface between people and computers. 3D user interfaces (3DUIs) can involve input devices that track user movements in 3D, techniques for interacting with virtual or augmented reality, or other interfaces characterized by a 3D arrangement of inputs or environments. Like HCI, 3DUI research lies at the intersection of computer science, the behavioral sciences, design, media studies, and several other fields. This Special Issue invites contributions on the technological, creative, perceptual, cognitive, social, and health aspects of 3DUIs.

We encourage authors to submit original research articles, novel case studies, insightful reviews, theoretical and critical perspectives, and well-argued viewpoint articles on 3D Human–Computer Interaction, including but not limited to the following:

  • 3D interaction techniques and metaphors
  • 3D input and sensing technologies
  • 3D feedback for any senses (visual, auditory, haptic, olfactory, gustatory, vestibular)
  • Empirical studies of 3DUIs
  • Novel software architectures for 3DUI
  • Collaborative interfaces for VR, AR, or other 3D computer environments
  • Evaluation methods for 3DUIs
  • Human perception of 3D interaction
  • Novel applications of 3DUIs: games, entertainment, CAD, education, etc.
  • Mobile 3DUIs
  • Hybrid 3DUIs
  • Desktop 3DUIs

Of particular interest are articles that critically explore 3D Human–Computer Interaction methods in contexts such as virtual reality, augmented reality, and mobile/wearable devices.

Dr. Christoph W. Borst
Dr. Arun K. Kulshreshth
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D User Interfaces (3DUI)
  • Spatial user interaction
  • Interaction
  • Interfaces
  • Human–Computer Interaction
  • HCI
  • CHI
  • Virtual Reality (VR)
  • Augmented Reality (AR)
  • Motion tracking
  • Motion controllers
  • Haptics
  • 3D Input
  • Collaborative VR
  • User studies

Published Papers (8 papers)


Research


20 pages, 3484 KiB  
Article
“MedChemVR”: A Virtual Reality Game to Enhance Medicinal Chemistry Education
by Areej Abuhammad, Jannat Falah, Salasabeel F. M. Alfalah, Muhannad Abu-Tarboush, Ruba T. Tarawneh, Dimitris Drikakis and Vassilis Charissis
Multimodal Technol. Interact. 2021, 5(3), 10; https://doi.org/10.3390/mti5030010 - 4 Mar 2021
Cited by 16 | Viewed by 6525
Abstract
Medicinal chemistry (MC) is an indispensable component of the pharmacy curriculum. Pharmacists' unique knowledge of a medicine's chemistry enhances their understanding of the pharmacological activity, manufacturing, storage, use, supply, and handling of drugs. However, chemistry is a challenging subject both to teach and to learn. These challenges typically stem from students' inability to construct a mental image of the three-dimensional (3D) structure of a drug molecule from its two-dimensional presentations. This study explores a prototype virtual reality (VR) gamification option, an educational tool developed to aid the learning process and improve delivery of the MC subject to students. The developed system was evaluated by a cohort of 41 students. The results of the analysis were encouraging and provided invaluable feedback for the future development of the proposed system.
(This article belongs to the Special Issue 3D Human–Computer Interaction)
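
The abstract does not describe MedChemVR's content pipeline, but its core premise — deriving the 3D structure of a drug molecule from a 2D representation before it can be inspected in VR — maps onto a standard cheminformatics step. Below is a minimal sketch assuming RDKit and an aspirin SMILES string, both illustrative choices rather than anything stated in the paper:

```python
# Minimal sketch: derive a 3D conformer from a 2D molecular description.
# RDKit and the aspirin example are illustrative assumptions; the paper
# does not describe its actual content pipeline.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a 2D SMILES string
mol = Chem.AddHs(mol)                      # explicit hydrogens for realistic geometry
AllChem.EmbedMolecule(mol, randomSeed=42)  # generate 3D coordinates (distance geometry)
AllChem.MMFFOptimizeMolecule(mol)          # relax geometry with the MMFF94 force field

# Export as a MOL block, a format most 3D viewers and game engines can import.
print(Chem.MolToMolBlock(mol))
```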

22 pages, 840 KiB  
Article
Association of Individual Factors with Simulator Sickness and Sense of Presence in Virtual Reality Mediated by Head-Mounted Displays (HMDs)
by Simone Grassini, Karin Laumann and Ann Kristin Luzi
Multimodal Technol. Interact. 2021, 5(3), 7; https://doi.org/10.3390/mti5030007 - 24 Feb 2021
Cited by 22 | Viewed by 5714
Abstract
Many studies have attempted to understand which individual differences may be related to the symptoms of discomfort during a virtual experience (simulator sickness) and to the sense of being inside the simulated scene (sense of presence), which is generally considered positive. Nevertheless, a very limited number of studies have employed modern consumer-oriented head-mounted displays (HMDs). These systems aim to produce a strong sense of presence in the user, remove stimuli from the external environment, and provide high-definition, photo-realistic, three-dimensional images. Our results showed that motion sickness susceptibility and simulator sickness are related, and that neuroticism may be associated with, and predict, simulator sickness. Furthermore, the results showed that people who are more used to playing video games are less susceptible to simulator sickness; female participants reported more simulator sickness than males (but only for nausea-related symptoms). Female participants also experienced a higher sense of presence than males. We suggest that published findings on simulator sickness and the sense of presence in virtual reality environments need to be replicated with the use of modern HMDs.
(This article belongs to the Special Issue 3D Human–Computer Interaction)
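
Nausea-related symptoms are conventionally separated from oculomotor and disorientation symptoms by the Simulator Sickness Questionnaire (SSQ) of Kennedy et al. (1993); the abstract does not restate the study's scoring, but the standard subscale weighting is easy to sketch:

```python
# Minimal sketch of standard SSQ scoring (Kennedy et al., 1993). Assumes the
# raw subscale sums have already been computed from the 16 symptom ratings
# (0-3 each); the item-to-subscale mapping is omitted for brevity.
def ssq_scores(nausea_raw: int, oculomotor_raw: int, disorientation_raw: int) -> dict:
    return {
        "N": nausea_raw * 9.54,           # nausea subscale
        "O": oculomotor_raw * 7.58,       # oculomotor subscale
        "D": disorientation_raw * 13.92,  # disorientation subscale
        "TS": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,  # total
    }

print(ssq_scores(nausea_raw=3, oculomotor_raw=2, disorientation_raw=1))
# -> N 28.62, O 15.16, D 13.92, TS 22.44 (up to float rounding)
```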

23 pages, 45251 KiB  
Article
StARboard & TrACTOr: Actuated Tangibles in an Educational TAR Application
by Emanuel Vonach, Christoph Schindler and Hannes Kaufmann
Multimodal Technol. Interact. 2021, 5(2), 6; https://doi.org/10.3390/mti5020006 - 9 Feb 2021
Cited by 2 | Viewed by 3845
Abstract
We explore the potential of direct haptic interaction in a novel approach to Tangible Augmented Reality in an educational context. Employing our prototyping platform ACTO, we developed StARboard, a tabletop Augmented Reality application for sailing students. In this personal-viewpoint environment, virtual objects such as sailing ships are physically represented by actuated micro robots. These align with the virtual objects, allowing direct physical interaction with the scene: when a user tries to pick up a virtual ship, its physical robot counterpart is grabbed instead. We also developed TrACTOr, a tracking solution employing a depth sensor that allows tracking independent of the table surface. In this paper, we present the concept and development of StARboard and TrACTOr, and we report the results of a user study in which 18 participants used our prototype. The results show that direct haptic interaction in tabletop AR scores on par with traditional mouse interaction in a desktop setup in usability (mean SUS = 86.7 vs. 82.9) and performance (mean RTLX = 15.0 vs. 14.8), while outperforming the mouse in learning-related factors such as presence (mean 6.0 vs. 3.1) and absorption (mean 5.4 vs. 4.2). It was also rated the most fun (13× vs. 0×) and most suitable for learning (9× vs. 4×).
(This article belongs to the Special Issue 3D Human–Computer Interaction)
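
The usability figures are System Usability Scale (SUS) scores, whose 0–100 scoring is standard and worth spelling out; a minimal sketch:

```python
# Standard System Usability Scale (SUS) scoring, the measure behind the
# reported means (86.7 for tabletop AR vs. 82.9 for mouse interaction).
def sus_score(responses: list[int]) -> float:
    """responses: the ten SUS items, each rated 1 (strongly disagree) to 5."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: positively worded
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: negatively worded
    return (odd + even) * 2.5                   # rescale the 0-40 sum to 0-100

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))  # -> 92.5
```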

16 pages, 4051 KiB  
Article
The Cost of Production in Elicitation Studies and the Legacy Bias-Consensus Trade off
by Adam S. Williams, Jason Garcia, Fernando De Zayas, Fidel Hernandez, Julia Sharp and Francisco R. Ortega
Multimodal Technol. Interact. 2020, 4(4), 88; https://doi.org/10.3390/mti4040088 - 4 Dec 2020
Cited by 6 | Viewed by 3719
Abstract
Gesture elicitation studies are a popular means of gaining valuable insights into how users interact with novel input devices. One problem elicitation faces is legacy bias, whereby elicited interactions are biased by prior technology use. In response, methodologies have been introduced to reduce legacy bias. This is the first study to formally examine the production method of reducing legacy bias (i.e., repeated proposals for a single referent). It does so through a between-subjects study with 27 participants per group (control and production) and 17 referents placed in a virtual environment presented through a head-mounted display. The study found that, over the full range of referents, legacy bias was not significantly reduced over production trials. Instead, production reduced participant consensus on proposals. However, in the set of referents that elicited the most legacy-biased proposals, production was an effective means of reducing legacy bias, with an overall reduction of 11.93% in the chance of eliciting a legacy-biased proposal.
(This article belongs to the Special Issue 3D Human–Computer Interaction)
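
Participant consensus in elicitation studies is conventionally quantified with an agreement rate. The abstract does not restate which metric the authors used, but the widely adopted formulation of Vatavu and Wobbrock (2015) is shown below as a minimal sketch:

```python
from collections import Counter

# Agreement rate AR(r) of Vatavu & Wobbrock (2015): the share of participant
# pairs who proposed the same gesture for a referent r. Shown as a common
# example of a consensus metric; the paper's exact metric is not restated here.
def agreement_rate(proposals: list[str]) -> float:
    """proposals: one elicited gesture label per participant for one referent."""
    n = len(proposals)
    if n < 2:
        return 1.0
    same_pairs = sum(c * (c - 1) for c in Counter(proposals).values())
    return same_pairs / (n * (n - 1))

# Example: 4 of 6 participants propose "swipe", 2 propose "point".
print(agreement_rate(["swipe"] * 4 + ["point"] * 2))  # ≈ 0.467
```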

15 pages, 4257 KiB  
Article
Virtual Reality Nature Exposure and Test Anxiety
by Alison O’Meara, Marica Cassarino, Aaron Bolger and Annalisa Setti
Multimodal Technol. Interact. 2020, 4(4), 75; https://doi.org/10.3390/mti4040075 - 22 Oct 2020
Cited by 18 | Viewed by 6102
Abstract
The number of students affected by exam anxiety continues to rise. It is therefore becoming increasingly relevant to explore innovative remediation strategies that help mitigate the debilitating effects of exam anxiety. This study investigated whether exposure to a green environment, delivered by virtual reality (VR) technology, would serve as an effective intervention to mitigate participants' test anxiety, thereby improving the experience of the exam, measured by positive and negative affect, and increasing test scores in a pseudo exam. Twenty high- and twenty low-exam-anxiety students completed a pseudo exam before and after being exposed to either a simulated green environment or a simulated urban environment. Only those who had high anxiety and were exposed to the nature VR intervention showed significant reductions in negative affect (F(1, 31) = 5.86, p = 0.02, ηp² = 0.15), supporting the idea that exposure to nature, even if simulated, may benefit students' feelings about their academic performance. The findings are discussed in light of future developments in nature and educational research.
(This article belongs to the Special Issue 3D Human–Computer Interaction)
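
The reported effect size follows from the F statistic and its degrees of freedom, so the abstract's numbers can be checked directly:

```python
# Partial eta squared recovered from an F statistic and its degrees of freedom.
def partial_eta_squared(f: float, df_effect: int, df_error: int) -> float:
    return (f * df_effect) / (f * df_effect + df_error)

# F(1, 31) = 5.86 from the abstract:
print(round(partial_eta_squared(5.86, 1, 31), 3))  # -> 0.159
# ~ the reported 0.15; the small gap reflects rounding of the F value.
```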

15 pages, 22812 KiB  
Article
Spot-Presentation of Stereophonic Earcons to Assist Navigation for the Visually Impaired
by Yuichi Mashiba, Ryunosuke Iwaoka, Hisham E. Bilal Salih, Masayuki Kawamoto, Naoto Wakatsuki, Koichi Mizutani and Keiichi Zempo
Multimodal Technol. Interact. 2020, 4(3), 42; https://doi.org/10.3390/mti4030042 - 20 Jul 2020
Cited by 6 | Viewed by 4647
Abstract
This study seeks to demonstrate that a navigation system using stereophonic sound technology is effective in supporting visually impaired people in public spaces. In the proposed method, stereophonic sound is produced by a pair of parametric speakers for a person who comes to a specific position, detected by an RGB-D sensor. The sound is a stereophonic earcon representing the target facility, so the recipient can intuitively understand the facility's direction. The sound is not audible to anyone except the person being supported and is not noisy. The system was constructed in a shopping mall, and an experiment was conducted in which the proposed system and guidance by a tactile map each led participants to a designated facility. The results confirmed that the proposed method reduces execution time and outperforms the tactile map approach in terms of the average time required to grasp a direction. In the actual environment where the system is intended to be used, the correct answer rate was over 80%. These results suggest that the proposed method can replace the conventional tactile map as a guidance system.
(This article belongs to the Special Issue 3D Human–Computer Interaction)
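
The authors render direction with parametric speakers, a specialized setup; as a purely hypothetical stand-in for the underlying idea — encoding a facility's direction in a stereo earcon — constant-power amplitude panning can serve as a sketch (this is not the paper's implementation):

```python
import math

# Hypothetical illustration: constant-power stereo panning that encodes a
# target direction as left/right gains. The actual system uses parametric
# speakers and stereophonic sound rendering, not this simple scheme.
def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """azimuth_deg: target direction, -90 (hard left) to +90 (hard right)."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)  # (left, right); L^2 + R^2 == 1

left, right = pan_gains(30.0)  # facility 30 degrees to the listener's right
print(round(left, 3), round(right, 3))  # -> 0.5 0.866
```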

23 pages, 9242 KiB  
Article
Augmenting Printed School Atlases with Thematic 3D Maps
by Raimund Schnürer, Cédric Dind, Stefan Schalcher, Pascal Tschudi and Lorenz Hurni
Multimodal Technol. Interact. 2020, 4(2), 23; https://doi.org/10.3390/mti4020023 - 27 May 2020
Cited by 9 | Viewed by 5220
Abstract
Digitalization in schools requires a rethinking of teaching materials and methods in all subjects. This upheaval also concerns traditional print media, such as the school atlases used in geography classes. In this work, we examine the cartographic and technological feasibility of extending a printed school atlas with digital content via augmented reality (AR). While previous research focused mainly on topographic three-dimensional (3D) maps, our prototypical application for Android tablets complements map sheets of the Swiss World Atlas with thematically related data. We follow a natural-marker approach using the AR engine Vuforia and the game engine Unity, and we compare two workflows for inserting geo-data, correctly aligned with the map images, into the game engine. The imported data are then transformed into partly animated 3D visualizations, such as a dot distribution map, curved lines, pie chart billboards, stacked cuboids, extruded bars, and polygons. Additionally, we implemented legends, elements for temporal and thematic navigation, a screen capture function, and a touch-based feature query for the user interface. We evaluated our prototype in a usability experiment, which showed that secondary school students are as effective, interested, and sustainable with printed as with augmented maps when solving geographic tasks.
(This article belongs to the Special Issue 3D Human–Computer Interaction)
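
The abstract leaves the two geo-data import workflows unspecified; the essential alignment step — mapping geographic coordinates into the local frame of a tracked map image — can be sketched under the simplifying, hypothetical assumption of an equirectangular sheet extent (real atlas sheets may use other projections):

```python
# Hypothetical sketch of the alignment step: longitude/latitude to the local
# coordinate frame of a tracked map image, assuming an equirectangular
# (plate carree) sheet extent. Names and defaults here are illustrative.
def geo_to_map_local(lon: float, lat: float,
                     extent=(-180.0, -90.0, 180.0, 90.0),  # lon/lat bounds of the sheet
                     size=(1.0, 0.5)):                     # marker width/height in scene units
    lon_min, lat_min, lon_max, lat_max = extent
    u = (lon - lon_min) / (lon_max - lon_min)  # 0..1 across the sheet
    v = (lat - lat_min) / (lat_max - lat_min)  # 0..1 up the sheet
    # Origin at the marker center, as AR engines typically place it.
    return (u - 0.5) * size[0], (v - 0.5) * size[1]

print(geo_to_map_local(8.54, 47.37))  # Zurich -> (~0.024, ~0.132)
```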

Review


21 pages, 1646 KiB  
Review
Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy
by Julian Kreimeier and Timo Götzelmann
Multimodal Technol. Interact. 2020, 4(4), 79; https://doi.org/10.3390/mti4040079 - 17 Nov 2020
Cited by 8 | Viewed by 5757
Abstract
Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, the technology also promises to help disadvantaged people, such as those who are blind or visually impaired. Virtual objects and environments that can be explored spatially offer a particular benefit, as they overcome the limitations of physical objects and spaces. To give readers a complete, clear, and concise overview of current and past publications on touchable and walkable, audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy that clusters the work done so far from the perspectives of technology, interaction, and application. To this end, we introduce a classification into small-, medium-, and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback ('small scale') in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but that egocentric audio-haptic virtual environments that are physically walkable ('medium scale') or avatar-walkable ('large scale') are increasingly common. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent promising potential for further improvements. Our survey provides a database of related work to foster the creation of new ideas and approaches, in terms of both technical and methodological aspects.
(This article belongs to the Special Issue 3D Human–Computer Interaction)