Multimodal Technol. Interact., Volume 3, Issue 2 (June 2019) – 22 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 4377 KiB  
Article
Analogue Meets Digital: History and Present IT Augmentation of Europe’s Largest Landscape Relief Model in Villach, Austria
by Manfred F. Buchroithner
Multimodal Technol. Interact. 2019, 3(2), 44; https://doi.org/10.3390/mti3020044 - 20 Jun 2019
Cited by 1 | Viewed by 3477
Abstract
Brought to completion in 1913 after a production time of 24 years, the landscape relief model of Carinthia (Kärnten), on display in Villach, Austria, is, at 182 m², the largest of its kind in Europe. It is painted with nature-like land-cover information and presents the whole federal state of Carinthia and its surroundings, including Austria’s highest peak, the Großglockner, at a scale of 1:10,000. From 2016 to 2018, a series of computer-generated and partly computer-animated educational contents was produced for rental tablets as well as for projection onto and above the terrain model. Their topics are briefly presented. The Relief von Kärnten described here is also a paramount example of how to improve the attractiveness of historical physical landscape relief models by means of state-of-the-art information technology. The article is, furthermore, meant to raise awareness of a piece of “geo-art” that deserves to be known internationally by both experts and laymen.
(This article belongs to the Special Issue Interactive 3D Cartography)

13 pages, 5539 KiB  
Article
Interactive Landscape Design and Flood Visualisation in Augmented Reality
by Adam Tomkins and Eckart Lange
Multimodal Technol. Interact. 2019, 3(2), 43; https://doi.org/10.3390/mti3020043 - 15 Jun 2019
Cited by 23 | Viewed by 6952
Abstract
In stakeholder participation workshops, digital and hard-copy maps, alongside other representation formats in 2D and 3D, are used extensively to support communication, spatial evaluation and interactive decision-making processes. In this paper, we present a novel tool to enhance traditional map-based workshop activities using augmented reality. Augmented reality technology is gaining momentum as a tool for visualising environmental design choices in planning and design, and is used in a range of applications including stakeholder participation, design evaluation and flood risk communication. We demonstrate interactive and collaborative 3D cartographic visualisations which enable real-time multi-user exercises in landscape intervention design and flood visualisation.
(This article belongs to the Special Issue Interactive 3D Cartography)

11 pages, 366 KiB  
Article
Websites with Multimedia Content: A Heuristic Evaluation of the Medical/Anatomical Museums
by Matina Kiourexidou, Nikos Antonopoulos, Eugenia Kiourexidou, Maria Piagkou, Rigas Kotsakis and Konstantinos Natsis
Multimodal Technol. Interact. 2019, 3(2), 42; https://doi.org/10.3390/mti3020042 - 12 Jun 2019
Cited by 11 | Viewed by 4327
Abstract
The internet and web technologies have radically changed the way that users interact with museum exhibits. Websites and their related services play an important role in the accessibility of and interaction with the multimedia content of museums. The aim of the current research is to present a heuristic evaluation, by usability experts, of forty-seven medical and anatomy museum websites, in order to determine the key characteristics and issues for the effective design of a museum website. For homogeneity and comparison purposes, museum websites without English-language support were excluded from the evaluation process. The methodology was structured around the assessment of the technologies and services of anatomy museum websites. The results of the statistical examination are subsequently analyzed and discussed.
(This article belongs to the Special Issue Multimodal Conversational Interaction and Interfaces)

5 pages, 171 KiB  
Editorial
Guest Editors’ Introduction: Multimodal Technologies and Interaction in the Era of Automated Driving
by Andreas Riener and Myounghoon Jeon
Multimodal Technol. Interact. 2019, 3(2), 41; https://doi.org/10.3390/mti3020041 - 12 Jun 2019
Viewed by 3825
Abstract
Recent advancements in automated vehicle technologies pose numerous opportunities and challenges to support the diverse facets of user needs [...]
14 pages, 4270 KiB  
Article
3D Point Clouds and Eye Tracking for Investigating the Perception and Acceptance of Power Lines in Different Landscapes
by Ulrike Wissen Hayek, Kilian Müller, Fabian Göbel, Peter Kiefer, Reto Spielhofer and Adrienne Grêt-Regamey
Multimodal Technol. Interact. 2019, 3(2), 40; https://doi.org/10.3390/mti3020040 - 4 Jun 2019
Cited by 5 | Viewed by 4512
Abstract
The perception of the visual landscape impact is a significant factor explaining the public’s acceptance of energy infrastructure developments. Yet, there is a lack of knowledge about how people perceive and accept power lines in certain landscape types and in combination with wind turbines, a setting required to achieve the goals of the energy transition. The goal of this work was to demonstrate how 3D point cloud visualizations can be used in an eye tracking study to systematically investigate the perception of landscape scenarios with power lines. 3D visualizations of near-natural and urban landscapes were prepared based on data from airborne and terrestrial laser scanning. These scenes were altered with varying amounts of the respective infrastructure, and they provided the stimuli in a laboratory experiment with 49 participants. Eye tracking and questionnaires were used to measure the participants’ responses. The results show that the point cloud-based simulations offered suitable stimuli for the eye tracking study. Particularly for the analysis of guided perceptions, the approach fostered an understanding of disturbing landscape elements. A comparative in situ eye tracking study is recommended to further evaluate the quality of the point cloud simulations, i.e., whether they produce responses similar to those in the real world.
(This article belongs to the Special Issue Interactive 3D Cartography)

20 pages, 262 KiB  
Review
A Review of Augmented Reality Applications for History Education and Heritage Visualisation
by Jennifer Challenor and Minhua Ma
Multimodal Technol. Interact. 2019, 3(2), 39; https://doi.org/10.3390/mti3020039 - 30 May 2019
Cited by 111 | Viewed by 15582
Abstract
Augmented reality is a technology with a versatile range of applications in many fields, including recreation and education. Continually developing technology over the last decade has drastically improved the viability of augmented reality projects, now that most of the population possesses a mobile device capable of supporting the graphic rendering systems they require. Education in particular has benefited from these technological advances, as there are now many branches of research into how augmented reality can be used in schools. For the purposes of Holocaust education, however, there has been remarkably little research into how augmented reality can be used to enhance its delivery or impact. The purpose of this study is to speculate on the following questions: How is augmented reality currently being used to enhance history education? Does the usage of augmented reality assist in developing long-term memories? Is augmented reality capable of conveying the emotional weight of historical events? Will augmented reality be appropriate for teaching a complex field such as the Holocaust? To address these, multiple studies have been analysed for their research methodologies and how their findings may assist with the development of Holocaust education.
(This article belongs to the Special Issue Virtual, Augmented and Mixed Reality in Improving Education)
13 pages, 5040 KiB  
Article
Exploring User Perception Challenges in Vibrotactile Haptic Display Using Resonant Microbeams under Contact with Skin
by Daehan Wi and Angela A. Sodemann
Multimodal Technol. Interact. 2019, 3(2), 38; https://doi.org/10.3390/mti3020038 - 28 May 2019
Viewed by 3372
Abstract
Resonant vibrotactile microbeams use the concept of resonance to excite the vibration of cantilever beams, which correspond to pixels of an image. The primary benefit of this type of tactile display is its potential for high resolution. This paper presents the concept of the proposed system and human skin contact experiments that explore user perception challenges related to beam vibration during skin contact. The human skin contact experiments comprise five phases: dry skin contact with metal beam tips, wet and soaped skin contact with metal beam tips, skin contact with a constraint, normal force measurement, and skin contact with the tips of silicone rubber beams attached to metal beam tips. Experimental results are analyzed to determine in which cases of skin contact the beams stop vibrating. It is found that the addition of silicone rubber beams allows the primary metal beams to continue vibrating while in contact with skin. Thus, the vibration response of a metal beam with silicone rubber beams is investigated to better understand the effect of the silicone rubber beams on the metal beam vibration.
(This article belongs to the Special Issue Haptics for Human Augmentation)

20 pages, 453 KiB  
Article
A Survey on Psycho-Physiological Analysis & Measurement Methods in Multimodal Systems
by Muhammad Zeeshan Baig and Manolya Kavakli
Multimodal Technol. Interact. 2019, 3(2), 37; https://doi.org/10.3390/mti3020037 - 28 May 2019
Cited by 47 | Viewed by 9571
Abstract
Psycho-physiological analysis has gained greater attention in the last few decades in various fields, including multimodal systems. Researchers use psycho-physiological feedback devices such as skin conductance (SC), electroencephalography (EEG) and electrocardiography (ECG) sensors to detect the affective states of users during task performance. Psycho-physiological feedback has been successful in detecting the cognitive states of users in human-computer interaction (HCI). Recently, in game studies, psycho-physiological feedback has been used to capture the user experience and the effect of interaction on human psychology. This paper reviews several psycho-physiological, cognitive, and affective assessment studies and focuses on the use of psycho-physiological signals in estimating the user’s cognitive and emotional states in multimodal systems. We review the measurement techniques and methods that have been used to record psycho-physiological signals, as well as the cognitive and emotional states, in a variety of conditions. The aim of this review is to identify, describe and analyze the key psycho-physiological parameters that relate to different mental and emotional states, in order to provide an insight into key approaches. Furthermore, the advantages and limitations of these approaches are highlighted. The findings show that classification accuracies above 90% have been achieved in classifying emotions with EEG signals. A strong correlation between self-reported data, HCI experience, and psycho-physiological data has been observed in a wide range of domains, including games, human-robot interaction, mobile interaction, and simulations. An increase in β- and γ-band activity has been observed in highly intense games and simulations.
(This article belongs to the Special Issue Multimodal User Interfaces Modelling and Development)

14 pages, 1373 KiB  
Article
Better Sleep Experience for the Critically Ill: A Comprehensive Strategy for Designing Hospital Soundscapes
by Dilip Birdja and Elif Özcan
Multimodal Technol. Interact. 2019, 3(2), 36; https://doi.org/10.3390/mti3020036 - 22 May 2019
Cited by 8 | Viewed by 4665
Abstract
In this paper, the sleep phenomenon is considered in relation to critical care soundscapes, with the intention of informing hospital management, medical device producers and policy makers about the complexity of the issue and possible modes of design intervention. We propose a comprehensive strategy, based on a soundscape design approach, that facilitates a systematic way of tackling the auditory quality of critical care settings in favor of a better patient sleep experience. Future research directions are presented to tackle the knowledge deficits in designing critical care soundscapes that cater for patient sleep. The need for scientifically informed design interventions for improving the patient sleep experience in critical care is highlighted. The value of the soundscape design approach for resolving other sound-induced problems in critical care, and how the approach allows for patient-centred innovation beyond the immediate sound issue, are further discussed.
(This article belongs to the Special Issue Multimodal Medical Alarms)

15 pages, 3184 KiB  
Article
Promoting Contemplative Culture through Media Arts
by Jiayue Wu
Multimodal Technol. Interact. 2019, 3(2), 35; https://doi.org/10.3390/mti3020035 - 21 May 2019
Cited by 2 | Viewed by 3675
Abstract
This paper presents the practice of designing mediation technologies as artistic tools that expand the creative repertoire to promote contemplative cultural practice. Three art–science collaborations—Mandala, Imagining the Universe, and Resonance of the Heart—are elaborated on as proof-of-concept case studies. Scientifically, the empirical research examines the mappings from (bodily) action to (sound/visual) perception in technology-mediated performing art. Theoretically, the author synthesizes media arts practices at the level of defining general design principles and post-human artistic identities. Technically, the author implements machine learning techniques, digital audio/visual signal processing, and sensing technology to explore post-human artistic identities and give voice to underrepresented groups. Realized by a group of multinational media artists, computer engineers, audio engineers, and cognitive neuroscientists, this work preserves, promotes, and further explores contemplative culture with emerging technologies.
(This article belongs to the Special Issue Musical Interactions)

15 pages, 2734 KiB  
Article
Combining VR Visualization and Sonification for Immersive Exploration of Urban Noise Standards
by Markus Berger and Ralf Bill
Multimodal Technol. Interact. 2019, 3(2), 34; https://doi.org/10.3390/mti3020034 - 13 May 2019
Cited by 23 | Viewed by 5002
Abstract
Urban traffic noise situations are usually visualized as conventional 2D maps or 3D scenes. These representations are indispensable tools to inform decision makers and citizens about issues of health, safety, and quality of life, but require expert knowledge in order to be properly understood and put into context. The subjectivity of how we perceive noise, as well as the inaccuracies in common noise calculation standards, are rarely represented. We present a virtual reality application that seeks to offer an audiovisual glimpse into the background workings of one of these standards, employing a multisensory, immersive analytics approach that allows users to interactively explore and listen to an approximate rendering of the data in the same environment that the noise simulation occurs in. For this approach to be useful, it must handle complicated noise level calculations in a real-time environment and run on commodity low-cost VR hardware. In a prototypical implementation, we utilized simple VR interactions common to current mobile VR headsets and combined them with techniques from data visualization and sonification to allow users to explore road traffic noise in an immersive real-time urban environment. The noise levels were calculated over CityGML LoD2 building geometries, in accordance with Common Noise Assessment Methods in Europe (CNOSSOS-EU) sound propagation methods.
(This article belongs to the Special Issue Interactive 3D Cartography)

20 pages, 2585 KiB  
Article
Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study
by Vijayakumar Nanjappan, Rongkai Shi, Hai-Ning Liang, Kim King-Tong Lau, Yong Yue and Katie Atkinson
Multimodal Technol. Interact. 2019, 3(2), 33; https://doi.org/10.3390/mti3020033 - 9 May 2019
Cited by 13 | Viewed by 4625
Abstract
Textiles are a vital and indispensable part of the clothing that we use daily. They are very flexible, often lightweight, and have a wide variety of uses. Today, with the rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Clothing-based wearable interfaces are suitable for in-vehicle controls. They can combine various modalities to enable users to perform simple, natural, and efficient interactions while minimizing any negative effect on their driving. Research on clothing-based wearable in-vehicle interfaces is still underexplored. As such, there is a lack of understanding of how to use textile-based input for in-vehicle controls. As a first step towards filling this gap, we conducted a user-elicitation study to involve users in the process of designing in-vehicle interactions via a fabric-based wearable device. We were able to distill a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface in a simulated driving setup. Our results help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions.

17 pages, 4762 KiB  
Article
Recognition of Tactile Facial Action Units by Individuals Who Are Blind and Sighted: A Comparative Study
by Troy McDaniel, Diep Tran, Abhik Chowdhury, Bijan Fakhri and Sethuraman Panchanathan
Multimodal Technol. Interact. 2019, 3(2), 32; https://doi.org/10.3390/mti3020032 - 8 May 2019
Cited by 6 | Viewed by 3723
Abstract
Given that most cues exchanged during a social interaction are nonverbal (e.g., facial expressions, hand gestures, body language), individuals who are blind are at a social disadvantage compared to their sighted peers. Very little work has explored sensory augmentation in the context of social assistive aids for individuals who are blind. The purpose of this study is to explore the following questions related to visual-to-vibrotactile mapping of facial action units (the building blocks of facial expressions): (1) How well can individuals who are blind recognize tactile facial action units compared to those who are sighted? (2) How well can individuals who are blind recognize emotions from tactile facial action units compared to those who are sighted? These questions are explored in a preliminary pilot test using absolute identification tasks in which participants learn and recognize vibrotactile stimulations presented through the Haptic Chair, a custom vibrotactile display embedded on the back of a chair. Study results show that individuals who are blind are able to recognize tactile facial action units as well as those who are sighted. These results hint at the potential for tactile facial action units to augment and expand access to social interactions for individuals who are blind.
(This article belongs to the Special Issue Haptics for Human Augmentation)

25 pages, 5009 KiB  
Article
User-Generated Gestures for Voting and Commenting on Immersive Displays in Urban Planning
by Guiying Du, Auriol Degbelo and Christian Kray
Multimodal Technol. Interact. 2019, 3(2), 31; https://doi.org/10.3390/mti3020031 - 7 May 2019
Cited by 5 | Viewed by 3291
Abstract
Traditional methods of public consultation offer only limited interactivity with urban planning materials, leading to restricted engagement of citizens. Public displays and immersive virtual environments have the potential to address this issue, enhance citizen engagement and improve the overall public consultation process. In this paper, we investigate how people would interact with a large display showing urban planning content. We conducted an elicitation study with a large immersive display, in which we asked participants (N = 28) to produce gestures to vote and comment on urban planning material. Our results suggest that the phone interaction modality may be more suitable than the hand interaction modality for voting and commenting on large interactive displays. Our findings may inform the design of interactions for large immersive displays, in particular those showing urban planning content.

13 pages, 15052 KiB  
Article
Visual Analysis of a Smart City’s Energy Consumption
by Juan Trelles Trabucco, Dongwoo Lee, Sybil Derrible and G. Elisabeta Marai
Multimodal Technol. Interact. 2019, 3(2), 30; https://doi.org/10.3390/mti3020030 - 2 May 2019
Cited by 8 | Viewed by 4093
Abstract
Through the use of open data portals, cities, districts and countries are increasingly making energy consumption data available. These data have the potential to inform both policymakers and local communities. At the same time, however, these datasets are large and complicated to analyze. We present the activity-centered design, from requirements to evaluation, of a web-based visual analysis tool to explore energy consumption in Chicago. The resulting application integrates energy consumption data and census data, making it possible for both amateurs and experts to analyze disaggregated datasets at multiple levels of spatial aggregation and to compare temporal and spatial differences. An evaluation through case studies and qualitative feedback demonstrates that this visual analysis application successfully meets the goals of integrating large, disaggregated urban energy consumption datasets and of supporting analysis by both lay users and experts.
(This article belongs to the Special Issue Interactive Visualizations for Sustainability)

15 pages, 1097 KiB  
Article
Tell Them How They Did: Feedback on Operator Performance Helps Calibrate Perceived Ease of Use in Automated Driving
by Yannick Forster, Sebastian Hergeth, Frederik Naujoks, Josef Krems and Andreas Keinath
Multimodal Technol. Interact. 2019, 3(2), 29; https://doi.org/10.3390/mti3020029 - 29 Apr 2019
Cited by 6 | Viewed by 4173
Abstract
The development of automated driving will profit from an agreed-upon methodology for evaluating human–machine interfaces. The present study examines the role of feedback on interaction performance provided directly to participants when interacting with driving automation (i.e., on perceived ease of use). In addition, the development of ratings over time and their use case specificity were examined. In a driving simulator study, N = 55 participants completed several transitions between Society of Automotive Engineers (SAE) level 0, level 2, and level 3 automated driving. One half of the participants received feedback on their interaction performance immediately after each use case, while the other half did not. As expected, the results revealed that participants judged the interactions to become easier over time. However, a use case specificity was present, as transitions to L0 did not show effects over time. The role of feedback also depended on the respective use case. We observed more conservative evaluations when feedback was provided than when it was not. The present study supports the application of perceived ease of use as a diagnostic measure in interaction with automated driving. Evaluations of interfaces can benefit from supporting feedback to obtain more conservative results.

9 pages, 1132 KiB  
Brief Report
A Virtual Reality System for Practicing Conversation Skills for Children with Autism
by Natalia Stewart Rosenfield, Kathleen Lamkin, Jennifer Re, Kendra Day, LouAnne Boyd and Erik Linstead
Multimodal Technol. Interact. 2019, 3(2), 28; https://doi.org/10.3390/mti3020028 - 20 Apr 2019
Cited by 46 | Viewed by 9370
Abstract
We describe a virtual reality environment, Bob’s Fish Shop, which provides a system where users diagnosed with Autism Spectrum Disorder (ASD) can practice social interactions in a safe and controlled environment. A case study is presented which suggests such an environment can provide the opportunity for users to build the skills necessary to carry out a conversation without the fear of negative social consequences present in the physical world. Through the repetition and analysis of these virtual interactions, users can improve social and conversational understanding.

18 pages, 3012 KiB  
Article
Comparing Interaction Techniques to Help Blind People Explore Maps on Small Tactile Devices
by Mathieu Simonnet, Anke M. Brock, Antonio Serpa, Bernard Oriola and Christophe Jouffrais
Multimodal Technol. Interact. 2019, 3(2), 27; https://doi.org/10.3390/mti3020027 - 20 Apr 2019
Cited by 13 | Viewed by 4628
Abstract
Exploring geographic maps on touchscreens is a difficult task in the absence of vision, as those devices lack tactile cues. Prior research has therefore introduced non-visual interaction techniques designed to allow visually impaired people to explore spatial configurations on tactile devices. In this paper, we present a study in which six blind and six blindfolded sighted participants evaluated three of those interaction techniques against a screen reader condition. We observed that techniques providing guidance result in higher user satisfaction and more efficient exploration. Adding a grid-like structure improved the estimation of distances. None of the interaction techniques improved the reconstruction of the spatial configurations. The results of this study can improve the design of non-visual interaction techniques that support better exploration and memorization of maps in the absence of vision.
(This article belongs to the Special Issue Interactive Assistive Technology)

16 pages, 2927 KiB  
Article
Tactile Cues for Improving Target Localization in Subjects with Tunnel Vision
by Damien Camors, Damien Appert, Jean-Baptiste Durand and Christophe Jouffrais
Multimodal Technol. Interact. 2019, 3(2), 26; https://doi.org/10.3390/mti3020026 - 19 Apr 2019
Cited by 2 | Viewed by 3621
Abstract
The loss of peripheral vision is experienced by millions of people with glaucoma or retinitis pigmentosa, and has a major impact on everyday life, specifically on locating visual targets in the environment. In this study, we designed a wearable interface to render the location of specific targets with private and non-intrusive tactile cues. Three experimental studies were completed to design and evaluate the tactile code and the device. In the first study, four different tactile codes (single stimuli or trains of pulses, rendered either in a Cartesian or a polar coordinate system) were evaluated with a head pointing task. In the following studies, the most efficient code, trains of pulses with Cartesian coordinates, was used on a bracelet worn on the wrist and evaluated during a visual search task in a complex virtual environment. The second study included ten subjects with a simulated restricted field of view (10°). The last study was a proof of concept with one visually impaired subject whose peripheral vision was restricted due to glaucoma. The results show that the device improved visual search efficiency significantly, by a factor of three. Combined with an object recognition algorithm on smart glasses, the device could help detect targets of interest, either on demand or suggested by the device itself (e.g., potential obstacles), facilitating visual search and, more generally, spatial awareness of the environment.
(This article belongs to the Special Issue Interactive Assistive Technology)

24 pages, 276 KiB  
Article
IMPAct: A Holistic Framework for Mixed Reality Robotic User Interface Classification and Design
by Dennis Krupke, Jianwei Zhang and Frank Steinicke
Multimodal Technol. Interact. 2019, 3(2), 25; https://doi.org/10.3390/mti3020025 - 11 Apr 2019
Cited by 2 | Viewed by 3980
Abstract
The number of scientific publications combining robotic user interfaces and mixed reality has increased greatly during the 21st century. Counting the number of publications added yearly containing the keywords “mixed reality” and “robot” listed on Google Scholar indicates exponential growth. The interdisciplinary nature of mixed reality robotic user interfaces (MRRUIs) makes them very interesting and powerful, but also very challenging to design and analyze. Many individual aspects have already been given theoretical structure, but to the best of our knowledge, there is no contribution combining everything into an MRRUI taxonomy. In this article, we present the results of an extensive investigation of relevant aspects from prominent classifications and taxonomies in the scientific literature. During a card sorting experiment with professionals from the field of human–computer interaction, these aspects were clustered into named groups to provide a new structure. These groups naturally fell into four categories, revealing a memorable structure. Thus, this article provides a framework of objective, technical factors that can be applied to the precise description of MRRUIs. An example shows the effective use of the proposed framework for precise system description, thereby contributing to a better understanding, design, and comparison of MRRUIs in this growing field of research.
(This article belongs to the Special Issue Mixed Reality Interfaces)

22 pages, 3632 KiB  
Article
Exploring the Development Requirements for Virtual Reality Gait Analysis
by Mohammed Soheeb Khan, Vassilis Charissis and Sophia Sakellariou
Multimodal Technol. Interact. 2019, 3(2), 24; https://doi.org/10.3390/mti3020024 - 10 Apr 2019
Cited by 4 | Viewed by 5722
Abstract
The hip joint is highly prone to traumatic and degenerative pathologies resulting in irregular locomotion. Monitoring and treatment depend on high-end technology facilities requiring physician and patient co-location, thus limiting access to specialist monitoring and treatment for populations living in rural and remote locations. Telemedicine offers an alternative means of monitoring, negating the need for the patient’s physical presence. In addition, emerging technologies, such as virtual reality (VR) and immersive technologies, offer potential future solutions through virtual presence, where the patient and health professional can meet in a virtual environment (a virtual clinic). To this end, a prototype asynchronous telemedicine VR gait analysis system was designed, aiming to bring a full clinical facility into the patients’ local proximity. The proposed system employs cost-effective alternative motion capture combined with the system’s immersive 3D virtual gait analysis clinic. The user interface and the tools in the application offer health professionals asynchronous, objective, and subjective analyses. This paper investigates the requirements for the design of such a system and discusses preliminary comparative data on its performance evaluation against a high-fidelity clinical gait analysis application.
(This article belongs to the Special Issue Digital Health Applications of Ubiquitous HCI Research)

15 pages, 5305 KiB  
Article
Tango vs. HoloLens: A Comparison of Collaborative Indoor AR Visualisations Using Hand-Held and Hands-Free Devices
by Urs Riedlinger, Leif Oppermann and Wolfgang Prinz
Multimodal Technol. Interact. 2019, 3(2), 23; https://doi.org/10.3390/mti3020023 - 3 Apr 2019
Cited by 30 | Viewed by 5329
Abstract
In this article, we compare a Google Tango tablet with the Microsoft HoloLens smartglasses in the context of visualisation of and interaction with Building Information Modeling data. A user test was conducted in which 16 participants solved four tasks, two for each device, in small teams of two. Two aspects were analysed in the user test: the visualisation of interior designs and the visualisation of Building Information Modeling data. The results show that, surprisingly, the Tango tablet was preferred by most users when it comes to collaboration and discussion in our scenario. While the HoloLens offers hands-free operation and stable tracking, users mentioned that the interaction with the Tango tablet felt more natural. In addition, users reported that it was easier to get an overall impression with the Tango tablet than with the HoloLens smartglasses.
(This article belongs to the Special Issue Mixed Reality Interfaces)
