Multimodal Technol. Interact., Volume 6, Issue 8 (August 2022) – 10 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 1973 KiB  
Article
A Usability Study on Widget Design for Selecting Boolean Operations
by Diogo Chambel Lopes, Helena Mendes, Ricardo Portal, Rui de Klerk, Isabel Nogueira and Daniel Simões Lopes
Multimodal Technol. Interact. 2022, 6(8), 70; https://doi.org/10.3390/mti6080070 - 18 Aug 2022
Viewed by 1412
Abstract
Applying the correct set of Boolean operations is a fundamental task in constructive solid geometry (CSG), which is a staple in automated manufacturing systems. Although textual buttons and icons are the most common interfaces for applying such operations, they impose an unnecessary cognitive load that hampers the solid modeling process. This study presents VennPad, a novel CSG widget that gathers all Boolean operations under the same user interface control element, represented as a two-set Venn diagram. Contrary to conventional CSG widgets, VennPad supports a graphical interface that gives simultaneous access to several types of Boolean operations (intersection, union, difference, symmetric difference and split). A usability study was conducted to ascertain whether VennPad is a more natural interface than textual buttons and icon-based widgets for different solid modeling tasks. VennPad proved to be an effective interface for performing Boolean operations. Qualitative feedback places VennPad as the preferred interface, but efficiency results are operation-dependent, thus opening the way to new design iterations.

16 pages, 936 KiB  
Article
Autonomous Critical Help by a Robotic Assistant in the Field of Cultural Heritage: A New Challenge for Evolving Human-Robot Interaction
by Filippo Cantucci and Rino Falcone
Multimodal Technol. Interact. 2022, 6(8), 69; https://doi.org/10.3390/mti6080069 - 17 Aug 2022
Cited by 6 | Viewed by 1509
Abstract
Over the years, cultural heritage (CH) sites (e.g., museums) have increasingly focused on providing personalized services to different users, with the main goal of adapting those services to each visitor's personal traits, goals, and interests. In this work, we propose a computational cognitive model that provides an artificial agent (e.g., robot, virtual assistant) with the capability to personalize a museum visit to the goals and interests of the user who intends to visit the museum, while taking into account the goals and interests of the museum curators who designed the exhibition. In particular, we introduce and analyze a special type of help (critical help) that leads to a substantial change in the user's request, with the objective of addressing needs that the user cannot assess, or has not been able to assess, on their own. The computational model has been implemented using the multi-agent oriented programming (MAOP) framework JaCaMo, which integrates three different multi-agent programming levels. We present the results of a pilot study conducted to test the potential of the computational model. The experiment involved 26 participants who interacted with the humanoid robot Nao, widely used in human–robot interaction (HRI) scenarios.
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))

19 pages, 2224 KiB  
Article
Assessing the Influence of Multimodal Feedback in Mobile-Based Musical Task Performance
by Alexandre Clément and Gilberto Bernardes
Multimodal Technol. Interact. 2022, 6(8), 68; https://doi.org/10.3390/mti6080068 - 8 Aug 2022
Viewed by 1434
Abstract
Digital musical instruments have become increasingly prevalent in musical creation and production. Optimizing their usability and, particularly, their expressiveness has become essential to their study and practice. The absence of the multimodal feedback present in traditional acoustic instruments has been identified as an obstacle to complete performer–instrument interaction, in particular due to the lack of embodied control. Mobile-based digital musical instruments are a particular case in that they natively provide the possibility of enriching basic auditory feedback with additional multimodal feedback. In the experiment presented in this article, we focused on using visual and haptic feedback to support and enrich auditory content, evaluating the impact on basic musical tasks (i.e., note pitch tuning accuracy and time). The experiment implemented a protocol in which several musical note examples were presented to participants, who were asked to reproduce them, with performance compared across different multimodal feedback combinations. The collected results show that additional visual feedback reduced user hesitation in pitch tuning, allowing users to reach the proximity of the desired notes in less time. Nonetheless, neither visual nor haptic feedback significantly impacted pitch tuning time and accuracy compared to auditory-only feedback.

20 pages, 2762 KiB  
Article
Ability-Based Methods for Personalized Keyboard Generation
by Claire L. Mitchell, Gabriel J. Cler, Susan K. Fager, Paola Contessa, Serge H. Roy, Gianluca De Luca, Joshua C. Kline and Jennifer M. Vojtech
Multimodal Technol. Interact. 2022, 6(8), 67; https://doi.org/10.3390/mti6080067 - 3 Aug 2022
Cited by 1 | Viewed by 2205
Abstract
This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human–computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. This characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities by capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method; the personalized keyboard yielded significantly higher communication rates (52.0 bits/min) than a generically optimized keyboard (47.9 bits/min). Our results demonstrate that an individual's movement abilities can be effectively characterized to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.

10 pages, 2375 KiB  
Article
Smart Map Augmented: Exploring and Learning Maritime Audio-Tactile Maps without Vision: The Issue of Finger or Marker Tracking
by Mathieu Simonnet
Multimodal Technol. Interact. 2022, 6(8), 66; https://doi.org/10.3390/mti6080066 - 3 Aug 2022
Viewed by 1475
Abstract
Background: When exploring audio-tactile nautical charts without vision, users can trigger vocal announcements of a seamark's name via video tracking. In the first condition, they simply used a green sticker fastened to the tip of a finger; in the second, they handled a small green object, called the marker. Methods: In this study, we compared the finger and marker tracking conditions in spatial tasks completed without vision. More precisely, we aimed to better understand which kind of interaction was the most efficient for either localization or estimation of distance and direction. Twelve blindfolded participants performed these two spatial tasks on a 3D-printed audio-tactile nautical chart. Results: The localization tasks revealed that, in the finger condition, participants were faster at finding geographic elements, i.e., seamarks. During the estimation tasks, no differences were found between the conditions in the precision of distance and direction estimates. However, spatial reasoning took significantly less time in the marker condition. Finally, we discuss the efficiency of these two interaction conditions depending on the spatial task. Conclusions: Further experimentation and discussion should be undertaken to identify better modalities for helping visually impaired persons explore audio-tactile maps and prepare navigation.

23 pages, 2500 KiB  
Article
Cognitive Learning and Robotics: Innovative Teaching for Inclusivity
by Nurziya Oralbayeva, Aida Amirova, Anna CohenMiller and Anara Sandygulova
Multimodal Technol. Interact. 2022, 6(8), 65; https://doi.org/10.3390/mti6080065 - 3 Aug 2022
Viewed by 2116
Abstract
We present the interdisciplinary CoWriting Kazakh project, in which a social robot acts as a peer in learning the new Kazakh Latin alphabet, to which Kazakhstan is going to shift from the current Kazakh Cyrillic by 2030. We discuss the past literature on cognitive learning and script acquisition in depth and present a theoretical framing for this study. The results of word and letter analyses from two user studies conducted between 2019 and 2020 are presented. Learning the new alphabet through Kazakh words with two or more syllables and special native letters resulted in significant learning gains. These results suggest that reciprocal Cyrillic-to-Latin script learning yields considerable cognitive benefits due to mental conversion, word choice, and handwriting practices. Overall, this system enables school-age children to practice the new Kazakh Latin script in an engaging learning scenario. The proposed theoretical framework illuminates the understanding of teaching and learning within the multimodal robot-assisted script learning scenario and beyond.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)

21 pages, 323 KiB  
Article
Behaviour of True Artificial Peers
by Norman Weißkirchen and Ronald Böck
Multimodal Technol. Interact. 2022, 6(8), 64; https://doi.org/10.3390/mti6080064 - 2 Aug 2022
Cited by 1 | Viewed by 1609
Abstract
Typical current assistance systems often take the form of optimised user interfaces between the user's interests and the capabilities of the system. In contrast, a peer-like system should be capable of independent decision-making, which in turn requires an understanding and knowledge of the current situation to support a sensible decision-making process. We present a method for a system capable of interacting with its user to optimise its information-gathering task, while at the same time ensuring sufficient satisfaction with the system so that the user is not discouraged from further interaction. Based on this collected information, the system may then create and employ a specifically adapted rule set, bringing it much closer to an intelligent companion than to a typical technical user interface. A further aspect is the perception of the system as a trustworthy and understandable partner, allowing an empathetic understanding between the user and the system and leading to a more closely integrated smart environment.

23 pages, 33931 KiB  
Article
ViviPaint: Creating Dynamic Painting with a Thermochromic Toolkit
by Guanhong Liu, Tianyu Yu, Zhihao Yao, Haiqing Xu, Yunyi Zhang, Xuhai Xu, Xiaomeng Xu, Mingyue Gao, Qirui Sun, Tingliang Zhang and Haipeng Mi
Multimodal Technol. Interact. 2022, 6(8), 63; https://doi.org/10.3390/mti6080063 - 27 Jul 2022
Cited by 7 | Viewed by 2733
Abstract
New materials and technologies facilitate the design of thermochromic dynamic paintings. However, creating a thermochromic painting requires knowledge of electrical engineering and computer science, which is a barrier for artists and enthusiasts without a technology background. Existing toolkits support only a limited design space and fail to provide usable solutions for independent creation that meet artists' needs. We present ViviPaint, a toolkit that helps artists and enthusiasts create thermochromic paintings easily and conveniently. We identified the pain points and challenges by observing a professional artist's entire thermochromic painting creation process. We then designed ViviPaint, which consists of a design tool and a set of hardware components. The design tool provides a GUI animation choreography interface, hardware assembly guidance, and assistance during the assembly process. The hardware components comprise an augmented picture frame with a detachable structure and 24 temperature-changing units using Peltier elements. The results of our evaluation study (N = 8) indicate that the toolkit is easy to use and effectively assists users in creating thermochromic paintings.

23 pages, 364 KiB  
Article
Perspectives on Socially Intelligent Conversational Agents
by Luisa Brinkschulte, Stephan Schlögl, Alexander Monz, Pascal Schöttle and Matthias Janetschek
Multimodal Technol. Interact. 2022, 6(8), 62; https://doi.org/10.3390/mti6080062 - 25 Jul 2022
Viewed by 2821
Abstract
The propagation of digital assistants is progressing steadily. Manifested by an uptake of ever more human-like conversational abilities, these technologies are moving increasingly away from their role as voice-operated task enablers and becoming companion-like artifacts whose interaction style is rooted in anthropomorphic behavior. One of the characteristics required in this shift from a utilitarian tool to an emotional character is the adoption of social intelligence. Although past research has recognized this need, more multi-disciplinary investigations should be devoted to exploring the relevant traits and their potential embedding in future agent technology. Aiming to lay a foundation for further developments, we report the results of a Delphi study highlighting the opinions of 21 multi-disciplinary domain experts. The results exhibit 14 distinctive characteristics of social intelligence, grouped into different levels of consensus, maturity, and abstraction, which may serve as a relevant basis for defining and subsequently developing socially intelligent conversational agents.
(This article belongs to the Special Issue Multimodal Conversational Interaction and Interfaces, Volume II)
19 pages, 2103 KiB  
Article
Learning Management System Analytics on Arithmetic Fluency Performance: A Skill Development Case in K6 Education
by Umar Bin Qushem, Athanasios Christopoulos and Mikko-Jussi Laakso
Multimodal Technol. Interact. 2022, 6(8), 61; https://doi.org/10.3390/mti6080061 - 22 Jul 2022
Cited by 3 | Viewed by 2426
Abstract
Achieving fluency in arithmetic operations is vital if students are to develop mathematical creativity and critical thinking abilities. Nevertheless, a substantial body of literature has demonstrated that students struggle to develop such skills due to the absence of appropriate instructional support or motivation. A proposed solution to this problem is the rapid evolution and widespread integration of educational technology into the modern school system. Specifically, the Learning Management System (LMS) has been found to be particularly useful in the instructional process, especially where personalised and self-regulated learning are concerned. In the present work, we explored these topics in a longitudinal study in which 720 primary education students (4th–6th grade) from the United Arab Emirates (UAE) used an LMS at least once per week for one school year (nine months). The findings revealed that the vast majority (97% of the 6th graders, 83% of the 4th graders, and 76% of the 5th graders) demonstrated a positive improvement in their arithmetic fluency. Moreover, a Multiple Linear Regression analysis revealed that students need to practice deliberately for approximately 68 days (a minimum of 3 min a day) before seeing any substantial improvement in their performance. The study also demonstrates how design practices that align with gamification and Learning Analytics in an LMS may help children become fluent in simple arithmetic operations. Research implications and directions are presented for educators interested in LMS-based interventions.
(This article belongs to the Special Issue Effective and Efficient Digital Learning)
