Multimodal User Interfaces Modelling and Development

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (14 June 2019)

Special Issue Editors


Prof. Dr. Denis Lalanne
Guest Editor
Human-IST Institute & Department of Informatics, University of Fribourg, Fribourg, Switzerland
Interests: human-computer interaction; multimodal interaction; information visualization; tangible interaction; gestural interaction; affective user interfaces; adaptive user interfaces

Prof. Dr. Bruno Dumas
Guest Editor
Faculty of Computer Science, University of Namur, Namur, Belgium
Interests: human–computer interaction; multimodal interaction; adaptation to user and context; cross-media systems

Special Issue Information

Dear Colleagues,

Multimodal interfaces, which rely on input as well as output modalities such as speech, gestures, or emotions, are considered one of the most promising tracks for next-generation user interfaces. However, most multimodal user interfaces created today still rely on ad-hoc development with little thorough modelling. Beyond the software engineering challenge of having to “reinvent the wheel” every time a new multimodal user interface is designed and developed, multimodality can greatly benefit from research in fields such as fusion and fission of interactive modalities, user and task modelling, multimodal interaction modelling, requirements engineering, and software architectures for interactive multimodal systems. Beyond these challenges, integrating multimodal interfaces with other interaction styles, such as tangible interfaces or adaptive systems, is also an active field. This Special Issue aims to provide a collection of high-quality research articles that address broad challenges in both theoretical and applied aspects of multimodal interface development.

We welcome contributions on the following topics related to multimodal interaction:

  • Multimodal interaction modelling
  • User and task modelling
  • Fusion and fission of interactive modalities
  • Software architectures for interactive multimodal systems
  • User interfaces for multimodal interface development
  • Multimodal systems and applications
  • Novel individual recognizers for multimodal interaction

Prof. Dr. Denis Lalanne
Prof. Dr. Bruno Dumas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

20 pages, 453 KiB  
Article
A Survey on Psycho-Physiological Analysis & Measurement Methods in Multimodal Systems
by Muhammad Zeeshan Baig and Manolya Kavakli
Multimodal Technol. Interact. 2019, 3(2), 37; https://doi.org/10.3390/mti3020037 - 28 May 2019
Cited by 44 | Viewed by 7948
Abstract
Psycho-physiological analysis has gained greater attention in the last few decades in various fields, including multimodal systems. Researchers use psycho-physiological feedback devices such as skin conductance (SC), electroencephalography (EEG), and electrocardiography (ECG) to detect the affective states of users during task performance. Psycho-physiological feedback has been successful in detecting the cognitive states of users in human-computer interaction (HCI). Recently, in game studies, psycho-physiological feedback has been used to capture the user experience and the effect of interaction on human psychology. This paper reviews several psycho-physiological, cognitive, and affective assessment studies and focuses on the use of psycho-physiological signals in estimating the user’s cognitive and emotional states in multimodal systems. In this paper, we review the measurement techniques and methods that have been used to record psycho-physiological signals, as well as the cognitive and emotional states, in a variety of conditions. The aim of this review is to conduct a detailed study to identify, describe, and analyze the key psycho-physiological parameters that relate to different mental and emotional states in order to provide an insight into key approaches. Furthermore, the advantages and limitations of these approaches are also highlighted in this paper. The findings show that a classification accuracy of over 90% has been achieved in classifying emotions with EEG signals. A strong correlation between self-reported data, HCI experience, and psycho-physiological data has been observed in a wide range of domains, including games, human-robot interaction, mobile interaction, and simulations. An increase in β- and γ-band activity has been observed in highly intense games and simulations.
(This article belongs to the Special Issue Multimodal User Interfaces Modelling and Development)
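The pipeline this survey revolves around, extracting frequency-band features from physiological signals and classifying the user’s affective state, can be illustrated with a minimal sketch. The following Python snippet is not taken from the paper: the sampling rate, band limits, synthetic epochs, and the SVM classifier are assumptions introduced purely for illustration; it merely shows how β- and γ-band power features of the kind discussed above could feed an emotion classifier.

    # Minimal sketch (not from the paper): band-power features from EEG epochs
    # followed by a simple affective-state classifier, using synthetic data.
    import numpy as np
    from scipy.signal import welch
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    FS = 256  # assumed sampling rate in Hz
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

    def band_power_features(epoch):
        """Mean power per frequency band for one epoch of shape (n_channels, n_samples)."""
        freqs, psd = welch(epoch, fs=FS, nperseg=FS)
        features = []
        for low, high in BANDS.values():
            mask = (freqs >= low) & (freqs < high)
            features.append(psd[:, mask].mean(axis=1))
        return np.concatenate(features)

    # Synthetic stand-in for labelled data: 200 epochs, 8 channels, 2 s each,
    # with binary labels (e.g. low vs. high arousal).
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((200, 8, 2 * FS))
    labels = rng.integers(0, 2, size=200)

    X = np.array([band_power_features(e) for e in epochs])
    print("cross-validated accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())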

30 pages, 15991 KiB  
Article
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
by Chris Zimmerer, Martin Fischbach and Marc Erich Latoschik
Multimodal Technol. Interact. 2018, 2(4), 81; https://doi.org/10.3390/mti2040081 - 06 Dec 2018
Cited by 5 | Viewed by 4329
Abstract
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compliant with the rapid development cycles that are common in user interface development, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as support for chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the semantic queries it uses, and an abstraction layer for lexical information. Our reference implementation has been and continues to be used in various student projects, theses, and master-level courses. It is openly available and shows that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
(This article belongs to the Special Issue Multimodal User Interfaces Modelling and Development)
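The mechanism underlying transition-network-based fusion, advancing through states as events from different modalities arrive, checking their temporal relation, and deriving an action once the network completes, can be sketched in a few lines. The Python example below is a simplified, hypothetical illustration: the Event fields, the DeleteCommandNetwork class, and the two-second temporal window are assumptions made here, and the sketch does not reproduce the Concurrent Cursor concept or the cATN’s description language.

    # Simplified sketch of transition-network-style semantic fusion
    # (illustrative only; not the cATN implementation described in the paper).
    from dataclasses import dataclass

    @dataclass
    class Event:
        modality: str     # e.g. "speech" or "gesture"
        value: str        # recognized token or referenced object id
        timestamp: float  # seconds

    class DeleteCommandNetwork:
        """Fuses a spoken 'delete' with a pointing gesture into one command,
        provided both events fall within a temporal window."""
        WINDOW = 2.0  # assumed maximum delay between the two events, in seconds

        def __init__(self):
            self.state = "start"
            self.verb = None

        def feed(self, event):
            if self.state == "start" and event.modality == "speech" and event.value == "delete":
                self.verb = event                     # first transition: verb recognized
                self.state = "awaiting_target"
            elif self.state == "awaiting_target" and event.modality == "gesture":
                self.state = "start"                  # network completes (or resets) either way
                if abs(event.timestamp - self.verb.timestamp) <= self.WINDOW:
                    return {"action": "delete", "target": event.value}  # action derivation
            return None

    net = DeleteCommandNetwork()
    print(net.feed(Event("speech", "delete", 0.3)))      # None: still awaiting the target
    print(net.feed(Event("gesture", "object_42", 1.1)))  # {'action': 'delete', 'target': 'object_42'}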
