Human Behavior, Emotion and Representation

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (28 February 2018)

Special Issue Editors


Dr. Dominique Vaufreydaz
Guest Editor
University of Grenoble, Grenoble, France
Interests: multimodal perception of humans; smart spaces/ubiquitous computing; healthcare and assistive technologies; affective computing

Dr. Kai Essig
Guest Editor
Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
Interests: eye tracking; visual perception; assistive/adaptive systems; user experience and usability; computer vision and image processing

Mr. Thomas Guntz
Guest Editor
University Grenoble-Alpes, LIG, Inria, France
Interests: multimodal perception; affective computing

Mr. Thomas Küchelmann
Guest Editor
Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
Interests: complex networks; multi-layer networks; linguistic networks; cognitive modeling; chess expertise; subliminal suggestibility of chess players; mathematical physics (analysis, linear algebra and stochastics for physicists)

Special Issue Information

Dear Colleagues,

Natural behavior skills based on cognitive abilities are a key challenge for robots, virtual agents and intelligent machines interacting with humans. This becomes particularly evident given the expected increase, within the coming decade, in intelligent interaction partners designed to support humans in everyday situations (such as virtual coaches, companion robots, assistive systems and autonomous cars). These systems will need to act autonomously and to elicit social interaction and social synchrony. To achieve these goals, their perception of humans, as well as their own behavior, must build on richer inputs about the emotion, mental state and models of their human partners than the mainly low-level approaches currently in use. Recent advances in multidisciplinary research on behavior, emotional states, visual behavior, neurofeedback, physiological parameters and mental memory representations help us to understand the cognitive background of action and interaction in everyday situations, and thereby pave the way for the design of new building blocks for more natural and intuitive human-machine interaction.

This Special Issue focuses on the building blocks of human/agent(s) interaction. Collecting and analyzing multimodal data from different measurements also allows the construction of solid computational models. These blocks of interaction will serve as the basis for building artificial cognitive systems that are able to interact with humans in an intuitive way and to acquire new skills by learning from the user. This will result in new forms of human-computer interaction, such as individualized, adaptive assistance systems for scaffolding cognitive, emotional and attentive learning processes. In this context, it is clearly advantageous for intelligent robots and virtual agents to know how these cognitive representations and physiological parameters are formed, stabilized and adapted during different phases of daily actions. This knowledge enables a technical system to perceive an individual’s current level of learning and performance, and therefore to shape the interaction. These interactions must be (socially) appropriate, not excessive. Such systems can assist users in developing (interaction) skills in a variety of domains and during different phases of daily-life actions. At the same time, interactive systems should fulfill constraints such as usability, acceptability and ethics.

Topics of interest for this Special Issue include all aspects of affective computing for robotics and interactive systems, including, but not limited to:

  • Acoustic, visual or multimodal processing for affect recognition
  • Real-time and embedded perception in the wild
  • Human behavior analysis
  • Anticipation and imitation of human behavior
  • Affect and social interaction modeling
  • Affective computing in the human/robot interaction loop
  • Affect rendering and synthesis
  • Acceptability and usability while interacting
  • Affect in developmental robotics
  • Computational modeling of cognitive interaction components
  • Case studies and applications in real-life contexts

Dr. Dominique Vaufreydaz
Dr. Kai Essig
Mr. Thomas Guntz
Mr. Thomas Küchelmann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

18 pages, 2191 KiB  
Article
Multimodal Observation and Classification of People Engaged in Problem Solving: Application to Chess Players
by Thomas Guntz, Raffaella Balzarini, Dominique Vaufreydaz and James Crowley
Multimodal Technol. Interact. 2018, 2(2), 11; https://doi.org/10.3390/mti2020011 - 31 Mar 2018
Cited by 4 | Viewed by 3674
Abstract
In this paper we present the first results of a pilot experiment in the interpretation of multimodal observations of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. Domains of application for such cognitive model based systems are, for instance, healthy autonomous ageing or automated training systems. Abilities to observe cognitive abilities and emotional reactions can allow artificial systems to provide appropriate assistance in such contexts. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant’s awareness of the current situation and to predict ability to respond effectively to challenging situations. Feature selection has been performed to construct a multimodal classifier relying on the most relevant features from each modality. Initial results indicate that eye-gaze, body posture and emotion are good features to capture such awareness. This experiment also validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
(This article belongs to the Special Issue Human Behavior, Emotion and Representation)
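
For readers who want a concrete picture of this kind of pipeline, the short sketch below shows feature-level fusion of per-modality features with univariate feature selection followed by a standard classifier, using scikit-learn. It is a minimal illustration under assumed inputs, not the authors' implementation: the modalities, feature dimensions, labels and hyperparameters are hypothetical placeholders.

# Illustrative sketch (not the authors' pipeline): fuse per-modality feature
# vectors, keep the most discriminative features, then classify.
# All data below is synthetic; feature names and labels are hypothetical.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200

# Hypothetical per-modality features extracted per observation window.
gaze = rng.normal(size=(n_samples, 8))      # e.g. fixation/saccade statistics
posture = rng.normal(size=(n_samples, 6))   # e.g. torso/head displacement
emotion = rng.normal(size=(n_samples, 7))   # e.g. facial expression scores

X = np.hstack([gaze, posture, emotion])     # simple feature-level fusion
y = rng.integers(0, 2, size=n_samples)      # e.g. "in control" vs. "in difficulty"

# Univariate feature selection retains the k most discriminative features
# across all modalities; a standard SVM then performs the classification.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),
    SVC(kernel="rbf"),
)

scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~chance on random data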