Multimodal Pattern Recognition of Social Signals in HCI

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (15 December 2022) | Viewed by 20624

Special Issue Editors


Guest Editor
Department of Computer Science, University of Arkansas at Little Rock, Little Rock, AR 72204, USA
Interests: computer vision; human computer interaction; AI; machine learning; evolutionary computation; augmented reality; computer graphics
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Institute of Neural Information Processing, Ulm University, James Frank Ring, 89081 Ulm, Germany
Interests: artificial neural networks; pattern recognition; cluster analysis; statistical learning theory; data mining; multiple classifier systems; sensor fusion; affective computing
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
Interests: computer vision; audio processing; machine learning; human–robot interaction

Special Issue Information

Dear Colleagues,

The 7th International Workshop on Multimodal Pattern Recognition of Social Signals in Human Computer Interaction (MPRSS 2022) will be held on 21 August 2022 in conjunction with the 26th International Conference on Pattern Recognition (ICPR 2022), which takes place on 21–25 August 2022 in Montreal, Canada. For more information about the workshop, please visit https://neuro.informatik.uni-ulm.de/MPRSS2022/. Building intelligent artificial companions capable of interacting with humans the way humans interact with each other is a major challenge in affective computing. Such an interactive companion must be able to perceive and interpret multimodal information about the user in order to produce an appropriate response. The workshop focuses mainly on pattern recognition and machine learning methods for perceiving the user’s affective states, activities, and intentions.

The authors of selected papers presented at the workshop are invited to submit extended versions to this Special Issue of the journal Computers after the conference. Submitted papers should be extended to the length of regular research or review articles, with at least 50% new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together on this Special Issue’s website. Accepted extended papers will be published free of charge. There are no page limitations in this journal.

We also invite regular submissions related to the latest challenges, technologies, solutions, techniques, and fundamentals pertaining to the topic of this Special Issue. Topics of interest include but are not limited to:

  • Algorithms to recognize emotions, behaviors, activities, and intentions
    • Facial expression recognition
    • Recognition of gestures, head/body poses
    • Audiovisual emotion recognition
    • Analysis of biophysiological data for emotion recognition
    • Multimodal information fusion architectures
    • Multiclassifier systems and multiview classifiers
    • Gesture recognition, activity recognition, behavior recognition
    • Temporal information fusion
  • Learning algorithms for social signal processing
    • Learning from unlabeled and partially labeled data
    • Learning with noisy/uncertain labels
    • Deep learning architectures
    • Learning of time series
  • Applications relevant to the workshop
    • Companion technologies
    • Robotics
    • Assistive systems
  • Benchmark datasets relevant to workshop topics

Prof. Dr. Mariofanna Milanova
Prof. Dr. Friedhelm Schwenker
Dr. Xavier Alameda-Pineda
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

Jump to: Review

19 pages, 455 KiB  
Article
Pain Detection in Biophysiological Signals: Knowledge Transfer from Short-Term to Long-Term Stimuli Based on Distance-Specific Segment Selection
by Tobias Benjamin Ricken, Peter Bellmann, Steffen Walter and Friedhelm Schwenker
Computers 2023, 12(4), 71; https://doi.org/10.3390/computers12040071 - 31 Mar 2023
Cited by 2 | Viewed by 1864
Abstract
In this study, we analyze a signal segmentation-specific pain duration transfer task by applying knowledge transfer from short-term (phasic) pain stimuli to long-term (tonic) pain stimuli. To this end, we focus on the physiological signals of the X-ITE Pain Database. We evaluate different distance-based segment selection approaches with the aim of identifying individual segments of the corresponding tonic stimuli that lead to the best classification performance. The phasic domain is used to train the classification model. In the first main step, we compute class-specific prototypes for the phasic domain. In the second main step, we compute the distances between all segments of the tonic samples and each prototype. The segment with the lowest distance to the prototypes is then fed to the classifier. Our analysis includes the evaluation of a variety of distance metrics, namely the Euclidean, Bray–Curtis, Canberra, Chebyshev, City-Block and Wasserstein distances. Our results show that in combination with most of the metrics used, the distance-based selection of one individual segment outperforms the naive approach in which the tonic stimuli are fed to the phasic domain-based classification model without any adaptation. Moreover, most of the evaluated distance-based segment selection approaches lead to outcomes that are close to the classification performance, which is obtained by focusing on the respective best segments. For instance, for the trapezius (TRA) signal, in combination with the electric pain domain, we obtained an averaged accuracy of 68.0%, while the naive approach led to 66.0%. For the thermal pain domain, in combination with the electrodermal activity (EDA) signal, we obtained an averaged accuracy of 59.6%, outperforming the naive approach, which led to 53.2%. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
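The prototype-based selection described in the abstract can be illustrated with a short sketch: class prototypes are mean feature vectors computed from the phasic (short-term) training segments, and for each tonic (long-term) sample the segment closest to any prototype is the one forwarded to the phasic-domain classifier. All names, shapes, and the synthetic data below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def class_prototypes(features, labels):
    """Mean feature vector per class over the phasic training segments."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def select_segment(tonic_segments, prototypes):
    """Pick the tonic segment with the smallest Euclidean distance
    to any class prototype (other metrics can be swapped in here)."""
    dists = [min(np.linalg.norm(seg - p) for p in prototypes.values())
             for seg in tonic_segments]
    return tonic_segments[int(np.argmin(dists))]

rng = np.random.default_rng(0)
phasic = rng.normal(size=(20, 4))      # 20 phasic segments, 4 features each
labels = np.array([0, 1] * 10)         # binary pain / no-pain labels
protos = class_prototypes(phasic, labels)

tonic = rng.normal(size=(6, 4))        # 6 candidate segments of one tonic sample
best = select_segment(tonic, protos)
print(best.shape)                      # one segment is fed to the classifier
```

The same skeleton accommodates the other metrics evaluated in the paper (Bray–Curtis, Canberra, Chebyshev, City-Block, Wasserstein) by replacing the Euclidean norm in `select_segment`.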

13 pages, 2493 KiB  
Article
Automatic Evaluation of Neural Network Training Results
by Roman Barinov, Vasiliy Gai, George Kuznetsov and Vladimir Golubenko
Computers 2023, 12(2), 26; https://doi.org/10.3390/computers12020026 - 20 Jan 2023
Cited by 5 | Viewed by 2941
Abstract
This article is dedicated to solving the problem of an insufficient degree of automation of artificial neural network training. Despite the availability of a large number of libraries for training neural networks, machine learning engineers often have to manually control the training process to detect overfitting or underfitting. This article considers the task of automatically estimating neural network training results through an analysis of learning curves. Such analysis allows one to determine one of three possible states of the training process: overfitting, underfitting, and optimal training. We propose several algorithms for extracting feature descriptions from learning curves using mathematical statistics. Further state classification is performed using classical machine learning models. The proposed automatic estimation model serves to improve the degree of automation of neural network training and interpretation of its results, while also taking a step toward constructing self-training models. In most cases when the training process of neural networks leads to overfitting, the developed model determines its onset ahead of the early stopping method by 3–5 epochs. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
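The idea of extracting statistical features from learning curves and flagging overfitting can be sketched as follows. This is a hedged illustration under assumed names, not the authors' implementation: it computes a few simple curve descriptors and marks the epoch after which validation loss rises steadily while training loss keeps falling.

```python
import numpy as np

def curve_features(train_loss, val_loss):
    """Simple statistical descriptors of a train/validation loss pair."""
    gap = np.asarray(val_loss) - np.asarray(train_loss)
    return {
        "final_gap": float(gap[-1]),                               # end-of-training generalization gap
        "gap_slope": float(np.polyfit(range(len(gap)), gap, 1)[0]),  # trend of the gap over epochs
        "val_min_epoch": int(np.argmin(val_loss)),                 # epoch of best validation loss
    }

def overfitting_onset(val_loss, patience=3):
    """First epoch after which validation loss rises for `patience` consecutive epochs."""
    v = np.asarray(val_loss)
    for t in range(len(v) - patience):
        if np.all(v[t + 1:t + 1 + patience] > v[t]):
            return t
    return None

train = [1.0, 0.7, 0.5, 0.4, 0.3, 0.25, 0.2]
val   = [1.1, 0.8, 0.6, 0.55, 0.6, 0.7, 0.8]
print(curve_features(train, val)["val_min_epoch"])  # 3
print(overfitting_onset(val))                       # 3
```

A classical classifier trained on such feature vectors, as the paper proposes, would then map each curve to one of the three states (overfitting, underfitting, optimal).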

22 pages, 5962 KiB  
Article
Intelligent Robotic Welding Based on a Computer Vision Technology Approach
by Nazar Kais AL-Karkhi, Wisam T. Abbood, Enas A. Khalid, Adnan Naji Jameel Al-Tamimi, Ali A. Kudhair and Oday Ibraheem Abdullah
Computers 2022, 11(11), 155; https://doi.org/10.3390/computers11110155 - 29 Oct 2022
Cited by 9 | Viewed by 4531
Abstract
Robots have become an essential part of welding departments in modern industries, increasing the accuracy and rate of production. Intelligent detection of welding line edges, so that the weld starts in the proper position, is very important. This work introduces a new image processing approach that detects welding lines by tracking the edges of plates at the speed required by a three-degrees-of-freedom robotic arm. The developed approach combines two algorithms: edge detection and top-hat transformation. An adaptive neuro-fuzzy inference system (ANFIS) was used to determine the best forward and inverse kinematics of the robot. MIG welding at the end-effector was applied as a tool in this system, and the weld was completed according to the required working conditions and performance. The parts of the system work compatibly and consistently, with acceptable accuracy in tracking the welding path. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)

17 pages, 4380 KiB  
Article
Machine Learning Models for Classification of Human Emotions Using Multivariate Brain Signals
by Shashi Kumar G. S., Ahalya Arun, Niranjana Sampathila and R. Vinoth
Computers 2022, 11(10), 152; https://doi.org/10.3390/computers11100152 - 13 Oct 2022
Cited by 9 | Viewed by 4164
Abstract
Humans can portray expressions that are contrary to their emotional state of mind. Therefore, it is difficult to judge a person’s real emotional state simply from their physical appearance. Although researchers are working on facial expression analysis, voice recognition, and gesture recognition, the accuracy of such analyses remains low and the results are not reliable; hence, a realistic emotion detector becomes vital. Electroencephalogram (EEG) signals are unaffected by a person’s external appearance and behavior and help to ensure accurate analysis of the state of mind. The EEG signals from electrodes in different scalp regions are studied for performance. Hence, EEG has gained attention over time as a means of obtaining accurate results for classifying human emotional states, both for human–machine interaction and for designing programs that let individuals self-analyze their emotional state. In the proposed scheme, we extract power spectral densities of multivariate EEG signals from different sections of the brain. From the extracted power spectral density (PSD), the features that best support classification are selected and classified using long short-term memory (LSTM) and bi-directional long short-term memory (Bi-LSTM) networks. A 2-D emotion model covering the frontal, parietal, temporal, and occipital regions is studied, and the region-based classification considers positive and negative emotions. Compared with our previous models based on an artificial neural network (ANN), support vector machine (SVM), K-nearest neighbor (K-NN), and LSTM, an accuracy of 94.95% was achieved using Bi-LSTM with four prefrontal electrodes. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
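The PSD-based feature extraction described in the abstract can be sketched with a minimal periodogram estimate of band power per EEG channel. The function and band names below are assumptions for illustration, not the paper's pipeline; a synthetic 10 Hz oscillation stands in for real alpha-band activity.

```python
import numpy as np

def band_power(signal, fs, band):
    """Integrate a periodogram PSD estimate over a frequency band [lo, hi)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.trapz(psd[mask], freqs[mask]))

fs = 128                                 # sampling rate in Hz
t = np.arange(fs * 4) / fs               # 4 s of synthetic data
eeg = np.sin(2 * np.pi * 10 * t)         # 10 Hz oscillation (alpha band)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(eeg, fs, b) for name, b in bands.items()}
print(max(powers, key=powers.get))       # alpha
```

Band-power vectors of this kind, computed per electrode and region, would then feed the LSTM/Bi-LSTM classifiers described in the abstract.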

10 pages, 5467 KiB  
Article
A New Method of Disabling Face Detection by Drawing Lines between Eyes and Mouth
by Chongyang Zhang and Hiroyuki Kameda
Computers 2022, 11(9), 134; https://doi.org/10.3390/computers11090134 - 8 Sep 2022
Viewed by 2217
Abstract
Face swapping technology is approaching maturity, and it is becoming difficult to distinguish real images from fake ones. To prevent malicious face swapping and ensure the privacy and security of personal photos, we propose a new way to disable the face detector at the face detection stage: adding a black line structure to the face region. Using neural network visualization, we found that the black line structure interrupts the continuity of the facial features extracted by the face detector, causing the three face detectors MTCNN, S3FD, and SSD to fail simultaneously. By widening the black line, MTCNN, S3FD, and SSD reach failure rates of up to 95.7%. To reduce the amount of perturbation added and determine the effective region for adding it, we first show experimentally that adding perturbation to the background cannot interfere with the detector’s detection of faces. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)

Review

Jump to: Research

16 pages, 519 KiB  
Review
A Critical Review on the 3D Cephalometric Analysis Using Machine Learning
by Shtwai Alsubai
Computers 2022, 11(11), 154; https://doi.org/10.3390/computers11110154 - 28 Oct 2022
Cited by 9 | Viewed by 3835
Abstract
Machine learning applications have markedly enhanced the quality of human life. The past few decades have seen the progression and application of machine learning in diverse medical fields. With the rapid advancement of technology, machine learning has gained prominence in the prediction and classification of diseases from medical images. This technological expansion in medical imaging has enabled the automated recognition of anatomical landmarks in radiographs. In this context, machine learning can support clinical decision support systems through image processing, a scope that extends to cephalometric analysis. Although machine learning has been applied in dentistry and medicine, its progression in orthodontics has been slow despite promising outcomes. Therefore, the present study performs a critical review of recent studies on the application of machine learning to 3D cephalometric analysis, covering landmark identification, decision making, and diagnosis. The study also examines the reliability and accuracy of existing methods that employ machine learning in 3D cephalometry. In addition, it outlines the integration of deep learning approaches into cephalometric analysis. Finally, the applications and challenges faced are briefly explained, and the final section presents a critical analysis summarizing the current scope of the field. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
