Multimedia and Cross-modal Retrieval

A special issue of Technologies (ISSN 2227-7080).

Deadline for manuscript submissions: closed (31 July 2019)

Special Issue Editors


Guest Editor
Prof. Dr. Ralph Ewerth
Leibniz Information Centre for Science and Technology, Technische Informationsbibliothek (TIB), Welfengarten 1 B, 30167 Hannover, Germany
Interests: multimedia retrieval; understanding of multimodal data; cross-modal search; visual analytics; multimedia applications

Co-Guest Editor
Dr. Anett Hoppe
Leibniz Information Centre for Science and Technology, Technische Informationsbibliothek (TIB), Welfengarten 1 B, 30167 Hannover, Germany
Interests: search as learning; multimedia retrieval; semantic web technologies; user profiling; science reproducibility; visual analytics; computer ethics

Special Issue Information

Dear Colleagues,

The proliferation and importance of multimedia data have increased significantly in recent years. This is obvious for the World Wide Web (social media data, videos, etc.), but automatically generated sensor data have also become increasingly relevant. In the era of big data, automatic indexing and understanding of multimedia information are essential to enable semantic, content-based search. Advanced analytics and intelligent human–computer interaction technologies are crucial for the exploration of large multimedia and multimodal datasets. Finally, there is a call for more transparency in (multimedia) retrieval systems, with applications ranging from the detection and adaptation of biased machine learning models to the automatic identification of fake information.

In this Special Issue, we seek contributions in the field of multimedia/multimodal analysis and retrieval in a broad sense. We invite submissions on, but not limited to, the following subject areas:

(a) analysis and understanding of multimodal data and cross-modal searches;
(b) social media analysis;
(c) affective multimedia content analysis;
(d) multimedia analytics, machine learning and deep learning for multimedia;
(e) HCI and visualisation for exploration of large multimedia databases;
(f) multimedia applications for academic search, digital humanities, sports, medicine, etc.

If you are unsure whether your paper fits the focus of this Special Issue, please contact the Guest Editors.

Prof. Dr. Ralph Ewerth
Dr. Anett Hoppe
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Content-based multimedia analysis and retrieval
  • Analysis and understanding of multimodal data
  • Cross-modal search and retrieval
  • Social media analysis and search
  • Affective multimedia content analysis
  • Transparency and bias of multimedia retrieval results
  • Novel interfaces and HCI for multimedia data
  • Machine learning and deep learning for multimedia
  • Multimedia information representation and knowledge graphs
  • Multimedia browsing, summarisation, and visualisation
  • Multimedia analytics
  • Applications: academic (multimedia) search engines, digital humanities, sports, medicine

Published Papers (2 papers)

Research

19 pages
Article
CogBeacon: A Multi-Modal Dataset and Data-Collection Platform for Modeling Cognitive Fatigue
by Michalis Papakostas, Akilesh Rajavenkatanarayanan and Fillia Makedon
Technologies 2019, 7(2), 46; https://doi.org/10.3390/technologies7020046 - 13 Jun 2019
Cited by 17
Abstract
In this work, we present CogBeacon, a multi-modal dataset designed to target the effects of cognitive fatigue in human performance. The dataset consists of 76 sessions collected from 19 male and female users performing different versions of a cognitive task inspired by the principles of the Wisconsin Card Sorting Test (WCST), a popular cognitive test in experimental and clinical psychology designed to assess cognitive flexibility, reasoning, and specific aspects of cognitive functioning. During each session, we record and fully annotate user EEG functionality, facial keypoints, real-time self-reports on cognitive fatigue, as well as detailed information of the performance metrics achieved during the cognitive task (success rate, response time, number of errors, etc.). Along with the dataset we provide free access to the CogBeacon data-collection software to provide a standardized mechanism to the community for collecting and annotating physiological and behavioral data for cognitive fatigue analysis. Our goal is to provide other researchers with the tools to expand or modify the functionalities of the CogBeacon data-collection framework in a hardware-independent way. As a proof of concept we show some preliminary machine learning-based experiments on cognitive fatigue detection using the EEG information and the subjective user reports as ground truth. Our experiments highlight the meaningfulness of the current dataset, and encourage our efforts towards expanding the CogBeacon platform. To our knowledge, this is the first multi-modal dataset specifically designed to assess cognitive fatigue and the only free software available to allow experiment reproducibility for multi-modal cognitive fatigue analysis.
(This article belongs to the Special Issue Multimedia and Cross-modal Retrieval)
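
The abstract above mentions preliminary machine-learning experiments that detect cognitive fatigue from EEG data, using the users' real-time self-reports as ground truth. The authors' actual features and models are not reproduced here; the following is a minimal, hypothetical sketch of that style of experiment, with invented data shapes and a generic scikit-learn classifier standing in for whatever the paper used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data standing in for windowed EEG features:
# 1000 time windows x 32 band-power features (shapes are invented).
X = rng.normal(size=(1000, 32))
# Binary fatigue labels, which in the paper come from the users'
# real-time self-reports (here: random placeholders).
y = rng.integers(0, 2, size=1000)

# Train and evaluate a generic classifier with 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

With real data, X would hold per-window EEG features aligned to the self-report annotations, and a session-wise split would be preferable to plain cross-validation to avoid leakage between windows from the same user.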

16 pages
Communication
A Pipeline for Rapid Post-Crisis Twitter Data Acquisition, Filtering and Visualization
by Mayank Kejriwal and Yao Gu
Technologies 2019, 7(2), 33; https://doi.org/10.3390/technologies7020033 - 02 Apr 2019
Cited by 6
Abstract
Due to instant availability of data on social media platforms like Twitter, and advances in machine learning and data management technology, real-time crisis informatics has emerged as a prolific research area in the last decade. Although several benchmarks are now available, especially on portals like CrisisLex, an important, practical problem that has not been addressed thus far is the rapid acquisition, benchmarking and visual exploration of data from free, publicly available streams like the Twitter API in the immediate aftermath of a crisis. In this paper, we present such a pipeline for facilitating immediate post-crisis data collection, curation and relevance filtering from the Twitter API. The pipeline is minimally supervised, alleviating the need for feature engineering by including a judicious mix of data preprocessing and fast text embeddings, along with an active learning framework. We illustrate the utility of the pipeline by describing a recent case study wherein it was used to collect and analyze millions of tweets in the immediate aftermath of the Las Vegas shootings in 2017.
(This article belongs to the Special Issue Multimedia and Cross-modal Retrieval)
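
The pipeline described above combines data preprocessing, fast text embeddings, and an active learning framework to filter crisis-relevant tweets with minimal supervision. As a rough, hypothetical illustration only (not the authors' implementation), the sketch below substitutes TF-IDF vectors for the text embeddings and uses uncertainty sampling as the active-learning criterion; all tweets, labels, and indices are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy corpus: a mix of crisis-relevant and irrelevant tweets.
tweets = [
    "shots reported near the strip, avoid the area",
    "great concert tonight!!",
    "police asking everyone to shelter in place",
    "buy cheap followers now",
]
labels = {0: 1, 3: 0}  # tiny seed set: tweet index -> relevance label

X = TfidfVectorizer().fit_transform(tweets)

# Active-learning loop: train on the labeled tweets, then query the
# unlabeled tweet the classifier is least certain about.
for _ in range(2):
    idx = sorted(labels)
    clf = LogisticRegression().fit(X[idx], [labels[i] for i in idx])
    pool = [i for i in range(len(tweets)) if i not in labels]
    if not pool:
        break
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # closest to 0.5
    labels[query] = 1  # stand-in for an analyst's relevance judgment

print(clf.predict(X))  # predicted relevance flags for all tweets
```

In a real deployment, the queried labels would come from a human annotator, and the trained classifier would then score the full stream collected from the Twitter API.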
