
Machine Learning for Signal, Image, and Video Processing

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Physical Sensors".


Editor


Dr. Paolo Gastaldo
Collection Editor
Electrical, Electronics and Telecommunication Engineering and Naval Architecture Department (DITEN), University of Genoa, 16145 Genova, Italy
Interests: machine learning; embedded systems; edge computing; deep learning for computer vision; machine learning for robotics and prosthetic limbs

Topical Collection Information

Dear Colleagues,

Through the combination of effective theoretical models and powerful computing resources, machine learning (ML) is becoming a fundamental technology for the development of smart sensing systems. In this regard, one of the main challenges for the future is the integration of ML into suitable hardware devices: ML may require very high processing power (e.g., deep learning), while sensing systems may impose hard constraints on computational resources (e.g., battery-operated devices).

This Topical Collection focuses on ML models for signal, image, and video processing. The goal is to collect manuscripts presenting methodologies, systems, and novel solutions that address the integration of ML into hardware platforms for building the next generation of sensor-based intelligent systems.

The topics of interest for this Collection include, but are not limited to:

  • High-performance, low-power computing for deep-learning-based computer vision;
  • High-performance, low-power computing for deep-learning-based audio and speech processing;
  • Embedded machine learning;
  • Machine learning implementations on FPGAs;
  • Online learning on resource-constrained edge devices;
  • On-chip training of machine learning models;
  • Lightweight architectures for deep learning;
  • Adversarial attacks on machine learning.

Dr. Paolo Gastaldo
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)

2023

18 pages, 2406 KiB  
Article
Joint Video Super-Resolution and Frame Interpolation via Permutation Invariance
by Jinsoo Choi and Tae-Hyun Oh
Sensors 2023, 23(5), 2529; https://doi.org/10.3390/s23052529 - 24 Feb 2023
Abstract
We propose a joint super resolution (SR) and frame interpolation framework that can perform both spatial and temporal super resolution. We identify performance variation according to permutation of inputs in video super-resolution and video frame interpolation. We postulate that favorable features extracted from multiple frames should be consistent regardless of input order if the features are optimally complementary for respective frames. With this motivation, we propose a permutation invariant deep architecture that makes use of the multi-frame SR principles by virtue of our order (permutation) invariant network. Specifically, given two adjacent frames, our model employs a permutation invariant convolutional neural network module to extract “complementary” feature representations facilitating both the SR and temporal interpolation tasks. We demonstrate the effectiveness of our end-to-end joint method against various combinations of the competing SR and frame interpolation methods on challenging video datasets, and thereby we verify our hypothesis.
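The core idea of the paper — a module whose output does not depend on the order of the two input frames — can be illustrated with a toy sketch. The function names, the feature extractor, and the choice of symmetric pooling (elementwise max and mean) below are illustrative assumptions, not the authors' actual architecture; any symmetric aggregation over the per-frame features yields the same order-invariance property.

```python
import numpy as np

def extract_features(frame, W):
    # Toy per-frame feature extractor (a stand-in for a shared CNN branch).
    return np.tanh(frame @ W)

def permutation_invariant_features(f_a, f_b):
    # Symmetric pooling: elementwise max and mean are unchanged when the
    # two inputs are swapped, so the fused representation is invariant
    # to the order of the input frames.
    return np.concatenate([np.maximum(f_a, f_b), 0.5 * (f_a + f_b)], axis=-1)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))          # shared weights across both frames
frame1, frame2 = rng.standard_normal((2, 8))

fused_ab = permutation_invariant_features(
    extract_features(frame1, W), extract_features(frame2, W))
fused_ba = permutation_invariant_features(
    extract_features(frame2, W), extract_features(frame1, W))

# Swapping the input order leaves the fused features unchanged.
assert np.allclose(fused_ab, fused_ba)
```

Because the pooling is symmetric, the downstream SR and interpolation heads see the same representation regardless of which adjacent frame is presented first, which is the property the abstract refers to as permutation invariance.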
