Selected Papers from INTESA Workshop 2018

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: closed (15 November 2018) | Viewed by 8376

Special Issue Editors


Prof. William Fornaciari
Guest Editor
Politecnico di Milano – DEIB, Milano, Italy
Interests: embedded systems; high-performance computing; energy-aware design of HW and SW; multi-/many-core architectures; performance predictability and real-time; cybersecurity

Prof. Maurizio Martina
Guest Editor
Department of Electronics and Telecommunications, Polytechnic University of Turin, 10129 Torino, Italy
Interests: VLSI design; channel decoder architectures; image and video compression; digital signal processing

Special Issue Information

Dear Colleagues,

We have organized a Special Issue of Future Internet dedicated to the INTESA 2018 workshop.

The main purpose of this Special Issue is to publish extended versions of papers presented at INTESA 2018. Papers from attendees of Embedded Systems Week are also welcome. Submissions are expected to make significant contributions to key areas, ranging from architectures and design methodologies supporting embedded intelligence to best practices and software support for embedded intelligence.

The Special Issue especially focuses on works related to the journal topics.

Possible topics include, but are not limited to:

  • Special purpose hardware to support deep learning in embedded architectures
  • Edge computing for smart embedded systems: hardware and software aspects
  • Run-time resource management for smart IoT/Edge Computing systems
  • HW/SW codesign of Cyber Physical Systems
  • Programming models for IoT/Edge computing applications
  • Applications and case studies of intelligent embedded systems
  • Design methodologies and platforms for wearable computing
  • In-memory computing for unsupervised learning

Prof. William Fornaciari
Prof. Maurizio Martina
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • embedded intelligence
  • deep learning
  • edge computing

Published Papers (2 papers)


Research

31 pages, 2318 KiB  
Article
Fog vs. Cloud Computing: Should I Stay or Should I Go?
by Flávia Pisani, Vanderson Martins do Rosario and Edson Borin
Future Internet 2019, 11(2), 34; https://doi.org/10.3390/fi11020034 - 02 Feb 2019
Cited by 5 | Viewed by 3880
Abstract
In this article, we work toward the answer to the question “is it worth processing a data stream on the device that collected it or should we send it somewhere else?”. As is often the case in computer science, the response is “it depends”. To find out the cases where it is more profitable to stay in the device (which is part of the fog) or to go to a different one (for example, a device in the cloud), we propose two models that intend to help the user evaluate the cost of performing a certain computation on the fog or sending all the data to be handled by the cloud. In our generic mathematical model, the user can define a cost type (e.g., number of instructions, execution time, energy consumption) and plug in values to analyze test cases. As filters have a very important role in the future of the Internet of Things and can be implemented as lightweight programs capable of running on resource-constrained devices, this kind of procedure is the main focus of our study. Furthermore, our visual model guides the user in their decision by aiding the visualization of the proposed linear equations and their slope, which allows them to find whether fog or cloud computing is more profitable for their specific scenario. We validated our models by analyzing four benchmark instances (two applications using two different sets of parameters each) being executed on five datasets. We use execution time and energy consumption as the cost types for this investigation.
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)
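The cost comparison described in the abstract can be pictured as two straight lines, one per deployment option, each with a fixed setup cost and a per-record slope; the decision boundary is where the lines cross. The sketch below illustrates that idea with hypothetical numbers; the function names and cost values are illustrative assumptions, not the authors' actual model parameters.

```python
# Sketch of a generic linear cost comparison between fog and cloud:
# cost(n) = fixed setup cost + per-record cost * number of records.
# All names and numbers here are hypothetical.

def linear_cost(fixed, per_record, n_records):
    """Generic cost model: fixed overhead plus a per-record slope."""
    return fixed + per_record * n_records

def break_even(fixed_fog, per_fog, fixed_cloud, per_cloud):
    """Number of records at which fog and cloud costs are equal.

    Returns None when the two cost lines are parallel (same slope),
    in which case one option dominates for all workload sizes.
    """
    if per_fog == per_cloud:
        return None
    return (fixed_cloud - fixed_fog) / (per_fog - per_cloud)

# Hypothetical example: the fog device has no transfer overhead but a
# steeper slope (slower processing); the cloud pays a fixed transfer
# setup cost but processes each record more cheaply.
n_star = break_even(fixed_fog=0.0, per_fog=3.0, fixed_cloud=100.0, per_cloud=1.0)
print(n_star)  # 50.0 -> below 50 records, stay in the fog; above, go to the cloud
```

Plotting both lines and their intersection is essentially what the paper's visual model does for the user, with the cost type (instructions, time, or energy) chosen per scenario.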

15 pages, 737 KiB  
Article
Layer-Wise Compressive Training for Convolutional Neural Networks
by Matteo Grimaldi, Valerio Tenace and Andrea Calimera
Future Internet 2019, 11(1), 7; https://doi.org/10.3390/fi11010007 - 28 Dec 2018
Cited by 7 | Viewed by 3992
Abstract
Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize patterns. Recent advances demonstrate that CNNs are able to achieve, and often exceed, human capabilities in many application domains. Made of several millions of parameters, even the simplest CNN has a large model size. This characteristic is a serious concern for deployment on resource-constrained embedded systems, where compression stages are needed to meet the stringent hardware constraints. In this paper, we introduce a novel accuracy-driven compressive training algorithm. It consists of a two-stage flow: first, layers are sorted by means of heuristic rules according to their significance; second, a modified stochastic gradient descent optimization is applied on less significant layers such that their representation is collapsed into a constrained subspace. Experimental results demonstrate that our approach achieves remarkable compression rates with low accuracy loss (<1%).
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)
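The two-stage flow from the abstract (rank layers by significance, then collapse the less significant ones into a constrained subspace) can be sketched in a few lines. This is only an illustration of the idea under stated assumptions: the significance heuristic (mean absolute weight) and the ternary subspace {-1, 0, +1} are placeholders, not the authors' actual rules, and the collapse is applied once rather than inside a modified SGD loop.

```python
# Illustrative sketch of layer-wise compressive training.
# Heuristic, subspace, and names are assumptions, not the paper's code.

def significance(layer_weights):
    """Hypothetical heuristic: mean absolute weight as an importance proxy."""
    return sum(abs(w) for w in layer_weights) / len(layer_weights)

def collapse(layer_weights, levels=(-1.0, 0.0, 1.0)):
    """Project each weight onto the nearest value of a constrained subspace,
    here a fixed ternary set {-1, 0, +1}."""
    return [min(levels, key=lambda v: abs(w - v)) for w in layer_weights]

def compressive_pass(layers, keep_ratio=0.5):
    # Stage 1: sort layers by the heuristic significance score.
    ranked = sorted(layers, key=significance, reverse=True)
    n_keep = int(len(ranked) * keep_ratio)
    # Stage 2: constrain the less significant layers. In the paper this
    # happens inside a modified SGD optimization; applied once here for clarity.
    return [w if i < n_keep else collapse(w) for i, w in enumerate(ranked)]

layers = [[0.9, -1.2, 0.8],      # significant layer: kept as-is
          [0.05, -0.1, 0.02]]    # low-significance layer: collapsed
print(compressive_pass(layers))  # [[0.9, -1.2, 0.8], [0.0, 0.0, 0.0]]
```

Collapsing a layer to a small set of shared values is what makes the representation compressible: only the level index per weight needs to be stored, which is where the reported compression rates come from.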
