
Advances on Smart Vision Chips and Near-Sensor Inference for Edge AI

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 4851

Special Issue Editors


Prof. Dr. Ricardo Carmona-Galán
Guest Editor
CSIC - Instituto de Microelectrónica de Sevilla (IMSE-CNM), 41092 Sevilla, Spain
Interests: smart imagers and vision chips; 3D and ToF imagers; vertical integration of sensors; multispectral imaging

Prof. Dr. Jorge Fernández-Berni
Guest Editor
Universidad de Sevilla - Instituto de Microelectrónica de Sevilla (IMSE-CNM), 41092 Sevilla, Spain
Interests: smart CMOS imagers; vision chips; hardware-software co-design; low-power low-cost edge visual AI

Special Issue Information

Dear Colleagues,

The emergence of artificial vision is fueled by the convergence of advanced image sensing technologies and embedded artificial intelligence. In addition to the obvious requirements on image sensors—high spatial and temporal resolution, high dynamic range, 3D information—the challenge at the sensor plane is now the extraction of visually relevant information. Bringing visual sensing and processing to the edge, however, is very challenging. Convolutional neural networks, now at the core of most vision pipelines because of their high accuracy, come at the cost of a heavy computational load. Lightweight representations of the scene, abridged networks, and dedicated circuitry for inference acceleration need to be explored to achieve efficient visual processing at the edge.
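
To make this cost argument concrete, the following minimal Python sketch compares the multiply-accumulate (MAC) count of a standard convolution with that of a depthwise-separable one, a building block commonly used to abridge networks for edge deployment. The layer shape is a purely illustrative assumption, not taken from any particular chip or paper:

    # Back-of-the-envelope MAC counts for a single convolutional layer.
    # The layer shape below is a hypothetical example.

    def standard_conv_macs(h, w, c_in, c_out, k):
        # MACs for a standard k x k convolution over an h x w feature map.
        return h * w * c_in * c_out * k * k

    def depthwise_separable_macs(h, w, c_in, c_out, k):
        # MACs for a depthwise k x k convolution followed by a 1 x 1
        # pointwise convolution (the MobileNet-style factorization).
        depthwise = h * w * c_in * k * k
        pointwise = h * w * c_in * c_out
        return depthwise + pointwise

    if __name__ == "__main__":
        h, w, c_in, c_out, k = 56, 56, 64, 128, 3  # assumed layer shape
        std = standard_conv_macs(h, w, c_in, c_out, k)
        sep = depthwise_separable_macs(h, w, c_in, c_out, k)
        print(f"standard:  {std / 1e6:.1f} M MACs")
        print(f"separable: {sep / 1e6:.1f} M MACs ({std / sep:.1f}x fewer)")

For these shapes the factorized layer needs roughly 8x fewer operations, which is the kind of saving that makes on-chip or near-sensor inference tractable.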

This Special Issue aims to gather the latest results on smart vision front-ends that carry out optical sensing, smart transduction, and early processing to boost the performance of the whole visual pipeline. We welcome contributions on, but not limited to, the following topics:

  • Analog-to-information image sensors;
  • In-sensor feature learning and extraction;
  • Compressive sensing inference;
  • Always-on visual sensing;
  • Emerging devices and technologies for smart optoelectronics;
  • Hardware-software co-design for efficient on-chip sensing-processing;
  • Vertical circuit integration for hierarchical processing;
  • Circuits and systems for concurrent 2D/3D image sensing;
  • Non-conventional visual information coding for accelerated processing.

Prof. Dr. Ricardo Carmona-Galán
Prof. Dr. Jorge Fernández-Berni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart image sensors
  • vision chips
  • machine learning accelerators
  • integrated vision systems

Published Papers (1 paper)


Research

16 pages, 1578 KiB  
Article
Robustifying the Deployment of tinyML Models for Autonomous Mini-Vehicles
by Miguel de Prado, Manuele Rusci, Alessandro Capotondi, Romain Donze, Luca Benini and Nuria Pazos
Sensors 2021, 21(4), 1339; https://doi.org/10.3390/s21041339 - 13 Feb 2021
Cited by 32 | Viewed by 4376
Abstract
Standard-sized autonomous vehicles have rapidly improved thanks to the breakthroughs of deep learning. However, scaling autonomous driving to mini-vehicles poses several challenges due to their limited on-board storage and computing capabilities. Moreover, autonomous systems lack robustness when deployed in dynamic environments where the underlying distribution is different from the distribution learned during training. To address these challenges, we propose a closed-loop learning flow for autonomous driving mini-vehicles that includes the target deployment environment in-the-loop. We leverage a family of compact and high-throughput tinyCNNs to control the mini-vehicle that learn by imitating a computer vision algorithm, i.e., the expert, in the target environment. Thus, the tinyCNNs, having only access to an on-board fast-rate linear camera, gain robustness to lighting conditions and improve over time. Moreover, we introduce an online predictor that can choose between different tinyCNN models at runtime—trading accuracy and latency—which minimises the inference’s energy consumption by up to 3.2×. Finally, we leverage GAP8, a parallel ultra-low-power RISC-V-based micro-controller unit (MCU), to meet the real-time inference requirements. When running the family of tinyCNNs, our solution running on GAP8 outperforms any other implementation on the STM32L4 and NXP k64f (traditional single-core MCUs), reducing the latency by over 13× and the energy consumption by 92%.
(This article belongs to the Special Issue Advances on Smart Vision Chips and Near-Sensor Inference for Edge AI)
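
As an illustration of the runtime model-selection idea described in the abstract above, here is a hypothetical Python sketch: it picks the cheapest of several tinyCNN variants that is trusted for an estimated input difficulty. The model names, energy figures, and difficulty heuristic are all invented for illustration; the paper's actual online predictor may work quite differently:

    # Hypothetical runtime selector over tinyCNN variants: use the
    # lowest-energy model expected to be accurate enough for this input.
    from dataclasses import dataclass

    @dataclass
    class TinyCNN:
        name: str
        energy_mj: float       # assumed energy per inference (mJ)
        max_difficulty: float  # hardest input (0..1) it is trusted on

    # Variants ordered from cheapest to most capable (invented numbers).
    MODELS = [
        TinyCNN("tiny-s", energy_mj=0.05, max_difficulty=0.3),
        TinyCNN("tiny-m", energy_mj=0.12, max_difficulty=0.6),
        TinyCNN("tiny-l", energy_mj=0.30, max_difficulty=1.0),
    ]

    def estimate_difficulty(frame):
        # Placeholder difficulty score in [0, 1]; a real predictor might
        # use image statistics or the previous frame's confidence.
        return min(1.0, sum(frame) / len(frame))

    def select_model(frame):
        # Return the lowest-energy model trusted at this difficulty.
        difficulty = estimate_difficulty(frame)
        for model in MODELS:
            if difficulty <= model.max_difficulty:
                return model
        return MODELS[-1]

    if __name__ == "__main__":
        easy = [0.1, 0.2, 0.1, 0.2]
        hard = [0.9, 0.8, 1.0, 0.9]
        print(select_model(easy).name)  # tiny-s
        print(select_model(hard).name)  # tiny-l

Choosing the model per input rather than always running the largest network is what enables the accuracy-latency trade-off, and hence the energy savings, that the abstract reports.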
