
Convolutional Neural Networks and Edge Computing Application

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 10477

Special Issue Editors


Guest Editor
Information Science and Technology Institute (ISTI), Italian National Research Council (CNR), Moruzzi 1, 56124 Pisa, Italy
Interests: artificial intelligence; deep learning; information retrieval; similarity search; access methods for multimedia information retrieval; wireless sensor networks

Guest Editor
Information Science and Technology Institute (ISTI), Italian National Research Council (CNR), Moruzzi 1, 56124 Pisa, Italy
Interests: face recognition; face detection; edge computing; content-based image retrieval; wireless sensor networks

Special Issue Information

Dear Colleagues,

Edge computing has emerged as a new computing paradigm in the last few years, driven by the explosion of big data generated by millions of edge devices that exchange information without central coordination. In this paradigm, computation, data storage, and data management are pushed mainly to the edge of the network, onto the end devices where the data are usually produced and where processing is often required.

This brings several advantages: reduced transmission latency, improved computation efficiency, lower network congestion, much better scalability than a centralized system, and greater resilience to failures. Additionally, the privacy and security of the data are preserved, since the data do not need to traverse the network to reach a central server.

However, this new paradigm also poses new challenges, such as the handling of a massive amount of data. The massive diffusion of artificial intelligence technologies, such as convolutional neural networks, can certainly help to address the new challenges posed by edge computing and can further boost the advantages of this new paradigm.

This Special Issue seeks original, previously unpublished works addressing the issues and challenges related to the design, implementation, deployment, operation, and evaluation of new solutions based on the integration of the edge computing paradigm with convolutional neural networks.

Dr. Claudio Gennaro
Dr. Claudio Vairo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning algorithms for edge computing
  • Architectures, techniques, and applications of the intelligent edge cloud
  • Distributed machine learning algorithms for edge computing
  • Smart applications of edge computing
  • Deep learning applications at the edge
  • Deep learning inference at the edge
  • Edge computing for deep learning
  • Deep learning for edge optimization
  • Deep learning training at the edge
  • Machine learning in distributed camera networks
  • Federated learning
  • Reinforcement learning at the edge

Published Papers (4 papers)


Research

21 pages, 2188 KiB  
Article
GradFreeBits: Gradient-Free Bit Allocation for Mixed-Precision Neural Networks
by Benjamin Jacob Bodner, Gil Ben-Shalom and Eran Treister
Sensors 2022, 22(24), 9772; https://doi.org/10.3390/s22249772 - 13 Dec 2022
Cited by 1 | Viewed by 1554
Abstract
Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low-resource edge devices. Training QNNs using different levels of precision throughout the network (mixed-precision quantization) typically achieves superior trade-offs between performance and computational load. However, optimizing the precision levels of QNNs can be complicated, as the bit allocations are discrete values that cannot be differentiated through. Moreover, adequately accounting for the dependencies between the bit allocations of different layers is not straightforward. To meet these challenges, in this work, we propose GradFreeBits: a novel joint optimization scheme for training mixed-precision QNNs, which alternates between gradient-based optimization for the weights and gradient-free optimization for the bit allocation. Our method achieves performance better than or on par with the current state-of-the-art low-precision classification networks on CIFAR10/100 and ImageNet, semantic segmentation networks on Cityscapes, and several graph neural network benchmarks. Furthermore, our approach can be extended to a variety of other applications involving neural networks used in conjunction with parameters that are difficult to optimize for. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Edge Computing Application)
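The alternation the abstract describes (gradient-based updates for the weights, gradient-free search over the discrete bit widths) can be sketched in miniature. This is not the authors' algorithm: `quantize`, `task_loss`, and the random coordinate search below are hypothetical stand-ins, and the "network" is just a chain of linear layers, used only to show why a gradient-free search fits a discrete bit allocation.

```python
import numpy as np

def quantize(w, bits):
    # uniform symmetric quantization of a weight matrix to the given bit width
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def task_loss(weights, bit_alloc, x, y):
    # surrogate loss: mean squared error of a chain of quantized linear layers
    out = x
    for w, b in zip(weights, bit_alloc):
        out = out @ quantize(w, b)
    return float(np.mean((out - y) ** 2))

def gradient_free_bit_search(weights, x, y, candidates=(2, 4, 8),
                             iters=20, seed=0):
    # random coordinate search over the discrete bit allocation:
    # mutate one layer's bit width at a time, keep the change if the
    # loss does not get worse (no gradients w.r.t. the bits are needed)
    rng = np.random.default_rng(seed)
    alloc = [max(candidates)] * len(weights)
    best = task_loss(weights, alloc, x, y)
    for _ in range(iters):
        i = rng.integers(len(alloc))
        trial = alloc.copy()
        trial[i] = candidates[rng.integers(len(candidates))]
        loss = task_loss(weights, trial, x, y)
        if loss <= best:
            alloc, best = trial, loss
    return alloc, best
```

In a full scheme this search would alternate with gradient descent on the (quantized) weights; here only the gradient-free half is shown.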

17 pages, 1448 KiB  
Article
Optimization of Edge Resources for Deep Learning Application with Batch and Model Management
by Seungwoo Kum, Seungtaek Oh, Jeongcheol Yeom and Jaewon Moon
Sensors 2022, 22(17), 6717; https://doi.org/10.3390/s22176717 - 5 Sep 2022
Cited by 7 | Viewed by 2706
Abstract
As deep learning technology matures, real-world applications that make use of it have become popular. Edge computing is one of the service architectures for realizing deep-learning-based services, as it exploits resources near the data source or client. In an edge computing architecture, managing resource usage becomes important. Alongside research on the efficient distribution of workloads across cloud and edge resources, there is research on optimizing deep learning models themselves, such as pruning or binarization, to make them more lightweight; both lines of work aim to reduce the load on edge resources. In this paper, a usage optimization method based on batch and model management is proposed. The method increases the utilization of the GPU resource by adjusting the batch size of the input to an inference application. To this end, the inference pipelines are analyzed to see how the different kinds of resources are used, and the effect of batch inference on the GPU is measured. The proposed method consists of several modules, including a batch size management tool that changes the batch size according to the available resources, and a model management tool that supports on-the-fly updates of a model. The proposed methods are implemented in a real-time video analysis application and deployed as a Docker container in a Kubernetes cluster. The results show that the proposed method can optimize the usage of edge resources for real-time video analysis deep learning applications. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Edge Computing Application)
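The core idea of batch management against available resources can be illustrated with a minimal heuristic. This sketch is not the paper's tool: the function name, the memory model (model footprint plus a per-sample activation cost), and the candidate batch sizes are all assumptions made for illustration.

```python
def pick_batch_size(free_mem_mb, per_sample_mb, model_mb,
                    candidates=(1, 2, 4, 8, 16, 32)):
    """Choose the largest candidate batch size whose estimated footprint
    (model weights + per-sample activations) fits in free GPU memory.

    Returns None when even a batch of one sample does not fit."""
    feasible = [b for b in candidates
                if model_mb + b * per_sample_mb <= free_mem_mb]
    return max(feasible) if feasible else None
```

For example, with 1000 MB free, a 200 MB model, and 50 MB per sample, the largest feasible batch is 16; a real manager would re-evaluate this whenever the free memory reported by the device changes.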

15 pages, 546 KiB  
Article
Block-Based Compression and Corresponding Hardware Circuits for Sparse Activations
by Yui-Kai Weng, Shih-Hsu Huang and Hsu-Yu Kao
Sensors 2021, 21(22), 7468; https://doi.org/10.3390/s21227468 - 10 Nov 2021
Cited by 2 | Viewed by 1574
Abstract
In a CNN (convolutional neural network) accelerator, to reduce memory traffic and power consumption, there is a need to exploit the sparsity of activation values. Therefore, research efforts have been devoted to skipping ineffectual computations (i.e., multiplications by zero). Different from previous works, in this paper, we point out the similarity of activation values: (1) in the same layer of a CNN model, most feature maps are either highly dense or highly sparse; (2) in the same layer of a CNN model, feature maps in different channels are often similar. Based on these two observations, we propose a block-based compression approach, which utilizes both the sparsity and the similarity of activation values to further reduce the data volume. Moreover, we also design an encoder, a decoder, and an indexing module to support the proposed approach. The encoder is used to translate output activations into the proposed block-based compression format, while both the decoder and the indexing module are used to align nonzero values for effectual computations. Compared with previous works, benchmark data consistently show that the proposed approach can greatly reduce both memory traffic and power consumption. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Edge Computing Application)
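The abstract describes the block-based format only at a high level; the sketch below is a plausible software analogue, not the paper's hardware format. It exploits the first observation (blocks are either highly dense or highly sparse) by storing sparse blocks as a bitmask plus their nonzero values and dense blocks raw; the block size and sparsity threshold are assumptions.

```python
import numpy as np

def compress_blocks(act, block=4, sparse_thresh=0.5):
    # split a 1-D activation stream into fixed-size blocks; sparse blocks
    # become ("sparse", bitmask, nonzero values), dense blocks stay raw
    out = []
    for i in range(0, len(act), block):
        chunk = act[i:i + block]
        nz = chunk != 0
        if nz.mean() <= sparse_thresh:
            out.append(("sparse", nz.copy(), chunk[nz].copy()))
        else:
            out.append(("dense", chunk.copy()))
    return out

def decompress_blocks(blocks):
    # inverse transform: scatter nonzero values back through each bitmask
    chunks = []
    for rec in blocks:
        if rec[0] == "sparse":
            _, mask, vals = rec
            chunk = np.zeros(len(mask), dtype=vals.dtype)
            chunk[mask] = vals
        else:
            chunk = rec[1]
        chunks.append(chunk)
    return np.concatenate(chunks)
```

In the accelerator described by the abstract, the encoder, decoder, and indexing module would play the roles of `compress_blocks`, `decompress_blocks`, and the bitmask alignment, respectively, in hardware rather than software.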

29 pages, 2208 KiB  
Article
Modeling of a Generic Edge Computing Application Design
by Pedro Juan Roig, Salvador Alcaraz, Katja Gilly, Cristina Bernad and Carlos Juiz
Sensors 2021, 21(21), 7276; https://doi.org/10.3390/s21217276 - 1 Nov 2021
Cited by 4 | Viewed by 3505
Abstract
Edge computing applications leverage advances in edge computing along with the latest trends in convolutional neural networks to achieve the ultra-low-latency, high-speed-processing, low-power-consumption scenarios necessary for deploying real-time Internet of Things systems efficiently. As the importance of such scenarios grows by the day, we propose two different kinds of models: an algebraic model, built with the process algebra ACP, and a coding model, written in the modeling language Promela. Both approaches have been used to build models of an edge infrastructure with a cloud backup, which has been further extended with the addition of extra fog nodes, and all models have been duly verified with the appropriate verification techniques. Specifically, a generic edge computing design has been specified algebraically with ACP and verified algebraically, and it has also been specified in Promela code, which has been verified with the model checker Spin. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Edge Computing Application)
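Spin verifies a Promela model by exhaustively exploring its finite state space. A toy Python analogue of that exploration is sketched below; the edge/cloud request model (an edge node that offloads to a cloud backup when busy) is invented for illustration and is not the paper's design, and the breadth-first invariant check is only a simplified stand-in for what Spin actually does.

```python
from collections import deque

def explore(initial, transitions, invariant):
    # exhaustive breadth-first exploration of a finite state space,
    # checking that the invariant holds in every reachable state
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state          # counterexample state found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

# toy model: (queued, at_edge, at_cloud) request counters; a busy edge
# node offloads incoming requests to the cloud backup instead of dropping them
def transitions(state):
    queued, edge, cloud = state
    succ = []
    if queued > 0 and edge == 0:
        succ.append((queued - 1, 1, cloud))          # edge accepts a request
    if queued > 0 and edge == 1:
        succ.append((queued - 1, edge, cloud + 1))   # busy edge offloads
    if edge == 1:
        succ.append((queued, 0, cloud))              # edge finishes its request
    if cloud > 0:
        succ.append((queued, edge, cloud - 1))       # cloud finishes a request
    return succ
```

Checking the invariant "counters never go negative and no requests appear out of nowhere" over all states reachable from three queued requests succeeds, mirroring (in a very small way) how Spin certifies a safety property of the Promela specification.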
