Edge Computing and Tiny Machine Learning in the Internet of Things: Latest Advances and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 May 2024

Special Issue Editors


Guest Editor
Department of Information Engineering, University of Pisa, via G. Caruso 16, 56122 Pisa, Italy
Interests: electronics systems; embedded systems; edge computing; data acquisition; machine learning; industrial IoT; telemedicine; assistive technology

Special Issue Information

Dear Colleagues,

Internet of Things (IoT) applications have become popular over the last few years in numerous sectors, including home automation, healthcare, transportation, agriculture, manufacturing, energy management, and smart cities. 

With the rapid growth of IoT devices, the increasing amount of generated data, and the need for real-time data processing and low-latency services, edge computing and tiny machine learning have emerged as promising approaches to address the challenges posed by centralized cloud computing by bringing computational and storage capabilities closer to the data source.

This Special Issue seeks original contributions that delve into cutting-edge advancements, challenges, and practical edge computing applications in the IoT landscape. Authors are encouraged to present novel ideas, theoretical models, enabling technologies, experimental results, and practical implementations that contribute to the advancement of edge computing and tiny machine learning in the IoT domain. Interdisciplinary research that combines edge computing and tiny machine learning with other emerging technologies, such as blockchain, artificial intelligence, and augmented reality, is highly welcomed.

The topics of interest for this Special Issue include, but are not limited to:

  • Edge computing architectures and frameworks for IoT;
  • Edge intelligence and tiny machine learning;
  • Edge analytics and data processing techniques;
  • Security and privacy challenges in edge computing for IoT;
  • Resource management and optimization in edge environments;
  • Enabling technologies and hardware accelerators for edge computing;
  • Edge-based data storage and retrieval mechanisms;
  • Communication protocols and networking solutions for edge IoT;
  • Performance evaluation and benchmarking of edge computing systems;
  • Edge-assisted IoT applications and services;
  • Edge-enabled systems and smart wearable devices;
  • Edge computing in 5G and beyond networks.

Dr. Massimiliano Donati
Prof. Dr. Riccardo Berta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • internet of things
  • IoT
  • smart device
  • edge intelligence
  • edge applications
  • tiny machine learning

Published Papers (6 papers)


Research

13 pages, 1012 KiB  
Article
Edge HPC Architectures for AI-Based Video Surveillance Applications
by Federico Rossi and Sergio Saponara
Electronics 2024, 13(9), 1757; https://doi.org/10.3390/electronics13091757 - 02 May 2024
Abstract
The introduction of artificial intelligence (AI) in video surveillance systems has significantly transformed security practices, allowing for autonomous monitoring and real-time detection of threats. However, the effectiveness and efficiency of AI-powered surveillance rely heavily on the hardware infrastructure, specifically high-performance computing (HPC) architectures. This article examines the impact of different platforms for HPC edge servers, including x86 and ARM CPU-based systems and Graphics Processing Units (GPUs), on the speed and accuracy of video processing tasks. By using advanced deep learning frameworks, a video surveillance system based on YOLO object detection and DeepSort tracking algorithms is developed and evaluated. This study thoroughly assesses the strengths, limitations, and suitability of different hardware architectures for various AI-based surveillance scenarios.
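
As a rough illustration of the kind of pipeline the paper evaluates, the sketch below wires a YOLO detector to a DeepSORT tracker using the community ultralytics and deep-sort-realtime packages; these package choices, the model file, and the video source are assumptions for the example, not the authors' actual stack.

```python
# Hedged sketch: YOLO detection feeding a DeepSORT-style tracker.
import cv2
from ultralytics import YOLO
from deep_sort_realtime.deepsort_tracker import DeepSort

model = YOLO("yolov8n.pt")        # small detector; runs on CPU or GPU
tracker = DeepSort(max_age=30)    # drop tracks unseen for 30 frames

cap = cv2.VideoCapture("camera.mp4")   # illustrative video source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people (COCO class 0) in the current frame
    result = model(frame, classes=[0], verbose=False)[0]
    detections = []
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # DeepSORT expects (left, top, width, height), confidence, class
        detections.append(([x1, y1, x2 - x1, y2 - y1], float(box.conf), "person"))
    # Associate detections with existing tracks (appearance + motion)
    tracks = tracker.update_tracks(detections, frame=frame)
    for t in tracks:
        if t.is_confirmed():
            print(t.track_id, t.to_ltrb())
cap.release()
```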
17 pages, 2074 KiB  
Article
CBin-NN: An Inference Engine for Binarized Neural Networks
by Fouad Sakr, Riccardo Berta, Joseph Doyle, Alessio Capello, Ali Dabbous, Luca Lazzaroni and Francesco Bellotti
Electronics 2024, 13(9), 1624; https://doi.org/10.3390/electronics13091624 - 24 Apr 2024
Abstract
Binarization is an extreme quantization technique that is attracting research in the Internet of Things (IoT) field, as it radically reduces the memory footprint of deep neural networks without a correspondingly significant accuracy drop. To support the effective deployment of Binarized Neural Networks (BNNs), we propose CBin-NN, a library of layer operators that allows the building of simple yet flexible convolutional neural networks (CNNs) with binary weights and activations. CBin-NN is platform-independent and is thus portable to virtually any software-programmable device. Experimental analysis on the CIFAR-10 dataset shows that our library, compared to a set of state-of-the-art inference engines, speeds up inference by 3.6 times and reduces the memory required to store model weights and activations by 7.5 times and 28 times, respectively, at the cost of slightly lower accuracy (2.5%). An ablation study stresses the importance of a Quantized Input Quantized Kernel Convolution layer to improve accuracy and reduce latency at the cost of a slight increase in model size.
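
To see why binarization shrinks both memory and compute, here is a minimal NumPy sketch of the XNOR-plus-popcount dot product that binary layers build on; it is illustrative only and not CBin-NN's actual kernels.

```python
# Binary dot product: pack {-1,+1} signs into bits, then XNOR + popcount.
import numpy as np

def binarize(x):
    """Pack the sign bits of a float vector (>= 0 maps to bit 1)."""
    return np.packbits((x >= 0).astype(np.uint8))

def xnor_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors from their packed sign bits."""
    matches = np.unpackbits(~(a_bits ^ b_bits))[:n].sum()  # XNOR, popcount
    return 2 * int(matches) - n  # matches minus mismatches

rng = np.random.default_rng(0)
w, x = rng.standard_normal(64), rng.standard_normal(64)
print(xnor_dot(binarize(w), binarize(x), 64))   # binary dot product
print(int(np.sign(w) @ np.sign(x)))             # float reference: same value
```

The 64 packed weights occupy 8 bytes instead of 256 bytes of float32, which is the source of the memory reductions the abstract reports.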

12 pages, 533 KiB  
Article
A Voice User Interface on the Edge for People with Speech Impairments
by Davide Mulfari and Massimo Villari
Electronics 2024, 13(7), 1389; https://doi.org/10.3390/electronics13071389 - 07 Apr 2024
Abstract
Nowadays, fine-tuning has emerged as a powerful technique in machine learning, enabling models to adapt to a specific domain by leveraging pre-trained knowledge. One such application domain is automatic speech recognition (ASR), where fine-tuning plays a crucial role in addressing data scarcity, especially for languages with limited resources. In this study, we applied fine-tuning in the context of atypical speech recognition, focusing on Italian speakers with speech impairments, e.g., dysarthria. Our objective was to build a speaker-dependent voice user interface (VUI) tailored to their unique needs. To achieve this, we harnessed a pre-trained OpenAI Whisper model, which has been exposed to vast amounts of general speech data. However, to adapt it specifically to disordered speech, we fine-tuned it using our private corpus of 65 K voice recordings contributed by 208 speech-impaired individuals globally. We exploited three variants of the Whisper model (small, base, tiny) and, by evaluating their relative performance, aimed to identify the most accurate configuration for handling disordered speech patterns. Furthermore, our study dealt with the local deployment of the trained models on edge computing nodes, with the aim of realizing custom VUIs for persons with impaired speech.
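
For context, a minimal sketch of local Whisper inference via the Hugging Face transformers pipeline is shown below; the study deploys fine-tuned, speaker-dependent checkpoints, which this sketch does not reproduce, and the audio file name is illustrative.

```python
# Hedged sketch: on-device transcription with a public Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",   # smallest of the three variants studied
)
# Transcribe a local recording; language forced to Italian, as in the study
result = asr("recording.wav", generate_kwargs={"language": "italian"})
print(result["text"])
```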

22 pages, 7383 KiB  
Article
GymHydro: An Innovative Modular Small-Scale Smart Agriculture System for Hydroponic Greenhouses
by Cristian Bua, Davide Adami and Stefano Giordano
Electronics 2024, 13(7), 1366; https://doi.org/10.3390/electronics13071366 - 04 Apr 2024
Abstract
In response to the challenges posed by climate change, including extreme weather events, such as heavy rainfall and droughts, the agricultural sector is increasingly seeking solutions for the efficient use of resources, particularly water. Pivotal aspects of smart agriculture include the establishment of weather-independent systems and the implementation of precise monitoring and control of plant growth and environmental conditions. Hydroponic cultivation techniques have emerged as transformative solutions with the potential to reduce water consumption for cultivation and offer a sheltered environment for crops, protecting them from the unpredictable impacts of climate change. However, a significant challenge lies in the frequent need for human intervention to ensure the efficiency and effectiveness of these systems. This paper introduces a novel system with a modular architecture, offering the ability to incorporate new functionalities without necessitating a complete system redesign. The autonomous hydroponic greenhouse, designed and implemented in this study, maintains stable environmental parameters to create an ideal environment for cultivating tomato plants. Actuators, receiving commands from a cloud application situated at the network's edge, automatically regulate environmental conditions. Decision-making within this application is facilitated by a PID control algorithm, ensuring precision in control commands transmitted through the MQTT protocol and the NGSI-LD message format. The system transitioned from a single virtual machine in the public cloud to edge computing, specifically on a Raspberry Pi 3, to address latency concerns. In this study, we analyzed various delay aspects and network latency to better understand their contribution to the overall service delay. This transition resulted in a significant reduction in communication latency and total service delay, enhancing the system's real-time responsiveness. The utilization of LoRa communication technology connects IoT devices to a gateway, typically located at the main farm building, addressing the challenge of limited Internet connectivity in remote greenhouse locations. Monitoring data are made accessible to end-users through a smartphone app, offering real-time insights into the greenhouse environment. Furthermore, end-users have the capability to modify system parameters manually and remotely when necessary. This approach not only provides a robust solution to climate-induced challenges but also enhances the efficiency and intelligence of agricultural practices. The transition to digitization poses a significant challenge for farmers. Our proposed system not only represents a step forward toward sustainable and precise agriculture but also serves as a practical demonstrator, providing farmers with a key tool during this crucial digital transition. The demonstrator enables farmers to optimize crop growth and resource management, concretely showcasing the benefits of smart and precise agriculture.
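
The control pattern described, a PID loop issuing actuator commands over MQTT, can be sketched as follows. The broker host, topic, gains, setpoint, and sensor stub are all hypothetical; the real system encodes commands as NGSI-LD entities, which this sketch omits for brevity.

```python
# Hedged sketch: PID control loop publishing actuator commands over MQTT.
import time
import paho.mqtt.client as mqtt

KP, KI, KD = 2.0, 0.1, 0.5   # illustrative PID gains
SETPOINT = 24.0              # hypothetical target temperature, deg C
DT = 5.0                     # control period, seconds

def read_temperature():
    return 22.5              # stand-in for the real LoRa sensor uplink

client = mqtt.Client()       # paho-mqtt 1.x style constructor
client.connect("edge-gateway.local", 1883)   # hypothetical broker

integral, prev_err = 0.0, 0.0
for _ in range(3):           # a few control cycles for the demo
    err = SETPOINT - read_temperature()
    integral += err * DT
    derivative = (err - prev_err) / DT
    prev_err = err
    command = KP * err + KI * integral + KD * derivative
    client.publish("greenhouse/actuators/heater", f"{command:.2f}")
    time.sleep(DT)
```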

13 pages, 1561 KiB  
Article
Forward Learning of Large Language Models by Consumer Devices
by Danilo Pietro Pau and Fabrizio Maria Aymone
Electronics 2024, 13(2), 402; https://doi.org/10.3390/electronics13020402 - 18 Jan 2024
Abstract
Large Language Models achieve state-of-the-art performance on a broad variety of Natural Language Processing tasks. In the pervasive IoT era, their deployment on edge devices is more compelling than ever. However, their gigantic model footprint has hindered on-device learning applications, which enable AI models to continuously learn and adapt to changes over time. Back-propagation, used by the majority of deep learning frameworks, is computationally intensive and requires storing intermediate activations in memory to compute the model's weight updates. Recently, "forward-only algorithms" have been proposed as biologically plausible alternatives. By applying more "forward" passes than naive approaches, this class of algorithms achieves memory reductions by removing the need to store intermediate activations, at the expense of increased computational complexity. This paper considered three Large Language Models: DistilBERT, GPT-3 Small, and AlexaTM. It quantitatively investigated the memory usage and computational complexity improvements brought by the known approaches PEPITA and MEMPEPITA with respect to backpropagation. For a low number of tokens in context, and depending on the model, PEPITA increases arithmetic operations marginally or reduces them substantially. On the other hand, for a large number of tokens in context, PEPITA reduces computational complexity by 30% to 50%. MEMPEPITA increases PEPITA's complexity by one third. Regarding memory, PEPITA and backpropagation require a comparable amount of memory to store activations, while MEMPEPITA reduces it by 50% to 94%, with the benefits being more evident for architectures with a long sequence of blocks. In various real-case scenarios, MEMPEPITA's memory reduction was essential for meeting the tight memory requirements of edge consumer devices equipped with 128 MB of memory, which are commonly available as smartphone and industrial application multiprocessors.
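
To make the two-forward-pass idea concrete, here is a toy NumPy sketch of the PEPITA update rule (Dellaferrera and Kreiman, 2022) on a two-layer perceptron; layer sizes, learning rate, and the random projection scale are arbitrary choices for the example, not the paper's configuration.

```python
# Toy PEPITA step: two forward passes, layer-local updates, no backward pass.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 8, 16, 4, 0.01
W1 = rng.standard_normal((n_hid, n_in)) * 0.1
W2 = rng.standard_normal((n_out, n_hid)) * 0.1
F = rng.standard_normal((n_in, n_out)) * 0.05  # fixed random projection

def forward(x):
    h = np.maximum(W1 @ x, 0.0)   # ReLU hidden layer
    return h, W2 @ h

x = rng.standard_normal(n_in)
target = np.eye(n_out)[1]         # one-hot label

h, y = forward(x)                 # first (clean) forward pass
e = y - target                    # output error
x_mod = x + F @ e                 # error-modulated input
h_err, _ = forward(x_mod)         # second forward pass

W1 -= lr * np.outer(h - h_err, x_mod)   # hidden layer: activation difference
W2 -= lr * np.outer(e, h_err)           # output layer: error times activations
```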

22 pages, 721 KiB  
Article
Tiny Machine Learning Zoo for Long-Term Compensation of Pressure Sensor Drifts
by Danilo Pau, Welid Ben Yahmed, Fabrizio Maria Aymone, Gian Domenico Licciardo and Paola Vitolo
Electronics 2023, 12(23), 4819; https://doi.org/10.3390/electronics12234819 - 28 Nov 2023
Cited by 1
Abstract
Pressure sensors embodied in very tiny packages are deployed in a wide range of advanced applications, from industrial monitoring to altitude-based location services, and are becoming increasingly pervasive in fields ranging from industrial to military to consumer. However, the inexpensive manufacturing technology of these sensors is strongly affected by environmental stresses, which ultimately affect their measurement accuracy in the form of variations in gain, hysteresis, and nonlinear responses. Thermal stresses are the main source of sensor behavior deviation. They are particularly insidious because even a few minutes of high-temperature exposure can cause measurement drift in the sensor responses for many days. Therefore, conventional calibration techniques struggle to maintain high accuracy over the entire deployment life of the sensor, and managing this requires several costly and time-consuming calibration procedures. Machine learning (ML) techniques, supported by the universal approximation theorem, are known to provide effective data-driven solutions to the above problems. In this context, this paper addresses two case studies, corresponding to post-soldering thermal stresses and exposure to moderately high temperatures, for which two separate datasets have been built and 53 different tiny ML models (collected into a zoo) have been devised and compared. The ML zoo has been constructed with models such as artificial neural networks (ANNs), random forest regressors (RFRs), and support vector regressors (SVRs) that predict the error introduced by thermal drift and compensate for the drift in the measurements. The models in the zoo also satisfy the memory, computational, and accuracy constraints associated with deployment on resource-constrained embedded devices at the edge. Quantitative results achieved by the zoo are reported and discussed, as well as their deployability on tiny microcontrollers. These results reveal the suitability of a tiny ML zoo for the long-term compensation of MEMS pressure sensors affected by drift in their measurements.
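
As an illustration of the compensation scheme such a zoo implements, the following sketch trains a small random-forest regressor on synthetic data to predict the drift error and subtracts the prediction from the raw reading; the features, drift model, and hyperparameters are invented for the example, not the paper's datasets.

```python
# Hedged sketch: learn the thermal drift error, then compensate readings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Features: raw pressure (hPa), temperature (deg C), hours since stress
X = np.column_stack([
    rng.uniform(950, 1050, 2000),
    rng.uniform(20, 85, 2000),
    rng.uniform(0, 240, 2000),
])
# Synthetic drift: grows with temperature, decays over elapsed time
drift = 0.02 * (X[:, 1] - 20) * np.exp(-X[:, 2] / 96)

# Small model, compatible in spirit with microcontroller-class budgets
model = RandomForestRegressor(n_estimators=20, max_depth=6).fit(X, drift)
compensated = X[:, 0] - model.predict(X)   # corrected pressure readings
```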
