Edge Computing in Sensors Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 20 September 2024 | Viewed by 3067

Special Issue Editors


Prof. Dr. Fabio Garzia
Guest Editor
Safety & Security Engineering Group, DICMA SAPIENZA, University of Rome, Piazzale Aldo Moro, 5, 00185 Rome, Italy
Interests: integrated multidisciplinary security; integrated security systems; machine learning; human factor

Dr. Soodamani Ramalingam
Guest Editor
School of Physics, Engineering & Computer Science, Department of Engineering and Technology, University of Hertfordshire, Hatfield AL10 9EU, UK
Interests: biometrics; intelligent transport systems; smart mobility

Special Issue Information

Dear Colleagues,

Sensor networks have gained significant popularity in contemporary edge computing applications, and this holds across most application contexts.

Edge computing in sensor networks allows the intensive use of sensor-equipped mobile devices to collect and process data at low cost. This allows researchers to address general and specific issues in complex contexts such as security and safety applications, surveillance, environmental and weather monitoring, smart transportation, infrastructure monitoring, etc.

The methods of data sensing and processing involved in edge computing sensor networks are evolving rapidly. The main and most challenging goals are high-value data sensing and processing, together with the development of approaches that handle data volume, information quality, reliability, security and privacy, data collection costs, and sensing and collection platforms and tools.

The purpose of this Special Issue is to publish high-quality research papers, as well as review articles, addressing recent advances in edge computing in sensor networks. Potential topics include, but are not limited to:

  • New data sensing and processing techniques and frameworks in edge computing, including high-speed response times for end users.
  • Sensor data processing techniques and frameworks for edge computing.
  • Big data techniques and frameworks for high-quality data sensing and processing, including theoretical and computational models, etc.
  • Reliability and resilience aspects of edge computing in sensor networks.
  • Algorithms and techniques for security applications, data sensing, and processing in edge computing.
  • Algorithms and techniques for energy-efficient data sensing and processing in edge computing.
  • Schemes, techniques, and policies for intrusion and threat detection in edge computing sensor networks.
  • Countermeasures against attacks on data sensing and processing in edge computing.
  • Methodologies and techniques for data sensing and processing with privacy protection, including source location privacy, communication privacy, and other privacy issues.
  • Methodologies and techniques for optimizing data sensing and processing in collaborative edge computing sensor networks, including the Internet of Things (IoT).
  • Frameworks and system architectures for high-quality data sensing and processing in edge computing.
  • Simulation, modelling, and experimental results on edge computing in sensor networks.

Prof. Dr. Fabio Garzia
Dr. Soodamani Ramalingam
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • sensor networks
  • IoT
  • data sensing and processing
  • security and privacy issues
  • frameworks and system architecture
  • security applications
  • monitoring applications
  • surveillance applications

Published Papers (3 papers)


Research

17 pages, 1543 KiB  
Article
Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments
by June-Pyo Jung, Young-Bae Ko and Sung-Hwa Lim
Sensors 2024, 24(8), 2476; https://doi.org/10.3390/s24082476 - 12 Apr 2024
Viewed by 329
Abstract
Federated learning (FL) is an emerging distributed learning technique through which models can be trained using the data collected by user devices in resource-constrained situations while protecting user privacy. However, FL has three main limitations: First, the parameter server (PS), which aggregates the local models that are trained using local user data, is typically far from users. The large distance may burden the path links between the PS and local nodes, thereby increasing the consumption of the network and computing resources. Second, user device resources are limited, but this aspect is not considered in the training of the local model and transmission of the model parameters. Third, the PS-side links tend to become highly loaded as the number of participating clients increases. The links become congested owing to the large size of model parameters. In this study, we propose a resource-efficient FL scheme. We follow the Pareto optimality concept with the biased client selection to limit client participation, thereby ensuring efficient resource consumption and rapid model convergence. In addition, we propose a hierarchical structure with location-based clustering for device-to-device communication using k-means clustering. Simulation results show that with prate at 0.75, the proposed scheme effectively reduced transmitted and received network traffic by 75.89% and 78.77%, respectively, compared to the FedAvg method. It also achieves faster model convergence compared to other FL mechanisms, such as FedAvg and D2D-FedAvg. Full article
(This article belongs to the Special Issue Edge Computing in Sensors Networks)
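
For readers unfamiliar with the approach, the cluster-then-select idea described in this abstract can be illustrated with a short sketch. This is not the authors' algorithm: it is a minimal Python approximation that assumes hypothetical `Client` records carrying a location and a resource score, uses k-means for the location-based grouping, and applies a participation rate (`p_rate`) to bias selection toward resource-rich devices, with one cluster head per group standing in for the device-to-device aggregator.

```python
# Hypothetical sketch of biased, location-clustered client selection for FL.
# `Client`, `p_rate`, and `resource_score` are illustrative names, not the paper's API.
from dataclasses import dataclass
import numpy as np
from sklearn.cluster import KMeans

@dataclass
class Client:
    cid: int
    location: tuple          # (x, y) position used for location-based clustering
    resource_score: float    # proxy for battery/CPU/bandwidth headroom

def select_clients(clients, n_clusters=4, p_rate=0.75):
    """Cluster clients by location, then keep the top p_rate fraction of each
    cluster by resource score; the best client in each cluster acts as head."""
    coords = np.array([c.location for c in clients])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
    selected, heads = [], []
    for k in range(n_clusters):
        members = [c for c, lab in zip(clients, labels) if lab == k]
        if not members:
            continue
        members.sort(key=lambda c: c.resource_score, reverse=True)
        keep = members[: max(1, int(len(members) * p_rate))]
        heads.append(keep[0])     # cluster head relays the merged update via D2D
        selected.extend(keep)
    return selected, heads

rng = np.random.default_rng(0)
clients = [Client(i, tuple(rng.uniform(0, 100, 2)), rng.random()) for i in range(20)]
chosen, heads = select_clients(clients)
print(f"{len(chosen)} clients selected, {len(heads)} cluster heads")
```

In the abstract's terms, the bias limits client participation so that resources are spent on capable devices, while the location-based hierarchy offloads aggregation traffic from the parameter-server links.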

22 pages, 4830 KiB  
Article
Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training
by Muhammad Junaid, Hayotjon Aliev, SangBo Park, HyungWon Kim, Hoyoung Yoo and Sanghoon Sim
Sensors 2024, 24(7), 2145; https://doi.org/10.3390/s24072145 - 27 Mar 2024
Viewed by 626
Abstract
The rapid advancement in AI requires efficient accelerators for training on edge devices, which often face challenges related to the high hardware costs of floating-point arithmetic operations. To tackle these problems, efficient floating-point formats inspired by block floating-point (BFP), such as Microsoft Floating Point (MSFP) and FlexBlock (FB), are emerging. However, they have limited dynamic range and precision for the smaller magnitude values within a block due to the shared exponent. This limits the BFP’s ability to train deep neural networks (DNNs) with diverse datasets. This paper introduces the hybrid precision (HPFP) selection algorithms, designed to systematically reduce precision and implement hybrid precision strategies, thereby balancing layer-wise arithmetic operations and data path precision to address the shortcomings of traditional floating-point formats. Reducing the data bit width with HPFP allows more read/write operations from memory per cycle, thereby decreasing off-chip data access and the size of on-chip memories. Unlike traditional reduced precision formats that use BFP for calculating partial sums and accumulating those partial sums in 32-bit Floating Point (FP32), HPFP leads to significant hardware savings by performing all multiply and accumulate operations in reduced floating-point format. For evaluation, two training accelerators for the YOLOv2-Tiny model were developed, employing distinct mixed precision strategies, and their performance was benchmarked against an accelerator utilizing a conventional brain floating point of 16 bits (Bfloat16). The HPFP selection, employing 10 bits for the data path of all layers and for the arithmetic of layers requiring low precision, along with 12 bits for layers requiring higher precision, results in a 49.4% reduction in energy consumption and a 37.5% decrease in memory access. This is achieved with only a marginal mean Average Precision (mAP) degradation of 0.8% when compared to an accelerator based on Bfloat16. This comparison demonstrates that the proposed accelerator based on HPFP can be an efficient approach to designing compact and low-power accelerators without sacrificing accuracy. Full article
(This article belongs to the Special Issue Edge Computing in Sensors Networks)
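
The bit-width trade-off described in this abstract can be made concrete with a small NumPy sketch. It is not the paper's HPFP selection algorithm: it only emulates a float with a reduced mantissa and a limited exponent range, and picks the narrower of two hypothetical formats per layer whenever the induced relative error stays under an assumed per-layer budget (the 10-bit/12-bit splits into mantissa and exponent bits below are likewise assumptions).

```python
# Illustrative sketch only: emulating reduced-precision floats and assigning a
# wider format to layers that are sensitive to quantization. The formats, error
# metric, and budgets are assumptions, not the paper's HPFP selection method.
import numpy as np

def quantize_float(x, mant_bits, exp_bits):
    """Round x to `mant_bits` fractional mantissa bits and clamp the exponent
    to the range representable with `exp_bits` (rough emulation)."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))    # m in [0.5, 1)
    m = np.round(m * 2.0**mant_bits) / 2.0**mant_bits   # drop low mantissa bits
    e_max = 2 ** (exp_bits - 1)
    e = np.clip(e, -e_max + 1, e_max)                   # limited dynamic range
    return np.ldexp(m, e)

def choose_precision(layer_weights, err_budget):
    """Pick the narrowest format whose relative error fits the layer's budget."""
    formats = {"10-bit": (5, 4), "12-bit": (7, 4)}      # (mantissa, exponent) bits, assumed split
    for name, (mb, eb) in formats.items():
        q = quantize_float(layer_weights, mb, eb)
        rel_err = np.linalg.norm(layer_weights - q) / np.linalg.norm(layer_weights)
        if rel_err <= err_budget:
            return name, rel_err
    return name, rel_err                                # fall back to the widest format

rng = np.random.default_rng(0)
budgets = {"conv1 (sensitive)": 5e-3, "conv5 (tolerant)": 2e-2}   # assumed sensitivities
for layer, budget in budgets.items():
    w = rng.normal(0.0, 0.5, size=4096)
    fmt, err = choose_precision(w, err_budget=budget)
    print(f"{layer}: {fmt} (relative error {err:.2e})")
```

The point of the exercise is the same one the abstract makes: narrower data paths cut memory traffic and energy, so the selection problem is deciding, layer by layer, where the extra bits are actually needed.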

44 pages, 10319 KiB  
Article
Performance Analysis of Lambda Architecture-Based Big-Data Systems on Air/Ground Surveillance Application with ADS-B Data
by Mustafa Umut Demirezen and Tuğba Selcen Navruz
Sensors 2023, 23(17), 7580; https://doi.org/10.3390/s23177580 - 31 Aug 2023
Viewed by 1479
Abstract
This study introduces a novel methodology designed to assess the accuracy of data processing in the Lambda Architecture (LA), an advanced big-data framework qualified for processing streaming (data in motion) and batch (data at rest) data. Distinct from prior studies that have focused on hardware performance and scalability evaluations, our research uniquely targets the intricate aspects of data-processing accuracy within the various layers of LA. The salient contribution of this study lies in its empirical approach. For the first time, we provide empirical evidence that validates previously theoretical assertions about LA, which have remained largely unexamined due to LA’s intricate design. Our methodology encompasses the evaluation of prospective technologies across all levels of LA, the examination of layer-specific design limitations, and the implementation of a uniform software development framework across multiple layers. Specifically, our methodology employs a unique set of metrics, including data latency and processing accuracy under various conditions, which serve as critical indicators of LA’s accurate data-processing performance. Our findings compellingly illustrate LA’s “eventual consistency”. Despite potential transient inconsistencies during real-time processing in the Speed Layer (SL), the system ultimately converges to deliver precise and reliable results, as informed by the comprehensive computations of the Batch Layer (BL). This empirical validation not only confirms but also quantifies the claims posited by previous theoretical discourse, with our results indicating a 100% accuracy rate under various severe data-ingestion scenarios. We applied this methodology in a practical case study involving air/ground surveillance, a domain where data accuracy is paramount. This application demonstrates the effectiveness of the methodology using real-world data-intake scenarios, therefore distinguishing this study from hardware-centric evaluations. This study not only contributes to the existing body of knowledge on LA but also addresses a significant literature gap. By offering a novel, empirically supported methodology for testing LA, a methodology with potential applicability to other big-data architectures, this study sets a precedent for future research in this area, advancing beyond previous work that lacked empirical validation. Full article
(This article belongs to the Special Issue Edge Computing in Sensors Networks)
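
For readers new to the pattern, the batch/speed split the abstract evaluates can be sketched in a few lines. The toy class below is an assumption-laden illustration of the general Lambda Architecture idea, not the study's implementation: records are appended to an immutable master dataset, a slow batch recompute eventually supersedes the incremental speed view, and every query merges the two, which is the "eventual consistency" behaviour the paper quantifies.

```python
# Hypothetical, minimal Lambda Architecture: batch layer + speed layer + serving merge.
from collections import Counter

class ToyLambda:
    def __init__(self):
        self.master = []             # immutable, append-only master dataset
        self.batch_view = Counter()  # batch layer: full recompute, slow but exact
        self.speed_view = Counter()  # speed layer: incremental, fast, approximate

    def ingest(self, record):
        """A new record lands in the master dataset and the speed view."""
        self.master.append(record)
        self.speed_view[record] += 1

    def run_batch(self):
        """The batch layer recomputes its view from the whole master dataset;
        the speed view is reset because everything it held is now covered."""
        self.batch_view = Counter(self.master)
        self.speed_view = Counter()

    def query(self, key):
        """The serving layer merges the batch and speed views."""
        return self.batch_view[key] + self.speed_view[key]

la = ToyLambda()
for icao in ["ABC123", "ABC123", "DEF456"]:   # mock ADS-B aircraft identifiers
    la.ingest(icao)
print(la.query("ABC123"))   # 2 -- answered by the speed layer alone
la.run_batch()
print(la.query("ABC123"))   # 2 -- unchanged after the batch recompute converges
```

In a real deployment the two views can disagree transiently while a batch run is in flight; the paper's contribution is a methodology for measuring how accurately the merged result tracks ground truth under demanding ADS-B ingestion scenarios.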
