Sensors Based SoCs, FPGA in IoT Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 15 February 2025 | Viewed by 11665

Special Issue Editor


Dr. Tee Hui Teo
Guest Editor
Engineering Product Development (EPD), Singapore University of Technology and Design, Singapore 487372, Singapore
Interests: low-power and low-voltage design for sensor interfaces; mixed-signal wireless; AI integrated circuits

Special Issue Information

Dear Colleagues,

Rapid development in big data processing with machine learning (ML) techniques demands hardware such as Systems on Chip (SoCs) and Field Programmable Gate Arrays (FPGAs) integrated with sensor technologies for Internet of Things (IoT) applications. IoT can contribute to industrial advancement (e.g., Industry 4.0), environmental sustainability, disaster monitoring, and electric vehicles (EVs) through Artificial Intelligence (AI)-based Digital Twin technologies. At the same time, recent developments in Reduced Instruction Set Computer (RISC) devices have triggered a race in microprocessor design across new computational device architectures, systems, and manufacturing technologies. Designers can take advantage of these developments to integrate technologies such as AI, ML, and sensors into microprocessor designs. With the massive number of IoT devices, on-chip hardware security becomes inevitably important for data protection, cybersecurity, and safety. This Special Issue welcomes contributions related to sensor-based SoCs and FPGAs in IoT applications, spanning hardware, software, algorithms, applications, and beyond. Submissions are not limited to the topics mentioned above; other original contributions are also welcome.

Dr. Tee Hui Teo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • system on chip
  • field programmable gate array
  • artificial intelligence
  • digital twin
  • machine learning
  • electric vehicle
  • sustainability
  • reduced instruction set computer
  • Industry 4.0

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

11 pages, 2637 KiB  
Article
A Mixed Approach for Clock Synchronization in Distributed Data Acquisition Systems
by Gabriele Manduchi, Andrea Rigoni, Luca Trevisan and Tommaso Patton
Sensors 2024, 24(18), 6155; https://doi.org/10.3390/s24186155 - 23 Sep 2024
Viewed by 757
Abstract
Proper timing synchronization is important when data from sensors are acquired by different devices. This paper proposes a simple but effective solution for System on Chip (SoC) architectures that integrates a general-purpose Field Programmable Gate Array (FPGA) with a CPU. The proposed approach relies on a network synchronization protocol implemented in software, such as Network Time Protocol (NTP) or Precision Time Protocol (PTP), and uses the FPGA to generate a clock reference that is maintained in step with the synchronized system clock. The clock generated by the FPGA is obtained from the FPGA oscillator via appropriate fractional clock division. Clock drift is avoided via a software program that periodically compares the FPGA and the system counters, respectively, and adjusts the fractional clock divider in order to slightly adjust the FPGA clock frequency using a Proportional Integral controller. A specific implementation is presented on the RedPitaya platform, generating a 1 MHz clock in step with the NTP synchronized system clock. The presented system has been used in a distributed data acquisition system for fast transient recording in the neutral beam test facility for the ITER nuclear fusion experiment. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
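
The trimming loop described in this abstract lends itself to a compact illustration. The Python sketch below shows the general shape of such a PI-based clock-trimming loop; the register-access helpers (read_fpga_counter, write_fractional_divider), the PI gains, and the 125 MHz oscillator frequency are illustrative assumptions, not the authors' RedPitaya implementation.

```python
# Sketch of a software trimming loop: a PI controller compares the FPGA counter
# with the NTP/PTP-synchronized system clock and nudges a fractional clock divider.
import time

FPGA_OSC_HZ = 125_000_000      # assumed FPGA oscillator frequency
TARGET_HZ = 1_000_000          # 1 MHz reference clock, as in the paper
KP, KI = 0.1, 0.01             # illustrative PI gains
POLL_PERIOD_S = 1.0

def read_fpga_counter() -> int:
    """Hypothetical: read the free-running counter driven by the FPGA clock."""
    raise NotImplementedError

def write_fractional_divider(ratio: float) -> None:
    """Hypothetical: program the fractional divider (oscillator / output clock)."""
    raise NotImplementedError

def trim_loop() -> None:
    divider = FPGA_OSC_HZ / TARGET_HZ
    integral = 0.0
    prev_fpga = read_fpga_counter()
    prev_sys = time.time()                 # system clock kept in step by NTP/PTP
    while True:
        time.sleep(POLL_PERIOD_S)
        fpga, sys_now = read_fpga_counter(), time.time()
        # Error: how many FPGA ticks we drifted from the count expected
        # over the elapsed system-clock interval.
        expected = (sys_now - prev_sys) * TARGET_HZ
        error = (fpga - prev_fpga) - expected
        integral += error
        # A positive error means the FPGA clock runs fast, so the divider grows
        # slightly to slow the generated clock (and vice versa).
        divider += KP * error + KI * integral
        write_fractional_divider(divider)
        prev_fpga, prev_sys = fpga, sys_now
```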

19 pages, 501 KiB  
Article
The Guardian Node Slow DoS Detection Model for Real-Time Application in IoT Networks
by Andy Reed, Laurence Dooley and Soraya Kouadri Mostefaoui
Sensors 2024, 24(17), 5581; https://doi.org/10.3390/s24175581 - 28 Aug 2024
Viewed by 739
Abstract
The pernicious impact of malicious Slow DoS (Denial of Service) attacks on the application layer and web-based Open Systems Interconnection model services like Hypertext Transfer Protocol (HTTP) has given impetus to a range of novel detection strategies, many of which use machine learning (ML) for computationally intensive full packet capture and post-event processing. In contrast, existing detection mechanisms, such as those found in various approaches including ML, artificial intelligence, and neural networks neither facilitate real-time detection nor consider the computational overhead within resource-constrained Internet of Things (IoT) networks. Slow DoS attacks are notoriously difficult to reliably identify, as they masquerade as legitimate application layer traffic, often resembling nodes with slow or intermittent connectivity. This means they often evade detection mechanisms because they appear as genuine node activity, which increases the likelihood of mistakenly being granted access by intrusion-detection systems. The original contribution of this paper is an innovative Guardian Node (GN) Slow DoS detection model, which analyses the two key network attributes of packet length and packet delta time in real time within a live IoT network. By designing the GN to operate within a narrow window of packet length and delta time values, accurate detection of all three main Slow DoS variants is achieved, even under the stealthiest malicious attack conditions. A unique feature of the GN model is its ability to reliably discriminate Slow DoS attack traffic from both genuine and slow nodes experiencing high latency or poor connectivity. A rigorous critical evaluation has consistently validated high, real-time detection accuracies of more than 98% for the GN model across a range of demanding traffic profiles. This performance is analogous to existing ML approaches, whilst being significantly more resource efficient, with computational and storage overheads being over 96% lower than full packet capture techniques, so it represents a very attractive alternative for deployment in resource-scarce IoT environments. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
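
As a rough illustration of the windowing idea above, the sketch below flags a flow once several consecutive packets fall inside a narrow band of packet length and inter-packet delta time, the traffic signature attributed to Slow DoS attacks. The window bounds, hit threshold, and flow bookkeeping are hypothetical placeholders, not the published GN model parameters.

```python
# Flag flows whose packets repeatedly fall inside a narrow
# (packet length, delta time) window, as a Slow DoS heuristic.
from dataclasses import dataclass
from collections import defaultdict

LEN_WINDOW = (40, 120)          # bytes; illustrative bounds only
DELTA_WINDOW = (5.0, 30.0)      # seconds between packets; illustrative
SUSPICION_THRESHOLD = 10        # consecutive in-window packets before flagging

@dataclass
class FlowState:
    last_ts: float | None = None
    hits: int = 0

flows: dict[tuple, FlowState] = defaultdict(FlowState)

def observe(flow_id: tuple, timestamp: float, pkt_len: int) -> bool:
    """Return True once the flow looks like a Slow DoS source."""
    st = flows[flow_id]
    delta = None if st.last_ts is None else timestamp - st.last_ts
    st.last_ts = timestamp
    in_window = (
        LEN_WINDOW[0] <= pkt_len <= LEN_WINDOW[1]
        and delta is not None
        and DELTA_WINDOW[0] <= delta <= DELTA_WINDOW[1]
    )
    # Consecutive in-window packets raise suspicion; any out-of-window packet
    # resets the count, which is how genuinely slow-but-legitimate nodes escape.
    st.hits = st.hits + 1 if in_window else 0
    return st.hits >= SUSPICION_THRESHOLD
```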

25 pages, 1399 KiB  
Article
SHA-256 Hardware Proposal for IoT Devices in the Blockchain Context
by Carlos E. B. Santos, Jr., Lucileide M. D. da Silva, Matheus F. Torquato, Sérgio N. Silva and Marcelo A. C. Fernandes
Sensors 2024, 24(12), 3908; https://doi.org/10.3390/s24123908 - 17 Jun 2024
Cited by 1 | Viewed by 1231
Abstract
This work proposes an implementation of the SHA-256, the most common blockchain hash algorithm, on a field-programmable gate array (FPGA) to improve processing capacity and power saving in Internet of Things (IoT) devices to solve security and privacy issues. This implementation presents a different approach than other papers in the literature, using clustered cores executing the SHA-256 algorithm in parallel. Details about the proposed architecture and an analysis of the resources used by the FPGA are presented. The implementation achieved a throughput of approximately 1.4 Gbps for 16 cores on a single FPGA. Furthermore, it saved dynamic power, using almost 1000 times less compared to previous works in the literature, making this proposal suitable for practical problems for IoT devices in blockchain environments. The target FPGA used was the Xilinx Virtex 6 xc6vlx240t-1ff1156. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
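
The clustered-core scheme above can be mimicked in software by dispatching independent SHA-256 jobs to a pool of workers. The sketch below uses Python's hashlib and a 16-worker pool as a stand-in for the 16 hardware cores; it illustrates the parallel-dispatch pattern only, not the FPGA datapath.

```python
# Software analogue of feeding 16 parallel SHA-256 cores: each worker hashes
# an independent block, so throughput scales with the number of workers.
import hashlib
from concurrent.futures import ProcessPoolExecutor

NUM_CORES = 16  # matches the 16-core configuration reported in the abstract

def sha256_hex(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def hash_batch(blocks: list[bytes]) -> list[str]:
    with ProcessPoolExecutor(max_workers=NUM_CORES) as pool:
        return list(pool.map(sha256_hex, blocks))

if __name__ == "__main__":
    digests = hash_batch([f"block-{i}".encode() for i in range(64)])
    print(digests[0])
```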

17 pages, 10692 KiB  
Article
A Multi-Channel Borehole Strain Measurement and Acquisition System Based on FPGA
by Xin Xu, Zheng Chen, Hong Li, Weiwei Zhan, Wenbo Wang, Yunkai Dong, Liheng Wu and Xiang Li
Sensors 2023, 23(15), 6981; https://doi.org/10.3390/s23156981 - 6 Aug 2023
Cited by 1 | Viewed by 1624
Abstract
In this study, an FPGA(Field Programmable Gate Array)-based borehole strain measurement system was designed that makes extensive use of digital signal processing operations to replace analog circuits. Through the formidable operational capability of FPGA, the sampled data were filtered and denoised to improve the signal-to-noise ratios. Then, with the goal of not reducing observational accuracy, the signal amplification circuit was removed, the excitation voltage was reduced, and the dynamic range of the primary adjustments was expanded to 130 dB. The system’s online compilation function made it more flexible to changes in measurement parameters, allowing it to adapt to various needs. In addition, the efficiency of the equipment use was enhanced. The actual observational results showed that this study’s FPGA-based borehole strain measurement system had a voltage resolution higher than 1 μV. Clear solid tides were successfully recorded in low-frequency bands, and seismic wave strain was accurately recorded in high-frequency bands. The arrival times and seismic phases of the seismic waves S and P were clearly recorded, which met the requirements for geophysical field deformation observations. Therefore, the system proposed in this study is of major significance for future analyses of geophysical and crust deformation observations. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
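
A minimal sketch of the digital filtering idea described above: replacing analog conditioning with an FPGA-friendly boxcar filter plus decimation to raise the SNR of slowly varying strain data. The filter length, decimation factor, and synthetic signal are illustrative assumptions, not the paper's design values.

```python
# Boxcar low-pass filter plus decimation (akin to a CIC stage), a common
# FPGA-friendly way to denoise slowly varying sensor data in the digital domain.
import numpy as np

def moving_average_decimate(samples: np.ndarray, taps: int = 64, decim: int = 16) -> np.ndarray:
    kernel = np.ones(taps) / taps
    filtered = np.convolve(samples, kernel, mode="valid")
    return filtered[::decim]

# Example: a microvolt-level, tide-like component buried in broadband noise.
rng = np.random.default_rng(0)
t = np.arange(100_000) / 1000.0
signal = 1e-6 * np.sin(2 * np.pi * 2.3e-5 * t)         # slow strain component
raw = signal + 5e-6 * rng.standard_normal(t.size)      # noisy ADC readings (volts)
clean = moving_average_decimate(raw)
```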

15 pages, 775 KiB  
Article
Pre-Computing Batch Normalisation Parameters for Edge Devices on a Binarized Neural Network
by Nicholas Phipps, Jin-Jia Shang, Tee Hui Teo and I-Chyn Wey
Sensors 2023, 23(12), 5556; https://doi.org/10.3390/s23125556 - 14 Jun 2023
Cited by 1 | Viewed by 1716
Abstract
Binarized Neural Network (BNN) is a quantized Convolutional Neural Network (CNN), reducing the precision of network parameters for a much smaller model size. In BNNs, the Batch Normalisation (BN) layer is essential. When running BN on edge devices, floating point instructions take up a significant number of cycles to perform. This work leverages the fixed nature of a model during inference, to reduce the full-precision memory footprint by half. This was achieved by pre-computing the BN parameters prior to quantization. The proposed BNN was validated through modeling the network on the MNIST dataset. Compared to the traditional method of computation, the proposed BNN reduced the memory utilization by 63% at 860-bytes without any significant impact on accuracy. By pre-computing portions of the BN layer, the number of cycles required to compute is reduced to two cycles on an edge device. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
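
The pre-computation step above can be illustrated directly: with BN parameters fixed at inference time, the affine BN followed by sign() binarization folds into a single per-channel threshold comparison, removing floating-point work on the edge device. The sketch below is a generic NumPy formulation of that folding, not the authors' exact implementation.

```python
# Fold frozen batch-norm parameters into a per-channel threshold for a
# binarized activation: sign(BN(x)) == s * sign(x - t).
import numpy as np

def fold_bn_into_threshold(gamma, beta, mean, var, eps=1e-5):
    std = np.sqrt(var + eps)
    t = mean - beta * std / gamma      # input value where the BN output crosses zero
    s = np.sign(gamma)                 # a negative gamma flips the comparison
    return t, s

def binarize_with_bn(x, gamma, beta, mean, var, eps=1e-5):
    """Reference path: full-precision BN followed by sign()."""
    return np.sign(gamma * (x - mean) / np.sqrt(var + eps) + beta)

def binarize_precomputed(x, t, s):
    """Edge path: one comparison per channel, no floating-point BN at runtime."""
    return s * np.sign(x - t)

# Quick check that the folded form matches the reference on random data.
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))
gamma, beta = rng.uniform(0.5, 2.0, 8), rng.standard_normal(8)
mean, var = rng.standard_normal(8), rng.uniform(0.5, 2.0, 8)
t, s = fold_bn_into_threshold(gamma, beta, mean, var)
assert np.allclose(binarize_with_bn(x, gamma, beta, mean, var),
                   binarize_precomputed(x, t, s))
```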

34 pages, 9745 KiB  
Article
Energy-Efficient and Variability-Resilient 11T SRAM Design Using Data-Aware Read–Write Assist (DARWA) Technique for Low-Power Applications
by Sargunam Thirugnanam, Lim Way Soong, Chinnaraj Munirathina Prabhu and Ajay Kumar Singh
Sensors 2023, 23(11), 5095; https://doi.org/10.3390/s23115095 - 26 May 2023
Cited by 7 | Viewed by 1913
Abstract
The need for power-efficient devices, such as smart sensor nodes, mobile devices, and portable digital gadgets, is markedly increasing and these devices are becoming commonly used in daily life. These devices continue to demand an energy-efficient cache memory designed on Static Random-Access Memory (SRAM) with enhanced speed, performance, and stability to perform on-chip data processing and faster computations. This paper presents an energy-efficient and variability-resilient 11T (E2VR11T) SRAM cell, which is designed with a novel Data-Aware Read–Write Assist (DARWA) technique. The E2VR11T cell comprises 11 transistors and operates with single-ended read and dynamic differential write circuits. The simulated results in a 45 nm CMOS technology exhibit 71.63% and 58.77% lower read energy than ST9T and LP10T and lower write energies of 28.25% and 51.79% against S8T and LP10T cells, respectively. The leakage power is reduced by 56.32% and 40.90% compared to ST9T and LP10T cells. The read static noise margin (RSNM) is improved by 1.94× and 0.18×, while the write noise margin (WNM) is improved by 19.57% and 8.70% against C6T and S8T cells. The variability investigation using the Monte Carlo simulation on 5000 samples highly validates the robustness and variability resilience of the proposed cell. The improved overall performance of the proposed E2VR11T cell makes it suitable for low-power applications. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)

28 pages, 1384 KiB  
Article
Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI
by Mara Pistellato, Filippo Bergamasco, Gianluca Bigaglia, Andrea Gasparetto, Andrea Albarelli, Marco Boschetti and Roberto Passerone
Sensors 2023, 23(10), 4667; https://doi.org/10.3390/s23104667 - 11 May 2023
Cited by 3 | Viewed by 2762
Abstract
Over the past few years, several applications have been extensively exploiting the advantages of deep learning, in particular when using convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In this latter scenario, however, using consumer Personal Computer (PC) hardware is not always suitable for the potential harsh conditions of the working environment and the strict timing that industrial applications typically have. Therefore, the design of custom FPGA (Field Programmable Gate Array) solutions for network inference is gaining massive attention from researchers and companies as well. In this paper, we propose a family of network architectures composed of three kinds of custom layers working with integer arithmetic with a customizable precision (down to just two bits). Such layers are designed to be effectively trained on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called Requantizer, acting both as a non-linear activation for neurons and a value rescaler to match the desired bit precision. This way, the training is not only quantization-aware, but also capable of estimating the optimal scaling coefficients to accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we test the performance of this kind of model while working both on classical PC hardware and a case-study implementation of a signal peak detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and use Xilinx FPGAs and Vivado for synthesis and implementation. The results show an accuracy of the quantized networks close to the floating point version, without the need for representative data for calibration as in other approaches, and performance that is better than dedicated peak detection algorithms. The FPGA implementation is able to run in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators. Full article
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
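
As a rough sketch of the requantization idea above, the snippet below rescales integer accumulator outputs with a learnable scale and clamps them to a signed 2-bit range, acting as both activation and rescaler. It is an illustrative NumPy forward pass only; the authors' trainable Requantizer layer (and its straight-through gradient during training) is not reproduced here.

```python
# Requantization forward pass: rescale integer accumulator values and clamp
# them to a signed `bits`-wide range (down to 2 bits).
import numpy as np

def requantize(acc: np.ndarray, scale: float, bits: int = 2) -> np.ndarray:
    qmax = 2 ** (bits - 1) - 1           # e.g. +1 for 2 bits
    qmin = -(2 ** (bits - 1))            # e.g. -2 for 2 bits
    q = np.round(acc * scale)            # learnable scale absorbs activation scaling
    return np.clip(q, qmin, qmax).astype(np.int8)

# Example: 32-bit accumulator outputs from an integer convolution.
acc = np.array([-900, -40, 0, 35, 620], dtype=np.int32)
print(requantize(acc, scale=0.01, bits=2))   # -> [-2  0  0  0  1]
```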
