
Search Results (1,207)

Search Parameters:
Keywords = network hardware design

47 pages, 14121 KB  
Article
Systematic Development and Hardware-in-the-Loop Testing of an IEC 61850 Standard-Based Monitoring and Protection System for a Modern Power Grid Point of Common Coupling
by Sinawo Nomandela, Mkhululi E. S. Mnguni and Atanda K. Raji
Energies 2025, 18(19), 5281; https://doi.org/10.3390/en18195281 (registering DOI) - 5 Oct 2025
Abstract
This paper presents a systematic approach to the development and validation of a monitoring and protection system based on the IEC 61850 standard, evaluated through hardware-in-the-loop (HIL) testing. The study utilized an already existing model of a modern power grid consisting of the IEEE 9-bus power system integrated with a large-scale wind power plant (LSWPP). The SEL-487B Relay was configured to protect the PCC using a low-impedance busbar differential monitoring and protection system equipped with adaptive setting group logic that automatically transitions between Group 1 and Group 2 based on system loading conditions. Significant steps were followed for selecting and configuring instrument transformers and implementing relay logic in compliance with IEEE and IEC standards. Real-time digital simulation using Real-Time Digital Simulator (RTDS) hardware and its software, Real-time Simulation Computer-Aided Design (RSCAD), was used to assess the performance of the overall monitoring and protection system, focusing on the monitoring and publishing of the selected electrical and mechanical measurements from a selected wind turbine generator unit (WTGU) on the LSWPP side through the IEC 61850 standard network, and on the behavior of the monitoring and protection system under initial and increased load conditions through monitoring of differential and restraint currents. The overall monitoring and protection system was tested under both initial and increased load conditions, confirming its capability to reliably publish analog values from WTGU13 for availability on the IEC 61850 standard network while maintaining secure protection operation. Quantitatively, the measured differential (operate) and restraint currents were 0.32 PU and 4.38 PU under initial loading, and 1.96 PU and 6.20 PU under increased loading, while total fault clearance times were 606.667 ms and 706.667 ms for faults under initial load and increased load demand conditions, respectively. These results confirm that the developed framework provides accurate real-time monitoring and reliable operation for faults, while demonstrating a practical and replicable solution for monitoring and protection at transmission-level PCCs within renewable-integrated networks. Full article
(This article belongs to the Special Issue Planning, Operation, and Control of New Power Systems: 2nd Edition)
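
As a rough illustration of the percentage-restraint principle behind a low-impedance busbar differential element like the one described above, here is a minimal Python sketch. The pickup and slope settings are illustrative placeholders rather than the paper's SEL-487B values, and the adaptive Group 1/Group 2 switching logic is omitted.

```python
# Minimal sketch of a low-impedance percentage-restraint differential check.
# Pickup and slope are placeholder settings, not the paper's relay values.
def differential_trip(zone_currents_pu, pickup=0.5, slope=0.3):
    """zone_currents_pu: signed per-unit currents flowing into the bus zone."""
    i_diff = abs(sum(zone_currents_pu))             # operate quantity: phasor sum
    i_rest = sum(abs(i) for i in zone_currents_pu)  # restraint: sum of magnitudes
    return i_diff > pickup and i_diff > slope * i_rest

# Through-load (currents nearly cancel) vs. an internal fault (they add up).
# The numbers loosely echo the abstract's per-unit values but are illustrative.
print(differential_trip([4.38, -4.06]))   # small operate current -> no trip
print(differential_trip([4.0, 2.2]))      # large operate current -> trip
```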

14 pages, 1081 KB  
Article
Hybrid Deep Learning Approach for Secure Electric Vehicle Communications in Smart Urban Mobility
by Abdullah Alsaleh
Vehicles 2025, 7(4), 112; https://doi.org/10.3390/vehicles7040112 - 2 Oct 2025
Abstract
The increasing adoption of electric vehicles (EVs) within intelligent transportation systems (ITSs) has elevated the importance of cybersecurity, especially with the rise in Vehicle-to-Everything (V2X) communications. Traditional intrusion detection systems (IDSs) struggle to address the evolving and complex nature of cyberattacks in such dynamic environments. To address these challenges, this study introduces a novel deep learning-based IDS designed specifically for EV communication networks. We present a hybrid model that integrates convolutional neural networks (CNNs), long short-term memory (LSTM) layers, and adaptive learning strategies. The model was trained and validated using the VeReMi dataset, which simulates a wide range of attack scenarios in V2X networks. Additionally, an ablation study was conducted to isolate the contribution of each of its modules. The model demonstrated strong performance with 98.73% accuracy, 97.88% precision, 98.91% sensitivity, and 98.55% specificity, as well as an F1-score of 98.39%, an MCC of 0.964, a false-positive rate of 1.45%, and a false-negative rate of 1.09%, with a detection latency of 28 ms and an AUC-ROC of 0.994. Specifically, this work fills a clear gap in the existing V2X intrusion detection literature—namely, the lack of scalable, adaptive, and low-latency IDS solutions for hardware-constrained EV platforms—by proposing a hybrid CNN–LSTM architecture coupled with an elastic weight consolidation (EWC)-based adaptive learning module that enables online updates without full retraining. The proposed model provides a real-time, adaptive, and high-precision IDS for EV networks, supporting safer and more resilient ITS infrastructures. Full article
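
For readers unfamiliar with this hybrid architecture class, the following PyTorch sketch shows a CNN-LSTM detector together with an EWC-style penalty for online updates. Layer sizes, the Fisher-information bookkeeping, and the loss weight are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class CnnLstmIDS(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(                      # local pattern extraction
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(16, 32, batch_first=True)   # temporal dependencies
        self.head = nn.Linear(32, n_classes)            # attack / benign logits

    def forward(self, x):                 # x: (batch, 1, window_len)
        h = self.conv(x)                  # (batch, 16, window_len)
        out, _ = self.lstm(h.transpose(1, 2))
        return self.head(out[:, -1])      # classify from the last time step

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """EWC-style regularizer: penalize drift on weights that were important
    (high Fisher information) for previously learned traffic patterns."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss

model = CnnLstmIDS()
print(model(torch.randn(4, 1, 50)).shape)   # torch.Size([4, 2])
```

During an online update, the total loss would be the usual classification loss plus `ewc_penalty`, which is what lets the model adapt without full retraining.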

26 pages, 1208 KB  
Article
Quantum Computing Meets Deep Learning: A QCNN Model for Accurate and Efficient Image Classification
by Sunil Prajapat, Manish Tomar, Pankaj Kumar, Rajesh Kumar and Athanasios V. Vasilakos
Mathematics 2025, 13(19), 3148; https://doi.org/10.3390/math13193148 - 2 Oct 2025
Abstract
In deep learning, Convolutional Neural Networks (CNNs) serve as fundamental models, leveraging the correlational structure of data for tasks such as image classification and processing. However, CNNs face significant challenges in terms of computational complexity and accuracy. Quantum computing offers a promising avenue to overcome these limitations by introducing a quantum counterpart: Quantum Convolutional Neural Networks (QCNNs). QCNNs significantly reduce computational complexity, enhance the model’s ability to capture intricate patterns, and improve classification accuracy. This paper presents a fully parameterized QCNN model, specifically designed for Noisy Intermediate-Scale Quantum (NISQ) devices. The proposed model employs two-qubit interactions throughout the algorithm, leveraging parameterized quantum circuits (PQCs) with rotation and entanglement gates to efficiently encode and process image data. This design not only ensures computational efficiency but also enhances compatibility with current quantum hardware. Our experimental results demonstrate the model’s notable performance in binary classification tasks on the MNIST dataset, highlighting the potential of quantum-enhanced deep learning in image recognition. Further, we extend our framework to the Wine dataset, reformulated as a binary classification problem distinguishing Class 0 wines from the rest. The QCNN again demonstrates remarkable learning capability, achieving 97.22% test accuracy. This extension validates the versatility of the model across domains and reinforces the promising role of quantum neural networks in tackling a broad range of classification tasks. Full article
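
A toy NumPy sketch of one parameterized two-qubit block (single-qubit rotations followed by an entangling CNOT), the gate pattern the abstract names; the rotation angles and circuit size are arbitrary placeholders, not the paper's circuit.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def conv_block(theta0, theta1):
    """4x4 unitary on a qubit pair: rotate each qubit, then entangle."""
    return CNOT @ np.kron(ry(theta0), ry(theta1))

state = np.zeros(4); state[0] = 1.0          # start in |00>
state = conv_block(0.7, 1.3) @ state         # one parameterized conv block
probs = np.abs(state) ** 2                   # measurement probabilities
print(probs.round(3), probs.sum().round(3))  # normalized distribution
```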

24 pages, 4022 KB  
Article
Dynamic Vision Sensor-Driven Spiking Neural Networks for Low-Power Event-Based Tracking and Recognition
by Boyi Feng, Rui Zhu, Yue Zhu, Yan Jin and Jiaqi Ju
Sensors 2025, 25(19), 6048; https://doi.org/10.3390/s25196048 - 1 Oct 2025
Abstract
Spiking neural networks (SNNs) have emerged as a promising model for energy-efficient, event-driven processing of asynchronous event streams from Dynamic Vision Sensors (DVSs), a class of neuromorphic image sensors with microsecond-level latency and high dynamic range. Nevertheless, challenges persist in optimising training and effectively handling spatio-temporal complexity, which limits their potential for real-time applications on embedded sensing systems such as object tracking and recognition. Targeting this neuromorphic sensing pipeline, this paper proposes the Dynamic Tracking with Event Attention Spiking Network (DTEASN), a novel framework designed to address these challenges by employing a pure SNN architecture, bypassing conventional convolutional neural network (CNN) operations, and reducing GPU resource dependency, while tailoring the processing to DVS signal characteristics (asynchrony, sparsity, and polarity). The model incorporates two innovative, self-developed components: an event-driven multi-scale attention mechanism and a spatio-temporal event convolver, both of which significantly enhance spatio-temporal feature extraction from raw DVS events. An Event-Weighted Spiking Loss (EW-SLoss) is introduced to optimise the learning process by prioritising informative events and improving robustness to sensor noise. Additionally, a lightweight event tracking mechanism and a custom synaptic connection rule are proposed to further improve model efficiency for low-power, edge deployment. The efficacy of DTEASN is demonstrated through empirical results on event-based (DVS) object recognition and tracking benchmarks, where it outperforms conventional methods in accuracy, latency, event throughput (events/s) and spike rate (spikes/s), memory footprint, spike-efficiency (energy proxy), and overall computational efficiency under typical DVS settings. By virtue of its event-aligned, sparse computation, the framework is amenable to highly parallel neuromorphic hardware, supporting on- or near-sensor inference for embedded applications. Full article
(This article belongs to the Section Intelligent Sensors)
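
A minimal sketch of the leaky integrate-and-fire (LIF) dynamics that event-driven SNNs such as DTEASN build on; the decay constant, threshold, and synthetic event stream below are illustrative, not the paper's configuration.

```python
import numpy as np

def lif_forward(event_frames, decay=0.9, v_th=1.0):
    """event_frames: (T, N) per-timestep DVS event counts (polarity-summed)."""
    T, N = event_frames.shape
    v = np.zeros(N)                      # membrane potentials
    spikes = np.zeros((T, N), dtype=int)
    for t in range(T):
        v = decay * v + event_frames[t]  # leak, then integrate input events
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = 0.0                   # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
events = (rng.random((100, 8)) < 0.1).astype(float)  # sparse event stream
print(lif_forward(events).mean())                    # overall spike rate
```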

36 pages, 2656 KB  
Article
Energy Footprint and Reliability of IoT Communication Protocols for Remote Sensor Networks
by Jerzy Krawiec, Martyna Wybraniak-Kujawa, Ilona Jacyna-Gołda, Piotr Kotylak, Aleksandra Panek, Robert Wojtachnik and Teresa Siedlecka-Wójcikowska
Sensors 2025, 25(19), 6042; https://doi.org/10.3390/s25196042 - 1 Oct 2025
Abstract
Excessive energy consumption of communication protocols in IoT/IIoT systems constitutes one of the key constraints for the operational longevity of remote sensor nodes, where radio transmission often incurs higher energy costs than data acquisition or local computation. Previous studies have remained fragmented, typically focusing on selected technologies or specific layers of the communication stack, which has hindered the development of comparable quantitative metrics across protocols. The aim of this study is to design and validate a unified evaluation framework enabling consistent assessment of both wired and wireless protocols in terms of energy efficiency, reliability, and maintenance costs. The proposed approach employs three complementary research methods: laboratory measurements on physical hardware, profiling of SBC devices, and simulations conducted in the COOJA/Powertrace environment. A Unified Comparative Method was developed, incorporating bilinear interpolation and weighted normalization, with its robustness confirmed by a Spearman rank correlation coefficient exceeding 0.9. The analysis demonstrates that MQTT-SN and CoAP (non-confirmable mode) exhibit the highest energy efficiency, whereas HTTP/3 and AMQP incur the greatest energy overhead. Results are consolidated in the ICoPEP matrix, which links protocol characteristics to four representative RS-IoT scenarios: unmanned aerial vehicles (UAVs), ocean buoys, meteorological stations, and urban sensor networks. The framework provides well-grounded engineering guidelines that may extend node lifetime by up to 35% through the adoption of lightweight protocol stacks and optimized sampling intervals. The principal contribution of this work is the development of a reproducible, technology-agnostic tool for comparative assessment of IoT/IIoT communication protocols. The proposed framework addresses a significant research gap in the literature and establishes a foundation for further research into the design of highly energy-efficient and reliable IoT/IIoT infrastructures, supporting scalable and long-term deployments in diverse application environments. Full article
(This article belongs to the Collection Sensors and Sensing Technology for Industry 4.0)
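
To make the weighted-normalization idea concrete, here is a small sketch of how such a unified comparative score could be assembled and rank-checked; the metric values and weights are invented for illustration, not measurements from the study.

```python
import numpy as np
from scipy.stats import spearmanr

protocols = ["MQTT-SN", "CoAP-NON", "HTTP/3", "AMQP"]
energy_mj = np.array([1.2, 1.4, 9.8, 7.5])     # energy per message, lower is better
delivery = np.array([0.97, 0.95, 0.99, 0.99])  # delivery ratio, higher is better

def minmax(x, lower_is_better=False):
    z = (x - x.min()) / (x.max() - x.min())    # scale each metric to [0, 1]
    return 1.0 - z if lower_is_better else z

score = 0.6 * minmax(energy_mj, lower_is_better=True) + 0.4 * minmax(delivery)
print(sorted(zip(protocols, score.round(3)), key=lambda kv: -kv[1]))

# Robustness check in the spirit of the paper's Spearman analysis: does the
# ranking survive a different weighting?
alt = 0.5 * minmax(energy_mj, lower_is_better=True) + 0.5 * minmax(delivery)
rho, _ = spearmanr(score, alt)
print(round(float(rho), 3))                    # close to 1 -> ranking is stable
```
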
18 pages, 1425 KB  
Article
Exploring DC Power Quality Measurement and Characterization Techniques
by Yara Daaboul, Daniela Istrate, Yann Le Bihan, Ludovic Bertin and Xavier Yang
Sensors 2025, 25(19), 6043; https://doi.org/10.3390/s25196043 - 1 Oct 2025
Abstract
Within the modernizing energy infrastructure of today, the integration of renewable energy sources and direct current (DC)-powered technologies calls for the re-examination of traditional alternating current (AC) networks. Low-voltage DC (LVDC) grids offer an attractive way forward in reducing conversion losses and simplifying local power management. However, ensuring reliable operation depends on a thorough understanding of DC distortions—phenomena generated by power converters, source instability, and varying loads. Two complementary traceable measurement chains are presented in this article with the purpose of measuring the steady-state DC component and the amplitude and frequency of the distortions around the DC bus with low uncertainties. One chain is optimized for laboratory environments, with high effectiveness in a controlled setup, and the other is designed as a flexible and easily transportable solution, ensuring efficient and accurate assessments of DC distortions for field applications. In addition to our hardware solutions fully characterized by the uncertainty budget, we present the measurement method used for assessing DC distortions after evaluating the limitations of conventional AC techniques. Both arrangements are set to measure voltages of up to 1000 V, currents of up to 30 A, and frequency components of up to 150–500 kHz, with an uncertainty varying from 0.01% to less than 1%. This level of accuracy in the measurements will allow us to draw reliable conclusions regarding the dynamic behavior of future LVDC grids. Full article
(This article belongs to the Section Intelligent Sensors)
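
A short sketch of how the steady-state DC component and a dominant ripple's amplitude and frequency can be read off a sampled bus voltage with an FFT, the quantities these measurement chains target; the signal parameters are synthetic placeholders.

```python
import numpy as np

fs = 1_000_000                               # 1 MS/s covers 150-500 kHz ripple
t = np.arange(0, 0.01, 1 / fs)
v = 400.0 + 2.5 * np.sin(2 * np.pi * 150e3 * t)   # 400 V bus + 150 kHz ripple

spectrum = np.fft.rfft(v) / len(v)           # normalized one-sided spectrum
freqs = np.fft.rfftfreq(len(v), 1 / fs)
dc = spectrum[0].real                        # DC bin
k = np.argmax(np.abs(spectrum[1:])) + 1      # strongest non-DC bin
ripple = 2 * abs(spectrum[k])                # peak amplitude of the sinusoid
print(f"DC = {dc:.2f} V, ripple = {ripple:.2f} V at {freqs[k] / 1e3:.0f} kHz")
```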

31 pages, 12366 KB  
Article
Gateway-Free LoRa Mesh on ESP32: Design, Self-Healing Mechanisms, and Empirical Performance
by Danilo Arregui Almeida, Juan Chafla Altamirano, Milton Román Cañizares, Pablo Palacios Játiva, Javier Guaña-Moya and Iván Sánchez
Sensors 2025, 25(19), 6036; https://doi.org/10.3390/s25196036 - 1 Oct 2025
Abstract
LoRa is a long-range, low-power wireless communication technology widely used in Internet of Things (IoT) applications. However, its conventional implementation through Long Range Wide Area Network (LoRaWAN) presents operational constraints due to its centralized topology and reliance on gateways. To overcome these limitations, this work designs and validates a gateway-free mesh communication system that operates directly on commercially available commodity microcontrollers, implementing lightweight self-healing mechanisms suitable for resource-constrained devices. The system, based on ESP32 microcontrollers and LoRa modulation, adopts a mesh topology with custom mechanisms including neighbor-based routing, hop-by-hop acknowledgments (ACKs), and controlled retransmissions. Reliability is achieved through hop-by-hop acknowledgments, listen-before-talk (LBT) channel access, and duplicate suppression using alternate link triggering (ALT). A modular prototype was developed and tested under three scenarios: ideal conditions, intermediate node failure, and extended urban deployment. Results showed robust performance, achieving a Packet Delivery Ratio (PDR), the percentage of successfully delivered DATA packets over those sent, of up to 95% in controlled environments and 75% under urban conditions. In the failure scenario, an average Packet Recovery Ratio (PRR), the proportion of lost packets successfully recovered through retransmissions, of 88.33% was achieved, validating the system’s self-healing capabilities. Each scenario was executed in five independent runs, with values calculated for both traffic directions and averaged. These findings confirm that a compact and fault-tolerant LoRa mesh network, operating without gateways, can be effectively implemented on commodity ESP32-S3 + SX1262 hardware. Full article
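
A toy model of the hop-by-hop ACK and retransmission behavior described above, showing how the delivery ratio emerges from per-hop loss and retry limits; the loss probability and retry count are illustrative assumptions, not measured values.

```python
import random

def send_over_hops(n_hops=3, p_loss=0.2, max_retries=3):
    """Forward one DATA packet hop by hop; each hop retries until ACKed."""
    for _ in range(n_hops):
        for attempt in range(1 + max_retries):
            if random.random() > p_loss:     # DATA + ACK exchange succeeded
                break
        else:
            return False                     # hop exhausted retries: packet lost
    return True

random.seed(1)
sent = 1000
delivered = sum(send_over_hops() for _ in range(sent))
print(f"PDR = {100 * delivered / sent:.1f}%")
```
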
23 pages, 1095 KB  
Article
HySecure: FPGA-Based Hybrid Post-Quantum and Classical Cryptography Platform for End-to-End IoT Security
by Bohao Zhang, Jinfa Hong, Gaoyu Mao, Shiyu Shen, Hao Yang, Guangyan Li, Shengzhe Lyu, Patrick S. Y. Hung and Ray C. C. Cheung
Electronics 2025, 14(19), 3908; https://doi.org/10.3390/electronics14193908 - 30 Sep 2025
Abstract
As the Internet of Things (IoT) continues to expand into mission-critical and long-lived applications, securing low-power wide-area networks (LPWANs) such as Narrowband IoT (NB-IoT) against both classical and quantum threats becomes imperative. Existing NB-IoT security mechanisms terminate at the core network, leaving transmission payloads exposed. This paper proposes HySecure, an FPGA-based hybrid cryptographic platform that integrates both classical elliptic curve and post-quantum schemes to achieve end-to-end (E2E) security for NB-IoT communication. Our architecture, built upon the lightweight RISC-V PULPino platform, incorporates hardware accelerators for X25519, Kyber, Ed25519, and Dilithium. We design a hybrid key establishment protocol combining ECDH and Kyber through HKDF, and a dual-signature scheme using EdDSA and Dilithium to ensure authenticity and integrity during handshake. Cryptographic functions are evaluated on FPGA, achieving a 32.2× to 145.4× speedup. NS-3 simulations under realistic NB-IoT configurations demonstrate acceptable latency and throughput for the proposed hybrid schemes, validating their practicality for secure constrained IoT deployments and communications. Full article
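
A sketch of the hybrid key-establishment idea: concatenate a classical ECDH share with a post-quantum KEM share and derive the session key through HKDF. No standard-library Kyber exists, so the Kyber share below is a stand-in, and the `info` label and framing are hypothetical, not the paper's exact handshake.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
ecdh_secret = alice.exchange(bob.public_key())    # classical X25519 share
kyber_secret = os.urandom(32)                     # placeholder for Kyber KEM output

session_key = HKDF(
    algorithm=hashes.SHA256(), length=32,
    salt=None, info=b"hysecure-hybrid-handshake",  # hypothetical context label
).derive(ecdh_secret + kyber_secret)               # hybrid: both shares mixed
print(session_key.hex())
```

The point of the concatenation is that an attacker must break both X25519 and the KEM to recover the session key, which is what makes the scheme a hedge against quantum adversaries.
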
36 pages, 4047 KB  
Review
Application of FPGA Devices in Network Security: A Survey
by Abdulmunem A. Abdulsamad and Sándor R. Répás
Electronics 2025, 14(19), 3894; https://doi.org/10.3390/electronics14193894 - 30 Sep 2025
Abstract
Field-Programmable Gate Arrays (FPGAs) are increasingly shaping the future of network security, thanks to their flexibility, parallel processing capabilities, and energy efficiency. In this survey, we examine 50 peer-reviewed studies published between 2020 and 2025, selected from an initial pool of 210 articles based on relevance, hardware implementation, and the presence of empirical performance data. Our review focuses on five major application areas: cryptographic acceleration, intrusion detection and prevention systems (IDS/IPS), hardware firewalls, and emerging strategies involving artificial intelligence (AI) and post-quantum cryptography (PQC). We propose a structured taxonomy that organises the field by technical domain and challenge, and compare solutions in terms of scalability, resource usage, and real-world performance. Beyond summarising current advances, we explore ongoing limitations—such as hardware constraints, integration complexity, and the lack of standard benchmarking. We also outline future research directions, including low-power cryptographic designs, FPGA–AI collaboration for detecting zero-day attacks, and efficient PQC implementations. This survey aims to offer both a clear overview of recent progress and a valuable roadmap for researchers and engineers working toward secure, high-performance FPGA-based systems. Full article

26 pages, 1076 KB  
Article
NL-COMM: Enabling High-Performing Next-Generation Networks via Advanced Non-Linear Processing
by Chathura Jayawardena, George Ntavazlis Katsaros and Konstantinos Nikitopoulos
Future Internet 2025, 17(10), 447; https://doi.org/10.3390/fi17100447 - 30 Sep 2025
Abstract
Future wireless networks are expected to deliver enhanced spectral efficiency while being energy efficient. MIMO and other non-orthogonal transmission schemes, such as non-orthogonal multiple access (NOMA), offer substantial theoretical spectral efficiency gains. However, these gains have yet to translate into practical deployments, largely due to limitations in current signal processing methods. Linear transceiver processing, though widely adopted, fails to fully exploit non-orthogonal transmissions, forcing massive MIMO systems to use a disproportionately large number of RF chains for relatively few streams, increasing power consumption. Non-linear processing can unlock the full potential of non-orthogonal schemes but is hindered by high computational complexity and integration challenges. Moreover, existing message-passing receivers for NOMA depend on specially designed sparse signals, limiting resource allocation flexibility and efficiency. This work presents NL-COMM, an efficient non-linear processing framework that translates the theoretical gains of non-orthogonal transmissions into practical benefits for both the uplink and downlink. NL-COMM delivers over 200% spectral efficiency gains, enables 50% reductions in antennas and RF chains (and thus base station power consumption), and increases concurrently supported users by 450%. In distributed MIMO deployments, the antenna reduction halves fronthaul bandwidth requirements, mitigating a key system bottleneck. Furthermore, NL-COMM offers the flexibility to unlock new NOMA schemes. Finally, we present both hardware and software architectures for NL-COMM that support massively parallel execution, demonstrating how advanced non-linear processing can be realized in practice to meet the demands of next-generation networks. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)
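
To see why non-linear processing recovers gains that linear detection leaves on the table, here is a toy NumPy comparison of zero-forcing against exhaustive maximum-likelihood detection on a small MIMO link. NL-COMM itself is far more efficient than brute-force search, so this is illustration only.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # 2x2 channel
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, size=2)                                 # transmitted symbols
y = H @ x + 0.5 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Linear detection: invert the channel, then slice to the nearest symbol.
x_zf = np.linalg.pinv(H) @ y
zf_hat = qpsk[np.argmin(np.abs(x_zf[:, None] - qpsk), axis=1)]

# Non-linear (optimal) detection: search all 16 candidate symbol pairs.
candidates = np.array(list(product(qpsk, repeat=2)))
ml_hat = candidates[np.argmin(np.linalg.norm(y - candidates @ H.T, axis=1))]
print("sent:", x, "\nZF  :", zf_hat, "\nML  :", ml_hat)
```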

20 pages, 6308 KB  
Article
An Intelligent Algorithm for the Optimal Deployment of Water Network Monitoring Sensors Based on Automatic Labelling and Graph Neural Network
by Guoxin Shi, Xianpeng Wang, Jingjing Zhang and Xinlei Gao
Information 2025, 16(10), 837; https://doi.org/10.3390/info16100837 - 27 Sep 2025
Abstract
In order to enhance leakage detection accuracy in water distribution networks (WDNs) while reducing sensor deployment costs, an intelligent algorithm based on automatic labelling and a graph neural network (ALGN) was proposed for the optimal deployment of WDN monitoring sensors. The research aims to develop a data-driven, topology-aware sensor deployment strategy that achieves high leakage detection performance with minimal hardware requirements. The methodology consisted of three main steps: first, the dung beetle optimization algorithm (DBO) was employed to automatically determine optimal parameters for the DBSCAN clustering algorithm, which generated initial cluster labels; second, a customized graph neural network architecture was used to perform topology-aware node clustering, integrating network structure information; finally, optimal pressure sensor locations were selected based on minimum distance criteria within identified clusters. The key innovation lies in the integration of metaheuristic optimization with graph-based learning to fully automate the sensor placement process while explicitly incorporating the hydraulic network topology. The proposed approach was validated on real-world WDN infrastructure, demonstrating superior performance with 93% node coverage and 99.77% leakage detection accuracy, surpassing state-of-the-art methods by 2% and 0.7%, respectively. These results indicate that the ALGN framework provides municipal water utilities with a robust, automated solution for designing efficient pressure monitoring systems that balance detection performance with implementation cost. Full article
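
A sketch of the clustering and sensor-selection steps on toy junction coordinates: DBSCAN clustering followed by a minimum-distance sensor pick per cluster. The DBO-tuned eps/min_samples and the GNN refinement stage are replaced by fixed placeholders here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Toy 2D coordinates for WDN junctions, grouped around three areas.
nodes = np.vstack([rng.normal(c, 0.3, size=(30, 2))
                   for c in ((0, 0), (3, 3), (6, 0))])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(nodes)  # stand-in parameters
sensors = []
for k in set(labels) - {-1}:                 # skip DBSCAN noise points
    members = np.where(labels == k)[0]
    centroid = nodes[members].mean(axis=0)
    dists = np.linalg.norm(nodes[members] - centroid, axis=1)
    sensors.append(members[np.argmin(dists)])  # node closest to cluster center
print("sensor nodes:", sensors)
```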

42 pages, 5827 KB  
Review
A Review of Reconfigurable Intelligent Surfaces in Underwater Wireless Communication: Challenges and Future Directions
by Tharuka Govinda Waduge, Yang Yang and Boon-Chong Seet
J. Sens. Actuator Netw. 2025, 14(5), 97; https://doi.org/10.3390/jsan14050097 - 26 Sep 2025
Abstract
Underwater wireless communication (UWC) is an emerging technology crucial for automating marine industries, such as offshore aquaculture and energy production, and military applications. It is a key part of the 6G vision of creating a hyperconnected world by extending connectivity to the underwater environment. Of the three main practicable UWC technologies (acoustic, optical, and radiofrequency), acoustic methods are best for far-reaching links, while optical is best for high-bandwidth communication. Recently, utilizing reconfigurable intelligent surfaces (RISs) has become a hot topic in terrestrial applications, underscoring significant benefits for extending coverage, providing connectivity to blind spots, wireless power transmission, and more. However, the potential for further research in underwater RIS is vast. Here, for the first time, we conduct an extensive survey of the state of the art in RIS and metasurfaces with a focus on underwater applications. Within a holistic perspective, this survey systematically evaluates acoustic, optical, and hybrid RIS, showing that environment-aware channel switching and joint communication architectures could deliver holistic gains over single-domain RIS in the distance–bandwidth trade-off, congestion mitigation, security, and energy efficiency. Additional focus is placed on the current challenges from research and realization perspectives. We discuss recent advances and suggest design considerations for coupling hybrid RIS with optical energy and piezoelectric acoustic energy harvesting, which, along with distributed relaying, could realize self-sustainable underwater networks that are highly reliable, long-range, and high-throughput. The most impactful future directions seem to be in applying RIS to enhance underwater links in inhomogeneous environments and overcome time-varying effects, realizing RIS hardware suitable for underwater conditions, achieving simultaneous transmission and reflection (STAR-RIS), and, particularly in optical links, integrating the latest developments in metasurfaces. Full article

29 pages, 3798 KB  
Article
Hybrid Adaptive MPC with Edge AI for 6-DoF Industrial Robotic Manipulators
by Claudio Urrea
Mathematics 2025, 13(19), 3066; https://doi.org/10.3390/math13193066 - 24 Sep 2025
Abstract
Autonomous robotic manipulators in industrial environments face significant challenges, including time-varying payloads, multi-source disturbances, and real-time computational constraints. Traditional model predictive control frameworks degrade by over 40% under model uncertainties, while conventional adaptive techniques exhibit convergence times incompatible with industrial cycles. This work presents a hybrid adaptive model predictive control framework integrating edge artificial intelligence with dual-stage parameter estimation for 6-DoF industrial manipulators. The approach combines recursive least squares with a resource-optimized neural network (three layers, 32 neurons, <500 KB memory) designed for industrial edge deployment. The system employs innovation-based adaptive forgetting factors, providing exponential convergence with mathematically proven Lyapunov-based stability guarantees. Simulation validation using the Fanuc CR-7iA/L manipulator demonstrates superior performance across demanding scenarios, including precision laser cutting and obstacle avoidance. Results show 52% trajectory tracking RMSE reduction (0.022 m to 0.012 m) under 20% payload variations compared to standard MPC, while achieving sub-5 ms edge inference latency with 99.2% reliability. The hybrid estimator achieves 65% faster parameter convergence than classical RLS, with 18% energy efficiency improvement. Statistical significance is confirmed through ANOVA (F = 24.7, p < 0.001) with large effect sizes (Cohen’s d > 1.2). This performance surpasses recent adaptive control methods while maintaining proven stability guarantees. Hardware validation under realistic industrial conditions remains necessary to confirm practical applicability. Full article
(This article belongs to the Special Issue Computation, Modeling and Algorithms for Control Systems)
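
A compact sketch of recursive least squares with an innovation-driven forgetting factor, the classical half of the dual-stage estimator described above; the adaptation rule for the forgetting factor is a simple illustrative choice, not the paper's.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam_min=0.95):
    """One RLS update with an innovation-based adaptive forgetting factor."""
    e = y - phi @ theta                        # innovation (prediction error)
    lam = max(lam_min, 1.0 - 0.1 * min(e * e, 1.0))  # forget faster on large error
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * e                      # parameter update
    P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta, P

true_theta = np.array([2.0, -1.0])             # unknown payload-dependent params
theta, P = np.zeros(2), 1000.0 * np.eye(2)
rng = np.random.default_rng(0)
for _ in range(200):
    phi = rng.normal(size=2)                   # regressor (e.g., joint signals)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta.round(3))                          # converges near [2, -1]
```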

18 pages, 1617 KB  
Article
GNN-MFF: A Multi-View Graph-Based Model for RTL Hardware Trojan Detection
by Senjie Zhang, Shan Zhou, Panpan Xue, Lu Kong and Jinbo Wang
Appl. Sci. 2025, 15(19), 10324; https://doi.org/10.3390/app151910324 - 23 Sep 2025
Abstract
The globalization of hardware design flows has increased the risk of Hardware Trojan (HT) insertion during the design phase. Graph-based learning methods have shown promise for HT detection at the Register Transfer Level (RTL). However, most existing approaches rely on representing RTL designs through a single graph structure. This single-view modeling paradigm inherently constrains the model’s ability to perceive complex behavioral patterns, consequently limiting detection performance. To address these limitations, we propose GNN-MFF, an innovative multi-view feature fusion model based on Graph Neural Networks (GNNs). Our approach centers on joint multi-view modeling of RTL designs to achieve a more comprehensive representation. Specifically, we construct complementary graph-structural views: the Abstract Syntax Tree (AST) capturing structural information, and the Data Flow Graph (DFG) modeling logical dependency relationships. For each graph structure, customized GNN architectures are designed to effectively extract its features. Furthermore, we develop a feature fusion framework that leverages a multi-head attention mechanism to deeply explore and integrate heterogeneous features from distinct views, thereby enhancing the model’s capacity to structurally perceive anomalous logic patterns. Evaluated on an extended Trust-Hub-based HT benchmark dataset, our model achieves an average F1-score of 97.08% in automated detection of unseen HTs, surpassing current state-of-the-art methods. Full article
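
A minimal PyTorch sketch of fusing per-design embeddings from two graph views (AST and DFG) with multi-head attention, the fusion idea GNN-MFF describes; the GNN encoders are stubbed with random tensors and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ViewFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cls = nn.Linear(dim, 2)             # Trojan / Trojan-free logits

    def forward(self, ast_emb, dfg_emb):          # each: (batch, dim)
        views = torch.stack([ast_emb, dfg_emb], dim=1)   # (batch, 2, dim)
        fused, _ = self.attn(views, views, views)        # cross-view attention
        return self.cls(fused.mean(dim=1))               # pool views, classify

model = ViewFusion()
ast_emb, dfg_emb = torch.randn(8, 64), torch.randn(8, 64)  # stub GNN outputs
print(model(ast_emb, dfg_emb).shape)             # torch.Size([8, 2])
```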

14 pages, 769 KB  
Article
A Novel Low-Power Ternary 6T SRAM Design Using XNOR-Based CIM Architecture in Advanced FinFET Technologies
by Adnan A. Patel, Sohan Sai Dasaraju, Achyuth Gundrapally and Kyuwon Ken Choi
Electronics 2025, 14(18), 3737; https://doi.org/10.3390/electronics14183737 - 22 Sep 2025
Abstract
The increasing demand for high-performance and low-power hardware in artificial intelligence (AI) applications—such as speech recognition, facial recognition, and object detection—has driven the exploration of advanced memory designs. Convolutional neural networks (CNNs) and deep neural networks (DNNs) require intensive computational resources, leading to significant challenges in terms of memory access time and power consumption. Compute-in-Memory (CIM) architectures have emerged as an alternative by executing computations directly within memory arrays, thereby reducing the expensive data transfer between memory and processor units. In this work, we present a 6T SRAM-based CIM architecture implemented using FinFET technology, aiming to reduce both power consumption and access delay. We explore and simulate three different SRAM cell structures—PLNA (P-Latch N-Access), NLPA (N-Latch P-Access), and SE (Single-Ended)—to assess their suitability for CIM operations. Compared to a reference 10T XNOR-based CIM design, our results show that the proposed structures achieve an average power consumption approximately 70% lower, along with significant delay reduction, without compromising functional integrity. A comparative analysis is presented to highlight the trade-offs between the three configurations, providing insights into their potential applications in low-power AI accelerator design. Full article
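
A small NumPy sketch of the XNOR/popcount multiply-accumulate that XNOR-based CIM arrays evaluate in place: with weights and activations encoded as ±1, a dot product reduces to a bitwise XNOR followed by a population count. The vector length and values are illustrative; the SRAM circuit performs this on its bitlines rather than in software.

```python
import numpy as np

def xnor_mac(w_bits, a_bits):
    """w_bits, a_bits: 0/1 arrays encoding -1/+1. Returns the ±1 dot product."""
    xnor = ~(w_bits ^ a_bits) & 1               # 1 wherever the bits agree
    return 2 * int(xnor.sum()) - len(w_bits)    # popcount -> signed sum

rng = np.random.default_rng(0)
w, a = rng.integers(0, 2, 64), rng.integers(0, 2, 64)
# Sanity check against the explicit ±1 dot product:
assert xnor_mac(w, a) == int(((2 * w - 1) * (2 * a - 1)).sum())
print(xnor_mac(w, a))
```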
