Search Results (64)

Search Parameters:
Keywords = topology processor

15 pages, 2982 KB  
Article
Hydrodynamic Shielding and Oxidation Suppression in Merging Lazy Plumes
by Atsuyoshi Sato, Arata Kioka, Masami Nakagawa and Takeshi Tsuji
Fluids 2026, 11(4), 92; https://doi.org/10.3390/fluids11040092 - 30 Mar 2026
Viewed by 250
Abstract
This paper investigates the combustion dynamics of interacting lazy multi-component gas plumes (i.e., buoyancy-dominated gas releases with a low initial momentum flux), a configuration relevant to coal mining waste emissions. By coupling a three-dimensional large eddy simulation (mesh size of 10⁻² m; parallelized across 2048 processors) with detailed chemical kinetics (GRI-Mech 3.0), we analyzed the sensitivity of the flow structure and plume stabilization to the vent spacing of twin hydrogen-rich multi-component gas plumes (H₂-CO-CH₄-air). The results identified a distinct topological transition. While gas plumes from vents spaced at δ/D = 5 (where δ and D are the spacing and width of the gas vents, respectively) evolve independently, those at closely spaced sources (δ/D = 5/4) exhibit rapid coalescence driven by hydrodynamic shielding. This hydrodynamic merging results in a unified column with an effective hydraulic diameter of D_eff ≈ 2D, which significantly reduces the surface-to-volume ratio available for ambient air entrainment and maintains a coherent combustible-rich core to higher altitudes than isolated-source correlations would predict. However, despite this mass retention, the rapid vertical acceleration of buoyancy-dominated flows induces high strain rates, significantly disrupting the reaction zone structure. These findings establish that, for clustered emission sources, the dispersion hazard is governed by a coupling between hydrodynamic coalescence, which maintains reactant concentration, and finite-rate chemistry, which restricts oxidation efficiency. This paper provides critical insights for designing gas capture infrastructure and assessing flammability limits in multi-vent systems.
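As an illustrative aside, the entrainment argument behind D_eff ≈ 2D can be checked with back-of-the-envelope geometry, idealizing each plume as a circular column (a sketch only, not the paper's LES model):

```python
import math

def surface_to_volume(diameter: float) -> float:
    """Lateral-surface-to-volume ratio of a circular column:
    (pi*D*h) / (pi*(D/2)**2 * h) = 4/D, independent of the height h."""
    return math.pi * diameter / (math.pi * (diameter / 2) ** 2)

D = 1.0  # vent width (arbitrary units)

# Two isolated plumes of diameter D each expose 4/D of surface per unit volume.
isolated = surface_to_volume(D)
# One merged column with D_eff ~ 2D exposes only 4/(2D) = 2/D.
merged = surface_to_volume(2 * D)

print(isolated, merged)  # 4.0 2.0
```

Halving the entrainment surface per unit volume is consistent with the abstract's claim that the merged column keeps a combustible-rich core to higher altitudes.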
(This article belongs to the Special Issue 10th Anniversary of Fluids—Recent Advances in Fluid Mechanics)

20 pages, 3772 KB  
Article
A 24 V-to-0.6~3 V Quadruple Step-Down Trans-Inductor Voltage Regulator with Phase-Overlap Operation and Ultra-Fast Transient Response for Processors
by Haoxin Cai, Bin Li and Zhaohui Wu
Electronics 2026, 15(6), 1307; https://doi.org/10.3390/electronics15061307 - 20 Mar 2026
Viewed by 172
Abstract
This paper presents a quadruple step-down (QSD) trans-inductor voltage regulator (TLVR) converter to accommodate the high-current, fast-transient requirements of processor power supplies. Evolved from the dual-step-down (DSD) topology, the QSD configuration offers stronger load capacity; three additional flying capacitors are introduced between adjacent phases to break the 25% duty-cycle constraint, thereby extending the output voltage range and accelerating the transient response. Moreover, the converter’s transient response is optimized to its full potential through both multi-phase simultaneous operation and the incorporation of the dedicated TLVR architecture. A modified adaptive on-time (AOT) controller supporting four-phase simultaneous operation is employed. Designed and verified via post-layout simulation in a 180 nm BCD process with all power transistors rated at 6 V, the converter achieves a peak efficiency of 96.1% at 24 V input and 3 V output, as well as a maximum load capacity of 20 A. Under a 19 A load current step with a 19 ns rise time, it exhibits only a 37 mV output voltage droop and a 2 μs settling time, even with a 100 μF output capacitor.
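The duty-cycle arithmetic behind the 25% constraint can be sketched assuming an ideal N:1 step-down transfer ratio Vout = D·Vin/N (an assumption for illustration; the converter's exact transfer function is defined in the paper):

```python
V_IN = 24.0  # input voltage in volts

def duty_cycle(v_out: float, step_down: int = 4) -> float:
    """Ideal duty cycle for an N:1 step-down stage: Vout = D * Vin / N."""
    return step_down * v_out / V_IN

print(round(duty_cycle(0.6), 3))  # 0.1  -> low end of the 0.6-3 V range
print(round(duty_cycle(3.0), 3))  # 0.5  -> would violate a 25% duty ceiling

# Maximum output under a hard 25% duty-cycle constraint:
print(0.25 * V_IN / 4)  # 1.5 V, half of the targeted 3 V
```

Under this idealization, reaching 3 V from a 24 V input with a 4:1 stage requires a 50% duty cycle, which is why breaking the 25% constraint extends the output range.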
(This article belongs to the Special Issue Advanced DC-DC Converter Topology Design, Control, Application)

22 pages, 1151 KB  
Article
Directed and Resolution-Adaptive Louvain Community Method for Hardware Trojan Detection and Localization in Gate-Level Netlists
by Hongxu Gao, Dong Ding, Cai Zhen, Xin Liu, Yu Li, Jinping Li, Yuning Zhao and Quan Wang
Electronics 2026, 15(5), 1027; https://doi.org/10.3390/electronics15051027 - 28 Feb 2026
Viewed by 271
Abstract
The increasing complexity of modern gate-level circuits significantly degrades the efficiency of existing Hardware Trojan detection methods. Community partitioning is an efficient structural decomposition technique for addressing efficiency and scalability issues, yet current community-based detection schemes rely primarily on undirected graph modeling. To address these issues, we propose an improved structure-aware community detection method for gate-level netlists, aiming to enhance the detection and localization of small-scale Hardware Trojans. First, an expanded dataset of clean and Trojan-inserted circuits with structural diversity is constructed by extending the Trust-Hub benchmark circuits. Then, a directed and resolution-adaptive Louvain community detection algorithm is proposed: by introducing directed modularity, resolution parameters, and logic-gate semantic weighting, it achieves fine-grained community partitioning. On this basis, topological, functional, and anomaly features are extracted from community subgraphs, and a detection framework is built by combining graph neural networks and traditional detection models. All experiments are conducted on a unified platform equipped with an Intel Core i7-10750H processor and an NVIDIA GeForce RTX 2060 GPU. Experimental results show that, compared with configurations using the original Louvain partitioning and traditional features, the proposed method achieves significant improvements in both detection accuracy and localization capability. After introducing the improved community partitioning and feature design, the best model (CommunityGAT) yields a 3.3% increase in TPR and a 10.8% increase in ALC, verifying the method’s effectiveness in detecting small-scale concealed Trojans.
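For orientation, the directed-modularity objective that a directed Louvain variant optimizes can be sketched in a few lines (a minimal sketch with an illustrative toy graph; the paper's gate-semantic weighting is omitted):

```python
def directed_modularity(edges, communities, gamma=1.0):
    """Directed modularity with a resolution parameter gamma:
    Q = (1/m) * sum_ij [A_ij - gamma * k_out(i) * k_in(j) / m] * delta(c_i, c_j)."""
    m = len(edges)
    k_out, k_in = {}, {}
    for u, v in edges:
        k_out[u] = k_out.get(u, 0) + 1
        k_in[v] = k_in.get(v, 0) + 1
    q = 0.0
    for u in communities:
        for v in communities:
            if communities[u] != communities[v]:
                continue  # the delta term: only same-community pairs count
            a_uv = edges.count((u, v))  # adjacency entry A_ij
            q += a_uv - gamma * k_out.get(u, 0) * k_in.get(v, 0) / m
    return q / m

# A toy "netlist" graph: two tightly coupled gate pairs with one cross wire.
edges = [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c"), ("b", "c")]
comms = {"a": 0, "b": 0, "c": 1, "d": 1}
print(round(directed_modularity(edges, comms), 2))  # 0.32
```

Raising gamma penalizes the degree-product term more heavily, which is what pushes the partition toward finer-grained communities.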
(This article belongs to the Special Issue New Trends in Cybersecurity and Hardware Design for IoT)

23 pages, 2456 KB  
Article
Research on Intelligent Thermal Optimization for Chiplet-Based Heterogeneously Integrated AI Chip Embedded with Leaf-Vein-Inspired Fractal Microchannels
by Jie Wu, Yu Liang, Guibin Liu, Ruiyang Pang, Yi Teng, Chen Li, Xuetian Bao, Shi Lei and Zhikuang Cai
Materials 2026, 19(4), 679; https://doi.org/10.3390/ma19040679 - 10 Feb 2026
Viewed by 1036
Abstract
Conventional cooling schemes that rely on rigid heat-sink-to-die coupling in vertical stacks fail to track the dynamic, non-uniform heat map of high-performance artificial-intelligence (AI) chips employing chiplet-based heterogeneous integration, giving rise to local hot spots. To eliminate this mismatch, we present a leaf-vein-inspired fractal microchannel tailored for such AI processors. Its hierarchical bifurcation–confluence topology adaptively reshapes the flow field, delivering ultra-low thermal resistance, high heat-transfer coefficients, and uniform dissipation. Coupled with reconfigurable chiplet placement, the design is evaluated through FEM-based orthogonal experiments that rank the influence of coolant, channel diameter/depth, inlet/outlet position, substrate thickness, and flow rate via range analysis and Analysis of Variance (ANOVA). A machine-learned surrogate model of junction temperature is then fed to Particle Swarm Optimization (PSO) for multi-parameter optimization. When re-simulated with the optimal parameter set, the symmetric fractal network lowered the AI chip junction temperature from 127.80 °C to 30.97 °C, a 76% improvement, offering a theoretical basis for hotspot mitigation in advanced heterogeneous AI packages.
(This article belongs to the Special Issue Microstructural and Mechanical Characteristics of Welded Joints)

17 pages, 702 KB  
Article
Machine Learning the Decoherence Property of Superconducting and Semiconductor Quantum Devices from Graph Connectivity
by Quan Fu, Jie Liu, Xin Wang and Rui Xiong
Entropy 2026, 28(1), 89; https://doi.org/10.3390/e28010089 - 12 Jan 2026
Viewed by 647
Abstract
Quantum computing faces significant challenges from decoherence and noise, which limit the practical implementation of quantum algorithms. While substantial progress has been made in improving individual qubit coherence times, the collective behavior of interconnected qubit systems remains incompletely understood. The connectivity architecture plays a crucial role in determining overall system susceptibility to environmental noise, yet systematic characterization of this relationship has been hindered by computational complexity. We develop a machine learning framework that bridges graph features with quantum device characterization to predict decoherence lifetime directly from connectivity patterns. By representing quantum architectures as connected graphs and using 14 topological features as input to supervised learning models, we achieve accurate lifetime predictions with R² > 0.96 for both superconducting and semiconductor platforms. Our analysis reveals fundamentally distinct decoherence mechanisms: superconducting qubits show high sensitivity to global connectivity measures (betweenness centrality δ₁ = 0.484, spectral entropy δ₁ = 0.480), while semiconductor quantum dots exhibit exceptional sensitivity to system scale (node count δ₂ = 0.919, importance = 1.860). The complete failure of cross-platform model transfer (R² scores of −0.39 and −433.60) emphasizes the platform-specific nature of optimal connectivity design. Our approach enables rapid assessment of quantum architectures without expensive simulations, providing practical guidance for noise-optimized quantum processor design.
(This article belongs to the Section Quantum Information)

17 pages, 9727 KB  
Article
An Energy-Efficient Neuromorphic Processor Using Unified Refractory Control-Based NoC for Edge AI
by Su-Hwan Na and Dong-Sun Kim
Electronics 2025, 14(24), 4959; https://doi.org/10.3390/electronics14244959 - 17 Dec 2025
Viewed by 656
Abstract
Neuromorphic computing has emerged as a promising paradigm for edge AI systems owing to its event-driven operation and high energy efficiency. However, conventional spiking neural network (SNN) architectures often suffer from redundant computation and inefficient power control, particularly during on-chip learning. This paper proposes a network-on-chip (NoC) architecture featuring a unified refractory-enabled neuron (UREN)-based router that globally coordinates spike-driven computation across multiple neuron cores. The router applies a unified refractory time to all neurons following a winner spike event, effectively enabling clock gating and suppressing redundant activity. The proposed design adopts a star-routing topology with multicasting support and integrates nearest-neighbor spike-timing-dependent plasticity (STDP) for local online learning. FPGA-based experiments demonstrate a 30% reduction in computation compared with baseline SNN implementations, along with 86.1% online classification accuracy on the MNIST dataset. These results confirm that the UREN-based router provides a scalable and power-efficient neuromorphic processor architecture, well suited for energy-constrained edge AI applications.

26 pages, 1491 KB  
Article
Time and Memory Trade-Offs in Shortest-Path Algorithms Across Graph Topologies: A*, Bellman–Ford, Dijkstra, AI-Augmented A* and a Neural Baseline
by Nahier Aldhafferi
Computers 2025, 14(12), 545; https://doi.org/10.3390/computers14120545 - 10 Dec 2025
Viewed by 1356
Abstract
This study presents a comparative evaluation of Dijkstra’s algorithm, A*, Bellman–Ford, AI-Augmented A*, and a neural AI-based model for shortest-path computation across diverse graph topologies, with a focus on time efficiency and memory consumption under standardized experimental conditions. We analyzed grids, random graphs, and scale-free graphs with up to 10³ nodes, specifically examining 100- and 1000-node grids, 100- and 1000-node random graphs, and 100-node scale-free graphs. The algorithms were benchmarked through repeated runs per condition on a server-class system equipped with an Intel Xeon Gold 6248R processor, an NVIDIA Tesla V100 GPU (32 GB), 256 GB of RAM, and Ubuntu 20.04. A* consistently outperformed Dijkstra’s algorithm when paired with an informative admissible heuristic, exhibiting runtimes approximately 1.37× to 1.91× faster across the various topologies. In comparison, Bellman–Ford was slower than Dijkstra’s by approximately 1.50× to 1.92×, depending on graph type and size; however, it remained a robust option in scenarios involving negative edge weights or when early-termination conditions reduced the practical iteration count. The AI model was the slowest across all conditions, incurring runtimes 2.60× to 3.23× higher than A* and 1.62× to 2.15× higher than Bellman–Ford, offering limited gains as a direct solver. These findings underscore topology-sensitive trade-offs: A* is preferred when a suitable heuristic is available; Dijkstra’s serves as a strong baseline in the absence of heuristics; Bellman–Ford is appropriate for handling negative weights; and current AI approaches are not yet competitive for exact shortest paths but may hold promise as learned heuristics to augment A*. We provide environmental details and comparative results to support reproducibility and facilitate further investigation into hybrid learned-classical strategies.
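The Dijkstra-versus-A* trade-off is easy to reproduce in miniature; the sketch below runs both on a small unweighted grid (illustrative only, not the paper's benchmark harness):

```python
import heapq

def grid_search(width, height, start, goal, heuristic=None):
    """Uniform-cost search on a 4-connected grid. With heuristic=None this
    is Dijkstra; with an admissible heuristic it is A*.
    Returns (optimal cost, number of nodes expanded)."""
    h = heuristic if heuristic else (lambda n: 0)
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    expanded = 0
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        expanded += 1
        if node == goal:
            return g, expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return float("inf"), expanded

goal = (9, 0)
manhattan = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])

d_cost, d_expanded = grid_search(10, 10, (0, 0), goal)             # Dijkstra
a_cost, a_expanded = grid_search(10, 10, (0, 0), goal, manhattan)  # A*

print(d_cost == a_cost)          # True: both return the optimal cost
print(a_expanded < d_expanded)   # True: the heuristic prunes expansions
```

Both variants return the same optimal cost; the admissible Manhattan heuristic simply lets A* expand far fewer nodes, which is the runtime gap the study measures.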

23 pages, 2960 KB  
Article
Analysis of Surface Code Algorithms on Quantum Hardware Using the Qrisp Framework
by Jan Krzyszkowski and Marcin Niemiec
Electronics 2025, 14(23), 4707; https://doi.org/10.3390/electronics14234707 - 29 Nov 2025
Viewed by 2151
Abstract
The pursuit of scalable quantum computing is intrinsically limited by qubit decoherence, making robust quantum error correction (QEC) techniques crucial. As a leading solution, the topological surface code offers inherent protection against local noise. This study presents the first comprehensive implementation and quantitative characterization of a full surface code pipeline, encompassing lattice construction, multi-round syndrome extraction, and MWPM decoding, using the high-level Qrisp programming framework. The entire pipeline was executed on IQM superconducting quantum processors to provide an empirical assessment under current noisy intermediate-scale quantum (NISQ) conditions. Our experimental data definitively show that the system operates significantly below the fault-tolerance threshold. Crucially, a quantitative resource analysis isolates and establishes the lack of native qubit reset on the hardware as the dominant architectural bottleneck. This constraint forces the physical qubit count to scale as d² + (d² − 1)·T, effectively preventing scaling to larger code distances (d) and execution times (T) on current devices. The work confirms Qrisp’s capability to support advanced QEC protocols, demonstrating that high-level abstraction can reduce implementation complexity by simplifying scheduling and mapping, thereby facilitating deeper experimental analysis of hardware limitations.
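The quoted scaling is easy to make concrete; a small sketch of the qubit budget, assuming d² data qubits plus d² − 1 syndrome ancillas that cannot be reused without native reset:

```python
def qubits_without_reset(d: int, t: int) -> int:
    """Physical qubits when ancillas cannot be reused: d^2 data qubits plus
    (d^2 - 1) fresh syndrome ancillas for each of T extraction rounds."""
    return d * d + (d * d - 1) * t

def qubits_with_reset(d: int) -> int:
    """With native reset, the (d^2 - 1) ancillas are reused across rounds."""
    return d * d + (d * d - 1)

for d in (3, 5, 7):
    print(d, qubits_with_reset(d), qubits_without_reset(d, t=10))
# d=3: 17 vs 89; d=5: 49 vs 265; d=7: 97 vs 529
```

The linear growth in T is what makes the lack of native reset the dominant bottleneck: ten syndrome rounds at distance 7 already cost more than five times the qubits of the resettable layout.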
(This article belongs to the Special Issue Recent Advances in Quantum Information)

22 pages, 6033 KB  
Article
High-Density Neuromorphic Inference Platform (HDNIP) with 10 Million Neurons
by Yue Zuo, Ning Ning, Ke Cao, Rui Zhang, Cheng Fu, Shengxin Wang, Liwei Meng, Ruichen Ma, Guanchao Qiao, Yang Liu and Shaogang Hu
Electronics 2025, 14(17), 3412; https://doi.org/10.3390/electronics14173412 - 27 Aug 2025
Viewed by 1681
Abstract
Modern neuromorphic processors exhibit neuron densities that are orders of magnitude lower than those of the biological cortex, hindering the deployment of large-scale spiking neural networks (SNNs) on single chips. To bridge this gap, we propose HDNIP, a 40 nm high-density neuromorphic inference platform with a density-first architecture. By eliminating area-intensive on-chip SRAM and using 1280 compact cores with a time-division multiplexing factor of up to 8192, HDNIP integrates 10 million neurons and 80 billion synapses within a 44.39 mm² synthesized area. This achieves an unprecedented neuron density of 225k neurons/mm², over 100 times greater than prior art. The resulting bandwidth challenges are mitigated by a ReRAM-based near-memory computation strategy combined with input reuse, reducing off-chip data transfer by approximately 95%. Furthermore, adaptive TDM and dynamic core fusion ensure high hardware utilization across diverse network topologies. Emulator-based validation using large SNNs demonstrates a throughput of 13 GSOP/s at a low power consumption of 146 mW. HDNIP establishes a scalable pathway towards single-chip, low-SWaP neuromorphic systems for complex edge intelligence applications.
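The headline numbers are self-consistent, which a reader can sanity-check with two lines of arithmetic (illustrative only):

```python
cores = 1280
tdm_factor = 8192          # maximum time-division multiplexing per core
area_mm2 = 44.39

capacity = cores * tdm_factor
print(capacity)            # 10485760, i.e., the ~10 million neurons quoted

density_k_per_mm2 = 10_000_000 / area_mm2 / 1e3
print(round(density_k_per_mm2))  # 225 -> the 225 k neurons/mm^2 figure
```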
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)

28 pages, 9730 KB  
Article
Interplay of Connectivity and Unwanted Physical Interactions Within the Architecture of the D-Wave 2000Q Chimera Processor
by Jessica Park, Susan Stepney and Irene D’Amico
Technologies 2025, 13(8), 355; https://doi.org/10.3390/technologies13080355 - 12 Aug 2025
Viewed by 705
Abstract
We consider dynamics relevant to annealing in qubit networks modelled on the architecture of the D-Wave 2000Q quantum processor (known as the Chimera topology). Our results report on the effects of the qubits’ connectivity and variable coupling strengths (based on physical interactions) on the dynamics of the network. The networks we examine are up to 32 qubits in size and include coupling lengths varying by almost an order of magnitude. We show that while information transfer within the network can be strongly affected by the different interactions, the system maintains similar clusters of qubits with comparable fidelities even in the presence of some of the physical interactions. This suggests an intrinsic robustness of the Chimera topology to these perturbations, even though it includes such a variety of coupling lengths. Moreover, a similar clustering geometry was observed for other qubit properties in a previous analysis of actual data from the D-Wave 2000Q. This comparable behaviour suggests that the real quantum annealing chip is subject to little or no unwanted effects due to interactions that scale with the coupling lengths. This could be due to the absence of the most damaging type of physical interactions and/or to D-Wave calibration methods tuning the control lines such that the couplings perform as if there were no effect due to their physical length. Our results are also relevant to the use of chaining for the creation of logical qubits. They show that even with very strong interactions along the chain, significant unwanted perturbations may occur due to the inhomogeneous fidelities of the overall dynamics, and inhomogeneous dynamics should be expected for any given algorithm.
(This article belongs to the Section Quantum Technologies)

18 pages, 2407 KB  
Article
IFDA: Intermittent Fault Diagnosis Algorithm for Augmented Cubes Under the PMC Model
by Chongwen Yuan, Chenghao Zou, Jiong Wu, Hao Feng and Jie Li
Appl. Sci. 2025, 15(15), 8197; https://doi.org/10.3390/app15158197 - 23 Jul 2025
Cited by 1 | Viewed by 639
Abstract
Fault diagnosis technology is a crucial technique for ensuring the reliability of multiprocessor systems. Many previous studies have paid close attention to permanent faults while overlooking the rise of intermittent faults, and a rapid diagnostic algorithm tailored to intermittent faults has been lacking. In this paper, we propose several theorems for evaluating the intermittent fault diagnosability of different topologies under the PMC model. Through these theorems, we demonstrate that the intermittent fault diagnosability of an n-dimensional augmented cube (AQ_n) is 2n − 2 when n ≥ 4. Furthermore, we present a fast intermittent fault diagnosis algorithm, named IFDA, to identify processors with intermittent faults in such networks. Finally, we evaluate the performance of the algorithm in terms of accuracy and precision. The simulation results show that IFDA achieves good performance and efficiency.
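For readers unfamiliar with the PMC model, its testing rule is simple to state in code (a minimal sketch with an illustrative 3-node system; this is not the IFDA algorithm itself):

```python
import random

def pmc_syndrome(adjacency, faulty, rng):
    """PMC testing rule: processor u tests neighbour v. A fault-free tester
    reports v's true status (0 = pass, 1 = fail); a faulty tester's report
    is unreliable, modelled here as a random bit."""
    syndrome = {}
    for u, neighbours in adjacency.items():
        for v in neighbours:
            if u in faulty:
                syndrome[(u, v)] = rng.randint(0, 1)   # untrustworthy tester
            else:
                syndrome[(u, v)] = 1 if v in faulty else 0
    return syndrome

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # three mutually connected processors
syn = pmc_syndrome(adj, faulty={2}, rng=random.Random(0))

# The fault-free testers 0 and 1 flag processor 2 and clear each other.
print(syn[(0, 2)], syn[(1, 2)], syn[(0, 1)], syn[(1, 0)])  # 1 1 0 0
```

Diagnosis algorithms such as IFDA work backwards from such a syndrome to the fault set; intermittent faults make this harder because a faulty processor does not fail every test it receives.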

23 pages, 3558 KB  
Article
Research on High-Reliability Energy-Aware Scheduling Strategy for Heterogeneous Distributed Systems
by Ziyu Chen, Jing Wu, Lin Cheng and Tao Tao
Big Data Cogn. Comput. 2025, 9(6), 160; https://doi.org/10.3390/bdcc9060160 - 17 Jun 2025
Cited by 3 | Viewed by 2846
Abstract
With the demand for workflow processing driven by edge computing in the Internet of Things (IoT) and cloud computing growing at an exponential rate, task scheduling in heterogeneous distributed systems has become a key challenge for meeting real-time constraints in resource-constrained environments. Existing studies attempt to balance time constraints, energy efficiency, and system reliability in Dynamic Voltage and Frequency Scaling (DVFS) environments. This study proposes a two-stage collaborative optimization strategy that systematically addresses these multi-objective optimization challenges through an innovative algorithm design and theoretical analysis. First, based on a reliability-constrained model, we propose a topology-aware dynamic priority scheduling algorithm (EAWRS), which constructs a node priority function by incorporating in-degree/out-degree weighting factors and critical path analysis to enable multi-objective optimization. Second, to address the time-varying reliability characteristics introduced by DVFS, we propose a Fibonacci search-based dynamic frequency scaling algorithm (SEFFA), which effectively reduces energy consumption while ensuring task reliability, achieving sub-optimal processor energy adjustment. Together, EAWRS and SEFFA address the challenge of DAG-based dynamic scheduling in heterogeneous multi-core processor systems in IoT environments. Experimental evaluations at various scales show that, compared with three state-of-the-art scheduling algorithms, the proposed strategy reduces energy consumption by an average of 14.56% (up to 58.44% under high-reliability constraints) and shortens the makespan by 2.58–56.44% while strictly meeting reliability requirements.
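The core idea behind a search-based frequency-scaling step, shrinking a unimodal energy-versus-frequency interval, can be sketched as follows (a ternary search stands in for the paper's Fibonacci search, and the energy model is a toy assumption):

```python
def unimodal_search_min(f, lo, hi, iters=60):
    """Shrink a unimodal interval around the minimiser; the same
    interval-reduction idea as a Fibonacci or golden-section search."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2   # minimum lies in [lo, m2]
        else:
            lo = m1   # minimum lies in [m1, hi]
    return (lo + hi) / 2

# Toy DVFS energy model: dynamic energy grows ~ f^2 per unit of work,
# while leakage energy grows with execution time ~ 1/f.
alpha, beta = 1.0, 2.0
energy = lambda f: alpha * f ** 2 + beta / f

f_opt = unimodal_search_min(energy, 0.2, 2.0)
print(round(f_opt, 3))  # analytic optimum is (beta / (2*alpha)) ** (1/3) = 1.0
```

Under this model, neither the slowest nor the fastest frequency minimizes energy, which is why a bracketing search over the frequency range pays off; SEFFA additionally constrains the search by task reliability.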

20 pages, 4186 KB  
Article
Hash-Based Message Authentication Code with a Reverse Fuzzy Extractor for a CMOS Image Sensor
by Yuki Rogi, Manami Hagizaki, Tatsuya Oyama, Hiroaki Ogawa, Kota Yoshida, Takeshi Fujino and Shunsuke Okura
Electronics 2025, 14(10), 1971; https://doi.org/10.3390/electronics14101971 - 12 May 2025
Cited by 1 | Viewed by 1081
Abstract
The MIPI (Mobile Industry Processor Interface) Alliance provides a security framework for in-vehicle network connections between sensors and processing electronic control units (ECUs). One approach within this framework is data integrity verification for sensors with limited hardware resources. In this paper, the security risks associated with image sensor data are described. Adversarial examples (AEs) targeting the MIPI interface can induce misclassification, making image data integrity verification essential. A CMOS image sensor with a message authentication code (CIS-MAC) is then proposed as a defense mechanism anchored in the image sensor itself, protecting image data from malicious manipulations such as AE attacks. Evaluation results for the physically unclonable function (PUF) response and the random numbers used to generate the cryptographic key and MAC tag are presented for a 2 Mpixel CMOS image sensor. The area of the CIS-MAC circuit is estimated based on a Verilog HDL design synthesized in a 0.18 μm CMOS process. Various hash topologies are evaluated to select a hash function suitable for key generation and MAC tag generation within the CMOS image sensor. The estimated area of the CIS-MAC circuit is 67 kGE (0.86 mm²), demonstrating feasibility for implementation in a CMOS image sensor typically fabricated using analog process technology.
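The tag-and-verify flow in such a scheme follows standard HMAC; a sketch using Python's standard library (the random key stands in for the PUF-derived key, and the frame bytes are placeholders for real sensor data):

```python
import hashlib
import hmac
import secrets

# Sensor side: tag each frame with a key that the real device would
# reconstruct from its PUF response via the reverse fuzzy extractor.
key = secrets.token_bytes(16)      # stand-in for the PUF-derived key
frame = bytes(range(256)) * 16     # stand-in for image data

tag = hmac.new(key, frame, hashlib.sha256).digest()

# ECU side: recompute the tag and compare in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, frame, hashlib.sha256).digest())

# A single flipped bit (e.g., an adversarial perturbation) fails verification.
tampered = bytearray(frame)
tampered[0] ^= 1
bad = hmac.compare_digest(tag, hmac.new(key, bytes(tampered), hashlib.sha256).digest())

print(ok, bad)  # True False
```

The constant-time comparison matters on the ECU side: a naive byte-by-byte equality check can leak how many tag bytes matched through timing.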
(This article belongs to the Section Networks)

16 pages, 5336 KB  
Article
A Control Strategy for Suppressing Zero-Crossing Current of Single-Phase Half-Bridge Active Neutral-Point-Clamped Three-Level Inverter
by Gi-Young Lee, Chul-Min Kim, Jungho Han and Jong-Soo Kim
Electronics 2024, 13(19), 3929; https://doi.org/10.3390/electronics13193929 - 4 Oct 2024
Cited by 1 | Viewed by 2408
Abstract
Multi-level inverters are well suited to high-voltage, high-power applications through various topology configurations. They reduce harmonic distortion and improve the quality of the output waveform by generating a multi-level output voltage waveform. In particular, the active neutral-point-clamped topology is one of the multi-level inverters advantageous for high-power, medium-voltage applications. It controls the output waveform more precisely by actively clamping the neutral point using an active switch and diode. However, an unwanted zero-crossing current may occur if an inaccurate switching signal is applied at the moment the polarity of the output voltage changes. In this paper, a control strategy to suppress the zero-crossing current of a single-phase half-bridge three-level active neutral-point-clamped inverter is proposed. The operating principle of the inverter is identified through an operation mode analysis. In addition, how the switching signal is applied in an actual digital signal processor is analyzed to determine the conditions under which the zero-crossing current occurs. Based on this analysis, a control strategy capable of suppressing the zero-crossing current is designed. The proposed method prevents the zero-crossing current by appropriately modifying the update timing of the reference voltages at the point where the output polarity changes. The validity of the proposed method is verified through simulation and experiments. With the proposed method, the total harmonic distortion of the output current is significantly reduced from 12.15% to 4.59% under full load.
(This article belongs to the Special Issue Feature Papers in Circuit and Signal Processing)

22 pages, 1199 KB  
Article
LSTM Gate Disclosure as an Embedded AI Methodology for Wearable Fall-Detection Sensors
by Sérgio D. Correia, Pedro M. Roque and João P. Matos-Carvalho
Symmetry 2024, 16(10), 1296; https://doi.org/10.3390/sym16101296 - 2 Oct 2024
Cited by 4 | Viewed by 2020
Abstract
In this paper, the concept of symmetry is used to design the efficient inference of a fall-detection algorithm for elderly people on embedded processors: that is, there is a symmetric relation between the model’s structure and its memory footprint on the embedded processor. Artificial intelligence (AI), and more particularly Long Short-Term Memory (LSTM) neural networks, is commonly used to detect falls in the elderly population from acceleration measurements. Nevertheless, the customarily large size of such networks poses a hurdle for the embedded systems found in wearable devices and wireless sensor networks. Because of this, the most popular implementations of these algorithms rely on edge or cloud computing, which raises privacy concerns and presents challenges, since large amounts of data must be sent over a communication channel. The current work proposes a memory occupancy model for LSTM-type networks to pave the way to more efficient embedded implementations. It also offers a sensitivity analysis of the network hyper-parameters through a grid-search procedure to refine the LSTM topology under scrutiny. Lastly, it proposes a new methodology that acts on the quantization granularity for embedded AI implementation on wearable devices. Extensive simulation results demonstrate the effectiveness and feasibility of the proposed methodology. For the embedded implementation of the LSTM for the fall-detection problem on a wearable platform, the results show that an STM8L low-power processor can support a 40-hidden-cell LSTM network with an accuracy of 96.52%.
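The memory-occupancy reasoning can be sketched from the standard LSTM parameter count (a sketch; the 3-axis accelerometer input dimension is an assumption for illustration, not a figure from the paper):

```python
def lstm_params(hidden: int, inputs: int) -> int:
    """Parameter count of one LSTM layer: four gates, each with input
    weights (hidden*inputs), recurrent weights (hidden*hidden), and biases."""
    return 4 * (hidden * inputs + hidden * hidden + hidden)

def weight_bytes(hidden: int, inputs: int, bits_per_weight: int) -> int:
    """Weight memory at a given quantization granularity."""
    return lstm_params(hidden, inputs) * bits_per_weight // 8

# A 40-hidden-cell LSTM on 3-axis accelerometer samples.
print(lstm_params(40, 3))         # 7040 parameters
print(weight_bytes(40, 3, 8))     # 7040 bytes with 8-bit quantization
print(weight_bytes(40, 3, 32))    # 28160 bytes with float32 weights
```

The recurrent term dominates (it grows with the square of the hidden size), which is why both the hidden-cell count and the quantization granularity are the levers that decide whether a network fits a processor as small as an STM8L.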
(This article belongs to the Section Computer)
