Search Results (65)

Search Parameters:
Keywords = very large scale integration (VLSI)

9 pages, 1772 KB  
Proceeding Paper
Design and Performance Analysis of Double-Gate TFETs Using High-k Dielectrics and Silicon Thickness Scaling for Low-Power Applications
by Pallabi Pahari, Sushanta Kumar Mohapatra, Jitendra Kumar Das and Om Prakash Acharya
Eng. Proc. 2026, 124(1), 38; https://doi.org/10.3390/engproc2026124038 - 19 Feb 2026
Viewed by 515
Abstract
Tunnel Field-Effect Transistors (TFETs) are being explored for ultra-low-power very-large-scale integration (VLSI) circuits because their band-to-band tunnelling (BTBT) transport permits subthreshold swings (SS) below the 60 mV/dec thermionic limit at room temperature, along with significantly lower leakage than MOSFETs. This paper presents a systematic TCAD study of DG-TFETs that maps how four primary design knobs (gate dielectric material, silicon channel thickness, temperature, and channel material) shape key figures of merit: the ON current (ION), OFF current (IOFF), threshold voltage (VTH), SS, and the ION/IOFF switching ratio. A high-k gate dielectric enhances gate-to-channel coupling and boosts tunnelling efficiency; aggressive body scaling improves electrostatic control; and targeted source-proximal doping profiles elevate ION while minimizing leakage. We also quantify the trade-offs among ION, SS, and IOFF that arise when these parameters are scaled simultaneously, showing that coordinated co-design is needed rather than tuning any single parameter. The work is simulation-based: the physical models are calibrated to experimental TFET data, and all parameters are checked against previously reported results. The device reaches SS = 31.4 mV/dec, VTH = 0.46 V, ION = 5.91 × 10−5 A, and an ION/IOFF of about 4.5 × 1011, indicating fast switching with minimal leakage. The resulting design insights offer practical guidance on choosing gate dielectric materials, device structures, and doping strategies for integrating DG-TFETs into next-generation low-power semiconductor technologies. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
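As a quick sanity check, the OFF current implied by the abstract's reported numbers can be computed directly. The bias points used below to illustrate the subthreshold-swing definition are hypothetical, chosen only to reproduce the reported 31.4 mV/dec; they are not taken from the paper.

```python
import math

# Figures of merit reported in the abstract
I_on = 5.91e-5          # A
on_off_ratio = 4.5e11   # ION/IOFF

# Implied OFF current: sub-femtoampere leakage
I_off = I_on / on_off_ratio

def subthreshold_swing(vg, i_d):
    """SS = dVG / d(log10 ID): gate-voltage change per decade of drain
    current, averaged between two bias points (V/dec)."""
    return (vg[1] - vg[0]) / (math.log10(i_d[1]) - math.log10(i_d[0]))

# Hypothetical bias points consistent with the reported 31.4 mV/dec
ss = subthreshold_swing([0.100, 0.1314], [1e-12, 1e-11])
print(f"IOFF = {I_off:.2e} A, SS = {ss * 1e3:.1f} mV/dec")
```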

16 pages, 2861 KB  
Article
Parametric Model Order Reduction for Large-Scale Circuit Models Using Extended and Asymmetric Extended Krylov Subspace
by Chrysostomos Chatzigeorgiou, Pavlos Stoikos, George Floros, Nestor Evmorfopoulos and George Stamoulis
Electronics 2026, 15(3), 640; https://doi.org/10.3390/electronics15030640 - 2 Feb 2026
Viewed by 498
Abstract
The increasing complexity of modern Very Large-Scale Integration (VLSI) circuits, combined with unavoidable variations in physical and manufacturing parameters, poses significant challenges for accurate and efficient circuit simulation. Parametric model order reduction (PMOR) provides a viable solution by enabling the construction of compact reduced-order models that remain valid across a prescribed parameter space. However, the computational cost of generating such models can become prohibitive for large-scale circuits, particularly when high-fidelity projection subspaces are required. In this work, we present an efficient PMOR framework based on the Asymmetric Extended Krylov Subspace (AEKS). The proposed approach exploits structural sparsity imbalances between system matrices to guide the subspace expansion toward computationally favorable directions, thereby significantly reducing the cost of repeated linear system solves. By integrating AEKS within a concatenation-of-basis PMOR strategy, this method enables the rapid construction of accurate parametric reduced-order models for large-scale circuit systems. The proposed AEKS-PMOR framework is evaluated on industrial power distribution network benchmarks, where it demonstrates substantial reductions in model construction time compared to conventional EKS-based PMOR, while maintaining high approximation accuracy over the entire parameter space. Full article
(This article belongs to the Special Issue Modern Circuits and Systems Technologies (MOCAST 2024))
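The extended Krylov subspace at the heart of this approach enlarges the standard Krylov space with inverse powers of the system matrix, so the projection basis captures behavior at both ends of the spectrum. A minimal numpy sketch of that construction, with a dense inverse standing in for the sparse factorization a real flow would compute once; function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extended_krylov_basis(A, b, m):
    """Orthonormal basis for span{b, A^-1 b, A b, A^-2 b, A^2 b, ...}
    with up to 2m+1 directions, via modified Gram-Schmidt."""
    Ainv = np.linalg.inv(A)          # stand-in for factoring A once
    V = [b / np.linalg.norm(b)]
    v_fwd = v_bwd = b
    for _ in range(m):
        v_bwd = Ainv @ v_bwd         # next inverse-power direction
        v_fwd = A @ v_fwd            # next forward-power direction
        for w in (v_bwd, v_fwd):
            w = w.copy()
            for q in V:              # orthogonalize against current basis
                w -= (q @ w) * q
            n = np.linalg.norm(w)
            if n > 1e-10:            # skip nearly dependent directions
                V.append(w / n)
    return np.column_stack(V)
```

A reduced-order model then follows by projection, e.g. `Ar = V.T @ A @ V`; the concatenation-of-basis PMOR strategy the abstract describes stacks such bases built at several parameter samples.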

23 pages, 2630 KB  
Article
RMLP-Cap: An End-to-End Parasitic Capacitance Extraction Flow Based on ResMLP
by Xinya Zhou, Jiacheng Zhang, Bin Li, Wenchao Liu, Zhaohui Wu and Bing Lu
Electronics 2026, 15(1), 36; https://doi.org/10.3390/electronics15010036 - 22 Dec 2025
Viewed by 569
Abstract
With continued transistor scaling and increasing interconnect density in very large-scale integration (VLSI) circuits, the parasitic capacitance of interconnect has become a major contributor to circuit delay and signal integrity degradation. Fast and accurate parasitic capacitance extraction is therefore essential in the back-end-of-line (BEOL) stage. Commercial tools currently rely on a 2.5D parasitic capacitance extraction flow based on pattern matching, which still suffers from lengthy pattern library construction, cross-section preprocessing, pattern mismatch, and poor accuracy for small capacitances. To overcome these limitations, this work proposes an end-to-end parasitic capacitance extraction workflow, named residual multilayer perceptron interconnect parasitic capacitance extraction (RMLP-Cap), which leverages a residual multilayer perceptron (ResMLP) to enhance the traditional workflow. RMLP-Cap integrates parasitic extraction (PEX) window acquisition, pattern definition, feature extraction, dataset generation, ResMLP model training, and capacitance aggregation into a unified flow. Experimental results show that RMLP-Cap can automatically define and model complex 2D patterns with 100% matching accuracy. Compared with a field solver based on the boundary element method (BEM), the ResMLP model achieves an average relative error below 0.9%, a standard deviation under 0.2%, and less than 0.5% error for small capacitances, while providing a 900% improvement in extraction speed. Full article
(This article belongs to the Section Microelectronics)
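The core building block, a residual MLP, adds the input back to the output of a small two-layer perceptron, which stabilizes the training of deep regression models such as a capacitance predictor. A sketch under assumed shapes and activation (the abstract does not give the paper's exact architecture), together with the relative-error metric used to judge accuracy against the field solver:

```python
import numpy as np

def resmlp_block(x, W1, b1, W2, b2):
    """One residual MLP block: y = x + MLP(x), where the MLP is
    affine -> ReLU -> affine and the skip connection carries x through."""
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer with ReLU
    return x + h @ W2 + b2             # residual (skip) connection

def relative_error(pred, ref):
    """Per-capacitance relative error, as compared against the BEM solver."""
    return np.abs(pred - ref) / np.abs(ref)
```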

24 pages, 4540 KB  
Review
From Field Effect Transistors to Spin Qubits: Focus on Group IV Materials, Architectures and Fabrications
by Nikolay Petkov and Giorgos Fagas
Nanomaterials 2025, 15(22), 1737; https://doi.org/10.3390/nano15221737 - 17 Nov 2025
Viewed by 1587
Abstract
In this review, we focus on group IV one-dimensional devices for quantum technology. We outline the foundational principles of quantum computing before delving separately into materials, architectures, and fabrication routes, comparing bottom-up and top-down approaches. We show that, owing to easily tunable composition, high crystal/interface quality, and relatively less demanding fabrication, grown nanowires such as core–shell Ge-Si and Ge hut wires have created a very fruitful field for studying unique and foundational quantum phenomena. We discuss in detail how these advancements have laid the foundations for and furthered the realization of single-electron transistors (SETs) and qubit devices with their specific operational characteristics. Top-down processed devices, mainly Si fin/nanowire field-effect transistor (FET) architectures, have shown their potential for scaling up the number of qubits while providing routes to very large-scale integration (VLSI) and co-integration with conventional CMOS. In all cases we compare the fin/nanowire qubit architectures to closely related approaches such as planar (2D) or III–V qubit platforms, aiming to highlight the cutting-edge benefits of group IV one-dimensional morphologies for quantum computing. A further aim is to provide an informative pedagogical perspective on common fabrication challenges and on the links between common FET device processing and qubit device architectures. Full article
(This article belongs to the Special Issue Semiconductor Nanowires and Devices)

18 pages, 348 KB  
Article
LLM Agents as Catalysts for Resilient DFT: An Orchestration-Based Framework Beyond Brittle Scripts
by Hailong Li, Yun Wang, Jian Liu and Haiyang Liu
Appl. Sci. 2025, 15(21), 11390; https://doi.org/10.3390/app152111390 - 24 Oct 2025
Cited by 1 | Viewed by 2271
Abstract
As the complexity of Very-Large-Scale Integration (VLSI) circuits escalates, Design-for-Test (DFT) faces significant challenges. Traditional script-based automation flows are increasingly complex and present a high technical barrier for non-specialists. To overcome this issue, this paper introduces DFTAgent, a novel framework that leverages Large Language Models to intelligently orchestrate a DFT toolchain. DFTAgent is evaluated on the ISCAS’85, ISCAS’89, and ITC’99 benchmarks. The results demonstrate that DFTAgent successfully completes the full ATPG task cycle, achieving fault coverage comparable to a manually scripted baseline while exhibiting significant advantages in flexibility and error handling. By abstracting complex DFT tools behind a natural language interface and a visual workflow, this approach promises to democratize access to advanced VLSI testing methodologies and accelerate design cycles. Full article

30 pages, 9797 KB  
Article
Transient Performance Improvement for Sustainability and Robustness Coverage in Hybrid Battery Management System ASIC Integration for Solar Energy Conversion
by Mihnea-Antoniu Covaci, Ramona Voichița Gălătuș and Lorant Andras Szolga
Technologies 2025, 13(10), 430; https://doi.org/10.3390/technologies13100430 - 24 Sep 2025
Cited by 1 | Viewed by 609
Abstract
Adverse climate events have recently highlighted an increasing need to deploy sustainable energy infrastructures. Existing electric conversion circuits for solar energy provide high efficiency; however, gaps in sustainability and robustness can be identified by considering their operation during intense perturbations, such as those potentially occurring during interplanetary energy transfer. Additionally, charging characteristics for energy storage units influence the operating life of battery arrays differently, with increased stability providing favorable operating conditions. The present study therefore develops an alternative controller for managing solar energy as well as a prototype for tracking the maximum power point, both constrained by robustness and renewability studies. Stability analyses and simulations validated the management of electric energy from solar panels, and the developed configuration improved the current-peak integral transient characteristics through an alternative control method, demonstrating stability for an indefinite number of energy storage units. Furthermore, estimates indicate that a VLSI (Very-Large-Scale Integration) implementation of this constrained design could provide adequate performance, comparable to state-of-the-art computational circuits. However, certain limitations could arise when substituting the main computation parts with the analyzed solutions and proceeding to integration-based manufacturing. Full article

33 pages, 7399 KB  
Article
A DMA Engine for On-Board Real-Time Imaging Processing of Spaceborne SAR Based on a Dedicated Instruction Set
by Ao Zhang, Zhu Yang, Yongrui Li, Ming Xu and Yizhuang Xie
Electronics 2025, 14(16), 3209; https://doi.org/10.3390/electronics14163209 - 13 Aug 2025
Cited by 2 | Viewed by 1278
Abstract
With advancements in remote sensing technology and very-large-scale integration (VLSI) circuit technology, the Earth observation capabilities of spaceborne synthetic aperture radar (SAR) have continuously improved, leading to significantly increased performance demands for on-board SAR real-time imaging processors. Currently, the low data access efficiency of traditional direct memory access (DMA) engines remains a critical technical bottleneck limiting the real-time processing performance of SAR imaging systems. To address this limitation, this paper proposes a dedicated instruction set for spaceborne SAR data transfer control, leveraging the memory access characteristics of DDR4 SDRAM and common data read/write address jump patterns during on-board SAR real-time imaging processing. This instruction set can significantly reduce the number of instructions required in DMA engine data access operations and optimize data access logic patterns. While effectively reducing memory resource usage, it also substantially enhances the data access efficiency of DMA engines. Based on the proposed dedicated instruction set, we designed a DMA engine optimized for efficient data access in on-board SAR real-time imaging processing scenarios. Module-level performance tests were conducted on this engine, and full-process imaging experiments were performed using an FPGA-based SAR imaging system. Experimental results demonstrate that, under spaceborne SAR imaging processing conditions, the proposed DMA engine achieves a receive data bandwidth of 2.385 GB/s and a transmit data bandwidth of 2.649 GB/s at a 200 MHz clock frequency, indicating excellent memory access bandwidth and efficiency. Furthermore, tests show that the complete SAR imaging system incorporating this DMA engine processes a 16 k × 16 k SAR image using the Chirp Scaling (CS) algorithm in 1.2325 s, representing a significant improvement in timeliness compared to existing solutions. Full article
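The reported bandwidth figures can be translated into bytes moved per clock cycle, a useful way to gauge how hard the engine drives the DDR4 interface. The 8-byte complex-sample size below is an illustrative assumption, not stated in the abstract:

```python
# Reported DMA throughput at a 200 MHz engine clock
clock_hz = 200e6
rx_bps = 2.385e9    # receive bandwidth, bytes/s (GB taken as 10^9 bytes)
tx_bps = 2.649e9    # transmit bandwidth, bytes/s

rx_per_cycle = rx_bps / clock_hz   # bytes moved per cycle on receive
tx_per_cycle = tx_bps / clock_hz   # bytes moved per cycle on transmit

# One streaming pass over a 16k x 16k image of (assumed) 8-byte samples
image_bytes = 16384 * 16384 * 8
print(f"{rx_per_cycle:.3f} B/cycle rx, {image_bytes / rx_bps:.2f} s per pass")
```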

14 pages, 2827 KB  
Article
Very-Large-Scale Integration (VLSI) Implementation and Performance Comparison of Multiplier Topologies for Fixed- and Floating-Point Numbers
by Abimael Jiménez and Antonio Muñoz
Appl. Sci. 2025, 15(9), 4621; https://doi.org/10.3390/app15094621 - 22 Apr 2025
Cited by 3 | Viewed by 3188
Abstract
Multiplication is an arithmetic operation with a significant impact on the performance of many real-life applications such as digital signal processing, image processing, and machine learning. For portable devices, electronic system designers' main concern is energy optimization with minimal speed and area penalties. In this work, a very-large-scale integration (VLSI) design and delay/area performance comparison of array, Wallace tree, and radix-4 Booth multipliers was performed. The study employs different word lengths, with an emphasis on the design of floating-point multipliers. All multiplier circuits were designed and synthesized using Alliance open-source tools in a 350 nm process technology under the minimum-delay constraint. The findings indicate that the array multiplier has the highest delay and area at all multiplier sizes. The Wallace multiplier exhibited the lowest delay in the mantissa multiplication of single-precision floating-point numbers, although no significant difference was observed for double-precision floating-point multipliers, and it uses the lowest area in both single- and double-precision floating-point multipliers. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

30 pages, 7685 KB  
Review
Recent Developments of Advanced Broadband Photodetectors Based on 2D Materials
by Yan Tian, Hao Liu, Jing Li, Baodan Liu and Fei Liu
Nanomaterials 2025, 15(6), 431; https://doi.org/10.3390/nano15060431 - 11 Mar 2025
Cited by 16 | Viewed by 5479
Abstract
With the rapid development of high-speed imaging, aerospace, and telecommunications, high-performance photodetectors covering a broadband spectrum are urgently demanded. Due to their abundant surface configurations and exceptional electronic properties, two-dimensional (2D) materials are considered ideal candidates for broadband photodetection. However, achieving broadband photodetectors with both high responsivity and fast response time remains challenging. This review is organized as follows. The Introduction covers the fundamental properties and broadband photodetection performance of transition metal dichalcogenides (TMDCs), perovskites, topological insulators, graphene, and black phosphorus (BP), providing an in-depth analysis of their unique optoelectronic properties and probing the intrinsic physical mechanisms of broadband detection. The section on 2D material-based broadband photodetectors presents innovative strategies for expanding the detection wavelength range and enhancing overall performance; among them, chemical doping, defect engineering, heterostructure construction, and strain engineering prove the most effective. The last section addresses the challenges and future prospects of 2D material-based broadband photodetectors. To meet the practical requirements of very large-scale integration (VLSI) applications, operational reliability, production cost, and compatibility with planar technology deserve particular attention. Full article

19 pages, 2058 KB  
Article
A Compact Device Model for a Piezoelectric Nano-Transistor
by L. Neil McCartney, Louise E. Crocker, Louise Wright and Ivan Rungger
Micromachines 2025, 16(2), 114; https://doi.org/10.3390/mi16020114 - 21 Jan 2025
Cited by 2 | Viewed by 1295
Abstract
An approximate compact model was developed to provide a convenient method of exploring the initial design space when investigating the performance of micro-electronic devices such as nano-scaled piezoelectronic transistors, where fast ball-park estimates can be very helpful. First, the compact model was verified by comparing its predictions with those of accurate axisymmetric finite element analysis (FEA) using special boundary and interface conditions that enable replication of the analytical model behaviour. Verification is achieved for a radio frequency (RF) switch and a smaller very-large-scale integration (VLSI) device, where percentage differences between the compact and FEA model predictions are of the order of 10−4 for the RF switch and 10−5 for the VLSI device. This confirms the consistency of the complex property data (especially electro-thermo-elastic constants) and geometrical parameters input to both types of models, and convincingly demonstrates that the analytical models and FEA for the two devices have been implemented correctly. A second type of boundary and interface condition is also used, designed to replicate the actual behaviour of the devices in practice: the constraints applied for the verification procedure are relaxed so that there is perfect interface bonding between layers. For this unconstrained case, the resulting deformation is very complex, involving both bending effects and edge effects arising from property mismatches between neighbouring layers. The results for the RF switch show surprisingly good agreement between the analytical and FEA predictions, provided the piezoelectric layer is not too thick, implying that the analytical model should help to reduce the parameter design space for such devices. However, for the VLSI device, our results indicate that the compact model leads to much larger errors.
For such systems, the compact model is unlikely to be able to reliably reduce the parameter design space, implying that accurate FEA will then need to be used. Full article
(This article belongs to the Special Issue Piezoelectric Devices and System in Micromachines)

37 pages, 64440 KB  
Review
Stochastic Computing Architectures: Modeling, Optimization, and Applications
by Lin Wang, Zhongqiang Luo and Li Gao
Symmetry 2024, 16(12), 1701; https://doi.org/10.3390/sym16121701 - 21 Dec 2024
Cited by 1 | Viewed by 4856
Abstract
With the rapid development of artificial intelligence (AI), the design and implementation of very large-scale integration (VLSI) circuits based on traditional binary computation face challenges of high complexity, high computational demand, and high power consumption. Moore's-law scaling has reached its physical limits, and there is an urgent need to explore new computing architectures that make up for the shortcomings of traditional binary computing. To address these problems, Stochastic Computing (SC) is an unconventional paradigm that converts binary numbers into coded streams of digital pulses, representing values as bit probabilities; it has a remarkable symmetry with binary computation. SC uses logic gate circuits in the probabilistic domain to implement complex arithmetic operations at the expense of computational accuracy and time, offering low power consumption, low logic resource usage, and a small circuit area. This paper analyzes the basic concepts and development history of SC and neural networks (NNs), summarizes worldwide progress in combining SC with NNs, and discusses development trends, future challenges, and prospects for SC and NNs. Through this systematic summary, the paper provides new learning ideas and research directions for developing AI chips. Full article
(This article belongs to the Section Computer)
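The pulse-stream encoding makes some arithmetic remarkably cheap: with values encoded as bit probabilities, a single AND gate multiplies two independent unipolar streams. A minimal sketch of that core SC idea (stream length and seed are arbitrary choices):

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as an n-bit unipolar stochastic stream
    whose probability of a 1 equals p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=10_000, seed=42):
    """Multiply with one AND gate per bit pair: for independent streams,
    P(x AND y) = P(x) * P(y). Accuracy grows with stream length n."""
    rng = random.Random(seed)
    sa, sb = to_stream(a, n, rng), to_stream(b, n, rng)
    return sum(x & y for x, y in zip(sa, sb)) / n

print(sc_multiply(0.5, 0.8))  # close to 0.4, up to stochastic noise
```

This is the trade the abstract describes: a multiplier collapses to one gate, at the cost of accuracy (shrinking with stream length) and computation time (one clock per bit).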

37 pages, 1450 KB  
Article
FPGA-Based Design of a Ready-to-Use and Configurable Soft IP Core for Frame Blocking Time-Sampled Digital Speech Signals
by Nettimi Satya Sai Srinivas, Nagarajan Sugan, Lakshmi Sutha Kumar, Malaya Kumar Nath and Aniruddha Kanhe
Electronics 2024, 13(21), 4180; https://doi.org/10.3390/electronics13214180 - 24 Oct 2024
Viewed by 3111
Abstract
‘Frame blocking’ or ‘Framing’ is a technique that divides a time-sampled speech or audio signal into consecutive and equi-sized short-time frames, either overlapped or non-overlapped, for analysis. The framing hardware architectures (FHA) in the literature support framing speech or audio samples of specific word size with specific frame size and frame overlap size. However, speech and audio applications often require framing signal samples of varied word sizes with varied frame sizes and frame overlap sizes. Therefore, the existing FHAs must be redesigned appropriately to keep up with the variability in word size, frame size and frame overlap size, as demanded across multiple applications. Redesigning the existing FHAs for each specific application is laborious, prompting the need for a configurable intellectual property (IP) core. The existing FHAs are inappropriate for creating configurable IP cores as they lack adaptability to accommodate variability in frame size and frame overlap size. Therefore, to address these issues, a novel FHA, adaptable to accommodate the desired variability, is proposed. Furthermore, the proposed FHA is transformed into a field-programmable gate array-based soft, ready-to-use and configurable frame blocking IP core using the Xilinx® Vivado tool. The resulting IP core is versatile, offering configurability for framing in numerous applications incorporating real-time digital speech and audio systems. This research article discusses the proposed FHA and frame blocking IP core in detail. Full article
(This article belongs to the Special Issue Recent Advances in Signal Processing and Applications)
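The frame-blocking operation itself is simple to state in software terms, which clarifies the parameters the IP core makes configurable: frame size, overlap size, and the resulting hop. A reference sketch of the arithmetic only, not the proposed hardware architecture:

```python
import numpy as np

def frame_blocking(signal, frame_size, overlap):
    """Split a 1-D signal into equal-sized frames with the given overlap
    (in samples); trailing samples that don't fill a frame are dropped."""
    hop = frame_size - overlap
    n_frames = 1 + (len(signal) - frame_size) // hop
    return np.stack([signal[i * hop : i * hop + frame_size]
                     for i in range(n_frames)])

x = np.arange(10)
print(frame_blocking(x, frame_size=4, overlap=2))  # 4 frames of 4, hop = 2
```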

16 pages, 2868 KB  
Article
Mitigating Thermal Side-Channel Vulnerabilities in FPGA-Based SiP Systems Through Advanced Thermal Management and Security Integration Using Thermal Digital Twin (TDT) Technology
by Amrou Zyad Benelhaouare, Idir Mellal, Maroua Oumlaz and Ahmed Lakhssassi
Electronics 2024, 13(21), 4176; https://doi.org/10.3390/electronics13214176 - 24 Oct 2024
Cited by 7 | Viewed by 32998
Abstract
Side-channel attacks (SCAs) are powerful techniques used to recover keys from electronic devices by exploiting various physical leakages, such as power, timing, and heat. Although heat is one of the less frequently analyzed channels due to the high noise associated with thermal traces, it poses a significant and growing threat to the security of very large-scale integration (VLSI) microsystems, particularly system-in-package (SiP) technologies. Thermal side-channel attacks (TSCAs) exploit temperature variations, risking not only hardware damage from excessive heat dissipation but also the extraction of sensitive data, such as cryptographic keys, through observation of thermal patterns. This dual threat underscores the need for a synergistic approach to thermal management and security in designing integrated microsystems. In response, this paper presents a novel approach that improves the early detection of abnormal thermal fluctuations in SiP designs, preventing cybercriminals from exploiting such anomalies to extract sensitive information. Our approach employs a new concept called the Thermal Digital Twin (TDT), which integrates two previously separate methods. The gradient direction sensor scan (GDSSCAN) captures thermal data from the physical field-programmable gate array (FPGA), guaranteeing a rapid thermal scan with a measurement period close to 10 μs, a resolution of 0.5 °C, and a temperature range from −40 °C to 140 °C. The data are then transmitted in real time to a Digital Twin created in COMSOL Multiphysics® 6.0 for simulation using the Finite Element Method (FEM); the CPU time required for these calculations can extend to several seconds or minutes. This integration allows for a detailed analysis of thermal transfer within the SiP model of our FPGA. Implementation and simulations demonstrate that the TDT approach can substantially reduce the risks associated with TSCAs, thereby enhancing the security of FPGA systems against thermal threats. Full article

11 pages, 2266 KB  
Article
Gamification for Teaching Integrated Circuit Processing in an Introductory VLSI Design Course
by Ángel Diéguez, Joan Canals, Sergio Moreno and Anna Vilà
Educ. Sci. 2024, 14(8), 921; https://doi.org/10.3390/educsci14080921 - 22 Aug 2024
Cited by 3 | Viewed by 2863
Abstract
Gamification is being incorporated into university classrooms due to its educational benefits, including encouraging positive student behavior and engagement and consequently improving learning outcomes. Despite gamification being increasingly used in education, little has been developed for Very-Large-Scale Integration (VLSI). In this article, we describe two different gamification experiences applied to integrated circuit processing and design in an introductory VLSI design course for electronic engineers. While gamification in universities is still maturing and our experience spans only two academic years, we observed that the topics treated in the games were learned in depth and that the experience was very positive in every aspect of the teaching–learning process. Full article
(This article belongs to the Section Higher Education)

25 pages, 10247 KB  
Article
Development of Power-Delay Product Optimized ASIC-Based Computational Unit for Medical Image Compression
by Tanya Mendez, Tejasvi Parupudi, Vishnumurthy Kedlaya K and Subramanya G. Nayak
Technologies 2024, 12(8), 121; https://doi.org/10.3390/technologies12080121 - 29 Jul 2024
Cited by 9 | Viewed by 4313
Abstract
The proliferation of battery-operated end-user electronic devices, especially in medical image processing applications, demands low power consumption, high-speed operation, and efficient coding. The design of these devices is centered on Application-Specific Integrated Circuit (ASIC), General-Purpose Processor (GPP), and Field-Programmable Gate Array (FPGA) frameworks. The need for low-power functional blocks arises from the growing demand for high-performance computational units in high-speed processors operating at high clock frequencies; the computational unit is the workhorse that determines a processor's operational speed. This research integrates Very-Large-Scale Integration (VLSI) ASIC design with low-power VLSI concepts compatible with medical image compression. Its focus is the design, development, and implementation of a Power-Delay Product (PDP)-optimized computational unit for medical image compression using the ASIC design flow, with area, delay, power, PDP, and Peak Signal-to-Noise Ratio (PSNR) as performance metrics. The work explores the trade-offs of high-performance adder and multiplier designs to build an ASIC-based computational unit using low-power techniques that improve both power and delay. The computational unit used for the digital image compression process was synthesized and implemented in gpdk 45 nm standard libraries with Cadence's Genus tool. A 46.87% reduction in PDP was observed when compressing a medical image, along with a 5.89% improvement in PSNR for the reconstructed image. Full article
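The optimization target, the power-delay product, is simply the energy spent per operation, so a percentage reduction in PDP translates directly into energy savings. The baseline numbers below are hypothetical, chosen only to illustrate what the reported 46.87% reduction means:

```python
def pdp(power_w, delay_s):
    """Power-Delay Product: average power x critical-path delay,
    i.e., energy per operation in joules."""
    return power_w * delay_s

# Hypothetical baseline unit: 1 mW at a 2 ns critical path -> 2 pJ/op
baseline = pdp(power_w=1.0e-3, delay_s=2.0e-9)
optimized = baseline * (1 - 0.4687)   # the reported 46.87% PDP reduction
print(f"{baseline * 1e12:.2f} pJ -> {optimized * 1e12:.4f} pJ per operation")
```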
