Article

A Variation-Aware Binary Neural Network Framework for Process Resilient In-Memory Computations

by Minh-Son Le, Thi-Nhan Pham, Thanh-Dat Nguyen and Ik-Joon Chang *
Department of Electronic Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3847; https://doi.org/10.3390/electronics13193847
Submission received: 12 August 2024 / Revised: 24 September 2024 / Accepted: 26 September 2024 / Published: 28 September 2024
(This article belongs to the Special Issue Research on Key Technologies for Hardware Acceleration)

Abstract

Binary neural networks (BNNs), which use 1-bit weights and activations, have garnered interest because such extreme quantization yields low power dissipation. By implementing BNNs as computation-in-memory (CIM), which performs multiplications and accumulations on memory arrays in an analog fashion (analog CIM), we can further improve the energy efficiency of neural network processing. However, analog CIMs are susceptible to process variation, i.e., manufacturing variability that causes fluctuations in the electrical properties of transistors, which significantly degrades BNN accuracy. Our Monte Carlo simulations demonstrate that in an SRAM-based analog CIM implementing the VGG-9 BNN model, the classification accuracy on the CIFAR-10 image dataset degrades to below 50% under the process variation of a 28 nm FD-SOI technology. To overcome this problem, we present a variation-aware BNN framework. The proposed framework is developed for SRAM-based BNN CIMs, since SRAM is the most widely used on-chip memory; however, it is easily extensible to BNN CIMs based on other memories. Our extensive experimental results demonstrate that, under the process variation of 28 nm FD-SOI with an SRAM array size of 128×128, our framework significantly enhances classification accuracies on both the MNIST hand-written digit dataset and the CIFAR-10 image dataset. Specifically, accuracy improves from 60.24% to 92.33% for the CONVNET BNN model on MNIST, and from 45.23% to 78.22% for the VGG-9 BNN model on CIFAR-10.
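To make the analog-CIM operation described above concrete, the following minimal NumPy sketch (not the paper's implementation) binarizes weights and activations to ±1, computes the dot product that one SRAM column would accumulate, and injects Gaussian noise as a stand-in for process variation. The column length of 128 mirrors the 128×128 array size mentioned above; the noise scale `sigma` is an illustrative assumption, not a figure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    # Map real values to {-1, +1}, the 1-bit representation used by BNNs.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def cim_binary_mac(w, a, sigma=0.0):
    # Ideal BNN MAC: sum of +/-1 products, as one SRAM column accumulates it.
    ideal = float(np.dot(binarize(w), binarize(a)))
    # Process variation perturbs the accumulated analog quantity; modeled
    # here as zero-mean Gaussian noise whose spread grows with column length.
    return ideal + rng.normal(0.0, sigma * np.sqrt(len(w)))

w = rng.standard_normal(128)  # one weight column of a 128x128 array
a = rng.standard_normal(128)  # input activations
print(cim_binary_mac(w, a, sigma=0.0))  # noiseless (ideal digital) result
print(cim_binary_mac(w, a, sigma=0.3))  # with variation-like analog noise
```

Repeating the noisy call over many random draws gives a simple Monte Carlo picture of how such perturbations flip the sign of near-zero accumulations, which is the mechanism behind the accuracy degradation the paper reports.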
Keywords: BNN; deep neural network; computation in memory; in-memory computations; SRAM; polar neural network
