Article

Three-Dimensional (3D) Vertical Resistive Random-Access Memory (VRRAM) Synapses for Neural Network Systems

by Wookyung Sun, Sujin Choi, Bokyung Kim and Junhee Park
1 Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea
2 Medical Research Institute, Ewha Womans University, Seoul 03760, Korea
* Authors to whom correspondence should be addressed.
Materials 2019, 12(20), 3451; https://doi.org/10.3390/ma12203451
Submission received: 11 September 2019 / Revised: 16 October 2019 / Accepted: 18 October 2019 / Published: 22 October 2019

Abstract

Memristor devices are well suited for use as synapses in neuromorphic systems because they can be integrated into crossbar array circuits with high area efficiency. In a two-dimensional (2D) crossbar array, however, the array size grows with both the depth of the neural network and the number of its input and output nodes, so a 2D crossbar array is not suitable for a deep neural network. Synapses built from memristors with a three-dimensional (3D) structure, on the other hand, are suitable for implementing a neuromorphic chip for a multi-layered neural network. In this study, we propose, for the first time, an optimization method for machine-learning weight updates that takes the structural characteristics of a 3D vertical resistive random-access memory (VRRAM) into account. The proposed synapse operating principle of the 3D VRRAM structure also reduces the complexity of the neuron circuit. This study investigates the operating principle of 3D VRRAM synapses with comb-shaped word lines and demonstrates that the proposed 3D VRRAM structure is a promising solution for a high-density neural network hardware system.

1. Introduction

In recent years, neuromorphic computing has emerged as a complementary system to the von Neumann architecture. Much of the research on neural network hardware implementation addresses how to connect large numbers of neurons and synapses. As a consequence, various memory devices such as static random-access memory, resistive random-access memory (RRAM), floating-gate (FG) memory, and phase-change memory have been implemented as synapse models in neural network hardware systems [1,2,3,4].
The most popular device-level component chosen to implement synapses is the “memory resistor”, or memristor, because its resistance is a function of its past activity. Moreover, energy efficiency is a key challenge in neuromorphic computing, and RRAM is attractive for large-scale system demonstrations because of its relatively low energy consumption compared with other synaptic devices [5]. The most common arrangement is a two-dimensional (2D) memristor crossbar in which each synapse comprises multiple memristors, since a single memristor cannot represent both the positive and negative weights of a synapse. However, 2D crossbar array synapses are not suitable for implementing deep neural networks (DNNs) because the chip area depends on both the depth of the neural network and the number of input and output nodes.
The three-dimensional (3D) vertical resistive random-access memory (VRRAM) promises to minimize the area of a resistive memory. It can be categorized into two types based on its word line (WL) structure [6]: a 3D VRRAM with a WL plane structure uses metal planes as WL electrodes, while a 3D VRRAM with a WL even/odd structure has comb-shaped WLs separated by etching. The even/odd structure is more promising for the VRRAM architecture than the WL plane structure because it provides two cells per WL layer, doubling the bit density at comparable performance [7,8]. Therefore, if a 3D VRRAM is used for the synapses instead of a 2D crossbar array, as shown in Figure 1, the chip area of a DNN system can be reduced effectively. Several recent works have evaluated synaptic RRAM using 3D VRRAM. A high-density 3D synaptic architecture based on Ta/TaOx/TiO2/Ti RRAM has been proposed as neuromorphic computation hardware, with its analog synaptic plasticity simulated using physical and compact models [9]. The potential of the VRRAM concept for various neuromorphic applications has also been investigated with one synapse emulated by one VRRAM pillar [10]. However, many of these studies focused on experimental demonstration at the single-RRAM-cell level, and neuromorphic applications were presented only as a concept. Some previous studies have addressed 3D VRRAM with a WL plane structure: a four-layer 3D RRAM integrated with FinFETs (Fin Field-Effect Transistors) was developed for brain-inspired computing and in-memory computing [11], and a 3D vertical RRAM array was proposed for storing and computing the large-scale weight matrices of a neural network [12]. A 3D VRRAM with comb-shaped WLs, however, enables a more efficient synaptic RRAM architecture because of its doubled cell density. Although research on 3D VRRAM with comb-shaped WLs has been published, it focused on RRAM device variation and explored the concept of many devices connected to one pillar operating as one synapse to overcome that variation [13]. Implementing a single synapse with multiple devices reduces the benefits of using 3D VRRAM. Moreover, these previous studies did not evaluate the circuit-level properties of 3D VRRAM with comb-shaped WLs, and theoretical investigation of the relationship between synaptic weight change and memory device resistance in 3D VRRAM remains insufficient.
In this study, we propose a new optimization method for machine learning weight changes that considers the structural characteristics of 3D VRRAM. We investigate the operating principle of 3D VRRAM synapses with comb-shaped WLs and demonstrate that this structure is a promising synaptic model for neural network systems. The remainder of this paper is organized as follows: Section 2 describes a new 3D VRRAM crossbar array synapse, incorporating a synaptic memristor model and the learning operations of a guide training algorithm [14,15]. In Section 3, the accuracy of a neural network with 3D VRRAM synapses is evaluated by classifying 7 × 7 alphabet letter images using HSPICE circuit simulation. The conclusions are presented in Section 4.

2. Materials and Methods

2.1. A Neural Network Learning Method Using a 3D VRRAM Synapse

A neural network system design with 3D VRRAM synapses is shown in Figure 1. We evaluated the accuracy of the proposed 3D VRRAM synapse circuit by classifying 7 × 7 images representing alphabet letters, as shown in Figure 2. Figure 1b shows a neural network consisting of 49 input neurons and 26 output neurons designed to classify the input letter images into 26 classes, as shown in Figure 1a. For the letter ‘S’, the nodes or neurons that generate the output spike are represented in gray, and the weights increased in the learning process are indicated by red lines. The most common memristor application in neuromorphic systems is as the synapses in a 2D crossbar array, as shown in Figure 1c. The weight of one synapse is represented by the conductance difference between two memristors because a single memristor cannot have both positive and negative weight values [2]. For example, neuron 1 compares the total current of “positive out 1”, shown by the red line in Figure 1c, with that of “negative out 1”. If the “positive out 1” current is greater than the “negative out 1” current, neuron 1 spikes, which means the output of neuron 1 is ‘1’. In contrast, when the “negative out 1” current is greater than the “positive out 1” current, the output of neuron 1 is ‘0’. The learning architecture for this implementation is constructed as a 49 × 52 2D memristor crossbar array.
If a 3D VRRAM is used for the synapses, however, the chip area efficiency can be increased. Figure 1d shows a 3D VRRAM synapse structure with the same performance as Figure 1c. The ‘red’ and ‘blue’ word lines in Figure 1d represent the “positive” and “negative” outputs, respectively. Therefore, only the area for 26 vertical pillars is needed to implement the 26 classes, in contrast to the 52 column lines required in the 2D crossbar array. Moreover, the pillar structure of the 3D VRRAM simplifies the neuron circuits because no circuit is needed to compare the positive and negative currents.
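To make this readout contrast concrete, the following Python sketch compares the two schemes for the 49-input/26-output network; the conductance values, the 1 V read voltage, and the simple linear current sums are illustrative assumptions rather than the HSPICE circuit itself.

```python
import numpy as np

n_in, n_out = 49, 26
rng = np.random.default_rng(0)
G_pos = rng.uniform(1e-6, 1e-4, (n_in, n_out))   # conductances toward "positive" outputs (S)
G_neg = rng.uniform(1e-6, 1e-4, (n_in, n_out))   # conductances toward "negative" outputs (S)
image = rng.integers(0, 2, n_in).astype(float)   # binary 7 x 7 input image (1 = black pixel)
V = 1.0                                          # read voltage (assumed)

# 2D crossbar (Figure 1c): separate positive and negative columns, compared by the neuron
I_pos = (V * image) @ G_pos                      # total current of each "positive out" column
I_neg = (V * image) @ G_neg                      # total current of each "negative out" column
spikes_2d = (I_pos > I_neg).astype(int)          # neuron outputs '1' when the positive current wins

# 3D VRRAM (Figure 1d): +V on the red (positive) WLs, -V on the blue (negative) WLs,
# so each pillar current already carries the difference and no comparator circuit is needed
I_pillar = (V * image) @ G_pos - (V * image) @ G_neg
spikes_3d = (I_pillar > 0).astype(int)
assert np.array_equal(spikes_2d, spikes_3d)      # both readouts give the same decision
```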
A “guide training” algorithm was used to verify the accuracy and performance of the 3D VRRAM synapses in the HSPICE simulation [14,15]. This is a modified reinforcement learning algorithm that is well suited to hardware implementation because it does not include backpropagation. The algorithm has previously been applied to image classification with a 2D crossbar memristor synaptic circuit, where it achieved a high learning success rate. The initial synaptic weights were randomized before each new training event. A single data set of 26 images (Figure 2), one for each letter of the alphabet, was defined as one epoch. After training, testing was performed by classifying 20 test image sets consisting of original or inverted-pixel images, as shown in Figure 3. For example, the noise 0% test set consisted of 520 original images, and the noise 4% test set consisted of 520 images with two randomly selected pixels inverted.
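As a rough illustration of how the noisy test sets described above can be organized, the sketch below builds 520-image test sets by inverting a fixed number of randomly chosen pixels per image (2 of 49 pixels corresponds to roughly 4% noise). The letter bitmaps are placeholders, since the actual 7 × 7 patterns come from Figure 2.

```python
import numpy as np

rng = np.random.default_rng(1)
letters = rng.integers(0, 2, (26, 49))            # placeholder 7 x 7 bitmaps, one per letter

def make_test_set(noise_pixels, n_sets=20):
    """Return n_sets copies of the 26 letters, each with `noise_pixels` random pixels inverted."""
    images, labels = [], []
    for _ in range(n_sets):
        for label, img in enumerate(letters):
            noisy = img.copy()
            if noise_pixels > 0:
                flip = rng.choice(49, size=noise_pixels, replace=False)
                noisy[flip] = 1 - noisy[flip]     # invert the selected pixels
            images.append(noisy)
            labels.append(label)
    return np.array(images), np.array(labels)     # 20 sets x 26 letters = 520 test images

test_noise0, labels0 = make_test_set(0)           # noise 0%: 520 original images
test_noise4, labels4 = make_test_set(2)           # noise 4%: two inverted pixels per image
```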

2.2. 3D VRRAM Synapse Operation Mechanism

In this work, we simulated the full 3D VRRAM structure shown in Figure 1d; however, describing the behavior of the full structure directly would be very complex. We therefore explain the operation of the 3D VRRAM using the simple structure shown in Figure 4.
Figure 4a,b shows a simple two-pixel image used to illustrate the weight change in a 3D VRRAM synapse configured as shown in Figure 4c. To categorize an image, a spike should be generated at the output neuron corresponding to the input image. This means that a spike should occur at the Out1 neuron when Figure 4a is the input, and at the Out2 neuron when Figure 4b is the input. For the 3D VRRAM to operate as a synapse circuit, its Out1 current must be larger than its Out2 current when Figure 4a is the input image; conversely, if Figure 4b is the input image, the Out2 current should be larger than the Out1 current.
The 3D VRRAM in Figure 4c has a total of eight memristors between its pillars (Out1 and Out2) and the odd word lines (positive word lines P1, P2) or even word lines (negative word lines N1, N2). The number of word-line pairs corresponds to the number of pixels. The memristor is a two-terminal device, so the “P1-Out1” memristor located between the P1 word line and the Out1 pillar (vertical bit line) is controlled by the biases on P1 and Out1. Reducing the resistance of a memristor connected to a positive word line increases the pillar current, whereas the memristors connected to the negative word lines subtract current from the pillar, so increasing their resistance also increases the net pillar current.
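A small numerical illustration of this point, with assumed HRS/LRS values and all four word lines of one pillar driven at ±1 V, shows how lowering a positive-WL resistance or raising a negative-WL resistance both increase the net pillar current:

```python
HRS, LRS = 1e6, 1e4      # assumed high- and low-resistance states (ohms)
V = 1.0                  # read voltage magnitude (V)

def pillar_current(r_pos, r_neg):
    """Net current into one pillar: +V across positive-WL devices, -V across negative-WL devices."""
    return sum(V / r for r in r_pos) - sum(V / r for r in r_neg)

initial  = pillar_current([HRS, HRS], [LRS, LRS])   # P1-Out1, P2-Out1 vs N1-Out1, N2-Out1
set_p1   = pillar_current([LRS, HRS], [LRS, LRS])   # P1-Out1 switched to LRS -> current rises
reset_n1 = pillar_current([HRS, HRS], [HRS, LRS])   # N1-Out1 switched to HRS -> current also rises
print(initial, set_p1, reset_n1)
```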
There are various memristor models for circuit simulation [16,17,18,19,20]. We used the generalized memristor model [16,17], coded in Verilog-A for the HSPICE circuit simulator. Figure 4d shows the nonlinear I-V characteristic and Figure 4e the linearly modulated potentiation behavior of an experimentally measured Ta2O5 memristor device [21]; the simulation results obtained with our model are qualitatively consistent with these experiments.
The memristor current is modeled by the hyperbolic sine function, as shown in Equation (1) [15,16]. Conductance is proportional to state variable x(t), which has a value between 0 and 1.
I(t) = \begin{cases} a_1\, x(t)\, \sinh\big(b\, V(t)\big), & V(t) \ge 0 \\ a_2\, x(t)\, \sinh\big(b\, V(t)\big), & V(t) < 0 \end{cases}  (1)
The change in the state variable over time is based on two different functions, g(V(t)) and f(x(t)):
\frac{dx}{dt} = g(V(t))\, f(x(t))  (2)

g(V(t)) = \begin{cases} A_p \left(e^{V(t)} - e^{V_p}\right), & V(t) > V_p \\ -A_n \left(e^{-V(t)} - e^{V_n}\right), & V(t) < -V_n \\ 0, & -V_n \le V(t) \le V_p \end{cases}  (3)

f(x(t)) = \begin{cases} e^{-\alpha_p (x - x_p)}\, w_p(x, x_p), & x \ge x_p \\ 1, & x < x_p \end{cases} \quad \text{for } V(t) > 0  (4)

f(x(t)) = \begin{cases} e^{\alpha_n (x + x_n - 1)}\, w_n(x, x_n), & x \le 1 - x_n \\ 1, & x > 1 - x_n \end{cases} \quad \text{for } V(t) < 0  (5)

w_p(x, x_p) = \frac{x_p - x}{1 - x_p} + 1  (6)

w_n(x, x_n) = \frac{x}{1 - x_n}  (7)
where g(V(t)) models the programming threshold of the memristor and f(x(t)) limits the motion of the state variable through x_p and x_n. The window functions w_p and w_n keep the state variable within the range between 0 and 1. The model parameters used in this study are listed in Table 1.
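The paper implements this generalized memristor model [16,17] in Verilog-A for HSPICE; the following Python sketch transcribes Equations (1)-(7) with the Table 1 parameters so the model behavior can be inspected outside the circuit simulator. The explicit Euler time stepping is an assumption of this sketch, not part of the original implementation.

```python
import numpy as np

# Parameters from Table 1
a1, a2, b = 1e-5, 1e-5, 2.1
Vp, Vn = 1.0, 1.0              # programming thresholds (V); the negative threshold is -Vn
Ap, An = 3e6, 1e7              # state-change magnitudes
xp, xn = 0.2, 0.25             # points where the window functions begin to act
alpha_p, alpha_n = 7.0, 6.0
x0 = 0.3                       # initial value of the state variable (xo in Table 1)

def current(V, x):
    """Equation (1): hyperbolic-sine I-V with state-dependent amplitude."""
    return (a1 if V >= 0 else a2) * x * np.sinh(b * V)

def g(V):
    """Equation (3): threshold function that drives the state change."""
    if V > Vp:
        return Ap * (np.exp(V) - np.exp(Vp))
    if V < -Vn:
        return -An * (np.exp(-V) - np.exp(Vn))
    return 0.0

def f(x, potentiating):
    """Equations (4)-(7): window limiting the motion of the state variable."""
    if potentiating:
        return np.exp(-alpha_p * (x - xp)) * ((xp - x) / (1 - xp) + 1) if x >= xp else 1.0
    return np.exp(alpha_n * (x + xn - 1)) * (x / (1 - xn)) if x <= 1 - xn else 1.0

def step(x, V, dt):
    """Equation (2) integrated with an explicit Euler step (an assumed integrator choice)."""
    gv = g(V)
    return float(np.clip(x + gv * f(x, gv > 0) * dt, 0.0, 1.0))
```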
The memristor switches from a high-resistance state (HRS) to a low-resistance state (LRS) when subjected to a voltage higher than the set voltage (1.2 V), and from an LRS to an HRS when subjected to a voltage lower than the reset voltage (−1.2 V). The weight of a synapse, i.e., the resistance of each memristor, may change during the network’s learning process but must remain unchanged during the test process. To find a proper training voltage (Vtraining) and test voltage (Vtest), the change in resistance was simulated by applying various voltages to a single memristor device. The voltage was swept from 0.5 V to 1.5 V and from −0.5 V to −1.5 V in 0.25 V steps. The unit pulse width is 10 ns with rising and falling edge times of 0.5 ns, and the line resistance of a vertical pillar is 3 Ω/cell for 20 nm class technology [8]. As shown in Figure 5a, of the five applied positive voltages, the resistance changes only at 1.25 V and 1.5 V, because only voltages greater than the set voltage (Vset) reduce the resistance. Similarly, Figure 5b shows that the resistance changes only at voltages lower than the reset voltage (Vreset). Therefore, we set Vtraining = ±1.5 V and Vtest = ±1 V, taking the voltage drop in the crossbar array into account.
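To illustrate how Figure 5-style data can be obtained, the sketch below (reusing the model functions from the previous block) applies five pulses at each positive sweep voltage and reads back the resistance after each pulse. The 0.2 V read voltage, the rectangular pulse shape (edges neglected), and the omission of line resistance are simplifications of this sketch.

```python
# Reuses current(), step() and x0 from the model sketch above.
def resistance_after_pulses(V_pulse, n_pulses=5, x=x0, width=10e-9, dt=0.1e-9, V_read=0.2):
    """Apply n_pulses rectangular pulses of amplitude V_pulse; report the read resistance after each."""
    readings = []
    for _ in range(n_pulses):
        t = 0.0
        while t < width:                                   # 10 ns rectangular pulse
            x = step(x, V_pulse, dt)
            t += dt
        readings.append(V_read / current(V_read, x))       # small-signal read (V_read is assumed)
    return readings

for V_pulse in (0.5, 0.75, 1.0, 1.25, 1.5):
    print(V_pulse, resistance_after_pulses(V_pulse))       # resistance drops only above Vset
```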
The sequence of 3D VRRAM synapse learning is as follows. Figure 6 shows the circuit diagram of Figure 4c. If the input image is Figure 4a or Figure 4b, a spike should be generated at the Out1 or Out2 neuron, respectively. In this study, we adopted the “winner-take-all” method to determine which neuron spikes; a spike in Out1 therefore means that the current flowing into this neuron is the largest among the output neuron currents. Referring to Figure 6, the current of the Out1 neuron should be larger than that of Out2 when the input image is Figure 4a. In the guide training method, only the black-pixel data are used for learning: only the weights of the synapses, i.e., the resistances of the memristors, connected to the black pixels are changed [15].
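The winner-take-all decision itself reduces to an argmax over the output-neuron currents, as in this minimal sketch (the currents would come from the HSPICE simulation, not from this code):

```python
import numpy as np

def winner_take_all(i_out):
    """Return a one-hot spike vector: only the neuron carrying the largest current fires."""
    spikes = np.zeros(len(i_out), dtype=int)
    spikes[int(np.argmax(i_out))] = 1
    return spikes

print(winner_take_all([2.0e-4, 1.5e-4]))   # Out1 carries the larger current -> [1, 0]
```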
Memristors connected to word lines P1 and P2 act as positive memristors that increase the synaptic weight. Increasing a synaptic weight means reducing the memristor resistance, so Vtraining = 1.5 V is applied to P1 and P2 to increase the current flowing into Out1. In contrast, the memristors connected to N1 and N2 are negative memristors that reduce the current of the Out1 neuron, and Vtraining = −1.5 V is applied to increase their resistance. The number of positive and negative word line pairs matches the number of pixels; for example, P1 and N1 correspond to pixel 1 of the input image.
An example of training for Figure 4a is illustrated in Figure 6a,b. The goal of the learning is to strengthen the synapses connected to the black pixels, i.e., to lower the resistance of their positive memristors, thereby increasing the Out1 line current. In principle, all memristor devices connected to the Out1 line (P1-Out1, P2-Out1, N1-Out1, N2-Out1), drawn as black lines in Figure 6, affect the generation of a spike when Figure 4a is the input image. However, since only pixel 1 is black in Figure 4a, only the resistances of P1-Out1 and N1-Out1 are changed to increase the Out1 current, as shown in Figure 6a,b. In other words, the Out1 current becomes larger than the Out2 current only when pixel 1 is black. Therefore, the resistance of “P1-Out1” should be reduced and that of “N1-Out1” increased to generate a spike on the Out1 neuron, i.e., to increase the Out1 current.
The key requirement of the 3D vertical synapse learning process is that only the memristors of the black pixels change during learning, while all other memristors remain unchanged. To change a weight, a voltage greater than Vset is applied across the two electrodes of the P memristor, and a voltage less than Vreset is applied across its complementary N memristor. Figure 6a,b illustrates the training of the positive and negative memristors for the Figure 4a image and the Out1 neuron. During Out1 neuron training, Out2 remains at 0.75 V, while 0 V is applied to Out1 during positive memristor training and Vtraining (1.5 V) during negative memristor training. Vtraining (1.5 V) and 0 V are applied to the positive and negative word lines, respectively, that correspond to the black pixels of the input image. The other four memristors (P1-Out2, P2-Out2, N1-Out2, N2-Out2), drawn as red lines in Figure 6, generate a spike on the Out2 neuron when the input is Figure 4b; Figure 6c,d shows their training procedure for Figure 4b, which proceeds in the same way as the training for the Out1 neuron.
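As a bias-table sketch of the training phases just described: black-pixel positive WLs see +1.5 V across the device in the first phase, black-pixel negative WLs see −1.5 V in the second, and non-target pillars stay at the 0.75 V half-select level. The voltage on word lines that do not belong to black pixels is not fully specified in the text and is assumed here to sit at the same 0.75 V half-select level.

```python
V_TRAIN, V_HALF = 1.5, 0.75      # training voltage and half-select level (V)

def training_biases(black_pixels, target, n_pixels, n_out):
    """Word-line and pillar voltages for the two sequential phases that train one output neuron."""
    phases = []
    for phase in ("positive", "negative"):
        wl_pos = [V_HALF] * n_pixels                 # positive word lines P1..Pn (assumed half-select)
        wl_neg = [V_HALF] * n_pixels                 # negative word lines N1..Nn (assumed half-select)
        pillars = [V_HALF] * n_out                   # unselected pillars stay at 0.75 V
        pillars[target] = 0.0 if phase == "positive" else V_TRAIN
        for p in black_pixels:
            if phase == "positive":
                wl_pos[p] = V_TRAIN                  # device P-Out sees +1.5 V (> Vset): resistance falls
            else:
                wl_neg[p] = 0.0                      # device N-Out sees -1.5 V (< Vreset): resistance rises
        phases.append((phase, wl_pos, wl_neg, pillars))
    return phases

# Training the Out1 neuron (index 0) for the Figure 4a image (only pixel 1 black)
for name, wl_p, wl_n, out in training_biases(black_pixels=[0], target=0, n_pixels=2, n_out=2):
    print(name, wl_p, wl_n, out)
```

With these biases, every half-selected device sees only ±0.75 V, which is below both Vset and |Vreset|, so only the intended black-pixel memristors are programmed.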
The pillar of the 3D VRRAM connected to the Out1 neuron is shared by the positive and negative memristor training, so the two processes must be performed sequentially. The bias conditions for training and testing over time are shown in Figure 7, where “Pos. for Out1” and “Neg. for Out1” denote the voltages that change the resistance of the positive and negative memristors, respectively. Since there are two pixels in Figure 4a,b, training occurs in a total of four sequences in Figure 7, and the number of training sequences increases in proportion to the number of pixels in the input image. The number of output neurons determines the number of test sequences; for the input images of Figure 4a,b, two test sequences are needed in this learning simulation.
Figure 8 shows the voltages in the simplified circuit diagrams of the 3D vertical synapses during the testing procedure. Unlike in the learning process, the synaptic weights (i.e., the memristor resistances) must not change during testing. Therefore, the test voltages are set to 1 V for the positive memristors and −1 V for the negative memristors, both smaller in magnitude than the set and reset voltages. During learning, the voltage across a memristor is determined by the difference between the voltage applied to the positive or negative word line and the voltage applied to the output line. During testing, however, the output line is held at 0 V, so the current is determined only by the voltage applied to the word line: 1 V and −1 V are applied to the positive and negative word lines, respectively, that correspond to the black pixels. When a voltage pattern corresponding to Figure 4a is applied to the word lines, the current of the Out1 neuron therefore becomes larger than that of the Out2 neuron, reflecting the memristor resistances changed during the training process.
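Putting the test bias scheme together, the readout reduces to the linear current sum sketched below; the conductance matrices are idealized, and in the paper this readout is evaluated by HSPICE with line resistance included.

```python
import numpy as np

V_TEST = 1.0

def test_currents(image, G_pos, G_neg):
    """Pillar currents during test: output lines at 0 V, +1 V / -1 V on the WLs of black pixels."""
    return (V_TEST * image) @ G_pos - (V_TEST * image) @ G_neg   # one current per output pillar

def classify(image, G_pos, G_neg):
    """Winner-take-all classification: the pillar with the largest current determines the class."""
    return int(np.argmax(test_currents(image, G_pos, G_neg)))
```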

3. Results

To evaluate the accuracy of the proposed 3D VRRAM synapses, the guide training algorithm was tested by classifying the 7 × 7 alphabet letter images in an HSPICE simulation. The initial synaptic weights were randomized before the start of each new training event. A single data set of 26 images (Figure 2), one for each letter, was defined as one epoch. After training, testing was performed by classifying 20 test image sets consisting of original or inverted-pixel images. For example, the noise 0% test set consisted of 520 original images, and the noise 4% test set consisted of 520 images with two randomly selected pixels inverted.
To confirm that the resistances changed over the training epochs, we applied the “S” image to the input and observed the synaptic changes between the input neurons and the corresponding output neuron. Figure 9 shows the resistance change of the positive memristors as a function of the training epoch; there are 49 lines in the graph because the number of pixels (input neurons) is 49. The training process strengthens the synaptic weights of the input neurons associated with the black pixels among the 49 pixels, and strengthening a synaptic weight corresponds to a decrease in resistance. The memristors whose resistance is lowered by training are shown as red lines in Figure 9. In contrast to the positive memristors, the resistance of the negative memristors increases with the training epoch; in Figure 10, as in Figure 9, only the memristors whose resistance is increased by training are shown in red.

4. Discussion

To determine the appropriate number of training epochs, the learning accuracy was evaluated for 1 to 300 training epochs. Figure 11a shows the accuracy of pattern classification as a function of the number of training epochs. Only the original images were used in this test, and the accuracy increases with the number of training epochs; beyond 100 epochs, however, it is almost unchanged. We therefore set 100 epochs as the default for the neural network training simulation.
To verify how accurately pattern classification can be performed when noise is added to the input image, simulations were performed with an increasing number of inverted pixels, as shown in Figure 11b. As the noise in the input image increases, the accuracy of the pattern classification naturally decreases. The simulation results, however, maintain 80% accuracy up to an inverted-pixel percentage of 12%, which shows that 3D VRRAMs are usable as synapses in a neural network system. Using a 3D VRRAM as the synapse structure of a neural network can therefore greatly improve chip area utilization. In this study, we evaluated the accuracy of a neural network consisting only of input and output nodes, with no hidden layers. Extending the comb-shaped-WL 3D VRRAM synapse to networks with hidden layers is a subject for future work, in which we will demonstrate the effects of 3D VRRAM synapses through simulations in more diverse learning environments.

5. Conclusions

In this study, a 3D VRRAM structure was newly proposed as the synapse of a neural network system. Implementing synapses with 3D VRRAM can increase chip area efficiency and simplify the neuron circuits. This study investigated the operating principle of 3D VRRAM synapses with comb-shaped WLs and showed that this structure is promising for neural network systems. The accuracy of a neural network with 3D VRRAM synapses was measured by classifying 7 × 7 alphabet letter images in a circuit simulator. The guide training algorithm is well suited to hardware implementation because it does not include backpropagation; therefore, the guide training algorithm and the winner-take-all method were used to validate the classification accuracy of the 3D VRRAM synapses in an HSPICE simulation. The simulation results showed 80% accuracy up to an inverted-pixel percentage of 12%, which means that 3D VRRAMs are usable as synaptic circuits in neural network systems. A 3D vertical synapse based on an integrated 3D VRRAM structure will be a promising solution for a high-density neuromorphic chip.

Author Contributions

Conceptualization, W.S.; software, W.S., S.C., and B.K.; investigation, S.C., B.K., and J.P.; writing—original draft preparation, W.S.; writing—review and editing, W.S.; project administration, W.S.; funding acquisition, J.P.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant numbers NRF-2016R1A6A3A11931998 and 2019R1I1A1A01040652).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Akopyan, F.; Sawada, J.; Cassidy, A.; Alvarez-Icaza, R.; Arthur, J.; Merolla, P.; Imam, N.; Nakamura, Y.; Datta, P.; Nam, G.-J.; et al. TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2015, 34, 1537–1557.
2. Prezioso, M.; Merrikh-Bayat, F.; Hoskins, B.D.; Adam, G.C.; Likharev, K.K.; Strukov, D.B. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 2015, 521, 61–64.
3. Park, Y.; Kwon, H.; Kim, B.; Lee, W.; Wee, D.; Choi, H.; Park, B.; Lee, J.; Kim, Y. 3-D Stacked Synapse Array Based on Charge-Trap Flash Memory for Implementation of Deep Neural Networks. IEEE Trans. Electron Devices 2018, 66, 420–427.
4. Burr, G.W.; Shelby, R.M.; Nolfo, C.; Jang, J.W.; Shenoy, R.S.; Narayanan, P.; Virwani, K.; Giacometti, E.U.; Kurdi, B.; Hwang, H. Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element. In Proceedings of the IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 15–17 December 2014; pp. 697–700.
5. Yu, S.; Gao, B.; Fang, Z.; Yu, H.; Kang, J.; Wong, H. A Neuromorphic Visual System Using RRAM Synaptic Devices with Sub-pJ Energy and Tolerance to Variability: Experimental Characterization and Large-Scale Modeling. In Proceedings of the IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 10–12 December 2012; pp. 239–242.
6. Deng, Y.; Chen, H.-Y.; Gao, B.; Yu, S.; Wu, S.-C.; Zhao, L.; Chen, B.; Jiang, Z.; Liu, X.; Hou, T.-H.; et al. Design and Optimization Methodology for 3D RRAM Arrays. In Proceedings of the IEEE International Electron Devices Meeting (IEDM), Washington, DC, USA, 9–11 December 2013.
7. Choi, S.; Sun, W.; Shin, H. Analysis of Cell Variability Impact on a 3-D Vertical RRAM (VRRAM) Crossbar Array Using a Modified Lumping Method. IEEE Trans. Electron Devices 2019, 66, 759–765.
8. Choi, S.; Sun, W.; Shin, H. Analysis of Read Margin and Write Power Consumption of a 3-D Vertical RRAM (VRRAM) Crossbar Array. IEEE J. Electron Devices Soc. 2018, 6, 1192–1196.
9. Wang, I.; Lin, Y.; Wang, Y.; Hsu, C.; Hou, T. 3D Synaptic Architecture with Ultralow sub-10 fJ Energy per Spike for Neuromorphic Computation. In Proceedings of the IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 15–17 December 2014; pp. 665–768.
10. Piccolboni, G.; Molas, G.; Portal, J.M.; Coquand, R.; Bocquet, M.; Garbin, D.; Vianello, E.; Carabasse, C.; Delaye, V.; Pellissier, C.; et al. Investigation of the potentialities of Vertical Resistive RAM (VRRAM) for neuromorphic applications. In Proceedings of the IEEE International Electron Devices Meeting (IEDM), Washington, DC, USA, 7–9 December 2015; pp. 447–450.
11. Li, H.; Li, K.; Lin, C.; Hsu, J.; Chiu, W.; Chen, M.; Wu, T.; Sohn, J.; Eryilmaz, S.B.; Shieh, J.; et al. Four-Layer 3D Vertical RRAM Integrated with FinFET as a Versatile Computing Unit for Brain-Inspired Cognitive Information Processing. In Proceedings of the Symposium on VLSI Technology, Honolulu, HI, USA, 14–16 June 2016.
12. Li, Z.; Chen, P.; Xu, H.; Yu, S. Design of Ternary Neural Network With 3-D Vertical RRAM Array. IEEE Trans. Electron Devices 2017, 64, 2721–2727.
13. Gao, B.; Bi, Y.; Chen, H.; Liu, R.; Huang, P.; Chen, B.; Liu, L.; Liu, X.; Yu, S.; Wong, H.-S.P.; et al. Ultra-Low-Energy Three-Dimensional Oxide-Based Electronic Synapses for Implementation of Robust High-Accuracy Neuromorphic Computation Systems. ACS Nano 2014, 8, 6998–7004.
14. Jo, S.; Sun, W.; Kim, B.; Kim, S.; Park, J.; Shin, H. Memristor Neural Network Training with Clock Synchronous Neuromorphic System. Micromachines 2019, 10, 384.
15. Kim, B.; Jo, S.; Sun, W.; Shin, H. Analysis of the Memristor-Based Crossbar Synapse for Neuromorphic Systems. J. Nanosci. Nanotechnol. 2019, 19, 6703–6709.
16. Yakopcic, C.; Taha, T.M.; Subramanyam, G.; Pino, R.E.; Rogers, S. A Memristor Device Model. IEEE Electron Device Lett. 2011, 32, 1436–1438.
17. Yakopcic, C.; Taha, T.M.; Subramanyam, G.; Pino, R.E. Generalized Memristive Device SPICE Model and its Application in Circuit Design. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2013, 32, 1201–1214.
18. Yakopcic, C.; Taha, T.M.; Subramanyam, G.; Pino, R.E. Memristor SPICE model and crossbar simulation based on devices with nanosecond switching time. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013.
19. Amirsoleimani, A.; Shamsi, J.; Ahmadi, M.; Ahmadi, A.; Alirezaee, S.; Mohammadi, K.; Karami, M.A.; Yakopcic, C.; Kavehei, O.; Al-Sarawie, S. Accurate charge transport model for nanoionic memristive devices. Microelectron. J. 2017, 65, 49–57.
20. Pershin, Y.V.; Martinez-Rincon, J.; Di Ventra, M. Memory Circuit Elements: From Systems to Applications. J. Comput. Theor. Nanosci. 2011, 8, 441–448.
21. Woo, J.; Padovani, A.; Moon, K.; Kwak, M.; Larcher, L.; Hwang, H. Linking Conductive Filament Properties and Evolution to Synaptic Behavior of RRAM Devices for Neuromorphic Applications. IEEE Electron Device Lett. 2017, 38, 1220–1223.
Figure 1. (a) The input pattern for the letter ‘S’. (b) A neural network consisting of 49 input neurons and 26 output neurons (red lines = increased weights in the learning process). (c) Two-dimensional (2D) crossbar array synapses for implementing the neural network shown in (b). (d) 3D vertical resistive random-access memory (VRRAM) synapses with the same performance as the synapses in (c).
Figure 2. 7 × 7 original alphabet images.
Figure 3. 7 × 7 inverted pixel “A” image with noise from 4% to 20%.
Figure 4. A two-pixel image where (a) pixel 1 is black and Out1 is “1”; (b) pixel 2 is black and Out2 is “1”; (c) 3D VRRAM synapse for a two-pixel image; (d) nonlinear I-V characteristic; and (e) linearly modulated potentiation behaviors of the Ta2O5 memristor device [21].
Figure 5. Resistance change of a memristor according to the (a) positive voltage and (b) negative voltage applied.
Figure 6. Simplified circuit diagrams of the 3D vertical synapse during the training procedure showing the voltages applied to (a) P1-Out1; (b) N1-Out1; (c) P2-Out2; and (d) N2-Out2 memristors when training Figure 4a,b.
Figure 7. Input signal voltages at training and testing procedures.
Figure 8. Simplified circuit diagrams of 3D vertical synapse during the testing procedure showing the test voltages applied to (a) the Out1 neuron and (b) the Out2 neuron.
Figure 9. Resistance change of the positive memristors as a function of training epochs.
Figure 10. Resistance change of the negative memristors as a function of training epochs.
Figure 11. The accuracy of pattern classification after training according to (a) the number of training epochs and (b) the percentage of inverted pixels.
Table 1. Parameters used in the synapse guide model.

Symbol    Value         Symbol    Value
a1        1 × 10⁻⁵      An        1 × 10⁷
a2        1 × 10⁻⁵      xp        0.2
b         2.1           xn        0.25
Vp        1 (V)         αp        7
Vn        1 (V)         αn        6
Ap        3 × 10⁶       xo        0.3
