**1. Introduction**

NAND flash memories are widely used in smartphones, personal computers, data centers, etc. Thanks to two key technologies, (1) continued process scaling and (2) multilevel cell data coding (e.g., MLC, TLC), the storage density of NAND flash memory has increased significantly over the past decades [1]. However, these two technologies also degrade the reliability of the stored data [2–4]. The two major noise sources in flash memory are cell-to-cell interference (CCI) and retention noise, and numerous works have been proposed to mitigate them. For example, the data post-compensation and pre-distortion technique [5] and the detector design using neighbor a-priori information [6] exploit the a-priori information of the neighboring cells to mitigate CCI. However, under retention noise, the voltage offset of a flash memory cell becomes unknown, so it is hard to use the a-priori information of the neighboring cells to compensate for the voltage shift caused by CCI. The CCI removal technique proposed by Lin [7] suffers from a similar problem, as it ignores the impact of noise. Conversely, Reference [8] proposed a retention-aware belief-propagation (BP) decoding scheme to mitigate the retention noise effect but did not take CCI into consideration.

**Citation:** He, R.; Hu, H.; Xiong, C.; Han, G. Artificial Neural Network Assisted Error Correction for MLC NAND Flash Memory. *Micromachines* **2021**, *12*, 879. https://doi.org/10.3390/mi12080879

Academic Editors: Cristian Zambelli and Rino Micheloni

Received: 30 June 2021; Accepted: 20 July 2021; Published: 27 July 2021

Against the above background, recent advances in neural networks and machine learning provide a new perspective for increasing the reliability of MLC NAND flash memory. The key idea of a neural network is to learn an optimal network model from massive training data, instead of using a definitive algorithm derived from a pre-defined model [9]. A pioneering work is reported in [10,11], which utilizes an artificial neural network to predict the threshold voltage distribution of NAND flash memory. However, that method assumes the retention time is known in advance from pre-testing; once the flash controller is powered off, the retention time cannot be obtained.

In this paper, we use a neural network to learn a model that detects the bit errors in cells disturbed by both CCI and retention noise, and we propose a neural network-assisted error correction (ANNAEC) scheme. Because it is difficult to record the retention time in a practical system, accurate LLR values cannot be calculated. Therefore, we propose using the relative LLR to estimate the actual LLR. The relative LLR is affected little by retention time, so we do not require retention time as an input parameter of the neural network.

In this paper, we first model the threshold voltage distribution as a Gaussian mixture model, which closely matches the voltage distribution of practical NAND flash memory, and we calculate the LLR of the theoretical threshold distribution using a quantization scheme. Then, the corresponding LLR of the actual threshold distribution is mapped according to the relative position of the optimal read reference voltage. This keeps the relative LLR values relatively steady throughout the retention time, which allows us to avoid using retention time as an input parameter of the neural network. Finally, using the relative LLR to estimate the actual LLR, we train the neural network and use the trained network to recover the bits that may be wrongly detected in soft-decision or hard-decision detection.

The rest of this paper is organized as follows. The flash channel model is presented in Section 2. Section 3 introduces our proposed ANNAEC scheme. Numerical simulation results are presented in Section 4. The conclusions are drawn in Section 5.

#### **2. Channel Model**

Without loss of generality, the proposed ANNAEC is performed over a model-based MLC NAND flash memory. Based on [5,8,12], we can model threshold voltage, *Vth*, by

$$V_{th} = V + n_{RTN} + \Delta V_{CCI} - n_{retention}, \tag{1}$$

where $V$ denotes the desired voltage level, $n_{RTN}$ denotes random telegraph noise (RTN), $\Delta V_{CCI}$ denotes the threshold-voltage shift caused by CCI, and $n_{retention}$ denotes retention noise.

#### *2.1. The Voltage Distribution of Programmed and Erased Cell*

The number of charges in the NAND flash memory cell can be altered in the program and erase operation. It is well known that before being programmed, a flash memory cell must be erased. In the erase operation, the charges in the memory cell are removed from the floating gate, and the threshold voltage of the erased cell will be set to the lowest voltage. The threshold voltage distribution of an erased cell follows a Gaussian distribution, which is given by

$$p\_{\varepsilon}(\mathbf{x}) = \frac{1}{\sigma\_{\varepsilon}\sqrt{2\pi}}e^{-\frac{\left(x-\mu\_{\varepsilon}\right)^{2}}{2\sigma\_{\varepsilon}^{2}}} = \mathcal{N}(\mu\_{\varepsilon}, \sigma\_{\varepsilon}^{2}),\tag{2}$$

where *σ<sup>e</sup>* and *μ<sup>e</sup>* are the standard deviation and the mean of the threshold voltage of the erased cell, respectively.

According to [5,8], the threshold voltage of a programmed cell follows a Gaussian distribution shown below:

$$p_p(x) = \frac{1}{\sigma_p \sqrt{2\pi}} e^{-\frac{(x - \mu_p)^2}{2\sigma_p^2}} = \mathcal{N}(\mu_p, \sigma_p^2), \tag{3}$$

where $\sigma_p$ and $\mu_p \in \{\mu_{p01}, \mu_{p00}, \mu_{p10}\}$ are the standard deviation and the mean of the threshold voltage of a programmed cell, respectively.

#### *2.2. RTN*

The electron capture and emission at the floating gate near the interface generate RTN, which is greatly impacted by flash memory P/E cycles [13]. As P/E cycles increase, the tunnel oxide of the floating gate transistor is gradually damaged and generates charge trapping in the oxide and interface states. RTN leads to a random fluctuation of cell threshold voltage and widens the voltage distribution. Hence, RTN is modeled with a Gaussian-like distribution [8], given as

$$p\_{\mathbf{r}}(\mathbf{x}) = \frac{1}{\sigma\_r \sqrt{2\pi}} e^{-\frac{\mathbf{x}^2}{2\sigma\_r^2}} = \mathcal{N}(0, \sigma\_r^2), \tag{4}$$

where $\sigma_r = 0.00027 \times PE^{0.62}$ denotes the noise standard deviation and $PE$ is the number of program/erase (P/E) cycles.

#### *2.3. CCI*

Because of the parasitic capacitance-coupling effect among adjacent cells in flash memory, the threshold voltage of the victim cell increases as the threshold voltage of an adjacent cell increases. The immediate adjacent cells are the major noise source of the CCI. We consider an all bit-line structure. As shown in Figure 1, when the (*k*+1)-th wordline (WL) has been programmed, the cell on the *k*-th WL can be programmed. Hence, the victim cell is influenced by three immediate adjacent cells. The threshold-voltage shift of the victim cell can be modeled as a linear combination of the threshold voltage changes of those immediate adjacent cells. We can estimate the threshold-voltage shift caused by CCI as

$$\Delta V_{victim} = \sum_{n} \Delta V_t^{(n)} \cdot \gamma^{(n)}, \tag{5}$$

where $\Delta V_t^{(n)}$ is the threshold-voltage change of an immediate adjacent cell that is programmed after the victim cell, and $\gamma^{(n)}$ represents the coupling ratio. We assume the vertical and the diagonal coupling ratios are $\gamma_y$ and $\gamma_{xy}$, respectively. According to the cell-to-cell coupling strength factor $s$, we can set $\gamma_y = 0.08s$ and $\gamma_{xy} = 0.006s$ [12].

**Figure 1.** Illustration of the parasitic coupling capacitances among adjacent cells.
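As a concrete illustration, the linear combination in Equation (5) can be sketched as below; the function name and the example voltage changes are our own, and only the coupling ratios $\gamma_y = 0.08s$ and $\gamma_{xy} = 0.006s$ come from the text:

```python
def cci_shift(dv_neighbors, s=1.0):
    """Threshold-voltage shift of the victim cell per Equation (5).

    dv_neighbors: threshold-voltage changes of the three immediate adjacent
    cells programmed after the victim, ordered (diagonal, vertical, diagonal).
    """
    gamma_y = 0.08 * s    # vertical coupling ratio
    gamma_xy = 0.006 * s  # diagonal coupling ratio
    gammas = (gamma_xy, gamma_y, gamma_xy)
    return sum(dv * g for dv, g in zip(dv_neighbors, gammas))

# Example: vertical neighbor shifts by 2 V, each diagonal by 1 V, s = 1:
shift = cci_shift((1.0, 2.0, 1.0))  # 0.006 + 0.16 + 0.006 = 0.172
```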

#### *2.4. Retention*

After a cell is programmed, the number of charges in the NAND flash memory cell continually reduces over time due to trap-assisted tunneling and charge detrapping [1]. Retention noise is modeled as a Gaussian distribution, i.e., $p_t(x) = \mathcal{N}(\mu_t, \sigma_t^2) = \frac{1}{\sqrt{2\pi}\sigma_t} e^{-\frac{(x-\mu_t)^2}{2\sigma_t^2}}$. The mean $\mu_t$ and the standard deviation $\sigma_t$ are given by

$$\mu_t = \Delta V_t \big[A_t (PE)^{\alpha_i} + B_t (PE)^{\alpha_o}\big] \log(1+T), \tag{6}$$

$$\sigma_t = 0.3\,|\mu_t|, \tag{7}$$

where $\Delta V_t$ is the cell voltage change before and after programming, $T$ denotes the memory retention time and $PE$ is the number of P/E cycles.
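Equations (6) and (7) can be sketched directly; we assume the natural logarithm and use the parameter values $A_t$, $B_t$, $\alpha_i$, $\alpha_o$ given at the end of this section, while the function name is ours:

```python
import math

A_T, B_T = 0.000035, 0.000235    # A_t, B_t from Section 2
ALPHA_I, ALPHA_O = 0.62, 0.30    # alpha_i, alpha_o from Section 2

def retention_stats(dv_t, pe_cycles, t_hours):
    """Return (mu_t, sigma_t) of the retention noise, Equations (6)-(7)."""
    mu_t = dv_t * (A_T * pe_cycles**ALPHA_I + B_T * pe_cycles**ALPHA_O) \
           * math.log(1.0 + t_hours)   # natural log assumed
    sigma_t = 0.3 * abs(mu_t)          # Equation (7)
    return mu_t, sigma_t

# A cell programmed 1.8 V above the erased level, after 1K P/E cycles
# and 10^4 h of retention:
mu_t, sigma_t = retention_stats(1.8, 1000, 1e4)
```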

The conditional probability distribution function of the threshold voltage after being disturbed by RTN, CCI and retention noise is given as follows:

$$p(V_{th}\,|\,k \in \{11, 01, 00, 10\}) = \frac{1}{64}\big[\mathcal{N}(\mu_k - \mu_t,\; \sigma_k^2 + \sigma_t^2 + \sigma_r^2) + A + B + C\big], \tag{8}$$

$$A = \sum_{\mu_p} \Big[ 2\,\mathcal{N}\big(\gamma_{xy}(\mu_p - \mu_e) + \mu_k - \mu_t,\; \gamma_{xy}^2(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_k^2 + \sigma_t^2 + \sigma_r^2\big) + \mathcal{N}\big(\gamma_y(\mu_p - \mu_e) + \mu_k - \mu_t,\; \gamma_y^2(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_k^2 + \sigma_t^2 + \sigma_r^2\big) \Big], \tag{9}$$

$$B = \sum_{\mu_p^{(1)}} \sum_{\mu_p^{(2)}} \sum_{\mu_p^{(3)}} \mathcal{N}\big(\gamma_{xy}(\mu_p^{(1)} + \mu_p^{(3)} - 2\mu_e) + \gamma_y(\mu_p^{(2)} - \mu_e) + \mu_k - \mu_t,\; (2\gamma_{xy}^2 + \gamma_y^2)(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_k^2 + \sigma_t^2 + \sigma_r^2\big), \tag{10}$$

$$\begin{split} C &= \sum_{\mu_p^{(1)}} \sum_{\mu_p^{(2)}} \mathcal{N}\big(\gamma_{xy}(\mu_p^{(1)} - \mu_e) + \gamma_y(\mu_p^{(2)} - \mu_e) + \mu_k - \mu_t,\; (\gamma_{xy}^2 + \gamma_y^2)(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_k^2 + \sigma_t^2 + \sigma_r^2\big) \\ &+ \sum_{\mu_p^{(2)}} \sum_{\mu_p^{(3)}} \mathcal{N}\big(\gamma_{xy}(\mu_p^{(3)} - \mu_e) + \gamma_y(\mu_p^{(2)} - \mu_e) + \mu_k - \mu_t,\; (\gamma_{xy}^2 + \gamma_y^2)(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_k^2 + \sigma_t^2 + \sigma_r^2\big) \\ &+ \sum_{\mu_p^{(1)}} \sum_{\mu_p^{(3)}} \mathcal{N}\big(\gamma_{xy}(\mu_p^{(1)} + \mu_p^{(3)} - 2\mu_e) + \mu_k - \mu_t,\; 2\gamma_{xy}^2(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_k^2 + \sigma_t^2 + \sigma_r^2\big), \end{split} \tag{11}$$

where $\mu_p^{(1)}$, $\mu_p^{(2)}$ and $\mu_p^{(3)}$ are the means of cells 1–3, respectively, the immediate adjacent cells shown in Figure 1, and $\mu_k$ and $\sigma_k$ are the mean and standard deviation of the victim cell.

**Figure 2.** Illustration of 15-level uniform sensing quantization for multi-level cell (MLC) flash memory.

In this paper, we set the flash memory parameters as follows: *μp*<sup>11</sup> = 1.2, *μp*<sup>01</sup> = 2.55, *μp*<sup>00</sup> = 3, *μp*<sup>10</sup> = 3.45, *σ<sup>p</sup>* = 0.05, *σ<sup>e</sup>* = 0.35, *At* = 0.000035, *Bt* = 0.000235, *α<sup>i</sup>* = 0.62 and *α<sup>o</sup>* = 0.30.
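Putting Equations (1), (4), (6) and (7) together with these parameters, a Monte-Carlo sample of a cell's threshold voltage can be sketched as follows. Treating $\mu_{p11}$ as the nominal "11" level and taking the retention drop $\Delta V_t$ as the gap to that level are our modeling assumptions, not statements from the paper:

```python
import math
import random

MU = {"11": 1.2, "01": 2.55, "00": 3.0, "10": 3.45}  # state means from Section 2
SIGMA_P, SIGMA_E = 0.05, 0.35

def sample_vth(state, pe, t_hours, dv_cci=0.0, rng=random):
    """Draw one threshold voltage per Equation (1): V + n_RTN + dV_CCI - n_ret."""
    sigma = SIGMA_E if state == "11" else SIGMA_P
    v = rng.gauss(MU[state], sigma)
    # RTN, Equation (4)
    n_rtn = rng.gauss(0.0, 0.00027 * pe**0.62)
    # Retention, Equations (6)-(7); assumption: dv_t = state mean - "11" mean
    dv_t = MU[state] - MU["11"]
    mu_t = dv_t * (0.000035 * pe**0.62 + 0.000235 * pe**0.30) * math.log(1 + t_hours)
    n_ret = rng.gauss(mu_t, 0.3 * abs(mu_t)) if mu_t > 0 else 0.0
    return v + n_rtn + dv_cci - n_ret

rng = random.Random(0)
samples = [sample_vth("00", pe=1000, t_hours=1e4, rng=rng) for _ in range(1000)]
```

Histogramming such samples for all four states reproduces the overlapping mixture that the LLR quantization in Section 3 operates on.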

#### **3. Artificial Neural Network-Assisted Error Correction**

In this section, we first present the idea of relative LLR calculation. Then we explain why an artificial neural network is useful for NAND flash memory. Finally, we introduce our proposed ANNAEC scheme.

#### *3.1. Relative LLR*

For soft decision belief-propagation (BP) decoding, a soft quantization scheme has been proposed. As an example, Figure 2 shows a 15-level uniform sensing quantization [12].

The overlap region is obtained by the entropy of the cell's threshold voltage [12,14]. When the threshold voltage falls into the range $(R_{n-1}, R_n]$, where $R_n$ is the $n$-th reference voltage, $R_0 = -\infty$ and $R_{16} = +\infty$, the LLR values of the least significant bit (LSB) and the most significant bit (MSB) of the $i$-th cell can be calculated by (12) and (13), respectively:

$$LLR_{lsb}(R_{n-1}, R_n) = \log \frac{\int_{R_{n-1}}^{R_n} [p(V_{th}|11) + p(V_{th}|01)] \,\mathrm{d}x}{\int_{R_{n-1}}^{R_n} [p(V_{th}|00) + p(V_{th}|10)] \,\mathrm{d}x}, \tag{12}$$

$$LLR_{msb}(R_{n-1}, R_n) = \log \frac{\int_{R_{n-1}}^{R_n} [p(V_{th}|11) + p(V_{th}|10)] \,\mathrm{d}x}{\int_{R_{n-1}}^{R_n} [p(V_{th}|01) + p(V_{th}|00)] \,\mathrm{d}x}. \tag{13}$$
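The quantized LLR of Equation (12) can be sketched with a simple midpoint-rule integration. The single-Gaussian stand-ins for $p(V_{th}|k)$ and the 0.1 V sigma below are illustrative only, not the full mixture of Section 2:

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def interval_prob(mu, sigma, lo, hi, steps=2000):
    """Midpoint-rule integral of a Gaussian pdf over (lo, hi]."""
    h = (hi - lo) / steps
    return h * sum(gauss_pdf(lo + (i + 0.5) * h, mu, sigma) for i in range(steps))

MEANS = {"11": 1.2, "01": 2.55, "00": 3.0, "10": 3.45}  # Section 2 parameters

def llr_lsb(lo, hi, sigma=0.1):
    """Equation (12): states {11, 01} carry LSB = 1, states {00, 10} LSB = 0."""
    num = interval_prob(MEANS["11"], sigma, lo, hi) + interval_prob(MEANS["01"], sigma, lo, hi)
    den = interval_prob(MEANS["00"], sigma, lo, hi) + interval_prob(MEANS["10"], sigma, lo, hi)
    return math.log(num / den)

# A read interval on the "01" side of the 01/00 overlap gives a positive
# (LSB = 1 favoured) LLR; one on the "00" side gives a negative LLR.
```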

However, it may be hard to accurately calculate the LLR values due to the retention noise. Even though retention noise is modeled as a Gaussian distribution, its mean and standard deviation are random, since $\Delta V_t$ is random, as described in (6) and (7). Furthermore, it is difficult to obtain an accurate retention time in a practical system. To deal with these problems, we can estimate the LLR based on the relative reference voltage positions, given as

$$\begin{split} &LLR'_{lsb}(R_{n-1} - V_{rv} + V'_{rv},\ R_n - V_{rv} + V'_{rv}) \\ &= \log \frac{\int_{R_{n-1} - V_{rv} + V'_{rv}}^{R_n - V_{rv} + V'_{rv}} [p'(V_{th}|11) + p'(V_{th}|01)] \,\mathrm{d}x}{\int_{R_{n-1} - V_{rv} + V'_{rv}}^{R_n - V_{rv} + V'_{rv}} [p'(V_{th}|00) + p'(V_{th}|10)] \,\mathrm{d}x}, \end{split} \tag{14}$$

$$\begin{split} &LLR'_{msb}(R_{n-1} - V_{rv} + V'_{rv},\ R_n - V_{rv} + V'_{rv}) \\ &= \log \frac{\int_{R_{n-1} - V_{rv} + V'_{rv}}^{R_n - V_{rv} + V'_{rv}} [p'(V_{th}|11) + p'(V_{th}|10)] \,\mathrm{d}x}{\int_{R_{n-1} - V_{rv} + V'_{rv}}^{R_n - V_{rv} + V'_{rv}} [p'(V_{th}|01) + p'(V_{th}|00)] \,\mathrm{d}x}, \end{split} \tag{15}$$

where $p'$ means that we estimate $\Delta V_t$ in Equations (6) and (7) as $\Delta V_t \approx \mu_k - \mu_e$; $V_{rv}$ and $V'_{rv}$ are the reference voltages of the actual threshold distribution and the theoretical threshold distribution, respectively, as shown in Figure 3, where $V_{rv}$ is obtained by voltage optimization [1] and $V'_{rv}$ is obtained by theoretical calculation, such as minimizing the entropy of the cell's threshold voltage [12,14]. In (14) and (15), we first calculate the LLR of the theoretical threshold distribution using a quantization scheme. Then, the corresponding LLR of the actual threshold distribution is mapped according to the relative position of the optimal read reference voltage.
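Under these definitions, the mapping in (14) and (15) amounts to shifting each read boundary by the offset $V'_{rv} - V_{rv}$ before looking up the theoretical LLR. A minimal sketch, with helper names of our own:

```python
def relative_llr(theoretical_llr, r_lo, r_hi, v_rv, v_rv_theory):
    """Map an actual read interval (r_lo, r_hi] onto the theoretical
    distribution via the reference-voltage offset, per Equations (14)-(15).

    theoretical_llr: function (lo, hi) -> LLR under the theoretical model.
    v_rv / v_rv_theory: optimal read voltages of the actual and theoretical
    threshold distributions (V_rv and V'_rv in the text).
    """
    offset = v_rv_theory - v_rv  # V'_rv - V_rv
    return theoretical_llr(r_lo + offset, r_hi + offset)

# Toy stand-in for a theoretical LLR table, linear in the interval midpoint;
# a 0.2 V retention-induced shift of the optimal read voltage is undone:
toy_llr = lambda lo, hi: -(lo + hi)
mapped = relative_llr(toy_llr, 2.5, 2.6, v_rv=2.575, v_rv_theory=2.775)
```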

We depict the relative LLR versus data retention time in Figure 4. The relative LLR values remain relatively steady, so the neural network does not require retention time as an input parameter. In addition, the LLR calculation is performed offline in a flash memory controller [15]; it is difficult for the controller to estimate the characteristics of the memory channel online, because online estimation significantly increases the power consumption and read latency. Therefore, the proposed relative LLR can estimate the actual LLR over a range of retention times, which also helps reduce the number of LLR tables stored in the controller.

**Figure 3.** Illustration of the statistical distribution and the theoretical distribution at *s* = 1 and *PE* = 1K.

**Figure 4.** Plot of the relative log-likelihood ratio (LLR) versus data retention time at *PE* = 1K, $\sigma_p$ = 0.05 and *s* = 1.

#### *3.2. Why Are Artificial Neural Networks Useful for NAND Flash Memory?*

To simplify the analysis, this subsection first discusses the case where the CCI is generated only by the vertical neighboring cell. In this case, the conditional probability distribution function of the threshold voltage, (8), simplifies to (16):

$$p(V_{th}\,|\,k \in \{11, 01, 00, 10\}) = \frac{1}{4}\Big[\mathcal{N}(\mu_k - \mu_t,\; \sigma_k^2 + \sigma_t^2 + \sigma_r^2) + \sum_{\mu_p} \mathcal{N}\big(\mu_k + \gamma_y(\mu_p - \mu_e) - \mu_t,\; \sigma_k^2 + \gamma_y^2(\sigma_p^2 + \sigma_e^2 + 2\sigma_r^2) + \sigma_t^2 + \sigma_r^2\big)\Big]. \tag{16}$$

In (16), it can be seen that the threshold voltage distribution divides into four parts: the distributions of cells with CCI from a "11"-state, "01"-state, "00"-state or "10"-state neighbor, which are also shown in Figure 5. In an overlap region, bits with different CCI noise levels may have different error rates. For instance, in the overlap region between the "01"-state and the "00"-state, the bits of the cells in the "00"-state with CCI from neighboring cells in the "11"-state may be wrongly detected as "1" in the LSB. In general, we want to place the optimal read reference voltage at the intersecting point of the distributions of two states, such as the red dotted line in Figure 5. However, once we know the programmed state or the threshold voltage of the cells that donate CCI to the victim cell, the optimal read reference voltage may change. For example, the optimal read reference voltage should be selected as the blue dotted line in Figure 5 when the vertical neighboring cell is in the erased state.

**Figure 5.** Illustration of the distribution of NAND flash memory at *s* = 1.4 (the cell-to-cell coupling strength factor), *PE* = 1K and a retention time of $10^5$ h.
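The intersection-point rule for the read reference voltage can be sketched as a bisection on the difference of the two state pdfs; the means, sigmas and search interval below are illustrative:

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def crossing(mu1, s1, mu2, s2, lo, hi, iters=80):
    """Bisection for the point in (lo, hi) where the two pdfs intersect."""
    f = lambda x: gauss_pdf(x, mu1, s1) - gauss_pdf(x, mu2, s2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With equal sigmas the optimal read voltage is the midpoint of the means:
v_read = crossing(2.55, 0.1, 3.0, 0.1, lo=2.55, hi=3.0)  # ~2.775
```

Conditioning on the neighbor's state changes the effective mean of the victim's distribution, which moves this crossing point, exactly the blue-line shift described above.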

In this paper, we expand the two-dimensional coordinates to three dimensions, as shown in Figure 6a. The X-axis is the victim cell's threshold voltage, and the Y-axis is the threshold voltage of the vertical neighboring cell. By doing so, one can easily find the incorrectly detected cells, marked with red dots. Moreover, we make two important observations:


**Figure 6.** Illustration of the decision of least significant bit (LSB) in the NAND flash memory. (**a**) The conventional hard-decision plane in the three-dimensional coordinates. (**b**) The optimal plane.

These two observations reveal that the detection of the bits in a cell can be transformed into a clustering problem, i.e., finding an optimal classification hyperplane. When more surrounding cells are considered, the clustering problem becomes more complex and the dimension of the classification hyperplane increases beyond three. To address this issue, we propose to use a neural network, which is well suited to such clustering problems.

#### *3.3. Proposed Artificial Neural Network-Assisted Error Correction (ANNAEC) Scheme*

The main idea of the proposed ANNAEC scheme is shown in Figure 7. In general, the flash memory controller uses soft-decision error correction [12], read-retry [1,16] and voltage optimization, which are widely used in practical systems, to ensure the reliability of the data stored in NAND flash memory. When these techniques fail to suppress the flash channel noise, the controller invokes the proposed ANNAEC scheme to correct the error bits. Moreover, this reduces the power consumption and computational burden of the controller, since the cells in an overlap region are a relatively small fraction of the cells on a page.

**Figure 7.** Block diagram of the proposed ANNAEC scheme in NAND flash memory.

In general, the host writes data to and reads data from the NAND flash memory chip by communicating with the memory controller, which in turn communicates with the chip. First, the host transfers data to the flash controller. The flash controller then encodes the data and writes it into the NAND flash memory chip. When the host reads the data, the flash controller communicates with the NAND flash chip; the chip reads the data from the cells and sends it to the flash controller through the sensing circuit. After that, the flash controller corrects and restores the original data through the decoding algorithm and sends it to the host. The proposed neural network-assisted error correction algorithm is used as an alternative decoding path: when the decoding in the flash controller fails, the neural network model first corrects the data, and decoding is then performed again.

We label the position of a cell in an overlap region, at the *N*-th word-line and the *M*-th bit-line of the block, as (*N*, *M*), as shown in Figure 7. The input parameters of the neural network are summarized in Table 1. $X_1$ and $X_2$ are the two bits of cell-(*N*, *M*) in the MLC memory. $X_3 \sim X_8$ are the LSB and MSB LLRs of the immediate adjacent cells, i.e., cell-(*N*+1, *M*−1), cell-(*N*+1, *M*) and cell-(*N*+1, *M*+1). $X_9$ is the page-type flag: if the current read page is an LSB page, we set $X_9$ to "0"; otherwise, $X_9$ is set to "1". $X_{10}$ is the number of P/E cycles. There are two reasons for choosing these parameters: (1) the threshold voltage is difficult to obtain in a practical system, but the LLR and the bits in a cell help to locate the range of the threshold voltage; (2) the vertical and the diagonal neighboring cells contribute about 81% of the CCI [17,18].

**Table 1.** Summary of input parameters.


Afterward, we send the parameters into the back propagation neural network to correct error bits. The sigmoid function is selected as the activation function of the back propagation neural network, given as

$$f(x) = \frac{1}{1 + e^{-x}}. \tag{17}$$

The cost function is chosen as the typical mean square error (MSE) cost function [19], given by

$$E = \frac{1}{2} [(T\_{y\_0} - y\_0)^2 + (T\_{y\_1} - y\_1)^2],\tag{18}$$

where the outputs of the neural network, $y_0$ and $y_1$, are the reliabilities of "0" and "1", and $T_{y_0}$ and $T_{y_1}$ denote the desired reliabilities in the data set. The relative LLR is calculated offline in the flash memory controller, and it is difficult to recalculate it, since online characteristic estimation of the memory channel causes longer read latency. Hence, we update the relative LLR by

$$LLR_{update} = (-1)^{\varepsilon+1} \left| LLR_{original} \right|, \tag{19}$$

where $LLR_{original}$ denotes the original relative LLR obtained in the sensing operation, and $\varepsilon$ is given by

$$
\varepsilon = \begin{cases} 1 & \text{if } y\_1 > y\_0 \\ 0 & \text{else.} \end{cases} \tag{20}
$$

Although (19) does not produce the exact LLR for decoding, it estimates the LLR value. More importantly, (19) corrects the sign of the LLR, which matters more than its magnitude, since fewer LLRs with the wrong sign mean fewer error bits.
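Equations (19) and (20) together reduce to a sign assignment that keeps the stored magnitude; a minimal sketch:

```python
def update_llr(llr_original, y0, y1):
    """Equations (19)-(20): the network outputs pick the sign of the LLR,
    while the magnitude of the original relative LLR is retained."""
    eps = 1 if y1 > y0 else 0                     # Equation (20)
    return (-1) ** (eps + 1) * abs(llr_original)  # Equation (19)

# The network is confident the bit is "1": a negative relative LLR is flipped.
flipped = update_llr(-3.2, y0=0.1, y1=0.9)  # 3.2
```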

#### **4. Experiment Results**

#### *4.1. Training*

Throughout all experiments, we used a rate-0.9 (4544, 4096) QC-LDPC code and the BP decoding algorithm. The experimental platform is implemented in Matlab. The channel parameters used to generate the training dataset are shown in Table 2. Since the parasitic coupling capacitances of CCI are invariable in a flash memory chip, without loss of generality, we set the cell-to-cell coupling strength factor to *s* = 1. According to the raw bit error rate (RBER), we generate the dataset at *PE* = {3000, 4000, 5000} and divide it into two parts, error bits and correct bits to be corrected, e.g., for cell-(*N*, *M*) in Figure 7. In total, the sizes of the training and validation data are 336,000 and 84,000, respectively. Based on the performance of the neural network versus the number of hidden layer nodes, shown in Figure 8, the basic neural network structure is set to {10, 3, 2}, meaning that there are 10 nodes in the input layer, 3 nodes in the hidden layer and 2 nodes in the output layer.

**Table 2.** Training dataset (*s* = 1).


**Figure 8.** Performance of neural network under the different numbers of hidden layer nodes.
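A pure-Python sketch of the {10, 3, 2} back-propagation network with the sigmoid activation (17) and the MSE cost (18); the learning rate, initialization and toy data below are our own choices, not the paper's training setup:

```python
import math
import random

rnd = random.Random(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """Fully connected 10-3-2 network trained by plain gradient descent."""

    def __init__(self, n_in=10, n_hid=3, n_out=2, lr=0.5):
        self.lr = lr
        self.W1 = [[rnd.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.W2 = [[rnd.gauss(0, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
        self.b2 = [0.0] * n_out

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.W1, self.b1)]
        self.y = [sigmoid(sum(w * hi for w, hi in zip(row, self.h)) + b)
                  for row, b in zip(self.W2, self.b2)]
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        # E = 1/2 * sum((T - y)^2); deltas include the sigmoid derivative y(1-y)
        d2 = [(yo - t) * yo * (1 - yo) for yo, t in zip(y, target)]
        d1 = [sum(d2[o] * self.W2[o][j] for o in range(len(d2)))
              * self.h[j] * (1 - self.h[j]) for j in range(len(self.h))]
        for o in range(len(d2)):
            for j in range(len(self.h)):
                self.W2[o][j] -= self.lr * d2[o] * self.h[j]
            self.b2[o] -= self.lr * d2[o]
        for j in range(len(d1)):
            for i in range(len(x)):
                self.W1[j][i] -= self.lr * d1[j] * x[i]
            self.b1[j] -= self.lr * d1[j]

# Toy check: memorize two input patterns with opposite (y0, y1) targets.
net = TinyMLP()
a = [rnd.gauss(0, 1) for _ in range(10)]
b = [rnd.gauss(0, 1) for _ in range(10)]
for _ in range(2000):
    net.train_step(a, [1.0, 0.0])
    net.train_step(b, [0.0, 1.0])
```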

#### *4.2. Performance*

In Figure 9a,b, we compare the RBER and frame error rate (FER) of ANN-LDPC [11], the proposed method and the original method without the neural network, versus data retention time at *s* = 1. We observe that the proposed ANNAEC significantly reduces the RBER compared with ANN-LDPC and the original method.

For instance, in Figure 9a, the data retention time is about $3 \times 10^4$ h at *PE* = 5000 and RBER = $2 \times 10^{-2}$ using the scheme without ANNAEC. Figure 9b shows that, for the same performance, ANN-LDPC extends the endurable retention time of the flash memory to $3 \times 10^5$ h, while the proposed ANNAEC provides a further gain of approximately 67% in data retention, extending the retention time to $5 \times 10^5$ h. In addition, the proposed method has a more stable error correction performance when the memory suffers from weak interference. Similarly, the proposed ANNAEC improves the FER to an error rate of $1 \times 10^{-3}$ at a retention time of $4 \times 10^6$ h and *PE* = 3000, whereas ANN-LDPC achieves a FER of approximately $5 \times 10^{-3}$.

**Figure 9.** (**a**) Comparison of the raw bit error rate (RBER) performance of NAND flash memory with and without ANNAEC scheme versus data retention time at *s* = 1. (**b**) Comparison of the frame error rate (FER) performance of low-density parity-check (LDPC) coded NAND flash memory with and without the ANNAEC scheme versus data retention time at *s* = 1.

#### **5. Conclusions**

In this paper, we have proposed using the relative LLR to estimate the actual LLR. Furthermore, in three-dimensional coordinates, we have transformed the bit detection problem into a clustering problem, which allows us to apply an artificial neural network to the memory channel. To solve the clustering problem, we proposed an artificial neural network-assisted error correction scheme, which experiments have shown to be effective in correcting error bits when the conventional method without the neural network fails to decode. Simulation results show that the FER performance of our ANNAEC is significantly better than that of ANN-LDPC. For example, ANN-LDPC can make the flash memory endure up to $3 \times 10^5$ h, while the proposed method provides a performance gain of approximately 67% in data retention, extending the retention time to $5 \times 10^5$ h. Furthermore, our proposed approach can be extended to TLC or QLC flash memories.

**Author Contributions:** Conceptualization, R.H., H.H. and G.H.; methodology, R.H. and G.H.; software, R.H.; validation, R.H. and H.H.; formal analysis, R.H., H.H. and G.H.; investigation, R.H., H.H., G.H. and C.X.; writing—original draft preparation, R.H., G.H. and C.X.; writing—review and editing, H.H. and G.H.; visualization, C.X.; supervision, G.H.; project administration, R.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China under grant 61871136.

**Data Availability Statement:** The study did not report any data.

**Conflicts of Interest:** The authors declare no conflict of interest.
