#### *2.2. Error Correction Codes for Flash Memory*

The first-generation NAND flash memory was much simpler than the current generation. Over time, however, the storage capacity of NAND flash memory has increased by moving to smaller geometries and by storing more bits per cell. This has required much stronger error-correction algorithms to ensure data integrity in NAND flash devices. The P/E cycle endurance is about 100,000 for SLC, 10,000 or less for MLC, and 1000 or less for TLC. As error rates rise rapidly, the reliable use of flash devices is becoming a major issue, and the advent of TLC flash and even smaller geometries will further increase the error-correction requirements of NAND flash. Figure 2 shows the raw bit error rate (RBER) over P/E cycles for TLC NAND flash memory [24]. As shown in the figure, after thousands of P/E cycles the RBER becomes so high that the device is considered to have reached its lifecycle limit.

**Figure 2.** Evaluated bit error rates according to P/E cycles for TLC NAND flash memories [24].

To recover from such errors, ECC methods [3,4,25–28], such as BCH and LDPC, are embedded in the NAND flash memory controller, covering page-level errors when programming and retrieving data. BCH codes form a class of cyclic error-correcting codes constructed using polynomials over a finite field. They allow an arbitrary level of error correction and are efficient for uncorrelated error patterns. Low-density parity-check (LDPC) codes are considered one of the best choices for current flash memory controllers owing to their excellent error-correction capability. LDPC decoding is an iterative process whose input is probability information, so the error-correction capability of LDPC depends on the accuracy of that input information [4]. ECCs for flash memory storage systems have been widely studied for both BCH and LDPC. Ref. [3] presents a BCH ECC coding driver on a Linux platform, and many other chip designers have presented BCH error-coding implementations. In contrast, LDPC has received attention from industry, with many presentations on LDPC at recent flash summits [25,26]. In [27], an LDPC decoding strategy optimized for flash memory was proposed. Tanakamaru et al. have shown that LDPC codes can improve SSD lifetimes by a factor of 10 [28]. Zhao et al. present three techniques for mitigating the LDPC-induced response-time delay [4].
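To make the syndrome-based decoding idea behind such linear block codes concrete, the following is a minimal sketch using a Hamming(7,4) code, the simplest member of the same family; it is not BCH or LDPC itself, and all function names here are illustrative, not from the cited works. The encoder adds three parity bits to four data bits, and the decoder computes a syndrome that directly locates a single flipped bit.

```python
# Illustrative Hamming(7,4) code: syndrome decoding corrects any 1-bit error.
# Bit positions 1..7 with parity bits at positions 1, 2, 4 (classic layout).

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)  # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1               # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[3] ^= 1                          # inject a single-bit error
assert hamming74_decode(corrupted) == [1, 0, 1, 1]
```

BCH generalizes this construction over larger finite fields to correct multiple bit errors per codeword, which is why it scales to the error rates of MLC and TLC pages.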

#### *2.3. Parity with RAID*

While ECC can correct data errors inside a page, there are many other approaches for handling errors beyond a page. RAID [29] enhances reliability by using redundant data: it creates an additional parity for clustered data to enable error recovery. A cluster is a group of units associated with the same parity data. In a general RAID system, this unit is a disk; inside an SSD, however, it can be a block or a chip. The parity is generated by bitwise exclusive-or (XOR) operations over the blocks that belong to the same cluster, and thus a parity update usually requires two block reads and two block writes in traditional disk-based storage systems. In flash-based storage devices, however, owing to the out-of-place-update characteristic of flash memory, RAID techniques take a different approach; they mainly focus on reducing write requests when designing flash-based RAID schemes.
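The XOR parity mechanism described above can be sketched as follows; the block contents and cluster size are hypothetical, and XOR's self-inverse property is what lets the same operation both generate parity and rebuild a lost block.

```python
# Sketch of RAID-style parity inside an SSD: one parity block per cluster,
# computed as the bitwise XOR of all data blocks in the cluster.

def make_parity(blocks):
    """XOR equal-sized data blocks of a cluster into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the single lost block from the survivors plus the parity."""
    return make_parity(list(surviving_blocks) + [parity])

cluster = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # three blocks, one cluster
parity = make_parity(cluster)
# Lose block 1; rebuild it from the remaining blocks and the parity.
assert recover([cluster[0], cluster[2]], parity) == cluster[1]
```

The two-read/two-write cost of a traditional small update also follows from this property: the new parity can be computed as `old_parity XOR old_data XOR new_data`, which requires reading the old data and old parity before writing the new data and new parity.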

Previous works show that parity obtained with the RAID-5 technique can reduce the page error rate compared with ECC alone. However, the parity overhead of the RAID architecture has its own limitations, as many reads of existing data must precede each new parity calculation. Several previous works have discussed ways to reduce this parity overhead. Lee [8] retains the parity blocks in buffer memory and postpones parity writes until idle time, which reduces read operations and write response time; however, a system crash during a partial-stripe write cannot be recovered from, because parity updates are postponed until a full stripe is written. Im [30] generates partial parity for partial stripes so that partial writes can be protected; however, additional non-volatile RAM (NVRAM) hardware is required, as the partial parity is maintained in NVRAM. Kim [10] employs variable-size striping, which constructs a new stripe from the data written to a portion of a full stripe and writes parity for that partial stripe without any additional hardware support. However, this approach generates too much partial parity and therefore degrades write performance and space utilization.
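The partial-stripe idea behind these schemes can be illustrated with a short sketch; the stripe size and block contents are hypothetical, and this is a simplified view rather than any of the cited designs. Because XOR is associative, a parity written for a partial stripe can later be folded together with late-arriving blocks to yield the same parity a full stripe would have produced.

```python
# Sketch of partial-stripe parity (in the spirit of variable-size striping):
# parity is written over however many blocks the partial stripe contains,
# then extended incrementally as the rest of the stripe arrives.

def xor_blocks(blocks):
    """XOR a list of equal-sized blocks into one block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

written = [b"\xaa", b"\xbb"]              # only 2 of a 4-block stripe arrived
partial_parity = xor_blocks(written)      # parity for the partial stripe
late = [b"\xcc", b"\xdd"]                 # the stripe fills later
full_parity = xor_blocks([partial_parity] + late)
# Folding the partial parity in gives the same result as a full-stripe XOR.
assert full_parity == xor_blocks(written + late)
```

The trade-off discussed above is visible here: every partial stripe leaves behind an extra parity block that must itself be stored, which is why excessive partial parity hurts write performance and space utilization.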

#### **3. Compression-Assisted Adaptive ECC and RAID Scattering**

This section first describes the adaptive ECC and RAID-scattering scheme, and then presents the metadata structure and the FTL operations for RAID parity management as implementation issues.
