Article

Leveraging Static and Dynamic Wear Leveling to Prolong the Lifespan of Solid-State Drives

Ilhoon Shin
Department of Electronic Engineering, Seoul National University of Science and Technology, 232 Gongneung-ro, Nowon-gu, Seoul 01811, Republic of Korea
Appl. Sci. 2024, 14(18), 8186; https://doi.org/10.3390/app14188186
Submission received: 17 July 2024 / Revised: 2 September 2024 / Accepted: 7 September 2024 / Published: 11 September 2024
(This article belongs to the Special Issue Advancements in Computer Systems and Operating Systems)

Abstract

To extend the lifespan of SSDs, it is essential to achieve wear leveling that evenly distributes the accumulated erase counts of NAND blocks, thereby delaying the occurrence of bad blocks as much as possible. This paper proposes the Greedy-MP policy, which integrates static and dynamic wear leveling. When a specific block accumulates erasures beyond a defined threshold, Greedy-MP migrates cold data, which is expected to undergo infrequent modification, to that block. Additionally, migrated blocks are excluded as candidates for garbage collection until their erase counts reach a level similar to those of other blocks, preventing their premature transition into bad blocks. Performance evaluations demonstrate that Greedy-MP achieves the longest lifespan across all test scenarios. Compared to policies that use only static wear leveling, such as PWL, it extends the lifespan by up to 1.72 times. Compared to policies that combine dynamic wear leveling such as CB with static wear leveling such as PWL, it extends the lifespan by up to 1.99 times. Importantly, these gains are achieved without sacrificing performance: by preserving garbage collection efficiency, Greedy-MP also delivers the shortest average response time for I/O requests.

1. Introduction

SSDs (Solid-State Drives) are rapidly replacing traditional hard disk drives in diverse domains such as mobile computing, PCs, consumer electronics, servers, and data centers [1,2,3,4,5]. The primary driving force behind the success of SSDs is their performance. By employing a parallel structure with multiple NAND flash memory chips, SSDs demonstrate significantly higher IOPS (Input/Output Operations Per Second) compared to hard disk drives [6,7,8,9,10,11,12]. Moreover, NAND flash memory offers advantages such as lower power consumption, resilience to shock, and compact size, all of which are inherited by SSDs.
The drawbacks of SSDs include gradual performance degradation resulting from the accumulation of input/output requests and the transformation of normal blocks into bad blocks due to NAND cell damage [13,14,15,16,17,18,19,20]. A bad block refers to a block incapable of reliably maintaining data integrity, and as the incidence of bad blocks accumulates, the entire SSD ultimately becomes inoperable [13]. The performance decline and occurrence of bad blocks in SSDs are inevitable outcomes of the physical characteristics of NAND flash memory. Nevertheless, these problems can be mitigated through effective techniques such as garbage collection and wear leveling. Considering the paramount importance of preserving data integrity and maximizing the lifespan of SSDs, particularly in sectors such as consumer electronics, this study endeavors to design an efficient wear leveling policy capable of extending SSD lifespan without compromising performance.
Wear leveling refers to maintaining the accumulated erase counts of each NAND block as evenly as possible. In NAND flash memory, when the erase count of a specific block exceeds a threshold, the block is converted into a bad block, reducing the number of available blocks [14,15,16,17,18,19,20]. Therefore, wear leveling can delay the occurrence of bad blocks as much as possible and extend the lifespan of SSDs.
Previous research on achieving wear leveling has been categorized into two main approaches: dynamic wear leveling [14,15,16,17] and static wear leveling [18,19,20]. Dynamic wear leveling policies achieve wear leveling by selecting blocks for erasure during garbage collection based on their erase counts. While selecting the block with the highest invalidation ratio may seem advantageous from a performance standpoint [15,21], it can potentially lead to an imbalance in erase counts among blocks. Therefore, dynamic policies consider both the invalidation ratio and erase count of blocks.
In contrast, static wear leveling policies do not interfere with the selection of victim blocks during garbage collection. Instead, when a specific block is at risk of becoming a bad block due to excessive erasures, they relocate cold data—data expected to rarely undergo modification—from other blocks to the worn-out block. Because the invalidation ratio of the block is expected to be low in the future, it is less likely to be selected as a victim block for garbage collection, thereby delaying its transition into a bad block.
Both dynamic wear leveling and static wear leveling contribute to improving the lifespan of SSDs; however, existing research on achieving wear leveling mostly focuses on utilizing only one of these approaches. In this paper, we design a wear leveling policy that combines dynamic wear leveling and static wear leveling. The proposed policy performs static wear leveling by relocating cold data to specific blocks when they are excessively erased compared to other blocks. Blocks that have undergone migration are managed separately and are not selected as victim blocks for garbage collection until their accumulated erase counts reach a similar level to other blocks. This prevents excessively erased blocks from prematurely transitioning into bad blocks compared to other blocks, ultimately contributing to extending the lifespan of SSDs.
Performance evaluations using server traces indicate that the proposed policy extends the lifespan of SSDs by up to 1.72 times compared to a policy performing static wear leveling alone. Furthermore, compared to policies that combine existing dynamic wear leveling with static wear leveling, the proposed policy extends the lifespan by up to 1.99–3.56 times, depending on the combination and trace. Notably, this is achieved without performance degradation, such as an increase in the average response time of input/output requests.
This paper is organized as follows: In Section 2, we provide a detailed description of the internal structure of SSDs and explain representative wear leveling policies. Section 3 presents the design of a wear leveling policy that combines static and dynamic wear leveling. In Section 4, we describe the environment and methods used to evaluate the lifespan of SSDs and assess the lifespan extension effect of the new wear leveling policy. Finally, Section 5 presents the conclusions.

2. Background and Related Work

2.1. SSD Internals and NAND Flash Memory

SSDs use NAND flash memory chips interconnected in a parallel structure through multiple channels as storage media. Each NAND chip comprises multiple dies capable of independent NAND operations, with each die further divided into multiple planes. These planes consist of blocks, serving as the minimum units for erase operations, and each block is subdivided into pages, which represent the minimum units for read and write operations. Similar to EEPROM, NAND flash memory adheres to the erase-before-write principle, necessitating that data be written only to pages that have been previously erased. Due to the larger unit size of erase operations compared to write operations (blocks vs. pages), implementing in-place updates is challenging. Consequently, NAND-based storage devices, such as SSDs, integrate flash translation layer (FTL) firmware internally. This FTL firmware facilitates the processing of host write requests using an out-of-place update method [22,23,24,25,26].
When a write request for specific sectors arrives from the file system, the data are written to a new clean page instead of the page that was storing the existing data. At this point, the page that previously stored the data is invalidated since it no longer contains valid data. Consequently, performing the out-of-place update causes the location of valid data, i.e., the sector’s location, to change with each write, and the FTL (Flash Translation Layer) maintains a mapping table to remember the current valid locations of sectors. Since the mapping table is frequently accessed, it is generally kept in memory, and for this purpose, SSDs are equipped with a large amount of RAM.
When the proportion of clean pages falls below a certain threshold due to write request processing, garbage collection is initiated to reclaim clean pages. Garbage collection is performed in three stages: 1. selection of a victim block, 2. copying valid pages from the victim block to clean pages of other blocks, and 3. erasure of the victim block. For instance, if a NAND block consists of a total of 256 pages and a victim block has 10 valid pages, then before erasing the victim block, its 10 valid pages must be copied to clean pages of other blocks. Upon completion of the copying operation, the victim block is erased, resulting in the reclamation of a total of 256 clean pages. However, as 10 clean pages from other blocks are used to copy the valid pages from the victim block, the actual number of additional clean pages reclaimed is 246. Hence, the execution time of garbage collection and the actual number of clean pages reclaimed are inversely proportional to the number of valid pages in the victim block.
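The accounting in this example can be captured in a few lines. The following C sketch is illustrative only (it is not taken from the paper or from SSDSim) and computes the net number of clean pages gained by erasing one victim block:

```c
#include <stdio.h>

/* Net clean pages gained by erasing a victim block: the block's pages are
 * reclaimed, but its valid pages must first be copied to clean pages elsewhere. */
int reclaimed_clean_pages(int pages_per_block, int valid_pages)
{
    return pages_per_block - valid_pages;
}

int main(void)
{
    /* a 256-page block holding 10 valid pages yields 246 net clean pages,
     * as in the example above */
    printf("%d\n", reclaimed_clean_pages(256, 10));
    return 0;
}
```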
Therefore, from a performance perspective, the greedy policy [15,21], which selects the block with the fewest valid pages as the victim block, is optimal. This policy minimizes the copying of valid pages, thereby reducing the execution time of garbage collection and maximizing the actual number of clean pages reclaimed by garbage collection. However, since the selection of the victim block does not consider the accumulated erase count of each block, there is a risk of uneven erase counts among blocks.
Meanwhile, each block of NAND flash memory has a maximum threshold for erasures. The NAND cells are wrapped in an oxide insulating layer, which gradually deteriorates due to the ingress and egress of electrons during write and erase operations. Eventually, when the accumulated erase count of a block exceeds the threshold, it becomes a bad block due to the natural loss of electrons caused by the damage to the insulating layer, leading to unintentional data corruption. As bad blocks cannot reliably maintain data, they become unusable.
In general, SSDs have more storage capacity than what is exposed to the file system in order to improve the efficiency of garbage collection and to cope with the occurrence of bad blocks. This is known as overprovisioning. For example, if the overprovisioning is 10%, the actual physical capacity of the SSD might be 100 GB, but only 90 GB of storage space is exposed to the file system. Therefore, even if 10% of the blocks become bad, the SSD can still operate with the remaining 90 GB of functional blocks. However, if the bad blocks exceed 10%, the available storage space falls below what is exposed to the file system, causing the entire SSD to become inoperable. To maximize the lifespan of an SSD, it is necessary to delay the occurrence of bad blocks as much as possible, which requires achieving wear leveling that maintains an even distribution of accumulated erase counts across all blocks. While increasing the level of overprovisioning can further extend the SSD’s lifespan, it also increases the actual physical capacity required beyond what is exposed to the file system, leading to higher manufacturing costs for the SSD.

2.2. Wear Leveling Policies

Policies designed to achieve wear leveling are broadly classified into dynamic wear leveling and static wear leveling. Dynamic wear leveling policies aim to achieve wear leveling by selecting victim blocks during garbage collection. For instance, the CB (Cost Benefit) policy [15] calculates a cost-benefit metric for each block, considering factors such as the number of valid pages in the block and the elapsed time since the block was last written to (referred to as its age). The block with the lowest cost-benefit metric is then designated as the victim block. If all pages in a block are invalidated, the block is selected as the victim block regardless of its age. Otherwise, a block with a high proportion of valid pages can be selected as the victim block if its age is high, indicating that it has been unused for a long time. Consequently, a more uniform distribution of erase counts across blocks is achieved compared to the greedy policy.
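As a rough illustration, the cost-benefit score can be written as a benefit-per-cost ratio of the form age × (1 − u)/(2u), where u is the fraction of valid pages in the block; this particular formulation follows the common presentation of the policy in [15] and is an assumption rather than a formula quoted from this paper. In this form the block with the highest score is chosen, which corresponds to the selection of the lowest cost-per-benefit value described above:

```c
#include <math.h>
#include <stdio.h>

/* u: fraction of valid pages in the block; age: time since the block was last written.
 * Higher score = better victim in this benefit-per-cost formulation (an assumption). */
double cb_score(double u, double age)
{
    if (u <= 0.0)
        return INFINITY;          /* fully invalidated block: ideal victim */
    return age * (1.0 - u) / (2.0 * u);
}

int main(void)
{
    /* an old, mostly valid block can outrank a recently written, mostly invalid one */
    printf("old/full: %.2f  young/empty: %.2f\n",
           cb_score(0.9, 1000.0), cb_score(0.2, 10.0));
    return 0;
}
```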
The WOGC (Write Order Based Garbage Collection) policy [16] also employs a formula that considers factors such as the number of valid pages in each block, the cumulative erase count of blocks, and the chronological order in which blocks were used after being erased to select victim blocks. Similar to CB, if a block has no valid pages, it is chosen as the victim block regardless of its erase count or the write order. However, if valid pages are present, a block with a substantial number of valid pages may still be chosen as the victim block if it exhibits a low erase count and was written earlier, indicating it has been unused for a long time. Consequently, the distribution of erase counts across blocks tends to be more uniform compared to the greedy policy.
Dynamic wear leveling policies such as CB and WOGC effectively reduce the standard deviation of block erase counts by maintaining a more uniform distribution of erase counts across all blocks. However, blocks with a high number of valid pages are more likely to be selected as victim blocks, resulting in fewer clean pages being reclaimed compared to the greedy policy. This can lead to more frequent garbage collections. Thus, despite achieving a lower standard deviation in the block erase count distribution, the extension of SSD lifespan may be constrained by frequent garbage collections. Furthermore, the longer duration of individual garbage collection operations, caused by the higher number of valid pages to copy, may also degrade performance. Additionally, since a value such as the cost-benefit metric must be calculated for every block to find the minimum or maximum when selecting a victim block, the worst-case time complexity of garbage collection is O(N) (where N is the number of blocks).
On the other hand, static wear leveling policies do not intervene in the selection of victim blocks of garbage collection. In other words, they employ a greedy policy, selecting the block with the fewest valid pages as the victim block, and tolerate some level of imbalance in erase count distribution among blocks. However, if a specific block undergoes excessive erasures compared to other blocks, surpassing a predefined threshold, cold data stored in other blocks is relocated to the worn-out block [18,19,20]. If the identification of cold data is accurate, the probability of selecting the block as the victim block decreases, as it is likely to have a higher proportion of valid pages in the future compared to other blocks. Consequently, the rate of increase in the erase count of the block is mitigated, thereby reducing the likelihood of it transitioning into a bad block.
The BET (Block Erase Table) [18] and Rejuvenator [19] policies perform migration when the erase count of a victim block exceeds a fixed threshold. However, this approach can lead to excessive migration even while block erase counts are still low relative to the tolerable maximum, resulting in significant migration overhead [20]. The PWL policy [20] therefore performs migration when the erase count of a victim block exceeds the midpoint between the average erase count of all blocks and the maximum allowable erase count. Consequently, when the overall erase count of blocks is low initially, migration is rarely performed, reducing migration overhead. Conversely, as the overall erase count of blocks increases, migration is performed more frequently, delaying the transition to bad blocks and extending the lifespan of the SSD.
The primary drawback of static wear leveling policies lies in the potential inaccuracy of identifying cold data. If non-cold data are migrated to a worn-out block, the invalidation ratio of the block increases, making it more likely to be selected as the victim block for garbage collection again. Consequently, the block may transition into a bad block earlier compared to other blocks.
Both static and dynamic wear leveling techniques play crucial roles in delaying the occurrence of bad blocks, thereby extending the lifespan of SSDs. Dynamic wear leveling actively manages wear distribution during normal operations by selecting blocks with lower erase counts for garbage collection. This ensures that wear is balanced across all blocks, reducing the likelihood that any single block reaches its maximum erase limit too quickly.
Static wear leveling complements this by intervening when specific blocks show signs of excessive wear. By relocating cold data—data that are infrequently modified—to these at-risk blocks, static wear leveling prevents them from being selected for garbage collection too often, thereby delaying their transition into bad blocks.
When combined, these techniques work synergistically to ensure a more even distribution of wear across all blocks, maximizing the delay in bad block formation and enhancing the overall reliability and longevity of SSDs.

3. Greedy with Migration Pool

3.1. Overview

To harness the synergy of static and dynamic wear leveling, we propose the Greedy-MP (Migration Pool) policy, integrating both strategies. Initially, the Greedy-MP policy does not perform wear leveling, aiming to enhance the efficiency of garbage collection and reduce its frequency by employing the greedy policy. In other words, the block with the highest invalidation ratio is selected as the victim block for garbage collection.
However, if a particular block exhibits excessive erasures compared to others, exceeding a predefined threshold, static wear leveling is initiated. During this phase, cold data from other blocks are migrated to the worn-out block, and to prevent this block from being selected again for garbage collection, the migrated worn-out blocks are segregated into a distinct pool called the migration pool. Blocks in the migration pool are exempt from being chosen as victims for garbage collection until their erase counts reach a level similar to those of other blocks, even if their invalidation ratios are elevated. This approach effectively delays the progression of worn-out blocks into bad blocks, thereby extending the operational lifespan of the SSD.

3.2. Data Structure

The Greedy-MP policy categorizes blocks that become candidates for victim selection during garbage collection, specifically blocks with no clean pages, into two pools: the normal pool and the migration pool. The normal pool comprises blocks with relatively low erase counts, while the migration pool contains blocks where excessive erasures have occurred, leading to the migration of cold data.
Initially, whenever a block exhausts its clean pages, it is inserted into the normal pool. However, if a specific block exhibits an excessively high erase count compared to others, cold data migration is performed on that block, and the migrated block is then inserted into the migration pool. Consequently, the migration pool remains empty until the first execution of migration. However, as the frequency of migration increases over time, the size of the migration pool gradually expands while the size of the normal pool diminishes.
Both the normal pool and the migration pool, as depicted in Figure 1, consist of N + 1 linked lists each, where N represents the number of pages within a block. Specifically, blocks with zero valid pages are linked to the header with index 0, while blocks with one valid page are linked to the header with index 1, and so forth. Thus, when blocks are inserted into each pool, they are added to the corresponding list based on the number of valid pages they contain. Additionally, as pages within a block become invalidated, the block is moved to a different list within the pool based on the number of valid pages. Managing each pool with individual lists based on the number of valid pages offers the advantage of quickly finding the block with the highest invalidation ratio during garbage collection [21].
In other words, during garbage collection, the system checks each pool starting from index 0 to see whether any blocks are linked to that index. For example, if a block is linked to index 0, it has no valid pages and becomes a candidate for the victim block. If no blocks are linked to index 0, the system then checks index 1, and if a block is linked to index 1, it is selected as a candidate for the victim block.
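A minimal C sketch of this structure is shown below, with hypothetical type and function names (block, pool, pool_insert, pool_min_valid) rather than SSDSim's actual code. Each pool is an array of N + 1 list heads, and the greedy lookup simply scans the heads from index 0 upward:

```c
#include <stddef.h>
#include <stdio.h>

#define PAGES_PER_BLOCK 576          /* N: pages per block in the modeled SSD */

struct block {
    int id;
    int valid_pages;                 /* determines which list the block sits on */
    int erase_count;
    struct block *next;              /* singly linked within one list */
};

struct pool {
    struct block *head[PAGES_PER_BLOCK + 1];   /* one list head per valid-page count, 0..N */
};

/* Link a block into the list that matches its current valid-page count. */
void pool_insert(struct pool *p, struct block *b)
{
    b->next = p->head[b->valid_pages];
    p->head[b->valid_pages] = b;
}

/* Greedy lookup: scan indices upward and return the first block found, i.e.,
 * the block with the fewest valid pages (highest invalidation ratio). */
struct block *pool_min_valid(struct pool *p)
{
    for (int i = 0; i <= PAGES_PER_BLOCK; i++)
        if (p->head[i])
            return p->head[i];
    return NULL;                     /* the pool is empty */
}

int main(void)
{
    static struct pool normal;       /* zero-initialized list heads */
    struct block a = { .id = 1, .valid_pages = 12, .erase_count = 3 };
    struct block b = { .id = 2, .valid_pages = 0,  .erase_count = 5 };
    pool_insert(&normal, &a);
    pool_insert(&normal, &b);
    printf("victim candidate: block %d\n", pool_min_valid(&normal)->id);   /* block 2 */
    return 0;
}
```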

3.3. Migration

The Greedy-MP policy triggers migration when a specific block’s erase count exceeds a threshold calculated using (1), which mirrors the formula employed in the PWL static wear leveling policy [20]. Equation (1) dictates that during the initial phase, when the average erase count of blocks is relatively low compared to the maximum erase count, migration occurs infrequently due to the threshold being set relatively high in relation to the average block erase count. However, as the average erase count of all blocks increases over time, the gap between the threshold and the average block erase count diminishes, leading to more frequent migration. Essentially, as the average erase count of all blocks rises, so does the likelihood of bad block occurrences, necessitating more frequent migration for wear leveling.
$$ \mathrm{threshold} = \frac{\mathrm{avg\_erase} + \mathrm{max\_erase}}{2} \tag{1} $$
During migration, it is necessary to identify cold data from other blocks and copy it to the worn-out block. However, since individual pages do not store the timestamps of their last modification, accurately determining cold data is challenging. Storing the timestamp for each page would require significant memory space. Therefore, we assume that the valid data in the block with the lowest erase count are cold data. A low erase count indicates that the invalidation ratio of the block has been consistently low for a time, implying that the valid pages in that block have remained unchanged for a long time. If multiple blocks have the same lowest erase count, the block with the lowest invalidation ratio is selected for migration. Once the target block for migration is determined, its valid pages are copied to the worn-out block. This migration process continues until the clean pages of the worn-out block are exhausted. In other words, the process of finding the target block for migration is repeated until the worn-out block has no clean pages. Figure 2 illustrates the pseudocode for the migration process.
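The sketch below restates Equation (1) and the cold-block choice in C; the structures and names (blk_info, should_migrate, pick_cold_block) are hypothetical and only illustrate the selection rules described above, not the actual implementation:

```c
#include <stdio.h>

struct blk_info {
    int erase_count;
    int valid_pages;                 /* more valid pages => lower invalidation ratio */
};

/* Equation (1): trigger migration once a block's erase count exceeds the
 * midpoint between the current average erase count and the maximum allowed. */
int should_migrate(int erase_count, double avg_erase, int max_erase)
{
    return erase_count > (avg_erase + max_erase) / 2.0;
}

/* Choose the source of cold data: the block with the lowest erase count,
 * breaking ties in favor of the block with more valid pages. */
int pick_cold_block(const struct blk_info *blk, int nblocks)
{
    int best = -1;
    for (int i = 0; i < nblocks; i++) {
        if (best < 0 ||
            blk[i].erase_count < blk[best].erase_count ||
            (blk[i].erase_count == blk[best].erase_count &&
             blk[i].valid_pages > blk[best].valid_pages))
            best = i;
    }
    return best;                     /* index of the migration source, or -1 if empty */
}

int main(void)
{
    struct blk_info blk[] = { { 40, 100 }, { 12, 200 }, { 12, 500 } };
    printf("migrate? %d, cold block: %d\n",
           should_migrate(80, 30.0, 100), pick_cold_block(blk, 3));  /* prints 1, 2 */
    return 0;
}
```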

3.4. Garbage Collection

The Greedy-MP policy follows a two-step process to select victim blocks. In the first step, akin to the greedy policy, it selects the block with the highest invalidation ratio in the normal pool as a candidate victim block, termed candidate1. Subsequently, in the second step, it searches for blocks in the migration pool with higher invalidation ratios than candidate1. If no such blocks exist, candidate1 is chosen as the victim block. Conversely, if blocks with higher invalidation ratios, referred to as candidates2, are found, their erase counts are compared to that of candidate1 and the average erase count of all blocks.
If the erase count of candidates2 is higher than that of candidate1 and also exceeds the average erase count of all blocks, candidate1 is still chosen as the victim block. This is because although candidates2 may have lower invalidation ratios, they are considered more worn-out relative to other blocks. However, if any candidate2 has been erased fewer times than candidate1 or if its erase count is lower than the average erase count of all blocks, it is selected as the victim block. This decision is based on the understanding that although these blocks are in the migration pool, they may not be significantly worn-out compared to other blocks due to migration performed relatively earlier.
Consequently, migrated worn-out blocks are not selected as victim blocks until their erase counts become lower than those of candidates in the normal pool or fall below the average erase count of all blocks, irrespective of their invalidation ratios. This contributes to delay in transitioning to bad blocks.
Once selected as victim blocks, they are utilized to handle write requests from the host after being erased. Once all clean pages are exhausted due to host write requests, the block is reinserted into the normal pool regardless of any prior migration, as it is no longer considered significantly worn-out compared to other blocks at that point. Figure 3 provides a pseudocode of the victim block selection process in garbage collection.
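The following C sketch mirrors the two-step selection described above and in Figure 3. The flat-array representation and the names (vblk, greedy_pick, select_victim) are simplifying assumptions made for brevity; the actual policy operates on the pools' linked lists:

```c
#include <stdio.h>

struct vblk {
    int id;
    int valid_pages;
    int erase_count;
};

/* Step 1: greedy candidate1, the block with the fewest valid pages in the normal pool. */
int greedy_pick(const struct vblk *pool, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (pool[i].valid_pages < pool[best].valid_pages)
            best = i;
    return best;
}

/* Step 2: a migration-pool block overrides candidate1 only if it has a higher
 * invalidation ratio (fewer valid pages) and is no longer considered worn out,
 * i.e., its erase count is below candidate1's or below the global average. */
const struct vblk *select_victim(const struct vblk *normal, int n_normal,
                                 const struct vblk *mig, int n_mig,
                                 double avg_erase)
{
    const struct vblk *c1 = &normal[greedy_pick(normal, n_normal)];
    const struct vblk *victim = c1;

    for (int i = 0; i < n_mig; i++) {
        if (mig[i].valid_pages >= c1->valid_pages)
            continue;                              /* not a better greedy choice */
        if (mig[i].erase_count < c1->erase_count || mig[i].erase_count < avg_erase)
            if (victim == c1 || mig[i].valid_pages < victim->valid_pages)
                victim = &mig[i];                  /* candidate2 wins */
    }
    return victim;
}

int main(void)
{
    struct vblk normal[] = { { 1, 30, 50 }, { 2, 10, 60 } };
    struct vblk mig[]    = { { 3,  2, 95 }, { 4,  5, 40 } };
    /* block 3 is more invalidated but still too worn out; block 4 is chosen instead */
    printf("victim: block %d\n", select_victim(normal, 2, mig, 2, 55.0)->id);
    return 0;
}
```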

3.5. Computation Overhead

The Greedy-MP policy first selects a candidate block for garbage collection (candidate1) from the normal pool using a greedy approach. This process starts by checking from index 0 and continues through the indices until a linked block is found, as seen in Figure 3. Once a block is linked, the search stops immediately, and that block is selected as candidate1. Therefore, when the total number of blocks is N, the worst-case time complexity for selecting candidate1 is the same as the greedy method, O(1).
However, when selecting candidate2 from the migration pool, after initially selecting a block using the greedy approach, the process must be repeated if the erase count of the selected block is greater than that of candidate1. As a result, in the worst case, all blocks in the migration pool may need to be checked. Although this is unlikely, it is theoretically possible for nearly all SSD blocks to belong to the migration pool, with their valid page count being equal to or less than that of candidate1. Therefore, the worst-case time complexity for selecting candidate2 is O(N).
Thus, the worst-case time complexity for the garbage collection operation in the Greedy-MP policy is the same as that of CB and WOGC, but it is inferior to the PWL policy, which uses a greedy approach and has a worst-case time complexity of O(1).

4. Performance Evaluation

4.1. Experimental Environment

To evaluate the lifespan extension effect of the Greedy-MP policy on SSDs, the SSDSim simulator [11] was employed. Since the SSDSim simulator does not implement wear-leveling policies, representative dynamic wear-leveling policies such as CB, WOGC, along with the PWL static wear-leveling policy, were implemented and compared with Greedy-MP.
The target SSD was modeled based on previous research [27,28,29] as follows (Figure 4): The SSD includes a controller that runs a page-mapping FTL and DRAM, where the FTL’s mapping table is stored. The DRAM is used solely for storing the mapping table and is assumed not to be used as a cache for NAND. The SSD is configured with four NAND chips, each independently connected via four parallel channels, which serve as the storage media. Each chip consists of two dies, and each die contains two planes. Each plane includes a total of 911 NAND blocks, with each block comprising 576 NAND pages. Each page is 8 KB in size. The NAND page read, page write, and block erase latencies are 45 µs, 700 µs, and 4 ms, respectively. The maximum erase count for each block is set at 100; once a block exceeds this limit, it becomes a bad block and can no longer be used. The SSD’s overprovisioning is set at 20%, meaning the actual storage capacity is 64.1 GB, while the usable capacity exported to the file system is 51.2 GB.
The SSDSim simulator receives read and write requests from the file system in sector units, which is the basic unit of read/write operations in a hard disk and is 512 bytes in size. Each request contains information such as the type of request (read/write), the starting sector number, and the number of sectors to be read or written. Using this information, the page-mapping FTL identifies the physical location (chip, die, plane, block, page) of the target sectors for each request. Since write requests are processed as out-of-place updates, they can be sent to any chip, die, and plane. To maximize the SSD’s parallel processing capability, a single write request is divided into multiple sub-write requests of page size, which are then distributed across the planes of the idle chip at that time [11]. Once the target plane for each sub-write request is determined, the data are written to a clean page within the plane, and garbage collection is triggered if the number of clean pages in any plane falls below 10% of the total.
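Two of these details can be made concrete in a short sketch: the splitting of a request into page-sized sub-writes and the 10% clean-page trigger for garbage collection. The code below is illustrative only and does not reproduce SSDSim's allocation logic:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SECTORS 16ULL           /* 8 KB page / 512 B sector */

/* Number of page-sized sub-writes a request of nsectors sectors is split into. */
uint64_t num_sub_writes(uint64_t nsectors)
{
    return (nsectors + PAGE_SECTORS - 1) / PAGE_SECTORS;
}

/* Garbage collection is triggered on a plane once its clean pages fall below
 * 10% of the plane's total pages. */
int gc_needed(uint64_t clean_pages, uint64_t total_pages)
{
    return clean_pages * 10 < total_pages;
}

int main(void)
{
    /* a 100-sector write becomes 7 page-sized sub-writes; a plane of 911 blocks
     * with 576 pages each needs GC once fewer than ~52,474 clean pages remain */
    printf("%llu %d\n", (unsigned long long)num_sub_writes(100),
           gc_needed(52473, 911ULL * 576));
    return 0;
}
```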
Microsoft Research Cambridge (MSRC) traces [30] were used as the I/O request workloads. Table 1 shows the format of the requests in each trace. The MSRC traces represent the starting position (“Offset”) and size (“Size”) of the request in bytes. Since the file system’s requests are generated in sector units, both the offset and size are multiples of 512. As mentioned earlier, since SSDSim receives requests in sector units, the “Offset” and “Size” expressed in bytes were converted into sector numbers and sector counts before being passed to SSDSim.
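The conversion itself is straightforward; the following sketch (with an assumed sample record, not SSDSim's code) shows how the byte-based offset and size map to a starting sector number and a sector count:

```c
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE 512ULL

/* Convert a trace record's byte offset into the starting logical sector number. */
uint64_t byte_offset_to_lsn(uint64_t offset_bytes)
{
    return offset_bytes / SECTOR_SIZE;
}

/* Convert a trace record's byte size into a sector count (trace sizes are
 * multiples of 512, so the rounding is defensive). */
uint64_t byte_size_to_sectors(uint64_t size_bytes)
{
    return (size_bytes + SECTOR_SIZE - 1) / SECTOR_SIZE;
}

int main(void)
{
    /* e.g., an 8 KB read at byte offset 10995532800 starts at sector 21475650
     * and spans 16 sectors */
    printf("%llu %llu\n",
           (unsigned long long)byte_offset_to_lsn(10995532800ULL),
           (unsigned long long)byte_size_to_sectors(8192ULL));
    return 0;
}
```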
Table 2 shows the detailed attributes of the traces. The “Entire Logical Space” refers to the size of the entire address space accessed in the trace, including any unaccessed areas (“holes”). The “Read Logical Space” denotes the size of the address space that has been read at least once. Similarly, the “Write Logical Space” denotes the size of the address space that has been written at least once. For example, in the cam01 trace, the size of the entire address space is 15.9 GB, of which only 2.1 GB has been accessed through read requests and 0.7 GB through write requests. In most traces, only a small portion of the entire address space has been written, except for the prn0 trace, where 12.4 GB of the address space was written at least once. Additionally, from the “R/W Ratio”, it can be observed that cam01 and weba exhibit a read-intensive pattern, while the remaining traces show a write-intensive pattern.

4.2. Static vs. Dynamic Wear Leveling

Before evaluating the effects of the Greedy-MP policy, we assessed the lifespan of SSDs for policies performing dynamic wear leveling alone, policies performing static wear leveling alone, and policies performing both existing dynamic and static wear leveling. The SSD was considered to have reached the end of its lifespan when overprovisioning fell below 10% due to the occurrence of bad blocks, at which point we compared the number of sectors processed by host write requests.
To assess the impact of cold data—rarely modified after writing—on each policy’s effectiveness, we first performed a sequential write initialization followed by three random write initializations on a portion (0% to 50%) of the SSD’s total exported capacity of 51.2 GB. For instance, during a 50% initialization, approximately 25.6 GB was sequentially written in 64-page units, starting from the first sector. Subsequently, three rounds of random write initialization were conducted on the same area. During each random write initialization, random writes were repeated until 25.6 GB, representing 50% of the SSD’s exported capacity, had been written. The target sector for each random write request was determined randomly, and the size of each write request was also randomly selected to range from 1 to 64 pages. Thus, a request could write as few as 1 page or as many as 64 pages at once.
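A sketch of this initialization workload is given below. It is a hypothetical driver, not the code used in the experiments: issue_write is a stub that only accounts for the written volume, and the exported capacity is taken as the 51.2 GB stated in Section 4.1.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SECTOR          512ULL
#define PAGE_SECTORS    16ULL                         /* 8 KB page */
#define EXPORTED_BYTES  (512ULL * 100 * 1000 * 1000)  /* 51.2 GB exported capacity */

static uint64_t total_written;                        /* stub: just account for the volume */

static void issue_write(uint64_t lsn, uint64_t nsectors)
{
    (void)lsn;                                        /* a real driver would hand this to the simulator */
    total_written += nsectors * SECTOR;
}

/* One sequential pass over the first `ratio` fraction of the exported capacity
 * in 64-page units, then three random-write passes of the same total volume,
 * each request 1..64 pages long. */
static void initialize(double ratio)
{
    uint64_t init_sectors = (uint64_t)(ratio * (double)(EXPORTED_BYTES / SECTOR));

    for (uint64_t lsn = 0; lsn + 64 * PAGE_SECTORS <= init_sectors; lsn += 64 * PAGE_SECTORS)
        issue_write(lsn, 64 * PAGE_SECTORS);          /* sequential initialization */

    for (int pass = 0; pass < 3; pass++) {            /* three random passes */
        uint64_t written = 0;
        while (written < init_sectors) {
            uint64_t len = (uint64_t)(1 + rand() % 64) * PAGE_SECTORS;
            uint64_t lsn = ((uint64_t)rand() % (init_sectors / PAGE_SECTORS)) * PAGE_SECTORS;
            if (lsn + len > init_sectors)
                len = init_sectors - lsn;             /* clamp to the initialized region */
            issue_write(lsn, len);
            written += len;
        }
    }
}

int main(void)
{
    initialize(0.5);                                  /* 50% initialization ratio */
    printf("initialization wrote %.1f GB\n", total_written / 1e9);
    return 0;
}
```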
As shown in Table 2, because the address space accessed by write requests in most traces is very small compared to the SSD’s exported capacity, the data written during the initialization process remains as cold data, with little to no modification afterward.
After completing the initialization process, each trace was used as input data to measure the SSD’s lifespan. Since the traces used in the experiments represent I/O data over a one-week period, running each trace only once does not result in a significant number of bad blocks on the SSD. Therefore, we repeatedly applied each trace as input until the SSD reached the end of its lifespan.
Figure 5 shows the lifespan of SSDs achieved by each policy in each trace. The Y-axis displays the relative lifespan, normalized to the value obtained when neither static nor dynamic wear leveling is performed; a result of 1 therefore means the lifespan is the same as when no wear leveling is applied. The X-axis represents the proportion of the storage capacity initialized with sequential and random writes before the experiment started. In the figure, CB and WOGC represent the results obtained when only dynamic wear leveling is performed. CB+PWL and WOGC+PWL depict the outcomes when the PWL policy is applied alongside dynamic wear leveling. PWL denotes the results achieved with static wear leveling alone.
The results indicate the following facts:
  • The effectiveness of wear leveling increases with a higher proportion of cold data. This trend is observed across all policies.
  • Dynamic wear leveling policies like CB and WOGC help extend the lifespan of SSDs. Compared to when no wear leveling is performed, WOGC extends the lifespan by up to 1.43–4.01 times and CB by up to 1.36–2.79 times, depending on the trace. However, when no initialization is performed or the initialization ratio is 10%, there were instances where the lifespan actually shortened compared to when no wear leveling was applied. This suggests that dynamic wear leveling can have a negative impact when there are no cold data or when the proportion of cold data is low.
  • Static wear leveling performed alone (PWL) also has a significant effect in extending the lifespan of SSDs. The lifespan of SSDs improved by up to 1.86–5.92 times, depending on the trace.
  • When combining dynamic wear leveling policies like CB and WOGC with the static wear leveling policy (CB+PWL and WOGC+PWL), the lifespan of SSDs could be further extended compared to using CB or WOGC alone. With PWL, CB extended the lifespan by up to 1.20–3.20 times depending on the trace, while WOGC extended it by up to 1.05–1.24 times. Notably, the lifespan extension effect of adding PWL was greater for CB than for WOGC. While WOGC was generally superior to CB when dynamic wear leveling was performed alone, combining with PWL made CB the superior policy. This suggests that evaluating dynamic wear leveling policies in isolation, without static wear leveling, may not be an appropriate basis for comparison.
  • While CB+PWL extended the lifespan of SSDs more than not using wear leveling or using dynamic wear leveling alone, the improvement was not significant compared to PWL. It was inferior to PWL in prn0 and proj0, and achieved similar lifespans in the other traces.
Based on the results from Figure 5, the following conclusions can be drawn. Firstly, to extend the lifespan of SSDs, static wear leveling must be performed. It was more effective to use static wear leveling in conjunction with dynamic wear leveling than to rely solely on dynamic wear leveling. Secondly, existing dynamic wear leveling policies, such as CB and WOGC, which were designed without considering static wear leveling, do not harmonize well with static wear leveling policies. In some traces, CB+PWL achieved similar or even inferior SSD lifespans compared to PWL, which does not employ dynamic wear leveling. This suggests the need for a dynamic wear leveling policy designed with static wear leveling in mind, such as Greedy-MP.

4.3. Evaluation of Greedy-MP

The performance of the Greedy-MP policy, a dynamic wear leveling policy designed with consideration for static wear leveling, is compared with the PWL, CB+PWL, and WOGC+PWL policies. Figure 6 compares the SSD lifespan achieved by each policy across different traces. The X-axis in each figure represents the proportion of storage space that was initialized with sequential and random writes before the measurement began, as explained in Section 4.2. The Y-axis shows the cumulative number of sectors processed by the host’s write requests until the SSD reached the end of its lifespan. As in the experiment described in Section 4.2, we considered the SSD to have reached the end of its lifespan when overprovisioning fell below 10% due to the occurrence of bad blocks. For each initialization ratio on the X-axis, after performing one sequential write initialization and three random write initializations, each trace was repeatedly applied as input to the simulator until the SSD reached the end of its lifespan.
The results demonstrate that the Greedy-MP policy is highly effective in extending the lifespan of SSDs. For all traces and the majority of initialization ratios, the Greedy-MP policy exhibits the longest SSD lifespan. Namely, it processed the most host write requests until the SSD reached the end of its lifespan. Compared to the PWL policy, which only applies static wear leveling, the Greedy-MP policy extended the lifespan by up to 1.07–1.72 times, depending on the trace. Additionally, compared to CB+PWL, the Greedy-MP policy extended the lifespan by up to 1.17–1.99 times. Furthermore, it extended the lifespan by up to 1.27–3.56 times compared to the WOGC+PWL policy. This indicates that the policy, which delays the occurrence of bad blocks by separately managing blocks that have been excessively erased and migrated, is effective in delaying bad block occurrence.
Meanwhile, in most policies, as the initialization ratio increases, the SSD lifespan initially increases but then begins to decrease. If the initialization ratio is too low, the proportion of cold data is also very low, making it difficult to effectively utilize the PWL policy, which consolidates cold data into worn-out blocks. Conversely, if the initialization ratio is too high, a significant portion of the SSD is filled with valid data, leading to higher valid page ratios of victim blocks during garbage collection. This reduces the number of clean pages reclaimed in each garbage collection process, resulting in more frequent garbage collections. Consequently, the erase count of blocks increases at a faster rate, ultimately reducing the SSD’s lifespan.
Figure 7 displays the average number of block erasures across all blocks at the point when the SSD lifespan terminates due to the occurrence of bad blocks. The results show that the Greedy-MP policy achieves an average block erase count close to 100 for all traces and the majority of initialization ratios. Considering that the maximum erase count per block is set to 100, it can be observed that the Greedy-MP policy maintains the block erase count almost evenly. As a result, as shown in Figure 6, it significantly extends the lifespan of the SSD compared to other policies. Among the six traces used in the experiment, cam01 and weba exhibit relatively read-intensive characteristics, while the remaining traces are write-intensive (as detailed in Section 4.1 and Table 2). The results, shown in Figure 6 and Figure 7, indicate that Greedy-MP consistently achieves near-maximum block erase counts across all traces and initialization ratios, leading to a longer SSD lifespan compared to other policies.
In contrast, the CB+PWL and WOGC+PWL policies exhibit very low average block erase counts under low initialization ratios, particularly in scenarios where writes occurred within a relatively small logical address space, including the read-intensive traces. This indicates ineffective wear leveling when the ratio of cold data is low: victim block selection policies like CB or WOGC failed to achieve wear leveling when cold data were not distinct. Even when the proportion of cold data was high, so that wear leveling was achieved and the average block erase count approached 100, these policies still achieved much shorter SSD lifespans than the Greedy-MP policy (Figure 6). The WOGC+PWL policy demonstrates this trend most prominently. This is because their low garbage collection efficiency leads to more frequent garbage collection.
The inefficiency of garbage collection in CB and WOGC-based policies is evident from Figure 8 and Figure 9. Figure 8 shows the average number of valid pages in victim blocks selected for garbage collection in the proj0 and weba traces. Both PWL and Greedy-MP policies, which are based on greedy selection of victim blocks, exhibit relatively low numbers of valid pages. In contrast, when CB and WOGC are used as victim block selection policies, the number of valid pages in victim blocks is significantly higher. Consequently, since fewer clean pages are generated per garbage collection process, more frequent garbage collections are triggered. As a result, as seen in Figure 9, the write amplification factor (WAF) increases. This means that despite processing the same amount of host write requests, more NAND page write operations are incurred internally within the SSD. With a higher number of valid pages in victim blocks, more valid pages need to be moved during each garbage collection operation, and the lower efficiency of garbage collection leads to more frequent garbage collections, thus increasing the WAF.
Moreover, in the weba trace, when the initialization ratio is low, meaning the proportion of cold data is low, the WAF of PWL is relatively high. This is due to the imbalance in block erase counts, leading to excessive migrations and ultimately premature bad block occurrence. Consequently, both the average block erase count and the lifespan of the SSD are shortened in this environment (as shown in Figure 6 and Figure 7). In contrast, the Greedy-MP policy exhibits the lowest number of valid pages in victim blocks, indicating higher efficiency in individual garbage collection operations, leading to a lower frequency of garbage collections. As a result, it achieves the lowest WAF, delays bad block occurrence, and achieves a relatively longer SSD lifespan.
Finally, Figure 10 compares the average response time of host requests across each trace. The Y-axis of the figure represents the average response time of input/output requests in milliseconds. The results indicate that the Greedy-MP policy consistently exhibits the shortest response time across all traces, alongside the PWL policy which uses a greedy approach for victim block selection. Conversely, the WOGC+PWL policy generally shows the longest response time, with the CB+PWL policy also demonstrating longer response times compared to the Greedy-MP policy. As seen in Figure 8, using WOGC or CB as victim block selection policies results in lower efficiency of garbage collection, leading to an increase in the average response time of input/output requests.
In conclusion, the proposed Greedy-MP policy shows no degradation in garbage collection efficiency compared to the PWL policy, and hence no significant performance degradation is observed. Additionally, it improves the WAF and enhances wear leveling, thereby extending the lifespan of the SSD.

5. Conclusions

We proposed the Greedy-MP wear leveling policy, which utilizes both static and dynamic wear leveling. Greedy-MP is based on PWL static wear leveling: if the accumulated erase count of a specific block exceeds a threshold, migration is performed to move valid pages from the youngest block (the block with the lowest erase count) to the worn-out block. Additionally, as a victim block selection policy for garbage collection, Greedy-MP employs a greedy policy but manages worn-out migrated blocks separately so that they are not selected as victims of garbage collection, thus delaying the conversion of worn-out blocks into bad blocks as much as possible.
Simulation results using the SSDSim simulator and MSRC server traces demonstrated the following:
  • Greedy-MP achieved the longest lifespan across all traces. Compared to policies using only PWL, it extended lifespan by up to 1.72 times, and compared to policies using PWL with CB dynamic wear leveling, it extended lifespan by up to 1.99 times, and compared to policies using PWL with WOGC dynamic wear leveling, it extended lifespan by up to 3.56 times.
  • In Greedy-MP policy, the average erase count of all blocks approached the maximum possible value when the SSD lifespan terminated, indicating that wear leveling had almost reached its limit.
  • Greedy-MP achieved the shortest average response time of input/output requests, indicating no significant performance degradation.
  • Using static wear leveling, like PWL, alongside dynamic wear leveling, like CB or WOGC, was more effective in extending SSD lifespan than using dynamic wear leveling alone.
  • Comparing policies using dynamic wear leveling alone (CB, WOGC) to those using only static wear leveling (PWL), the latter was largely more effective in extending SSD lifespan. However, combining both can enhance wear leveling degree, suggesting that it is desirable to utilize both static and dynamic wear leveling when designing new wear leveling policies.
A drawback of the Greedy-MP policy is that the worst-case time complexity for selecting victim blocks during garbage collection is O(N), which is higher compared to the PWL policy that utilizes a greedy approach. In future research, we aim to explore data structures and methods to improve this aspect. Additionally, we plan to analyze the quantitative effects of overprovisioning on extending the lifespan of SSDs.

Funding

This study was supported by the Research Program funded by SeoulTech (Seoul National University of Science and Technology).

Data Availability Statement

MSRC traces can be downloaded from http://iotta.snia.org/traces/block-io/388. The original SSDSim simulator code can be downloaded from https://github.com/huaicheng/ssdsim. The modified SSDSim code can be requested from the corresponding author.

Acknowledgments

The performance evaluation of this paper was conducted on the servers at the Supercomputing Center of the Seoul National University of Science and Technology.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Narayanan, D.; Thereska, E.; Donnelly, A. Migrating server storage to SSDs: Analysis of tradeoffs. In Proceedings of the Eurosys, Nuremberg, Germany, 31 March–3 April 2009; pp. 145–158. [Google Scholar]
  2. Pan, Y.; Dong, G.Q.; Wu, Q.; Zhang, T. Quasi-Nonvolatile SSD: Trading flash memory nonvolatility to improve storage system performance for enterprise applications. In Proceedings of the IEEE HPCA, New Orleans, LA, USA, 25–29 February 2012; pp. 179–188. [Google Scholar]
  3. Liu, R.; Yang, C. Optimizing NAND flash-based SSDs via retention relaxation. In Proceedings of the USENIX FAST, San Jose, CA, USA, 14–17 February 2012. [Google Scholar]
  4. Shin, I. Applying fast shallow write to short-lived data in solid-state drives. IEICE Electron. Express 2018, 13, 1–9. [Google Scholar] [CrossRef]
  5. Shin, J.Y.; Xia, Z.L.; Xu, N.Y.; Gao, R.; Cai, X.F.; Maeng, S.; Hsu, F.H. FTL design exploration in reconfigurable high-performance SSD for server applications. In Proceedings of the 23rd International Conference on Supercomputing (ICS ’09). Association for Computing Machinery, New York, NY, USA, 8–12 June 2012; pp. 338–349. [Google Scholar]
  6. Agrawal, N.; Prabhakaran, V.; Wobber, T.; Davis, J.D.; Manasse, M.S.; Panigrahy, R. Design tradeoffs for SSD performance. In Proceedings of the USENIX ATC, Berkeley, CA, USA, 14–19 June 2008; pp. 57–70. [Google Scholar]
  7. Ruan, X.J.; Alghamdi, M.I.; Jiang, X.F.; Zong, Z.L.; Tian, Y.; Qin, X. Improving write performance by enhancing internal parallelism of solid state drives. In Proceedings of the IEEE IPCCC, Austin, TX, USA, 1–3 December 2012; pp. 266–274. [Google Scholar]
  8. Park, S.Y.; Seo, E.; Shin, J.Y.; Maeng, S.; Lee, J. Exploiting internal parallelism of flash-based SSDs. IEEE Comput. Archit. Lett. 2010, 9, 9–12. [Google Scholar] [CrossRef]
  9. Winata, Y.; Kim, S.; Shin, I. Enhancing internal parallelism of solid-state drives while balancing write loads across dies. Electron. Lett. 2015, 24, 1978–1980. [Google Scholar] [CrossRef]
  10. Chen, F.; Lee, R.; Zhang, X.D. Essential roles of exploiting internal parallelism of flash memory based solid state drives in high-speed data processing. In Proceedings of the 2011 IEEE 17th HPCA, San Antonio, TX, USA, 12–16 February 2011; pp. 266–277. [Google Scholar]
  11. Hu, Y.; Jiang, H.; Feng, D.; Tian, L.; Luo, H.; Zhang, S. Performance impact and interplay of SSD parallelism through advanced commands, allocation strategy and data granularity. In Proceedings of the ACM ICS, Tucson, AZ, USA, 31 May–4 June 2011; pp. 96–107. [Google Scholar]
  12. Shin, I. Improving internal parallelism of solid state drives with selective multi-plane operation. Electron. Lett. 2018, 54, 64–66. [Google Scholar] [CrossRef]
  13. Jiao, Z.; Bhimani, J.; Kim, B. Wear leveling in SSDs considered harmful. In Proceedings of the ACM HotStorage, Virtual, 27–28 June 2022; pp. 72–78. [Google Scholar]
  14. Chiang, M.; Lee, P.; Chang, R. Managing flash memory in personal communication devices. In Proceedings of the ICSE, Singapore, 2–4 December 1997; pp. 177–182. [Google Scholar]
  15. Kawaguchi, A.; Nishioka, S.; Motoda, H. A flash-memory based file system. In Proceedings of the USENIX ATC, New Orleans, LA, USA, 16–20 January 1995; pp. 155–164. [Google Scholar]
  16. Matsui, C.; Arakawa, A.; Sun, C.; Takeuchi, K. Write order-based garbage collection scheme for an LBA scrambler integrated SSD. IEEE Trans. VLSI Syst. 2017, 2, 510–519. [Google Scholar] [CrossRef]
  17. Chen, Z.; Zhao, Y. DA-GC: A Dynamic Adjustment Garbage Collection Method Considering Wear-leveling for SSD. In Proceedings of the GLSVLSI, Virtual, China, 8–11 September 2020; pp. 475–480. [Google Scholar]
  18. Chang, Y.; Hsieh, J.; Kuo, T. Improving flash wear-leveling by proactively moving static data. IEEE Trans. Comput. 2010, 1, 53–65. [Google Scholar] [CrossRef]
  19. Murugan, M.; Du, D. Rejuvenator: A static wear leveling algorithm for NAND flash memory with minimized overhead. In Proceedings of the IEEE MSST, Denver, CO, USA, 23–27 May 2011; pp. 1–12. [Google Scholar]
  20. Chen, F.; Yang, M.; Chang, Y.; Kuo, T. PWL: A progressive wear leveling to minimize data migration overheads for NAND flash devices. In Proceedings of the DATE, Grenoble, France, 9–13 March 2015; pp. 1209–1212. [Google Scholar]
  21. Yoo, D.; Shin, I. Implementing greedy replacement scheme using multiple List for page mapping scheme. J. KIIT 2011, 17, 17–23. [Google Scholar]
  22. Ban, A. Flash File System. United States. US5404485A, 4 April 1995. [Google Scholar]
  23. Ban, A. Flash File System Optimized for Page-Mode Flash Technologies. United States. US5937425A, 10 August 1999. [Google Scholar]
  24. Kim, J.; Kim, J.M.; Noh, S.H.; Min, S.L.; Cho, Y. A space-efficient flash translation layer for compactflash systems. IEEE Trans. Consum. Electron. 2002, 2, 366–375. [Google Scholar] [CrossRef]
  25. Lee, S.W.; Park, D.J.; Chung, T.S.; Lee, D.H.; Park, S.; Song, H.J. A log buffer-based flash translation layer using fully-associative sector translation. ACM Trans. Embed. Comput. Syst. (TECS) 2007, 6, 18-es. [Google Scholar] [CrossRef]
  26. Shin, I.; Shin, Y.H. Active log pool for fully associative sector translation. IEICE Electron. Express 2014, 11, 20130942. [Google Scholar] [CrossRef]
  27. Hong, D.; Kim, M.; Cho, G.; Lee, D.; Kim, J. GuardedErase: Extending SSD lifetimes by protecting weak wordlines. In Proceedings of the USENIX FAST, Santa Clara, CA, USA, 22–24 February 2022; pp. 133–146. [Google Scholar]
  28. Kang, D.; Jeong, W.; Kim, C.; Kim, D.H.; Cho, Y.S.; Kang, K.T.; Ryu, J.; Kang, K.M.; Lee, S.; Kim, W.; et al. 256Gb 3b/Cell V-NAND flash memory with 48 stacked wl layers. In Proceedings of the IEEE ISSCC, San Francisco, CA, USA, 31 January–4 February 2016; pp. 130–132. [Google Scholar]
  29. Yamashita, R.; Magia, S.; Higuchi, T.; Yoneya, K.; Yamamura, T.; Mizukoshi, H.; Zaitsu, S.; Yamashita, M.; Toyama, S.; Kamae, N. 11.1 A 512Gb 3b/cell flash memory on 64-word-line-layer BiCS technology. In Proceedings of the IEEE ISSCC, San Francisco, CA, USA, 5–9 February 2017; pp. 196–197. [Google Scholar]
  30. Narayanan, D.; Donnelly, A.; Rowstron, A. Write off-loading: Practical power management for enterprise storage. ACM Trans. Storage 2008, 4, 1–23. [Google Scholar] [CrossRef]
Figure 1. The normal pool and the migration pool, each comprising N + 1 linked lists.
Figure 2. The pseudocode for the migration process.
Figure 3. The pseudocode of the victim block selection process in garbage collection.
Figure 4. The internal block diagram of the target SSD.
Figure 5. Comparing the SSD lifespan achieved by dynamic wear leveling and static wear leveling. (a) cam01; (b) hm0; (c) prn0; (d) proj0; (e) stga; (f) weba.
Figure 6. Comparison of SSD Lifespan (PWL, CB+PWL, WOGC+PWL, GREEDY-MP). (a) cam01; (b) hm0; (c) prn0; (d) proj0; (e) stga; (f) weba.
Figure 7. Comparison of Average Block Erase Count at SSD Lifespan Termination (PWL, CB+PWL, WOGC+PWL, GREEDY-MP). (a) cam01; (b) hm0; (c) prn0; (d) proj0; (e) stga; (f) weba.
Figure 8. Comparison of Average Valid Pages in Victim Blocks of Garbage Collection (PWL, CB+PWL, WOGC+PWL, GREEDY-MP). (a) proj0; (b) weba.
Figure 9. Comparison of WAF (Write Amplification Factor) (PWL, CB+PWL, WOGC+PWL, GREEDY-MP). (a) proj0; (b) weba.
Figure 10. Comparison of Average Response Time of Host Requests (PWL, CB+PWL, WOGC+PWL, GREEDY-MP). (a) cam01; (b) hm0; (c) prn0; (d) proj0; (e) stga; (f) weba.
Table 1. Trace Format.

| Time Stamp | Host Name | Disk Number | Type | Offset | Size | Response Time |
| 128166372002993000 | usr | 0 | Read | 10995532800 | 16384 | 30123 |
| 128166372010284000 | usr | 0 | Write | 3207667712 | 24576 | 82327 |
| ... | ... | ... | ... | ... | ... | ... |
Table 2. Trace Attributes.

| Trace | Entire Logical Space (GB) | Read Logical Space (GB) | Written Logical Space (GB) | R/W Ratio (%) | Total Read/Written Bytes (GB) |
| CAM-01-SRV-lvm0 (cam01) | 15.9 | 2.1 | 0.7 | 73/27 | 35.3/13.1 |
| hm_0 (hm0) | 13.9 | 2.0 | 1.7 | 33/67 | 10.0/20.5 |
| prn_0 (prn0) | 41.3 | 3.8 | 12.4 | 22/78 | 13.1/46.0 |
| proj_0 (proj0) | 16.2 | 3.2 | 1.8 | 6/94 | 9.0/144.3 |
| CAMRESSTGA01-lvm0 (stga) | 10.8 | 6.2 | 0.4 | 33/67 | 7.3/15.1 |
| CAMRESWEBA03-lvm0 (weba) | 33.9 | 7.7 | 0.7 | 60/40 | 17.4/11.7 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
