1. Introduction
Low-power embedded devices typically utilize NAND or NOR flash memory for storing boot records, bootloaders, firmware images, communication stacks, and other essential software components. The remaining free space in these memories can be used for storing device configurations, sensor data, or communication parameters. When devices collect a significant amount of data, an external flash memory might be necessary. Using a filesystem is essential for organizing these data within the device’s memory.
In addition to outlining key ideas and concepts, this article presents an improved version of the configurable flash filesystem (CFFS) [
1]. The following contributions are presented in the article:
Reliability features introduced into CFFS ensure the detection of corrupted data and help detect when a flash memory starts to fail.
Multi-memory support ensures that the CFFS is the only permanent storage needed in the device’s firmware, even if it has one or more external memories.
Real-time operating system portability makes CFFS compatible with modern embedded development frameworks that utilize RTOSes such as Zephyr RTOS or FreeRTOS.
The original design aimed to maximize memory utilization for sensor data, automatically organize data into files, and reduce flash wear by incorporating circular buffer files. The main drawback of the original design was the lack of built-in reliability mechanisms, as sensor data collection included its own integrity checks. However, when CFFS was tested in industrial applications [
2] on different devices, the absence of basic reliability features became a significant issue. The first challenge was adding reliability to CFFS while maintaining high memory utilization. The next two challenges stemmed from deployment on various devices (nRF52, ESP32, STM32) with external memories and real-time operating systems. CFFS needed to support multiple physical memories simultaneously while ensuring portability across operating systems.
The improved design of our filesystem library is efficient and adaptable for low-power devices. It utilizes pre-allocated file regions in its configuration that can be quickly calculated and reconfigured at runtime, allowing for flexible storage management. By minimizing metadata to only essential error codes, it reduces overhead while ensuring reliability. Additionally, its simple architecture inherently lowers flash wear and results in a small memory footprint, making it well-suited for resource-constrained environments.
The rest of the article is organized as follows.
Section 2 reviews related works that form the basis of our approach and discusses the trade-offs involved in designing filesystems for low-power devices with limited resources.
Section 3 presents the enhanced design of our configurable flash filesystem.
Section 4 provides an overview of the hardware and software used in the experiments presented in
Section 5. Finally,
Section 6 discusses the results and concludes the article.
2. Background and Related Work
Design of a filesystem for low-power embedded devices includes multiple challenges. The first challenge is related to the flash memory wear. Repeated erase cycles of memory cells cause gradual degradation, ultimately leading to cell failure. The authors of [
3] list erase cycle limits for single-level cell (SLC), multi-level cell (MLC), and triple-level cell (TLC) NAND flash memories. The SLC limit is ~100,000 erases, the MLC limit is 5000–10,000, and the TLC limit is 500–1000 erases per flash block. This requires filesystems designed for these memories to include wear leveling, a mechanism designed to spread flash block erases among blocks as equally as possible. Wear leveling can be active, where the filesystem handles the distribution of write operations across flash pages, or passive, where it is managed by the underlying flash memory controller. Another significant challenge is garbage collection (GC) [
4]. Some filesystems handle updates to existing data by copying the updated data to a new location and marking the old data as invalid. In such cases, the filesystem must implement an algorithm that periodically erases blocks containing invalid data to free up space for new writes. In many filesystems, wear leveling and GC are managed by the flash translation layer (FTL), which acts as an abstraction layer between the filesystem and raw flash memory. FTL not only distributes wear evenly but can also provide crash recovery mechanisms by maintaining metadata and journaling techniques to restore consistency after unexpected power loss [
5,
6,
7].
Flash memories used in electronic devices span a wide range of storage capacities. Based on this, filesystems for these memories can be split into two main groups:
High-level filesystems—larger capacity memories, ranging from hundreds of megabytes to terabytes;
Low-level filesystems—small-capacity memories, ranging from a few kilobytes to tens of megabytes.
The article primarily focuses on filesystems used in embedded devices. Depending on the application, these devices can greatly vary in memory capacity. High-capacity memories are common in consumer electronics, satellite systems, and automotive infotainment systems. On the other hand, low-capacity memories are typical in wireless sensor networks, environmental monitoring, and industrial automation.
2.1. High-Level Filesystems for Embedded Devices
Some embedded devices with large memory capacities can use standard filesystems such as NTFS [
8], ext4 [
9], Btrfs [
10], exFAT [
11], or even F2FS [
12], which are commonly found in personal computers or storage devices. However, many embedded applications require specific features, optimizations, or adjustments, making modified or custom filesystems a better choice.
Yet Another Flash Filesystem (YAFFS) [
13,
14] was one of the first filesystems designed directly for flash memory and usable in embedded devices. Key features include out-of-place updates, GC, wear leveling, and bad block management. Journaling Flash Filesystem 2 (JFFS2) [
15] is a log-structured filesystem with compression support. The main drawback of both YAFFS and JFFS2 was slower initialization [
16]. The Unsorted Block Image Filesystem (UBIFS) [
17] improved on this by introducing a tree-based structure with journaling, providing faster mount times, better scalability, and more efficient handling of large NAND flash storage [
18]. Flashlight [
19] is a filesystem that adopts a hybrid log-structured and block-based approach, optimizing performance and metadata management while maintaining power failure resilience. The authors of [
20] developed a filesystem that utilizes the filesystem control block (FSCB) to scan fixed memory blocks instead of scanning the entire memory during initialization. Experiments with this solution have shown faster initialization and better memory management than YAFFS. The Low-Overhead Flash Filesystem (LOFFS) [
21] was designed to improve performance and reduce memory footprint by using hybrid file structures. It uses a path tree for efficient directory management, and its improved version, the Extensible Low-Overhead Flash Filesystem (ELOFS) [
22], introduces active GC to further improve system performance and reduce flash memory wear. Some embedded devices may use ferroelectric RAM (FRAM) for permanent storage. The Intermittent Non-Volatile Memory Filesystem (iNVMFS) [
23] is designed to take advantage of FRAM features, ensuring data consistency and integrity during power failures. It minimizes memory overhead, optimizes writing performance, and extends memory lifespan. There are also various methods and algorithms that can be incorporated into existing filesystems to improve performance, GC, wear leveling, and reduce memory footprint [
24,
25,
26,
27].
2.2. Low-Level Filesystems for Embedded Devices
Many high-level filesystems for embedded devices optimize metadata and minimize memory footprint while ensuring data integrity, efficient wear leveling, and garbage collection. However, as memory capacity decreases, further compromises are needed to reduce both CPU computation requirements and power consumption. Minimizing the number of flash operations is essential in low-level filesystems, as each writing and erasing cycle contributes to both energy usage and flash wear [
28]. These filesystems may offer fewer features and use simplified wear leveling and GC algorithms. Some even attempt to eliminate GC and wear leveling entirely. Omitting directories and other high-level features, as well as using pre-allocated regions for various data types, is common in low-level flash filesystems. Reducing metadata further optimizes power efficiency, ensuring that a filesystem remains lightweight while extending the lifespan of the flash memory. The advantage of such filesystems is that their reduced feature set results in a smaller memory footprint in the compiled firmware, making them well-suited for small chips with limited resources.
The file allocation table (FAT) filesystem was originally designed for floppy disks with relatively small capacity. While it is a simple filesystem, porting it to flash memory requires additional features to compensate for the lack of wear leveling [
29]. The HT filesystem (HTFS) [
30] is a simpler filesystem that splits memory into three regions—filename block, macro block, and data block. The filename block stores file names, the macro block stores file metadata, and the data block stores raw data. The HTFS keeps track of operations and estimates the remaining lifetime of memory sectors. Slim filesystem (SlimFS) [
31] requires file regions to be defined in source code. Initialization is very fast, and SlimFS supports three file types—binary, text, and circular text files. A circular text file can be written indefinitely. When the reserved space is full, the oldest data are erased.
Software development kits (SDKs) and frameworks for embedded devices often include a library or a general-purpose filesystem. Flash data storage (FDS) [
32] is a library by Nordic Semiconductor that stores data sequentially in non-volatile flash memory. Each data entry is tagged with a file and record identifier. Deleted records are marked as invalid, and FDS includes a GC algorithm to reclaim memory. The little filesystem (LittleFS) [
33] and serial peripheral interface flash filesystem (SPIFFS) [
34] are filesystems that provide additional features, including directory support. They offer power-loss resilience, wear leveling, and GC, making them ideal for demanding applications on resource-constrained devices. However, this also results in increased metadata and more erase operations over time. This points to the need for our filesystem design—a lightweight, multiplatform filesystem that organizes data in flash memory with minimal metadata and passive wear-leveling and can be reconfigured at runtime. Based on these requirements, we developed CFFS, which is described in the following section. In
Section 5, we compare CFFS with FDS, SPIFFS, and LittleFS.
3. Configurable Flash Filesystem Design
CFFS is designed to maximize the utilization of assigned memory regions for application data. It is a logical library that can be easily ported to any hardware platform if the necessary platform-specific code for flash operations is provided. This section describes filesystem design essentials as well as extended features of the CFFS.
3.1. Filesystem Design Essentials
HTFS and SlimFS with pre-allocated memory regions have shown fast initialization. This is a very desirable feature for low-power sensor devices. CFFS adopts this but introduces a configuration that can be changed during runtime. The configuration is a simple byte array of variable length that stores file types, file identifiers, and requested file sizes in flash blocks. The configuration can be hardcoded, read from flash, or even include multiple versions, allowing the firmware to switch between them as needed. A typical use case is a sensor node capable of measuring temperature and humidity. The initial firmware includes a configuration with files for both measurements. If humidity data are no longer needed, the device can switch to a configuration with only the temperature file, and the space previously allocated for humidity values is merged into the temperature file.
Four different file types are supported, with each one designed for a specific purpose. The paper presenting the original design [
1] describes in detail how each file works, including a detailed description of the file region and content pointers. The supported file types are as follows:
Static file—single block rewritten each time a file is updated, suitable for sporadically changed configuration values;
Appendable file—a region that is sequentially filled with data until it reaches full capacity;
Circular file—a region that is sequentially filled with data, and when full capacity is reached, the oldest block is erased, allowing the file to be written into indefinitely;
Classic file—behaves as a normal file, can be written at any position, but should be used with caution, as frequent rewrites of the same block can cause faster wear.
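To make the configuration concept concrete, the sketch below shows one way the byte-array configuration and the four file types could be expressed in C for the temperature/humidity node described above. The enum values, structure layout, and identifiers are illustrative assumptions, not the actual CFFS interface, which packs these fields into a plain byte array.

```c
#include <stdint.h>

/* Hypothetical file-type constants mirroring the four CFFS file types. */
enum cffs_file_type {
    CFFS_FILE_STATIC     = 0,  /* single block, rewritten on update       */
    CFFS_FILE_APPENDABLE = 1,  /* filled sequentially until full          */
    CFFS_FILE_CIRCULAR   = 2,  /* oldest block erased when region is full */
    CFFS_FILE_CLASSIC    = 3   /* writable at any position                */
};

/* One configuration entry: file type, file ID, requested size in blocks. */
struct cffs_file_cfg {
    uint8_t type;
    uint8_t id;
    uint8_t blocks;   /* 0 = share the remaining space evenly */
};

/* Configuration for a sensor node storing settings, temperature, and humidity. */
static const struct cffs_file_cfg node_cfg[] = {
    { CFFS_FILE_STATIC,   0x01, 1 },   /* device configuration, one block   */
    { CFFS_FILE_CIRCULAR, 0x02, 0 },   /* temperature log, share free space */
    { CFFS_FILE_CIRCULAR, 0x03, 0 },   /* humidity log, share free space    */
};
```

Switching to the temperature-only configuration from the example would then amount to providing a shorter array in which the humidity entry is removed.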
CFFS can also split one physical memory region into multiple logical drives. In such a case, each drive includes its own configuration and files. This can be used to further logically organize the data within the memory. A structure containing drive information is allocated for each drive, along with structures for file information. The drive includes a pointer to an array of file information structures and a pointer to the flash memory API, which handles interaction with the flash memory. A basic diagram is shown in
Figure 1.
The drive configuration consists of information about its files. For each file, its type, ID, and requested size are specified, while the remaining file structure details are determined during drive initialization. A file’s requested size may be zero, indicating that it does not have a predefined storage allocation. During initialization, the drive first assigns memory regions to files with a nonzero requested size. The remaining space is then evenly distributed among files with a requested size of zero. Let $N$ denote the total number of files on a drive, and $B$ the total number of blocks available for the drive. The set of files with a specified size is denoted as $F$, while the set of files with size zero is denoted as $Z$. The requested size of file $i$ is represented as $S_i$. The total number of blocks allocated to files with a specified size, $B_F$, is given by the following:

$$B_F = \sum_{i \in F} S_i$$

The remaining blocks $B_Z$, available for files with a requested size of zero, are as follows:

$$B_Z = B - B_F$$

Each file in $Z$ receives at least $Q$ blocks from the remaining blocks $B_Z$:

$$Q = \left\lfloor \frac{B_Z}{|Z|} \right\rfloor$$

After assigning $Q$ blocks to each file in $Z$, there may still be $B_R$ remaining blocks due to division:

$$B_R = B_Z - Q \cdot |Z|$$

The $B_R$ blocks are distributed among the first $B_R$ files in $Z$, so the final number of assigned blocks $S_j$ for each file $j \in Z$ is as follows:

$$S_j = \begin{cases} Q + 1 & \text{if } j \text{ is among the first } B_R \text{ files in } Z, \\ Q & \text{otherwise.} \end{cases}$$
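At initialization time, this allocation can be computed in a few lines. The following sketch (reusing the hypothetical cffs_file_cfg entries from the earlier configuration example, and assuming the requested sizes never exceed the drive capacity) is illustrative rather than the actual CFFS code:

```c
/* Distribute B drive blocks among n files; cfg[i].blocks == 0 means "share
 * the remaining space". Writes the final block counts into sizes[]. */
static void cffs_assign_blocks(const struct cffs_file_cfg *cfg,
                               unsigned sizes[], unsigned n, unsigned B)
{
    unsigned fixed = 0, zero_count = 0;

    for (unsigned i = 0; i < n; i++) {          /* B_F = sum of requested sizes  */
        fixed += cfg[i].blocks;
        if (cfg[i].blocks == 0)
            zero_count++;
    }

    unsigned remaining = B - fixed;                          /* B_Z              */
    unsigned q = zero_count ? remaining / zero_count : 0;    /* Q                */
    unsigned r = zero_count ? remaining % zero_count : 0;    /* B_R              */

    for (unsigned i = 0; i < n; i++) {
        if (cfg[i].blocks != 0) {
            sizes[i] = cfg[i].blocks;           /* keep the requested size       */
        } else {
            sizes[i] = q + (r ? 1 : 0);         /* first B_R files get one extra */
            if (r)
                r--;
        }
    }
}
```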
These calculations define the file regions within the drive. The file content pointers are determined by reading the first word of each block in the assigned file region and identifying the end-of-file marker (an empty word). Once detected, the previous block is scanned to determine the exact boundary of the file content.
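A rough sketch of that scan is shown below; it assumes a 32-bit word, that an erased word reads back as 0xFFFFFFFF (typical for NOR flash), and an illustrative read_word helper standing in for the drive's flash API:

```c
#include <stdint.h>
#include <stddef.h>

#define ERASED_WORD 0xFFFFFFFFu   /* value of an erased word on typical NOR flash */

/* Find the first block in the file region whose first word is still erased,
 * then locate the last written word in the previous block. 'base', 'block_size',
 * and the returned address are in bytes. */
static size_t cffs_find_content_end(uint32_t (*read_word)(size_t addr),
                                    size_t base, size_t block_size,
                                    size_t block_count)
{
    size_t blk = 0;

    /* Step 1: find the end-of-file marker (first block with an empty first word). */
    while (blk < block_count &&
           read_word(base + blk * block_size) != ERASED_WORD)
        blk++;

    if (blk == 0)
        return base;                         /* file is empty                    */

    /* Step 2: scan the previous block word by word to find the exact boundary. */
    size_t addr = base + (blk - 1) * block_size;
    size_t end  = addr + block_size;
    while (addr < end && read_word(addr) != ERASED_WORD)
        addr += sizeof(uint32_t);

    return addr;                             /* first free (erased) word address */
}
```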
The first advantage of this design is the elimination of separate metadata for tracking file content across scattered blocks. Each file is assigned a contiguous block region. The user directly manages what is written into the allocated files, eliminating the need for invalidation and GC later. The second advantage is that when a device collects data indefinitely, the circular file type inherently provides passive wear leveling. No additional filesystem operations are required, other than those related to file operations.
3.2. Error Detection
The design described in the previous subsection does not include any mechanism to detect corrupted data unless it is included in the application data. As mentioned in the introduction, the challenge is to add reliability features while maintaining high memory utilization for application data.
A basic reliability option that any filesystem should include is write-then-read verification, which CFFS offers as a configuration setting. When enabled, CFFS reads the written data back immediately after a successful write operation and compares the write and read buffers to verify that the data were written correctly. Since read operations are generally fast in flash memories, this provides a good trade-off for the early detection of corrupted data.
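A minimal sketch of such write-then-read verification is given below; the function-pointer signatures stand in for the drive's flash API and are assumptions made for this illustration:

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

/* Write 'len' bytes at 'addr', then read them back and compare.
 * 'write' and 'read' stand in for the drive's flash API functions. */
static bool write_then_verify(int (*write)(size_t, const void *, size_t),
                              int (*read)(size_t, void *, size_t),
                              size_t addr, const void *data, size_t len)
{
    static uint8_t check[256];        /* verification buffer ('len' must fit) */

    if (len > sizeof(check))
        return false;
    if (write(addr, data, len) != 0)
        return false;
    if (read(addr, check, len) != 0)
        return false;

    /* A mismatch indicates a failed or corrupted write. */
    return memcmp(data, check, len) == 0;
}
```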
Beyond this basic feature, CFFS includes an abstraction layer through which an error-detection algorithm can be supplied. In this article, we use a simple cyclic redundancy check (CRC) algorithm; other algorithms, such as an MD5 checksum, can be provided instead. Two options can be configured for each CFFS drive:
File integrity check (
Figure 2a)—The file type includes a flag to specify whether the file should include an error check. If enabled, two blocks are reserved at the end of the file’s allocated memory region to store error check codes in a circular manner. The code is recalculated after each write or erase operation. Static files store a single error code value at the end of their data. This option is better for standard use.
Drive integrity check (
Figure 2b)—Error codes are tracked for all files and combined into a single value. Space is allocated at the end of the memory region allocated for the drive, and the final error code values are stored in it in a circular manner. The code is updated after each write or erase operation in any file. Error codes are stored in one place, but if an error is detected, the data within the whole drive can be flagged as corrupted. This option is designed to detect that the flash memory is starting to fail.
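Returning to the abstraction layer through which the error-detection algorithm is supplied, it can be as small as a single callback. The sketch below is illustrative: the typedef and field names are assumptions, and the CRC-32 routine is just one possible provider.

```c
#include <stdint.h>
#include <stddef.h>

/* Error-detection callback: computes a code over a data region. Any algorithm
 * with this shape (CRC, a truncated MD5 digest, ...) can be plugged in. */
typedef uint32_t (*cffs_errcode_fn)(const void *data, size_t len);

/* A simple CRC-32 (reflected, polynomial 0xEDB88320) as one possible provider. */
static uint32_t crc32_simple(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

/* The drive (or file) configuration would then reference the callback, e.g.
 * drive_cfg.errcode = crc32_simple, and CFFS would recalculate the code after
 * each write or erase operation. */
```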
When integrity checks are enabled, the blocks reserved for error codes may wear out faster than the rest of the file region blocks. Let $B_E$ be the number of blocks allocated for error codes, $B_S$ the block size, and $E_S$ the size of an error code. If a flash block can endure $E_{max}$ erase cycles before failure, the maximum number of records $R_{max}$ that can be written before the error check region starts to wear out is given by the following:

$$R_{max} = E_{max} \cdot \frac{B_E \cdot B_S}{E_S}$$
This formula determines the write endurance of a file with integrity checks enabled ($B_E = 2$ in this case). It can also be used to estimate the optimal number of flash pages to reserve for error codes when enabling the drive integrity check.
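For illustration (values assumed rather than measured): with $B_E = 2$ blocks of $B_S = 4096$ bytes, a 4-byte error code ($E_S = 4$), and an SLC endurance of $E_{max} = 100{,}000$ erase cycles, the error code region sustains $R_{max} = 100{,}000 \cdot (2 \cdot 4096)/4 \approx 2 \times 10^{8}$ record writes before it starts to wear out; with an MLC endurance of $10{,}000$ cycles, the figure drops to roughly $2 \times 10^{7}$.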
Although not completely resilient, these configurable options, along with the write-then-read feature, enable basic error detection for CFFS while not sacrificing too much space for error codes. The comparison between error detection turned off and on, as well as comparison with other filesystems, is shown in the ‘Results’ section.
3.3. Multiple Physical Memory Support
A device with multiple physical memories requires separate drivers for each type of memory. Flash operations for the internal memory of a system-on-a-chip (SoC) or microcontroller are typically managed through a non-volatile memory controller, while external memory may be accessed via a serial peripheral interface (SPI) bus. Each memory driver must include, at a minimum, functions to initialize, write, read, and erase the memory. These drivers are usually provided by the device or memory manufacturers.
To support multiple flash memories, every CFFS drive is associated with a specific set of flash API functions. The drive definition includes its physical address, size, and a pointer to the corresponding flash API. At its core, the flash API is a structure containing multiple function pointers that reference the appropriate flash memory driver functions for a given physical memory. This design ensures that each drive interacts exclusively with its assigned memory region, maintaining clear separation between different memory types.
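One possible shape of such a flash API and drive structure is sketched below; the type and field names are assumptions for illustration, not the published CFFS definitions:

```c
#include <stddef.h>

/* Function-pointer table binding a drive to one physical memory. */
struct cffs_flash_api {
    int (*init)(void);
    int (*write)(size_t addr, const void *data, size_t len);
    int (*read)(size_t addr, void *data, size_t len);
    int (*erase_block)(size_t addr);
};

/* One drive per memory region; each drive points at the API of its memory. */
struct cffs_drive_def {
    size_t base_addr;                   /* physical start of the region */
    size_t size;                        /* region size in bytes         */
    const struct cffs_flash_api *api;   /* driver for this memory       */
    /* ... file table, configuration, etc. ...                          */
};

/* Example: internal (NVMC-backed) flash and an external SPI NOR flash,
 * each wired to its own driver functions defined elsewhere in the firmware. */
extern const struct cffs_flash_api internal_flash_api;
extern const struct cffs_flash_api spi_nor_flash_api;
```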
The basic flash API interaction between firmware, CFFS, and flash memories is shown in
Figure 3. An example of how CFFS can be configured to split memory regions of multiple memories into drives is shown in
Figure 4.
3.4. Filesystem Portability
During the development of the CFFS, we have identified three main concepts for the design of an easily portable filesystem:
The first concept allows developers to pass custom data through all filesystem layers, enabling their use when the filesystem calls back to the firmware. These data may include function pointers, custom parameters required by the flash API, or callback functions executed after flash operations complete. The filesystem should not modify these data. For devices without an operating system, it is very important that the filesystem yields CPU control while waiting for flash operations to complete. Passing custom data makes it possible to give control back to the firmware on any hardware platform while the filesystem is busy.
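A minimal sketch of this concept, assuming an opaque user pointer carried alongside the flash API callbacks (all names here are illustrative):

```c
#include <stddef.h>

/* Opaque user context handed to the filesystem at drive setup and passed back,
 * untouched, to every flash API call and completion callback. */
struct flash_api_with_ctx {
    int  (*write)(size_t addr, const void *data, size_t len, void *user);
    void (*on_complete)(void *user);    /* lets the firmware yield or sleep    */
    void *user;                         /* firmware-owned data, never modified */
};

/* Bare-metal example: the context carries a flag polled in the main loop,
 * so the CPU can service other tasks while a flash operation is in progress. */
struct my_ctx {
    volatile int flash_busy;
};
```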
The second concept is essential for ensuring filesystem portability. The filesystem relies on functions that must be provided for each memory it interacts with. As mentioned in
Section 3.1 and shown in
Figure 1, each drive is assigned a set of function pointers for flash memory operations. During drive initialization, these pointers are mapped to the flash API functions of the corresponding physical memory. In object-oriented languages like C++, this can be implemented using abstract classes.
When designing a filesystem, it is good practice to align its functions with those of standard filesystems, both functionally and in naming conventions. The third concept aims to improve portability to real-time operating systems. One example is the Zephyr RTOS filesystem API, a POSIX-like interface with many functions following the naming conventions of high-level filesystems. CFFS is designed so that a simple wrapper can align it with the Zephyr RTOS. For example, calling the ‘fs_mount’ function of the Zephyr RTOS API will trigger ‘cffs_init_drive’, ‘cffs_compare_drive_config’, and, optionally, ‘cffs_configure_drive’ from the CFFS API. Calling Zephyr’s ‘fs_write’ function will call the similarly named ‘cffs_write’ function of the CFFS API. Unsupported features (directories, symbolic links, etc.) will result in error codes. The basic interaction between Zephyr RTOS and CFFS is shown in
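A simplified wrapper illustrating this mapping is sketched below. The cffs_* names are those mentioned above, but their prototypes are assumptions, and the Zephyr-side signatures are abbreviated; a real Zephyr backend would register its callbacks in the filesystem API structures Zephyr provides.

```c
#include <stdint.h>
#include <stddef.h>
#include <errno.h>

struct cffs_drive;   /* opaque drive handle; CFFS itself is closed source */

/* Hypothetical prototypes for the CFFS functions named in the text. */
int cffs_init_drive(struct cffs_drive *drive);
int cffs_compare_drive_config(struct cffs_drive *drive, const void *cfg);
int cffs_configure_drive(struct cffs_drive *drive, const void *cfg);
int cffs_write(struct cffs_drive *drive, uint8_t file_id,
               const void *data, size_t len);

/* Wrapper behind Zephyr's fs_mount: initialize the drive and, if the stored
 * configuration differs from the expected one, reconfigure it. */
int zephyr_cffs_mount(struct cffs_drive *drive, const void *expected_cfg)
{
    int err = cffs_init_drive(drive);
    if (err)
        return err;
    if (cffs_compare_drive_config(drive, expected_cfg) != 0)
        err = cffs_configure_drive(drive, expected_cfg);
    return err;
}

/* Wrapper behind Zephyr's fs_write: forwards directly to cffs_write. */
int zephyr_cffs_write(struct cffs_drive *drive, uint8_t file_id,
                      const void *data, size_t len)
{
    return cffs_write(drive, file_id, data, len);
}

/* Unsupported operations (directories, symbolic links, ...) return an error. */
int zephyr_cffs_mkdir(struct cffs_drive *drive, const char *path)
{
    (void)drive; (void)path;
    return -ENOTSUP;
}
```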
Figure 5.
4. Materials and Methods
The CFFS [
1] is a library designed to manage the storage, organization, reading, and erasing of data in standard NOR and NAND flash memories commonly used in low-power devices. It automatically allocates memory regions for different files while tracking their contents and sizes. The filesystem is closed-source, but its design and concepts are described in detail in [
1] and in this article. In
Section 5, we compare it to three other filesystems:
LittleFS—available on GitHub [
33]—branch master, commit ‘d01280e’;
SPIFFS—available on GitHub [
34]—branch master, commit ‘0b2e129’;
FDS [
32]—part of the nRF5 SDK by Nordic Semiconductor; we used version v17.1.0.
Both LittleFS and SPIFFS were simulated on a personal computer; they both include emulation of block devices.
FDS is a library included in the Nordic Semiconductor nRF5 SDK and is highly integrated with other drivers and libraries within the SDK, making it difficult to simulate on a personal computer. For testing FDS, we recommend using Nordic Semiconductor development kits such as the PCA10040 or PCA10056. In our experiments with the FDS, we used SDK version v17.1.0 and the PCA10056 development kit. The PCA10056 was selected because it offers 1 MB of flash storage, whereas the PCA10040 has only 0.5 MB. Since our experiments in
Section 5 require 0.5 MB solely for the filesystem, the PCA10056 was the more suitable choice. Our test firmware was based on the FDS example located in the ‘examples/peripheral/flash_fds’ directory.
As mentioned, CFFS is a logical library that can be ported to various hardware platforms. It can also be simulated on a personal computer by allocating a RAM region to emulate the filesystem, defining the flash block size, and setting the smallest writable unit size (word size). The virtual flash API for this region uses standard functions like ‘memset’ and ‘memcpy’ from the C Standard Library, defined in the ’string.h’ header. The experiments in
Section 5 were simulated similarly to LittleFS and SPIFFS. To provide an example, during a write operation, CFFS calls the ‘platform_write’ function from the flash API assigned to the drive. The flash API, implemented for each hardware platform and its physical memory, then calls the respective underlying write function. For example:
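A sketch of what the simulated flash API behind ‘platform_write’ might look like is given below; the buffer name, signatures, and error handling are assumptions made for this illustration, while the use of ‘memcpy’ and ‘memset’ follows the description above:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define SIM_FLASH_SIZE (512 * 1024)          /* 0.5 MB emulated region        */
static uint8_t sim_flash[SIM_FLASH_SIZE];    /* RAM standing in for flash     */

/* 'platform_write' for the PC simulation copies data into the RAM region.
 * On real hardware this function would call the SoC or SPI flash driver. */
static int platform_write(size_t addr, const void *data, size_t len)
{
    if (addr + len > SIM_FLASH_SIZE)
        return -1;
    memcpy(&sim_flash[addr], data, len);     /* simulated write via memcpy    */
    return 0;
}

/* Erase resets one block of the emulated region to the erased state (0xFF). */
static int platform_erase_block(size_t addr, size_t block_size)
{
    if (addr + block_size > SIM_FLASH_SIZE)
        return -1;
    memset(&sim_flash[addr], 0xFF, block_size);
    return 0;
}
```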
The resulting contents of the flash memory or emulated region will be identical, provided that the memory or emulated region has the same size, block size, and smallest writable unit.
The data used for the written data chunks in
Section 5 were randomly generated. Memory was considered full when the filesystem returned the corresponding error code upon a write request. In LittleFS, GC was performed automatically after file deletion, whereas in SPIFFS and FDS, it had to be triggered manually. The number of erased blocks was tracked using the emulated block devices in LittleFS and SPIFFS. For FDS, we modified its underlying library, fstorage, to count and log the number of erase operations executed.
5. Results
In this section, we compare CFFS to three other filesystems described in
Section 2.2: LittleFS, SPIFFS, and FDS. The experiments focus on memory utilization for application data and flash memory wear by tracking the number of erase operations. A 0.5 MB flash memory region is used for testing, consisting of 128 blocks, each 4096 bytes in size. We also compare the performance of CFFS with and without error check codes enabled.
5.1. Filling Memory with Data
In the following experiments, we continuously fill the flash region with fixed-size data chunks. We tested three chunk sizes: 8 bytes, 768 bytes, and 6912 bytes. Additionally, we evaluated two scenarios: one with a single allocated file and another with 10 files. Once the filesystem reports full memory, the number of data chunks successfully written is calculated. Memory utilization is then determined as the ratio of the total data written to the total memory capacity. Final memory utilization results for the single-file tests are shown in
Figure 6, while results for the 10-file tests are presented in
Figure 7.
In the single-file tests, CFFS achieves the best memory utilization with both configurations, reaching more than 97% of the memory utilized for application data. In the 10-file tests, the utilization of CFFS with error checks (marked as CFFS CRC) drops because more blocks are allocated for error codes. Utilization for smaller data chunks is still very good. It is worth noting that FDS does not support data chunks greater than the block size, so while its utilization grows significantly with data chunk size, it cannot store larger data chunks.
The decrease in memory utilization when multiple files require error checking is compensated for by reduced memory wear. As shown in the following tables, CFFS CRC performs significantly fewer erase operations than LittleFS and SPIFFS, even avoiding erasures entirely during tests with larger data chunks.
Table 1 presents the number of erase operations in single-file tests, while
Table 2 shows the results for 10-file tests. All filesystems perform fewer erasures with larger data chunks. For LittleFS and SPIFFS, the chunk size has a significant impact on the number of erase operations. In contrast, FDS and CFFS do not perform any erasures, while CFFS with error checking enabled (CFFS CRC) erases only a few pages when using 8-byte chunks, as it recalculates the error code after each write.
5.2. Writing a Large Amount of Data
The following experiment measures the number of flash erase operations performed by each evaluated filesystem while continuously writing 3 MB of data in 8-byte and 768-byte chunks. The experiment with 6912-byte chunks was not conducted, as FDS does not support this size. In CFFS, a circular file was used, allowing older data to be automatically overwritten when the memory became full. In other filesystems, once the memory was full, all chunks were deleted or invalidated, triggering garbage collection. The results in
Table 3 demonstrate how CFFS minimizes erase operations during prolonged device operation. While LittleFS and SPIFFS may distribute erase operations more evenly, they execute an excessive number of them. Although FDS performs fewer erase operations for 768-byte chunks, it is less efficient for smaller data chunks, which are common in sensor nodes.
5.3. Full Drive Integrity Check
This experiment demonstrates how the space allocated for error codes affects memory wear when full CFFS drive integrity checks are enabled. After each write or erase operation, the error code is recalculated and written. If too few blocks are allocated for error codes, these blocks can wear out quickly. In the experiment, we allocated between 2 and 107 blocks for error codes and compared the maximum possible memory utilization and the number of erased pages. The results are shown in
Figure 8. When an excessive number of blocks—between 107 and 43—was allocated for error codes, no erase operations were required while filling the memory. However, data memory utilization was limited to only 66%, making this approach inefficient. When fewer than 43 blocks were allocated for error codes, the number of erase operations began increasing linearly. At the lowest allocation of two blocks, memory utilization reached 98%, but both error code blocks were erased ~30 times (61 erases total). The size of the error code region should be carefully chosen based on the device’s intended use.
5.4. Conclusions
This section presented multiple experiments evaluating CFFS, both with and without error checking, in terms of memory utilization and flash wear. While CFFS offers a more limited feature set than LittleFS and SPIFFS, it significantly reduces hardware requirements. Instead of traditional GC algorithms, CFFS uses circular buffers with automatic deletion of the oldest data, providing an alternative approach to GC and passive wear-leveling. This reduces the computational overhead compared to filesystems with GC and active wear-leveling. Frequent data erasure and block movement not only consume CPU time but also accelerate flash wear.
Table 4 compares the tested filesystems across key characteristics, including garbage collection, wear-leveling, memory utilization, and flash wear. The absence of GC and the use of passive wear leveling in CFFS reduce its CPU time and energy consumption while maintaining high memory utilization and minimizing flash wear.
6. Discussion
This article focuses on low-level filesystems commonly used in resource-constrained, low-power devices. These devices typically have limited memory allocated for data, making efficient memory utilization crucial. The article discusses the challenges involved in developing such filesystems, along with key ideas and concepts to consider during the development process. By following the ideas and concepts presented in this article, the resulting filesystem implementation will be efficient in terms of memory utilization, portable across different hardware platforms, and lightweight, making it well-suited for low-power devices. This approach ensures that the filesystem meets both performance and flexibility requirements without unnecessary overhead.
The CFFS presented in this article focuses on maximizing memory utilization and reducing flash memory wear. It includes an optional error detection feature that can be configured per file or per drive. The experiments highlight its advantages compared to other tested filesystems, along with some trade-offs. The main advantages include very high memory utilization and significant reduction in erase operations, particularly for devices with long lifetime requirements. The simplicity of the filesystem results in a smaller memory footprint, as it lacks high-level features such as directories, symbolic links, and access control, making it more suitable for small chips with limited memory capacity. One trade-off is reduced memory utilization when reliability mechanisms are enabled, though these can be activated only for files that require it. The error detection mechanisms are simple, which means faster implementation and negligible decrease in performance, but the filesystem is not completely resilient.
Overall, CFFS represents an improvement over filesystems that allocate memory regions for data in their source code, as it can be reconfigured at runtime. It also eliminates the need for metadata other than error codes, avoids the requirement for garbage collection, and incorporates wear leveling natively in its circular files. The CFFS drive design, utilizing an abstract flash API with function pointers, enables the filesystem to support multiple physical memories on the same device. The filesystem provides standard file-related functions, ensuring seamless integration with real-time operating systems. This design is well-suited for sensor nodes and gateways in wireless sensor networks, data collection and analysis in edge computing devices, data storage for configurations of control systems, and many other applications in embedded systems and industrial automation.
Author Contributions
Conceptualization, O.K.; software, O.K.; resources, M.B.; writing—original draft preparation, O.K.; writing—review and editing, P.M.; supervision, L.M.; project administration, G.G.; funding acquisition, P.M. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by M-ERA.NET 3/2021/295/BattPor “Inline evaluation of Li-ion battery electrode porosity using machine learning algorithms” and the Slovak Scientific Grant Agency VEGA under the contract 2/0135/23 “Intelligent sensor systems and data processing”. Funded by the EU NextGenerationEU through the Recovery and Resilience Plan for Slovakia under the project No. 09I05-03-V02-00055.
Data Availability Statement
The design and concepts underlying the development of CFFS are published in [
1] and in this article. The filesystem itself is closed source. The other tested filesystems are described and referenced in the ‘Materials and Methods’ section. The data chunks used in the experiments were randomly generated.
Acknowledgments
R-DAS provided the framework for the research, development, and testing of the presented solution, as well as all the support needed for the successful achievement of our goals.
Conflicts of Interest
O.K., M.B., G.G. and L.M. were employed by the company R-DAS, s.r.o. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
API | Application programming interface |
Btrfs | B-tree filesystem |
CFFS | Configurable flash filesystem |
CPU | Central processing unit |
CRC | Cyclic redundancy check |
ELOFS | Extensible low-overhead flash filesystem |
exFAT | Extended file allocation table |
ext4 | Fourth extended filesystem |
F2FS | Flash-Friendly Filesystem |
FAT | File allocation table |
FDS | Flash data storage |
FRAM | Ferroelectric random-access memory |
FSCB | Filesystem control block |
FTL | Flash translation layer |
GC | Garbage collection |
HTFS | HT filesystem |
iNVMFS | Intermittent non-volatile memory file system |
JFFS2 | Journaling Flash Filesystem version 2 |
LittleFS | Little filesystem |
LOFFS | Low-overhead flash filesystem |
MD5 | Message digest algorithm |
MLC | Multi-level cell |
NAND | Flash memory type for high-density storage and sequential access |
NOR | Flash memory type with fast random access, often used for code execution |
NTFS | New technology filesystem |
POSIX | Portable operating system interface |
QSPI | Quad serial peripheral interface |
RAM | Random access memory |
RTOS | Real-time operating system |
SDK | Software development kit |
SLC | Single-level cell |
SlimFS | Slim filesystem |
SoC | System-on-a-chip |
SPI | Serial peripheral interface |
SPIFFS | Serial peripheral interface flash filesystem |
TLC | Triple-level cell |
UBIFS | Unsorted Block Image Filesystem |
YAFFS | Yet another flash filesystem |
References
- Kachman, O.; Gyepes, G.; Balaz, M.; Majer, L.; Malik, P. Configurable Flash Filesystem for Low-Power Sensor Devices. In Proceedings of the 2021 International Conference on Engineering and Emerging Technologies (ICEET), Istanbul, Turkey, 27–28 October 2021. [Google Scholar] [CrossRef]
- Kachman, O.; Findura, J.; Balaz, M.; Gyepes, G.; Majer, L.; Vojs, M. Intelligent Monitoring System for Universal Data Collection and Analysis. In Proceedings of the 2022 32nd International Conference Radioelektronika (RADIOELEKTRONIKA), Kosice, Slovakia, 21–22 April 2022. [Google Scholar] [CrossRef]
- Yan, H.; Huang, Y.; Zhou, X.; Lei, Y. An Efficient and Non-Time-Sensitive File-Aware Garbage Collection Algorithm for NAND Flash-Based Consumer Electronics. IEEE Trans. Consum. Electron. 2019, 65, 73–79. [Google Scholar] [CrossRef]
- Yang, M.-C.; Chang, Y.-M.; Tsao, C.-W.; Huang, P.-C.; Chang, Y.-H.; Kuo, T.-W. Garbage Collection and Wear Leveling for Flash Memory: Past and Future. In Proceedings of the 2014 International Conference on Smart Computing, Hong Kong, China, 3–5 November 2014. [Google Scholar] [CrossRef]
- Chae, S.-J.; Mativenga, R.; Paik, J.-Y.; Attique, M.; Chung, T.-S. DSFTL: An Efficient FTL for Flash Memory Based Storage Systems. Electronics 2020, 9, 145. [Google Scholar] [CrossRef]
- Alahmadi, A.; Chung, T.S. Crash Recovery Techniques for Flash Storage Devices Leveraging Flash Translation Layer: A Review. Electronics 2023, 12, 1422. [Google Scholar] [CrossRef]
- Tran, V.D.; Park, D.-J. A Survey of Data Recovery on Flash Memory. Int. J. Electr. Comput. Eng. 2020, 10, 360–376. [Google Scholar] [CrossRef]
- Karresand, M.; Dyrkolbotn, G.O.; Axelsson, S. An Empirical Study of the NTFS Cluster Allocation Behavior Over Time. Forensic Sci. Int. Digit. Investig. 2020, 33, 301008. [Google Scholar] [CrossRef]
- Mathur, A.; Cao, M.; Bhattacharya, S.; Dilger, A.; Tomas, A.; Vivier, L. The New ext4 Filesystem: Current Status and Future Plans. In Proceedings of the Linux Symposium, Ottawa, ON, Canada, 2007. [Google Scholar]
- Rodeh, O.; Bacik, J.; Mason, C. BTRFS: The Linux B-Tree Filesystem. ACM Trans. Storage 2013, 9, 9. [Google Scholar] [CrossRef]
- Heeger, J.; Yannikos, Y.; Steinebach, M. An Introduction to the exFAT File System and How to Hide Data Within. J. Cyber Secur. Mob. 2023, 11, 239–264. [Google Scholar] [CrossRef]
- Lee, C.; Sim, D.; Hwang, J.-Y.; Cho, S. F2FS: A New File System for Flash Storage. In Proceedings of the 13th USENIX Conference on File and Storage Technologies (FAST’15), Santa Clara, CA, USA, 16–19 February 2015; pp. 273–286. [Google Scholar]
- Manning, C. How Yaffs Works. 2017. Available online: https://yaffs.net/sites/default/files/downloads/HowYaffsWorks.pdf (accessed on 6 June 2024).
- Jung, J.; Jang, J.; Cho, Y.; Han, H.; Jeon, G.; Cho, S.-J.; Jang, M.; Kim, J.Y. A Fast Mount Mechanism for YAFFS2. In Proceedings of the 27th Annual ACM Symposium on Applied Computing (SAC’12), Trento, Italy, 26–30 March 2012; pp. 1791–1795. [Google Scholar] [CrossRef]
- Woodhouse, D. JFFS2: The Journalling Flash File System, Version 2. Available online: https://sourceware.org/jffs2/jffs2.pdf (accessed on 27 March 2025).
- Park, S.H.; Lee, T.H.; Chung, K.D. A Flash File System to Support Fast Mounting for NAND Flash Memory Based Embedded Systems. In Embedded Computer Systems: Architectures, Modeling, and Simulation; Vassiliadis, S., Wong, S., Hämäläinen, T.D., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4017. [Google Scholar] [CrossRef]
- Schierl, A.; Schellhorn, G.; Haneberg, D.; Reif, W. Abstract Specification of the UBIFS File System for Flash Memory. In FM 2009: Formal Methods; Cavalcanti, A., Dams, D.R., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5850, pp. 187–202. [Google Scholar] [CrossRef]
- Olivier, P.; Boukhobza, J.; Senn, E. On Benchmarking Embedded Linux Flash File Systems. SIGBED Rev. 2012, 9, 43–47. [Google Scholar] [CrossRef]
- Kim, J.; Shim, H.; Park, S.-Y.; Maeng, S.; Kim, J.-S. FlashLight: A Lightweight Flash File System for Embedded Systems. ACM Trans. Embed. Comput. Syst. 2012, 11, 18. [Google Scholar] [CrossRef]
- Cho, Y.J.; Jeon, J.W. Design of an Efficient Initialization Method of a Log-Based File System with Flash Memory. In Proceedings of the 2008 6th IEEE International Conference on Industrial Informatics, Daejeon, South Korea, 13–16 July 2008. [Google Scholar] [CrossRef]
- Zhang, R.; Liu, D.; Chen, X.; She, X.; Yang, C.; Tan, Y.; Shen, Z.; Shao, Z. LOFFS: A Low-Overhead File System for Large Flash Memory on Embedded Devices. In Proceedings of the 2020 57th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 20–24 July 2020. [Google Scholar] [CrossRef]
- Zhang, R.; Liu, D.; Chen, X.; She, X.; Yang, C.; Tan, Y.; Shen, Z.; Shao, Z.; Qiao, L. ELOFS: An Extensible Low-Overhead Flash File System for Resource-Scarce Embedded Devices. IEEE Trans. Comput. 2022, 71, 2327–2340. [Google Scholar] [CrossRef]
- Wu, Y.-J.; Kuo, C.-Y.; Chang, L.-P. iNVMFS: An Efficient File System for NVRAM-Based Intermittent Computing Devices. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 41, 3638–3649. [Google Scholar] [CrossRef]
- Ou, Y.; Wu, X.; Xiao, N.; Liu, F.; Chen, W. HIFFS: A Hybrid Index for Flash File System. In Proceedings of the 2015 IEEE International Conference on Networking, Architecture and Storage (NAS), Boston, MA, USA, 6–7 August 2015. [Google Scholar] [CrossRef]
- Jiang, J.; Yang, M.; Qiao, L.; Wang, T.; Liu, H.; Nian, J. DHIFS: A Dynamic and Hybrid Index Method with Low Memory Overhead and Efficient File Access. In Proceedings of the 2023 IEEE International Conference on High Performance Computing & Communications, Data Science & Systems, Smart City & Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Melbourne, Australia, 17–21 December 2023. [Google Scholar] [CrossRef]
- Yan, H.; Yao, Q. An Efficient File-Aware Garbage Collection Algorithm for NAND Flash-Based Consumer Electronics. IEEE Trans. Consum. Electron. 2014, 60, 623–627. [Google Scholar] [CrossRef]
- Ye, X.; Zhai, Z. Cold-Warm-Hot Block Wear-Leveling Algorithm for a NAND Flash Storage System. In Proceedings of the 2017 4th International Conference on Systems and Informatics (ICSAI), Hangzhou, China, 11–13 November 2017. [Google Scholar] [CrossRef]
- Olivier, P.; Boukhobza, J.; Senn, E.; Ouarnoughi, H. A Methodology for Estimating Performance and Power Consumption of Embedded Flash File Systems. ACM Trans. Embed. Comput. Syst. 2016, 15, 79. [Google Scholar] [CrossRef]
- Kim, Y.; Shin, D. Improving File System Performance and Reliability of Car Digital Video Recorders. IEEE Trans. Consum. Electron. 2015, 61, 222–229. [Google Scholar] [CrossRef]
- Karacali, H.; Yıldırım, T. Design and Implementation of Basic Log Structured File System for Internal Flash on Embedded Systems. In Proceedings of the 2022 7th International Conference on Computer Science and Engineering (UBMK), Diyarbakir, Turkey, 14–16 September 2022. [Google Scholar] [CrossRef]
- Boubriak, A.; Cooper, A.; Hossack, C.; Permogorov, D.; Sherratt, R.S. SlimFS: A Thin and Unobtrusive File System for Embedded Systems and Consumer Products. IEEE Trans. Consum. Electron. 2018, 64, 334–338. [Google Scholar] [CrossRef]
- Nordic Semiconductor. Flash Data Storage (FDS). 2024. Available online: https://docs.nordicsemi.com/bundle/sdk_nrf5_v17.1.0/page/lib_fds.html (accessed on 14 June 2024).
- van Cutsem, G.; Foll, B.J.L. LittleFS Design. 2017. Available online: https://github.com/littlefs-project/littlefs/blob/master/DESIGN.md (accessed on 8 June 2024).
- Andersson, P. SPIFFS: SPI Flash File System. 2014. Available online: https://github.com/pellepl/spiffs (accessed on 8 June 2024).
Figure 1.
Basic structures used to represent drives, files, and flash memory APIs. A drive can contain multiple files (1–1..*, one-to-many relationship) and uses a single API to access specific physical memory. A flash API can be used by multiple drives.
Figure 2.
Two configurable options to include error detection in CFFS: (a) any file type can be configured to include error detection code in the region allocated for the file. Can be used to detect faulty application data; (b) a single error code is tracked for the whole drive. This can be used to detect when the flash memory has reached the end of its lifespan.
Figure 3.
Firmware services interact with CFFS through its API, which executes operations by invoking firmware functions that perform flash operations. Each physical memory has a dedicated set of functions forming its API, and every CFFS drive is associated with one such flash API.
Figure 4.
Example use of CFFS. The filesystem itself is a small library (green block) compiled into the device’s firmware. In the figure, three separate regions (purple blocks) are provided for it, one in on-chip memory and two in external memory. In each provided region, a drive is configured with different files and file types (orange block – static file, yellow block – appendable file, blue blocks – circular files).
Figure 5.
Zephyr RTOS filesystem API and its interaction with CFFS through the CFFS API.
Figure 6.
Final memory utilization for application data after filling a single file in each tested filesystem with fixed-size data chunks until no more data can be stored.
Figure 7.
Final memory utilization for application data after filling 10 files in each tested filesystem with fixed-size data chunks until no more data can be stored.
Figure 8.
Trade-off between memory utilization and the number of erased blocks when calculating error codes for a whole CFFS drive.
Table 1.
Number of erase operations performed by each filesystem while filling a single file.
Chunk Size | LittleFS Erases | SPIFFS Erases | FDS Erases | CFFS Erases | CFFS CRC Erases |
---|---|---|---|---|---|
8-byte | 64,447 | 20,375 | 0 | 0 | 61 |
768-byte | 796 | 134 | 0 | 0 | 0 |
6912-byte | 200 | 9 | - | 0 | 0 |
Table 2.
Number of erase operations performed by each filesystem while filling 10 files.
Chunk Size | LittleFS Erases | SPIFFS Erases | FDS Erases | CFFS Erases | CFFS CRC Erases |
---|---|---|---|---|---|
8-byte | 61,891 | 10,691 | 0 | 0 | 40 |
768-byte | 758 | 31 | 0 | 0 | 0 |
6912-byte | 190 | 5 | - | 0 | 0 |
Table 3.
Number of erased blocks during continuous writing of 3 MB of data.
Chunk Size | LittleFS Erases | SPIFFS Erases | FDS Erases | CFFS Erases | CFFS CRC Erases |
---|---|---|---|---|---|
8-byte | 396,707 | 218,193 | 1816 | 768 | 1151 |
768-byte | 4886 | 2272 | 697 | 768 | 771 |
Table 4.
Comparison of key characteristics of the tested filesystems.
Characteristics | LittleFS | SPIFFS | FDS | CFFS | CFFS CRC |
---|---|---|---|---|---|
Garbage collection | Automatic | Automatic | Manual | Not required | Not required |
Wear-leveling | Active | Active | Active | Passive | Passive |
Memory utilization (small data chunks) | High | Low | Low | High | High |
Memory utilization (big data chunks) | High | Medium | High * | High | High |
Flash wear | High | Medium | Low | Low | Low |
* FDS does not support data chunks larger than the flash block size.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).