Article

Performance of Linear and Spiral Hashing Algorithms

by Arockia David Roy Kulandai †,‡ and Thomas Schwarz *,‡
Department of Computer Science, Marquette University, Milwaukee, WI 53233, USA
* Author to whom correspondence should be addressed.
† Current address: Vice Principal - SF, St. Xavier’s College (Autonomous), Ahmedabad 380009, Gujarat, India.
‡ These authors contributed equally to this work.
Algorithms 2024, 17(9), 401; https://doi.org/10.3390/a17090401
Submission received: 28 June 2024 / Revised: 2 September 2024 / Accepted: 2 September 2024 / Published: 7 September 2024
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)

Abstract

Linear Hashing is an important algorithm for many key-value stores in main memory. Spiral Storage was invented to overcome the poor fringe behavior of Linear Hashing, but after an influential study by Larson, it seems to have been discarded. Since almost 50 years have passed, we repeat Larson’s comparison with in-memory implementations of both schemes to see whether his verdict still stands. Our study shows that Spiral Hashing has slightly better lookup performance but slightly poorer insert performance. However, Spiral Hashing has more predictable performance for inserts and, in particular, better fringe behavior.

1. Introduction

Key-value stores have become a mainstay of data organization for big data. For instance, Amazon's DynamoDB is a pioneering NoSQL database built on this concept. A key-value store implements a map or dictionary and can be implemented in different ways. B-tree-like structures allow for range queries, whereas dynamic hash tables have simpler architectures and better asymptotic performance of $O(1)$ for inserts, updates, and reads. For instance, Linear Hashing, the best-known version of dynamic hash tables, is used internally by both Amazon's DynamoDB and BerkeleyDB. These data structures were originally conceived to implement indices for database tables and continue to be used for this purpose. They were developed in a world of anemic CPU performance and slow and small storage systems, but they have been successfully used in the world of big data.
In this article, we compare Linear Hashing (LH) with its competitor, Spiral Hashing, proposed at roughly the same time, around 1980. Both store key-value pairs in containers, so-called buckets. Originally, each bucket resided in a page or several pages of storage, i.e., in those days, exclusively on a magnetic hard drive. However, nowadays, in-memory implementations of LH are common. Hash-based structures can also be employed in distributed systems. The key, together with the current state of the file, is used to calculate the bucket to which the key-value pair belongs. A file usually accommodates an increase in the number of records by splitting a single bucket into two or more buckets. In this way, it maintains a bound on the expected number of records per bucket. It can also react to a decrease in the number of records by merging buckets.
LH adjusts to growth in the number of records by splitting a predetermined bucket, following a constant ordering of buckets. Thus, the state of an LH file is determined entirely by the number of buckets, N. The last $\lfloor \log_2(N) \rfloor$ or $\lfloor \log_2(N) \rfloor + 1$ bits of the hash of a key determine where the record—the key-value pair—is located. The main criticism of LH is the cyclic worst-case performance for inserts and lookups. The ingenious addressing mechanism of Spiral Storage, or Spiral Hashing (SH) as we prefer to call it, avoids this cyclic behavior. Like LH and other hash-based schemes, it stores key-value pairs in buckets. It uses a more complex address calculation to even out worst-case performance. In a well-received study [1], P.-Å. Larson compared the performance of LH, SH, an unbalanced binary tree, and a scheme built on double hashing and came to the conclusion that LH always outperforms SH (and the other two schemes). This reflects the cost of the more complex addressing scheme in SH. However, the environment has changed considerably for both schemes. More data are stored in memory or in the new NVRAM technologies. Processor speeds have increased much faster than memory access times or storage access times. We decided to test whether Larson's verdict still stands. It does not!
In the following, we first review the standard versions of LH and SH. We then perform a theoretical and experimental fringe analysis for bulk insertions into both data structures. Even if a large bulk of data is inserted, and thus a large number of splits occur, LH’s performance remains cyclic. Then, we explain our implementations of LH and SH as main memory data structures. Next, we review the differences between in-memory LH and SH and the in-storage versions. The next section gives our experimental results. We then investigate the consequences of uneven hashes. We conclude that Spiral Hashing is a viable alternative to Linear Hashing with the potential to control tail latency resulting from key-based operations.

2. Background

Hashing schemes implement key-value stores and provide the key-based operations of inserts, updates, deletions, and reads. A hash record consists of a key and a value. Hashing places the record in a container at a location calculated from the key. Typically, the location can contain more than a single record, in which case the container is called a bucket. If a hashing scheme has a fixed number of buckets, then the number of records in a bucket depends on the total number of records, as does the complexity of accessing a record in the bucket. Dynamic hashing schemes, therefore, adjust the number of buckets to the number of records. Typically, the number of buckets is increased by splitting a bucket and decreased by merging two buckets. In this manner, the average number of records in a bucket is bounded by a constant. A key-based operation (such as a lookup, a modification of the value, or an insert) first finds the bucket of the affected record. An insert adds the record to the bucket, e.g., by pushing it onto a stack. A lookup, delete, or update finds the record in the bucket and performs the operation. Because splitting bounds the expected number of records in a bucket by a constant and the bucket itself is reached in constant time, the expected time for lookups, deletions, or updates, which is linear in the number of records visited, is also bounded by a constant. As a result, the key-based operations run in expected time $O(1)$. For this to work, we need good hash functions. A horrible hash function, for example, would map every key to 0, so that all records are stored in Bucket 0. A lookup for a non-existent record would then scan through all the records, resulting in a time complexity of $O(n)$.
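To make these operations concrete, the following is a minimal C++ sketch of a bucketed key-value store. The class and function names are ours, and the address function is a placeholder: LH and SH differ precisely in how this address is computed from the key and the file state.

#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Minimal sketch of a bucketed key-value store. Inserts append to the addressed
// bucket; lookups scan only that bucket, whose expected size is bounded by a constant.
struct Record {
    uint64_t key;
    std::string value;
};

class BucketStore {
public:
    explicit BucketStore(std::size_t n_buckets) : buckets_(n_buckets) {}

    void insert(Record r) {                          // expected O(1)
        buckets_[address(r.key)].push_back(std::move(r));
    }
    std::optional<std::string> lookup(uint64_t key) const {
        const auto& bucket = buckets_[address(key)];
        for (const auto& r : bucket)                 // an expected O(1) number of records is visited
            if (r.key == key) return r.value;
        return std::nullopt;                         // an unsuccessful lookup scans the whole bucket
    }
private:
    // Placeholder; a dynamic scheme replaces this with LH or SH addressing.
    std::size_t address(uint64_t key) const { return key % buckets_.size(); }

    std::vector<std::vector<Record>> buckets_;
};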
Originally, classic dynamic hashing schemes like extensible hashing [2], LH [3], and SH [4,5] stored buckets in the blocks of Hard Disk Drives. Nowadays, the buckets can also be containers in different nodes of a distributed system, containers in different storage systems such as NVRAM or flash memory, or containers in the main memory. Containers in the main memory can store their records in a linked list or an array. In fact, main memory data structures have gained enormous importance, and we focus on them.
If the buckets of a dynamic hashing scheme are pages in a Solid State Drive (SSD) or a Hard Disk Drive (HDD), then a bucket can only contain a fixed number of records. If the number of records assigned to a bucket exceeds the capacity of the bucket, then some records are stored in an overflow bucket. The number of overflow buckets is usually small. This is not an issue if buckets reside in the main memory.

2.1. Linear Hashing

Linear Hashing (LH) is a dynamic hashing scheme that provides stable performance and good space utilization. LH is widely used in disk-based database systems, such as Berkeley DB and PostgreSQL [6].
The number of buckets in an LH file increases linearly with the number of records inserted. The core strategy of LH is that it splits and merges buckets in a fixed order. In contrast, Fagin's Extendible Hashing [2] splits whichever bucket overflows. As a result, LH uses a simple address calculation scheme based on a very simple file state. Litwin, the inventor of LH, provided an option for starting an LH file with more than one bucket [3]. To the best of our knowledge, this option is never used in practice. We simplify the implementation and presentation by assuming that the LH file always starts with only one bucket. If, however, we want to start with N buckets, we simply calculate the data structure after $N-1$ splits before inserting records.
To adjust the number of buckets to the number of records, LH uses splits (and merges, the reverse operation) based on triggers. One possible strategy, applicable to a disk-based system, is to use the insertion of a record into an overflowing bucket as a trigger. A much more common strategy maintains a count of the current number of records, a count of the current number of buckets, and defines the load factor as the average number of records per bucket. Whenever this load exceeds a threshold, a split is triggered. Whenever this load falls below a slightly smaller threshold, the last two split buckets are merged. The freedom to choose different triggers (resulting in different behavior when using less-than-perfect hash functions and in different expected numbers of records per bucket) is another difference from Extendible Hashing.
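As a sketch (under our own naming; the thresholds are parameters, not values prescribed by LH), the two split triggers discussed above can be written as simple predicates. Section 3 compares their effect on fringe behavior.

#include <cstddef>

// Common trigger: split when the load factor, i.e., the average number of
// records per bucket, exceeds the target capacity B.
bool load_factor_split(std::size_t records, std::size_t buckets, double B) {
    return static_cast<double>(records) / static_cast<double>(buckets) > B;
}

// Aggressive trigger: split as soon as the bucket that received the insert is
// full. Note that the bucket that is then split is still the one designated by
// the split pointer, not necessarily the full one.
bool overflow_split(std::size_t bucket_size, std::size_t capacity) {
    return bucket_size >= capacity;
}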
LH buckets are consecutively numbered from Bucket 0 to Bucket $b-1$. Internally, LH maintains a file state. Because of our assumption that we start with only one bucket, the file state is determined simply by the current number b of buckets. We use b to calculate two file-state properties, the level l and the split pointer s, as
$l = \lfloor \log_2(b) \rfloor, \qquad s = b - 2^l,$
so that always
$b = 2^l + s, \qquad s < 2^l.$
We use these for the addressing mechanism. If the number of buckets is incremented, then Bucket s splits into Bucket s and Bucket $s + 2^l$. The split pointer s is then incremented. If s is equal to $2^l$, then the level is incremented, and s is reset to zero. LH thus uses only the level and the split pointer for addressing bucket splits and merges.
To calculate the bucket in which a record with key c resides, LH uses a family of consistent hash functions. For this purpose, we can use a long hash function h of the key. We then define a partial hash function $h_i$ as
$h_i(c) = h(c) \bmod 2^i,$
so that $h_i(c)$ is made up of the last i bits of $h(c)$. These bits can be quickly extracted by a bit-wise AND operation with the mask $2^i - 1$. If the LH file has level l, then the partial hash functions $h_l$ and $h_{l+1}$ are used. To be more precise, the address a of a record with key c is the bucket in which the record should be placed. It is calculated from the split pointer s and the level l as
$a = \begin{cases} h_{l+1}(c) & \text{if } h_l(c) < s, \\ h_l(c) & \text{otherwise,} \end{cases}$
i.e., a bucket that has already been split in the current epoch is addressed with one additional bit.
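A minimal C++ sketch of this file state and address calculation follows; it assumes, as above, a file that started with a single bucket, and all identifiers are ours rather than taken from our implementation. With $b = 5$, it yields level 2 and split pointer 1, matching the numerical example below.

#include <cstdint>

// File state of an LH file that started with a single bucket: b buckets,
// level l = floor(log2(b)), and split pointer s = b - 2^l.
// (Assumes fewer than 2^63 buckets so that the shifts below do not overflow.)
struct LHFileState {
    uint64_t b = 1;
    uint64_t level() const {
        uint64_t l = 0;
        while ((uint64_t{1} << (l + 1)) <= b) ++l;
        return l;
    }
    uint64_t split_pointer() const { return b - (uint64_t{1} << level()); }
};

// h_i(c): the last i bits of the full hash h(c).
inline uint64_t h_i(uint64_t h, uint64_t i) {
    return h & ((uint64_t{1} << i) - 1);
}

// Address of a record whose key has the full hash h.
uint64_t lh_address(uint64_t h, const LHFileState& st) {
    const uint64_t l = st.level();
    const uint64_t a = h_i(h, l);
    return a < st.split_pointer() ? h_i(h, l + 1) : a;  // already-split buckets use one more bit
}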
Extendible hashing always splits the bucket that overflows but needs to maintain a special directory structure that reflects the history of splits. LH always splits the bucket using the number given by the split pointer s. Thus, the series of bucket numbers to split is
$0, 0, 1, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, \ldots,$
which is made up of ranges from 0 to $2^l - 1$. The advantage of LH is the almost trivial file state that an LH file has to maintain, but an overflowing bucket might need to wait before a split remedies the overflow. During a split, all records in the bucket to be split are rehashed with the new partial hash function $h_{l+1}$ and accordingly either remain in the same bucket or are moved to the new bucket. We can also implement merge operations to adjust to a decrease in the number of records. A merge always undoes the last split.
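The following C++ sketch shows such a split (our own simplification, with no concurrency control); it reuses LHFileState and h_i from the sketch above and, as in the numerical example below, treats the stored keys themselves as their hashes.

#include <cstdint>
#include <vector>

// Sketch of an LH split: rehash Bucket s with h_{l+1} and move roughly half of
// its records to the new Bucket s + 2^l, which is appended at index b = 2^l + s.
// The buckets store only keys, and the identity serves as the hash function.
void lh_split(std::vector<std::vector<uint64_t>>& buckets, LHFileState& st) {
    const uint64_t l = st.level();
    const uint64_t s = st.split_pointer();
    std::vector<uint64_t> stay, moved;
    for (uint64_t key : buckets[s]) {
        // h_l(key) is known to be s, so only the additional bit used by h_{l+1} decides.
        if (h_i(key, l + 1) == s) stay.push_back(key);
        else                      moved.push_back(key);
    }
    buckets[s] = std::move(stay);
    buckets.push_back(std::move(moved));  // becomes Bucket s + 2^l
    ++st.b;                               // advances the split pointer (and possibly the level)
}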
We use a numerical example to illustrate LH and its operations. Figure 1 (top) shows the state of an LH file with five buckets. Since $5 = 2^2 + 1$, the split pointer is one and the level is two. Instead of showing complete records, the figure only shows the keys, which we assume to be natural numbers between 0 and 100. Our hash function h is simply the identity in this example. Thus, $h_i(c)$ for a key c is just $c \bmod 2^i$ or, equivalently, the last i bits of c. We started out with one bucket, Bucket 0. All records in this bucket were placed there because $h_0$ is always 0. The file has now grown to five buckets, so we use $h_2$ and $h_3$ for addressing. There are currently 15 records in five buckets, so the load factor is 3.
To calculate the address of the record with key 84, we calculate $h_2(84) = 84 \bmod 4 = 0$. Since this address is smaller than the split pointer, we recalculate with the next hash function and obtain $h_3(84) = 84 \bmod 8 = 4$. This means that the record with key 84 is in Bucket 4. An alternative reasoning involves looking at the binary representation of 84, which is 0b1010100. The address is formed by either the last two or the last three bits because the level is two. The record with key 48 has an address calculated from $h_2(48) = 48 \bmod 4 = 0$ and $h_3(48) = 48 \bmod 8 = 0$ and, therefore, resides in Bucket 0.
To continue the example, we assume that we insert a record with key 12. Since $h_2(12) = 12 \bmod 4 = 0$ and $h_3(12) = 12 \bmod 8 = 4$, the record is placed into Bucket 4. With the insertion of this record, the load factor, i.e., the average occupancy of the buckets, exceeds 3, which, in our example, triggers a split. The bucket to be split is the one given by the split pointer, namely Bucket 1. In an Extendible Hashing scheme, we would split because we try to insert into an overflowing bucket and would then split that bucket. However, LH uses a predefined order of splits, and the turn of Bucket 3 will come only after another intervening split. LH creates a new Bucket 5. It then rehashes all the records in Bucket 1 using the new file state. As a simple optimization, the address recalculation can skip the evaluation of $h_2$ since we know that its value has to be 1; otherwise, the record would not reside in Bucket 1. Calculating $h_3(53) = 53 \bmod 8 = 5$, $h_3(29) = 29 \bmod 8 = 5$, and $h_3(45) = 45 \bmod 8 = 5$, LH moves these three records into the new bucket. If we were to recalculate the addresses of all other records in the data structure, they would not change, as the only change in the execution of the addressing calculation is the comparison with the split pointer. The lower part of Figure 1 shows the split and the state of the LH file after it.

2.2. Spiral Hashing

A drawback of LH is the discrepancy, at any given time, between buckets that have already been split and those that have not yet been split. The latter tend to have twice as many records as the former. As we will see in Section 3, the number of records rehashed when a file is split depends on the split pointer and varies between B/2 and B, where B is the maximum expected number of records in a yet-to-be-split bucket. To avoid this type of behavior, Spiral Hashing (SH) was invented by Martin [4,5,7]. SH intentionally distributes the records unevenly into Buckets $s, s+1, s+2, \ldots, 2s-1$, where s is the file state, as shown in Figure 2. If desired, a simple address translation keeps the buckets in the range $1, 2, \ldots, s$. We will not be using this final translation step here. When the file grows, the contents of Bucket s are distributed into the new buckets, Bucket $2s$ and Bucket $2s+1$. Afterward, Bucket s is deleted. If the hash used is completely uniform, then the probability that a record is allocated to Bucket i is $p_i = \log_2(1 + 1/i)$. This remarkable feat is achieved with the logarithmic address calculation given in Figure 3. SH can easily be modified to generate d new buckets for each freed one. It is also possible to use an approximate address calculation. However, modern chip architectures with their efficient floating-point units obviate the need for speeding up the address calculation.
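We do not reproduce Figure 3 here, but the following C++ sketch computes addresses with the same record distribution, under the assumption that the key has already been hashed to a uniform value x in $[0, 1[$; the final guard reflects the floating-point rounding issue we report in Section 4. With s = 5, for instance, this maps a key translated to 0.12 to Bucket 8, in line with the example below.

#include <cmath>
#include <cstdint>

// Sketch of a Spiral Hashing address calculation: map a uniform x in [0,1) and
// the file state s (live buckets s, ..., 2s-1) to a bucket number a in [s, 2s)
// so that P(a = i) = log2(1 + 1/i).
uint64_t sh_address(double x, uint64_t s) {
    const double c = std::log2(static_cast<double>(s));
    const double y = std::ceil(c - x) + x;             // y is uniform on [log2(s), log2(s) + 1)
    uint64_t a = static_cast<uint64_t>(std::exp2(y));  // floor(2^y) lies in [s, 2s)
    if (a < s) a = s;                                  // guard against rounding at the boundaries
    if (a >= 2 * s) a = 2 * s - 1;
    return a;
}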
We now reuse our previous example to show the evolution of an SH file in Figure 4. LH uses keys that are unsigned integers, but SH uses keys that are floating-point numbers in $[0, 1[\, = \{x \mid 0 \le x < 1\}$. Therefore, we divide the keys by 100 so that key 40 is translated to key 0.40. Using the SH addressing algorithm in Figure 3, we obtain the picture in Figure 4 (top). Since $s = 5$, the range of buckets is from Bucket 5 to Bucket 9. We can see that the records inside a bucket are all in a sub-interval of $[0, 1[$. Therefore, without calculation, we can guess that the record with key 12, translated to 0.12, is either in Bucket 8 or Bucket 9. After the insertion of the record with key 12, i.e., 0.12, the average load of the buckets is again higher than the threshold of 3, and Bucket 5 is replaced with Buckets 10 and 11. The mathematics of the address calculation algorithm again guarantees that no records outside of the split bucket are moved when we recalculate with the adjusted s.

2.3. Difference between In-Memory and In-Storage Hash Tables

In the years since the conception of Linear, Spiral, and Extendible Hashing, computer organization has undergone much development. These hashing schemes had to keep their data in storage, which then meant Hard Disk Drives (HDDs), because main memory was too precious and too limited. A hard disk stored data in blocks (then 512 B), and access to a block took perhaps 15–30 ms. Nowadays, memory is cheaper and much more plentiful. In addition, modern storage only pretends to allow in-place writes to random blocks. HDD technology uses shingled recording, which does not allow in-place updates. With the help of internal caching and a log-structured organization of tracks into bands, an HDD provides an interface that seems to allow in-place updates of blocks [8].
The main alternative to magnetic storage technology is Solid State Drives (SSDs). The hard-drive industry has prioritized storage capacity over speed. Similarly, SSDs are increasingly made up of multi-bit cells that have an even more limited number of write-erase cycles than single-bit cells. An SSD stores data in pages that have to be written all at once, as is also the case for the blocks in an HDD. An SSD organizes a certain number of pages into an erase block. An overwrite of a page is only possible after a previous erasure. The endurance, i.e., the number of write-erase cycles, is limited and depends on the technology used and, more importantly, the number of bits per cell.
An in-storage hash table frequently writes to its buckets and hence to the blocks or pages containing them. If the storage device is a shingled HDD, an updated block is appended to a log stored in a number of contiguous tracks. To avoid the accumulation of blocks with outdated data, the log is continually cleaned by moving active data blocks to the tail of the log, skipping data blocks with outdated or deleted data. While this write amplification is an inconvenience, the consequences for an SSD are worse. To update a page stored in an SSD, the page needs to be erased first. If the SSD has an internal flash translation layer, the newly written page still has the same page address. However, internally, the old page has been marked as deleted and will eventually be erased with all the other pages in the same erase block. As some of these pages might contain valid data, this data is copied to other pages in a different erase block. These copies constitute write amplification, where a single page overwrite results in several pages being written. As SSD manufacturers push storage capacity at the cost of lowering endurance by using cells that store more than one bit, write amplification becomes a concern.
To allow more efficient use of shingled HDDs and multi-bit SSDs, zoned namespaces were introduced to replace and supplement the Logical Block Address (LBA) interface [9,10]. Zoning allows a user to address a moderate to large number of zones, but writes within a zone are sequential. This interface addresses the limitations of both current HDD and SSD technologies. Unfortunately, hash tables are not well suited for this interface; they prefer a paged environment. We can draw a parallel to B-trees: they are no longer used in their traditional form but use techniques such as Log-Structured Merge (LSM) trees to minimize changes to stored data at the cost of parallel searches. We can conclude that in-storage hash tables are workable for HDDs without the zoned namespace interface, but they need additional modifications for SSDs and for HDDs with the zoned namespace interface.
In-memory hash tables can be used for a wide variety of purposes, such as enabling equi-joins in a database or caching URLs in a web browser; in short, they are useful anywhere associative arrays, dictionaries, or sets are needed. They, and not in-storage hash tables, are our main concern here. Larson's experiments were conducted in a very different environment, where the in-memory data structure had to be small (had it been large, virtual memory would have paged it to disk anyway). His performance results included storage access times. Despite this, Spiral Hashing performed considerably worse than Linear Hashing.

3. Fringe Analysis

Fringe analysis [11] analyzes the behavior of a data structure under mass insertion. For instance, Glombiewski, Seeger, and Graefe showed that a steady stream of inserts into a B-tree can create “waves of misery”, where restructuring in the B-tree can lead to surges in the amount of data moved [12]. In the case of LH and SH, unevenness in the size of buckets determines fringe behavior.
This unevenness also affects the tail latencies of key-based operations. For a lookup, the hash structure visits, on average, half the records in a bucket for a successful lookup and all the records if no record with the given key exists. An insert does not need to visit any other record in the target bucket but can lead to a split, which requires visiting all the records in the bucket being split.
Linear Hashing is attractive because of its conceptual and architectural simplicity. As we have seen, the average number of records per bucket differs between buckets that have already been split in an epoch (a period during which the level is constant) and those that are yet to be split. Figure 5 provides an example with level 9, split pointer 342, and a total of 8000 records. Buckets 0 to 341 on the left and Buckets 512 to 853 on the right have, on average, about half as many records as Buckets 342 to 511. On the left, these numbers were obtained using random numbers as stand-ins for the hashes of keys, whereas on the right, we used the first 8000 words in alphabetical order from an English dictionary as keys. The word list was compiled by J. Lawler of the University of Michigan. We then calculated the SHA1 hash of these keys and used the first 16 hexadecimal digits, transformed into an integer, as the keys. Remarkably, and by sheer accident, in this example, the hash of the keys is slightly smoother than the hash using random numbers. The figure illustrates that with small bucket sizes, the variability in the number of keys is large; overall, however, the maximum number of records in a bucket yet to split is about twice as large as the maximum number of records in a bucket that has already been split in the current epoch. Only at the end of the epoch, when the split pointer is again zero, will all buckets have the same expected number of keys. In general, with level l and split pointer s, the buckets numbered 0 to $s-1$ and $2^l$ to $2^l + s - 1$ have already been split and have, on average, half as many records as the remaining buckets.
We can analyze the fringe behavior of Linear Hashing analytically. We assume that the LH structure maintains a maximum ratio of records to buckets. If the number of records is r and B is this ratio, i.e., the average number of records per bucket, then the number of buckets is r/B. The level of the LH file is then $\lambda = \lfloor \log_2(r/B) \rfloor$. The expected number of records in an unsplit bucket is then $r \cdot 2^{-\lambda}$. With every B inserts, a split occurs, so on average
$\frac{r}{B \cdot 2^{\lambda}}$
records are rehashed per insert. About half of them are moved to the new bucket. The average number of records rehashed per insert thus follows a saw-tooth curve, increasing from 1 to 2, starting at B times a power of two and being reset at the next such point. Figure 6 shows the results. We provide an example of an experiment with a bucket capacity of 10 in Figure 7. In this experiment, we created a list of insertion operations that trigger a split and captured the size of the bucket that needs to be split. Recall that the split operation recalculates the address of each record in the bucket and moves about half of them to the new bucket. This is easily implemented by creating two containers, one for the new bucket and one for the old bucket. We then moved pointers to the records into these containers according to the calculated addresses and, at the end, inserted the new bucket into the data structure and replaced the old bucket with the newly created one.
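A simplified C++ sketch of this experiment follows. It reuses LHFileState, lh_address, and lh_split from Section 2.1, uses random 64-bit values as stand-ins for key hashes, and applies the overflow trigger of Figure 7; the load-factor trigger would only change the condition of the if statement.

#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Insert random keys, trigger a split whenever the bucket that received the
// insert reaches the nominal capacity, and record the size of the bucket that
// is actually split (the one at the split pointer).
std::vector<std::size_t> split_sizes(std::size_t inserts, std::size_t capacity = 10) {
    std::vector<std::vector<uint64_t>> buckets(1);
    LHFileState st;
    std::mt19937_64 rng(42);          // random 64-bit values stand in for key hashes
    std::vector<std::size_t> sizes;
    for (std::size_t n = 0; n < inserts; ++n) {
        const uint64_t h = rng();
        const uint64_t a = lh_address(h, st);
        buckets[a].push_back(h);
        if (buckets[a].size() >= capacity) {                      // overflow trigger
            sizes.push_back(buckets[st.split_pointer()].size());  // size of the bucket about to split
            lh_split(buckets, st);
        }
    }
    return sizes;
}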
We also conducted a related experiment. We added n records to an initially empty LH file. Our parameter n varied from 800 to 33,000. We used a target bucket capacity of 10, causing the level of the LH file to change from 6 to 11. We then added 1000 additional records and calculated the number of records whose addresses were recalculated during a split. We used two different types of split triggers. The more popular version calculates the average number of records per bucket from the total number of records and the number of buckets and triggers a split whenever this value exceeds the target bucket capacity. Alternatively, a split is triggered whenever the bucket where the record is placed reaches capacity. In this more aggressive version, splits are more frequent and buckets tend to contain fewer records. Recall that the bucket triggering a split is usually not the one that is split. We repeated the process of loading n records into the data structure and then inserting 1000 new records while counting the number of records accessed during a split. The initial n records were the same. We report our results in Figure 8. Using the normal split trigger, LH shows a saw-tooth pattern oscillating between 1 and 2 times the number of records touched per insert. However, the more aggressive trigger shows a somewhat more chaotic behavior with different peak locations and higher variability.
Spiral Hashing’s selling point is its avoidance of this fringe behavior. The probability of a record belonging to Bucket i is log 2 ( 1 + 1 / i ) . The size of the bucket that is split is always rather close to 1.44 times the average. Thus, instead of a cyclic fringe behavior, SH shows near-constant movement, independent of the number of records, but it is still subject to random fluctuations. In Figure 8, we can see that the use of the normal and the more aggressive triggers results in similar, near-constant behavior. However, the more aggressive trigger results in greater variability.
Spiral Hashing assigns a record to Bucket i with probability $p_i = \log_2(1 + 1/i)$ for i between s and $2s-1$. The lowest-numbered bucket has the largest expected number of records and is always the next one to be split. However, it is often not the bucket with the actual largest number of records. The number of records in a bucket follows a binomial distribution, which allows us to calculate the exact probability that one bucket contains more records than another. There does not, however, seem to be a simple, closed formula for this probability. Instead of using the Central Limit Theorem and approximating with the normal distribution, we used simulation to determine the probability that the next bucket to be split actually has the largest number of records, as we would prefer. This is often not the case, as our results in Figure 9 show. As the nominal bucket capacity increases and the number of buckets remains small, the probability that Bucket s has the most records increases, but for larger structures, that probability diminishes quickly.
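The simulation can be sketched as follows (our own simplification, reusing sh_address from Section 2.2): for a given file state s and nominal capacity B, distribute $B \cdot s$ records over Buckets s to $2s-1$ and count how often Bucket s ends up holding the maximum.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Estimate the probability that Bucket s, the next bucket to be split, actually
// holds the most records.
double prob_bucket_s_is_largest(uint64_t s, uint64_t B, int trials = 10000) {
    std::mt19937_64 rng(7);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    int hits = 0;
    for (int t = 0; t < trials; ++t) {
        std::vector<std::size_t> count(2 * s, 0);   // live buckets are s, ..., 2s-1
        for (uint64_t r = 0; r < B * s; ++r)        // s buckets with B records on average
            ++count[sh_address(uniform(rng), s)];
        // max_element returns the first maximum, so ties are resolved in favor of Bucket s.
        const auto largest = std::max_element(count.begin() + static_cast<std::ptrdiff_t>(s),
                                              count.end());
        if (static_cast<uint64_t>(largest - count.begin()) == s) ++hits;
    }
    return static_cast<double>(hits) / trials;
}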

4. Implementations

We wanted to compare the efficiency of the two data structures. The advantage of LH is its simplicity and elegance and, in comparison with SH, its addressing mechanism that avoids floating-point calculations. The advantage of SH is its better fringe behavior and tail latencies for key-based operations. In fact, we discovered one key (an unsigned int in C++) and one file state where the SH addressing function, when implemented in a straightforward manner with C++ functions, gave a wrong address because of a rounding error. This single instance forced us to include an additional check in the SH addressing mechanism. Because we wanted to compare the relative merits of both data structures, we decided to implement a threaded, in-memory data structure with short records. Alternatives would be a distributed data structure (such as LH* [13]) or, more prominently, an in-storage data structure. Our choice should highlight the relative differences between the two data structures.
Larson and Schlatter Ellis proposed structures that allocate buckets in bunches to save memory space [1,14]. Instead of following their schemes, we relied on the efficiency of the container implementations in the C++ standard library. Our buckets are C++ vectors and are themselves organized inside another C++ vector. For concurrency, we used locks. There is a global lock for the file state, which becomes a bottleneck under a heavy load of inserts. In addition, each bucket has an individual lock.
An LH insert or lookup gains a non-exclusive (also known as a read) lock on the file state. After determining the address of the bucket, the operation gains an exclusive lock on the bucket. It then inserts or retrieves a pointer to the record, depending on the nature of the operation. An insert can trigger a split. The split operation first gains an exclusive lock on the file state. It then acquires another exclusive lock on the bucket to be split. After rehashing the records of that bucket into a replacement bucket and a new bucket, we swap in the replacement, append the new bucket, discard the old one, and release the lock on the file state. The SH inserts and lookups follow the same approach.
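The following C++ sketch illustrates this locking discipline with std::shared_mutex. The class is our own simplification, not our actual implementation; the addressing and split routines are assumed to be one of the schemes described above.

#include <cstddef>
#include <cstdint>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <utility>
#include <vector>

// Each bucket carries its own lock; the file state is protected by a reader-writer lock.
struct LockedBucket {
    std::mutex lock;
    std::vector<std::pair<uint64_t, std::string>> records;
};

class HashFile {
public:
    void insert(uint64_t key, std::string value) {
        {
            std::shared_lock<std::shared_mutex> state(state_lock_);  // non-exclusive lock on the file state
            const std::size_t a = address(key);                      // LH or SH addressing
            std::lock_guard<std::mutex> bucket(buckets_[a]->lock);   // exclusive lock on the bucket
            buckets_[a]->records.emplace_back(key, std::move(value));
        }
        // A split, if triggered, acquires state_lock_ exclusively, then the lock of
        // the bucket to be split, restructures the table, and releases the state lock.
        maybe_split();
    }

private:
    std::size_t address(uint64_t key) const;  // assumed: one of the addressing schemes above
    void maybe_split();                       // assumed: split under an exclusive state lock

    mutable std::shared_mutex state_lock_;
    std::vector<std::unique_ptr<LockedBucket>> buckets_;
};

Buckets are held through unique_ptr so that the vector of buckets can grow even though std::mutex is neither copyable nor movable.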

5. Experimental Results

We tested the efficiency of the two data structures by varying two parameters: the number of threads and the bucket capacity, i.e., the average number of records per bucket. Of course, the use of threads and locks generates quite a bit of overhead. If we were to perform other work within a thread, this overhead would be less visible. As it is, each thread only accesses the data structure, and lock contention is maximized.
We loaded our data structure with 1,000,000 records. For the first experiment, we then inserted another 1,000,000 records, each consisting of a random unsigned integer as the key and a short string. These records were stored in a C++ standard library vector, and each thread was given an equal-sized region of this vector to insert from. This write-only workload led to many splits and to contention on the file state. For the second experiment, we again preloaded the data structure with 1,000,000 records with random keys. We then filled a vector with 1,000,000 random keys. Each thread then performed lookup operations for an equal-sized region of this vector. This operation does not acquire exclusive locks on the file state, and the timings are much better. However, most of the time, the lookup operation accesses all records in a bucket. Each experiment was performed 100 times (each in a new process).
We varied the number of threads used and the nominal bucket capacity. We used the load factor (the ratio of the number of records to the number of buckets) as the split trigger. Thus, a hash table with 1,000,000 records and a bucket capacity of 3 has 333,334 buckets, and when it grows to 2,000,000 records, it has 666,666 buckets.
We performed our experiments on a MacBook Pro with an M2 Max processor and 64 GB of memory. The results on an earlier, Intel-based MacBook Pro exhibited the same behavior. In both cases, the OS assigned only one CPU to the process.
Because we used random keys, each run of the program had a unique workload. This seemed to be the dominant factor in the variation of the execution times, as the violin plots in Figure 10 indicate. We can see that the distribution of the execution times has a long tail but otherwise resembles that of a normally distributed random variable. While the run times are therefore not exactly normally distributed, the shape of the distribution justifies the use of the mean in reporting our results.
We depict these results in Figure 11 and Figure 12. We also used the non-parametric Mann–Whitney U-test to ascertain the significance of the observed differences. Not surprisingly, they were highly significant, as Figure 13 shows. The values shown give the probability that the differences in the run times for LH and SH are accidental.
Our experiments show that Linear Hashing is faster than Spiral Hashing under a write load, whereas Spiral Hashing is usually faster under lookups. The different complexities of the addressing mechanisms do not seem to be relevant. Our experimental setup was very close to Larson's. One difference is that, for many decades now, we have had libraries that generate good random numbers and hashes. Larson used three small files: one of user names, one of an English dictionary, and one of call numbers, which he converted to numeric keys with a simple conversion function. If we were to use a modern hash function on files of these types to obtain keys, then we would have keys indistinguishable from random numbers. He used a random number generator to select sets of keys for the measurement of lookup times. The second difference is that we used the C++ standard library for an efficient implementation of vectors, something that was not available to Larson. He found that address calculation (0.16 ms/key for LH and 0.24 ms/key for SH) was a major difference, with the timing of splits coming second. The architecture and performance of computers have changed tremendously, with one major difference being that floating-point operations are now much faster. A third difference is that, in the intervening years, the access speed of storage has almost doubled, as has the access speed of DRAM, whereas the performance of CPUs has increased tremendously. The main processor clock of a VAX ran at 3.125 MHz, whereas our Mac has a clock of 3.75 GHz with eight cores and eight threads per core and large internal parallelism.

6. Imperfect Hash Functions

Hash-based schemes assume close-to-perfect hash functions. This means that each hash should appear with equal probability. For example, we can use a no-longer-secure but fast hash function such as MD5 or any of the hashes currently considered secure. In “real life”, this might not be the case. For example, MongoDB, a NoSQL document database, has IDs that look like hashes but consist of three parts, including a date-time stamp and a counter. A programmer could easily assume that the MongoDB ID is a perfect hash, but this is not true. The result could be an increase in the number of buckets with many more records than expected. Key-based operations accessing these records would take quite a bit more time than expected.
Because LH and SH calculate addresses from a hash in very different ways, they also react to less-than-perfect hash functions in different ways. For example, time stamps and counters used as keys change mostly the lower bits, which allows LH to function well, whereas SH does not. SH separates records into different buckets based on the high-order bits. Even if we took the reciprocal of the counts or time stamps, we would place records in only a small number of buckets. However, if we reversed the bit string of the count and cast it as the fraction of a floating-point number, we would obtain an equally good fit for SH. We give an example in Figure 14, where we reuse the sizes from our previous example and have 854 buckets. We inserted keys ranging from 1 to 8540 into the data structure and recorded the number of records in each bucket. For LH, we just used the count, whereas for SH, we used the procedure described above. As we can see, the number of records per bucket is almost ideal. So, in this one case, an imperfect hash function yields an almost perfect distribution.
To generate a biased hash function for SH, we take a random number, translate it to a double-precision floating-point number, raise this number to the power of α , and then translate it back to an unsigned integer. If we perform the same procedure for LH, the altered hash functions work just as well as the unaltered hash function. This is because, in SH addressing, the leading bits are more important, whereas in LH addressing, the trailing bits are more important. We therefore reverse the bit string for LH.
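The following C++ sketch outlines this construction (the function names are ours): the 64-bit hash is mapped to $[0, 1[$, raised to the power $\alpha$, and mapped back; for LH, the resulting bit string is additionally reversed so that the bias reaches the trailing bits that LH actually uses for addressing.

#include <cmath>
#include <cstdint>

// Bias a 64-bit hash: interpret it as a fraction in [0,1), raise it to the power
// alpha, and map it back to 64 bits. alpha = 1 leaves the hash unchanged up to
// rounding; alpha != 1 skews the distribution of the leading bits.
uint64_t biased_hash_sh(uint64_t h, double alpha) {
    const double two64 = 18446744073709551616.0;      // 2^64
    const double x = static_cast<double>(h) / two64;  // in [0, 1]
    const double y = std::pow(x, alpha);
    if (y >= 1.0) return UINT64_MAX;                  // avoid overflow on conversion
    return static_cast<uint64_t>(y * two64);
}

// Reverse the 64-bit string so that the bias affects the trailing bits instead,
// which are the bits LH uses for addressing.
uint64_t reverse_bits(uint64_t v) {
    uint64_t r = 0;
    for (int i = 0; i < 64; ++i) {
        r = (r << 1) | (v & 1);
        v >>= 1;
    }
    return r;
}

uint64_t biased_hash_lh(uint64_t h, double alpha) {
    return reverse_bits(biased_hash_sh(h, alpha));
}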
We show the results of the altered hash function in Figure 15 for $\alpha = 2, 3/2, 1, 2/3$, and $1/2$. We simulated the hash scheme for 200 buckets and an average of 1000 records per bucket (for a total of 200,000 records). The y-axis shows the number of records per bucket in multiples of 1000, i.e., the relative frequency. For LH, we see a much greater frequency for Bucket 0 when $\alpha > 1$ and an increased variance when $\alpha < 1$. For SH, the top frequency is much larger when $\alpha > 1$.
The larger spread of frequencies means that some split or lookup operations will have unusually long latencies as they hit exceedingly large buckets. Our experimental setup can only measure average latencies, which are barely affected. We report this experiment in Table 1, using the same setup as for the performance numbers. We provide the total time for inserting 1,000,000 records into a data structure already containing 1,000,000 records, or for looking up 1,000,000 random keys in a data structure of 1,000,000 records. For each situation, we used 100 measurements, as before. The resulting timings barely changed and were often not different between the unbiased hash ($\alpha = 1$) and the biased hashes. In Table 1, we provide the p-values, indicating whether the results are statistically significantly different from the unbiased hash case. This is frequently not the case, but other than observing that the more disturbed hashes ($\alpha = 0.5$ or $\alpha = 2$) tend to show more variation, we are unable to draw any firm conclusions. This is not surprising, as we calculated averages over a large number of operations.
We can conclude from Figure 15, however, that less-than-perfect hashes will have a significant influence on tail latencies. If we look at the maximum number of records in any bucket, we see that this number can be much higher than under an unbiased hash. The difference in the addressing schemes is reflected in the influence of $\alpha$: $\alpha > 1$ is bad for LH, and $\alpha < 1$ is bad for SH.

7. Conclusions and Future Work

Performance-based judgments need to be revised whenever the underlying technology changes. A reassessment of Spiral Hashing and Linear Hashing was overdue, and we undertook it in this article. Our experimental setup is essentially the same as Larson's. There are only three main differences: storing data in memory instead of on a hard disk, better generation of random keys, and the efficient implementation of C++ vectors. Therefore, our results reflect the differences in computer architecture and organization. Over the years, access times to memory and storage have changed by only a small factor, whereas the speed of computation, especially of floating-point computations, has increased by a very large factor. As a result, Spiral Hashing now provides a viable alternative to Linear Hashing even as an in-memory structure. While the addressing mechanism of Spiral Hashing has not changed, its speed now comes close to that of the less complicated addressing mechanism of Linear Hashing. As Spiral Hashing has better fringe behavior and better tail latencies for key-based operations, the only reason to stick with Linear Hashing for an in-memory data structure is the existence of an established code base.
If the buckets are stored in pages in storage, the better fringe behavior should be even more attractive. In Spiral Hashing, the biggest bucket is often the next bucket to be split. This behavior avoids page overflows, which are usually handled by adding overflow pages. The same observation applies to distributed systems. A distributed version of Spiral Hashing needs to solve the problem of distributing file information to clients that have an outdated view of the file state s. If we can keep the empty Buckets $1, \ldots, s-1$ around, a misdirected request will access an empty bucket and can obtain a better file state from there. Unlike in LH*, the information that can be gathered from the fact of misdirection only shows that s has changed but gives no clues about its true value. This is an interesting avenue of research that we plan to pursue next.
Our implementation used locks. Lock-free implementations of Linear Hashing exist [15,16]. Shalev and Shavit's implementation exploits the fact that an in-memory LH hash file is essentially a single linked list, with buckets forming sub-lists. The list is ordered by the bit-reversed keys. Thus, we can simply adapt a lock-free linked-list implementation. A bucket split results from the insertion of a new pointer to the leading record of the new bucket, dividing the previous sub-list corresponding to the old bucket. A similar observation holds for SH. An implementation is left to future work.

Author Contributions

Conceptualization: A.D.R.K. and T.S., methodology: A.D.R.K. and T.S., software (performance): A.D.R.K., software (fringe analysis): T.S., analysis: A.D.R.K. and T.S., writing: T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

This project used no external data. All experiments were described and are repeatable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Larson, P.-Å. Dynamic hash tables. Commun. ACM 1988, 31, 446–457. [Google Scholar] [CrossRef]
  2. Fagin, R.; Nievergelt, J.; Pippenger, N.; Strong, H.R. Extendible hashing—A fast access method for dynamic files. ACM Trans. Database Syst. (TODS) 1979, 4, 315–344. [Google Scholar] [CrossRef]
  3. Litwin, W. Linear hashing: A new tool for file and table addressing. VLDB 1980, 80, 1–3. [Google Scholar]
  4. Martin, G.N.N. Spiral Storage: Incrementally Augmentable Hash Addressed Storage; Theory of Computation Report—CS-RR-027; University of Warwick: Coventry, UK, 1979. [Google Scholar]
  5. Mullin, J.K. Spiral storage: Efficient dynamic hashing with constant performance. Comput. J. 1985, 28, 330–334. [Google Scholar] [CrossRef]
  6. Wan, H.; Li, F.; Zhou, Z.; Zeng, K.; Li, J.; Xue, C.J. NVLH: Crash consistent linear hashing for non-volatile memory. In Proceedings of the 2018 IEEE 7th Non-Volatile Memory Systems and Applications Symposium (NVSMA), Sapporo, Japan, 28–31 August 2018; pp. 117–118. [Google Scholar]
  7. Chu, J.-H.; Knott, G.D. An analysis of spiral hashing. Comput. J. 1994, 37, 715–719. [Google Scholar] [CrossRef]
  8. Amer, A.; Long, D.; Miller, E.; Pâris, J.-F.; Schwarz, T. Design issues for a shingled write disk system. In Proceedings of the 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), Village, NV, USA, 3–7 May 2010; pp. 1–12. [Google Scholar]
  9. Stavrinos, T.; Berger, D.S.; Katz-Bassett, E.; Lloyd, W. Don’t be a blockhead: Zoned namespaces make work on conventional SSDs obsolete. In Proceedings of the Workshop on Hot Topics in Operating Systems, Ann Arbor, MI, USA, 1–3 June 2021; pp. 144–151. [Google Scholar]
  10. NVM Express: Zoned Namespace Command Set Specifications. Revision 1.1. 18 May 2021. Available online: https://nvmexpress.org/ (accessed on 1 September 2024).
  11. Baeza-Yates, R.A. Fringe analysis revisited. ACM Comput. Surv. (CSUR) 1995, 27, 109–119. [Google Scholar] [CrossRef]
  12. Glombiewski, N.; Seeger, B.; Graefe, G. Waves of misery after index creation. In Datenbanksysteme für Business, Technologie und Web; Lecture Notes in Informatics; Gesellschaft für Informatik: Bonn, Germany, 2019; pp. 77–96. [Google Scholar]
  13. Litwin, W.; Neimat, M.A.; Schneider, D.A. LH*—A scalable, distributed data structure. ACM Trans. Database Syst. (TODS) 1996, 21, 480–525. [Google Scholar] [CrossRef]
  14. Schlatter Ellis, C. Concurrency in linear hashing. ACM Trans. Database Syst. (TODS) 1987, 12, 195–217. [Google Scholar] [CrossRef]
  15. Shalev, O.; Shavit, N. Split-ordered lists: Lock-free extensible hash tables. J. ACM (JACM) 2006, 53, 379–405. [Google Scholar] [CrossRef]
  16. Zhang, D.; Larson, P.-Å. Lhlf: Lock-free linear hashing. In Proceedings of the 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, New Orleans, LA, USA, 25–29 February 2012; poster paper. pp. 307–308. [Google Scholar]
Figure 1. Linear Hashing example with $b = 5$ and $b = 6$.
Figure 2. Spiral Hashing with $S = 7$ before and after splitting.
Figure 3. Address calculation in Spiral Hashing for key key and file state S.
Figure 4. Spiral Hashing example with $b = 5$ and $b = 6$.
Figure 5. Theoretical and expected number of records per bucket in an LH file. On the left, we used random numbers, and on the right, we used parts of the SHA1 hash of the first 8000 words in a list of English words.
Figure 6. Number of records rehashed during a split in LH. Theoretical result for Linear Hashing when buckets are split based on the ratio of records to buckets.
Figure 7. Number of records rehashed during a split in LH. Experimental results for random keys when splits are triggered by an overflowing bucket.
Figure 8. Number of records rehashed when adding 1000 records to a Linear Hashing and Spiral Hashing file with n records, $n \in [2^{10} - 200, 2^{15} + 200]$.
Figure 9. Probability that the lowest-numbered bucket under Spiral Hashing indeed has the largest number of records.
Figure 10. Violin plot of the run times of LH and SH under inserts with 10 threads and a bucket capacity of 10.
Figure 11. Run-time means for the insert experiment for various numbers of threads and bucket capacities.
Figure 12. Run-time means for the lookup experiment.
Figure 13. p-values for the null hypothesis of equal lookup times using the Mann–Whitney U-test.
Figure 14. Number of records per bucket for LH and SH when using counter values as a key.
Figure 15. Number of records per bucket for LH and SH under uneven hash functions.
Table 1. Average operation times and their significance levels.

Spiral Hashing Inserts
α      Threads  Time   p-value   |  α      Threads  Time   p-value
0.5    5        0.886  0.978     |  0.5    20       1.845  0.781
0.667  5        0.864  0.022     |  0.667  20       1.839  0.781
1      5        0.876  1.000     |  1      20       1.872  1.000
1.5    5        0.865  0.049     |  1.5    20       1.877  0.973
2      5        0.858  0.005     |  2      20       1.839  0.333

Linear Hashing Inserts
0.5    5        0.774  0.054     |  0.5    20       1.703  0.335
0.667  5        0.774  0.026     |  0.667  20       1.676  0.023
1      5        0.789  1.000     |  1      20       1.741  1.000
1.5    5        0.769  0.011     |  1.5    20       1.687  0.071
2      5        0.772  0.094     |  2      20       1.836  0.020

Spiral Hashing Lookups
0.5    5        0.417  <10^-11   |  0.5    20       0.322  0.016
0.667  5        0.413  <10^-4    |  0.667  20       0.430  <10^-28
1      5        0.415  1.000     |  1      20       0.323  1.000
1.5    5        0.424  <10^-28   |  1.5    20       0.319  0.947
2      5        0.424  <10^-28   |  2      20       0.328  <10^-6

Linear Hashing Lookups
0.5    5        0.426  0.003     |  0.5    20       0.324  0.3667
0.667  5        0.430  <10^-18   |  0.667  20       0.324  0.083
1      5        0.424  1.000     |  1      20       0.323  1.000
1.5    5        0.427  <10^-5    |  1.5    20       0.322  0.647
2      5        0.428  <10^-15   |  2      20       0.323  0.793