The data structure of van Emde Boas is by every definition a classic:
- it solves a fundamental problem optimally (see my paper with Mikkel)
- it is very simple and elegant
- it has 101 known variants
- it has been extremely impactful in wildly different research directions (see cache-oblivious layouts and Tango trees).
- it is important in practice (see my last post about IP lookup and routing).
- and of course, it got Knuth excited ;)
> On March 22, 1977, as I was drafting Section 7.1 of The Art of Computer Programming, I read four papers by Peter van Emde Boas that turned out to be more appropriate for Chapter 8 than Chapter 7. I wrote a five-page memo entitled "Notes on the van Emde Boas construction of priority deques: An instructive use of recursion," and sent it to Peter on March 29, with copies also to Bob Tarjan and John Hopcroft.
We want to search for predecessors in a set S of n integers of w bits each: given a query q, report the largest value in S that is at most q. We will build a data structure that has O(n) space and O(lg w) query time. (The "universe" of the input has size U = 2^w, so the bound is often cited as O(lg lg U), which seems more impressive and less practical due to the double log; since the structure is neither "impressive" nor impractical, I don't like this notation.)
Another thing that I don't like is that people take the word "integers" literally. It actually means "values stored with precision w". If you read about how floating point numbers are represented in computers (the IEEE 754 standard), you realize that comparing two floating point numbers is essentially the same as comparing two integers with the same bit representation (modulo a small tweak for the sign bit). So forget that your data was floating point, and simply pretend it was integers as far as predecessor is concerned. [[ Personal note: I made quite an impression among Romanian IOIers when I realized this in the 9th grade :) It allowed everybody to use radix sort for real numbers, and everybody knew radix sort is simpler and faster than comparison-sorting. ]]
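Concretely, the sign tweak is this: for IEEE 754 doubles, non-negative values already order like their bit patterns, but negatives are stored in sign-magnitude, so their patterns order in reverse. A minimal Python sketch (where `float_key` is a name I made up, and NaNs are assumed absent):

```python
import struct

def float_key(x: float) -> int:
    """Hypothetical helper: map an IEEE 754 double to an unsigned 64-bit
    integer whose order matches the float order (NaNs assumed absent)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    if bits >> 63:                         # negative: flip all the bits
        return bits ^ 0xFFFFFFFFFFFFFFFF
    return bits | (1 << 63)                # non-negative: flip the sign bit only

vals = [3.5, -2.25, 0.0, 100.0, -1e9]
assert sorted(vals, key=float_key) == sorted(vals)
```

Once the keys are integers, radix sort (or any predecessor structure) applies unchanged.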
Longest common prefix. Predecessor search is equivalent to finding the longest common prefix between the query q and a value in S (where numbers are seen as strings of bits). To see this, think of the binary trie of depth w that represents our universe. The longest common prefix p is a node in this trie. If q continues with a one, the predecessor is the maximum value in S with prefix p, which can be precomputed and stored at every node. If q continues with a zero, the successor is the minimum value in S with prefix p, and once we have the successor, we can use a linked list to get the predecessor.
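To make the case analysis concrete, here is a brute-force Python sketch over a hypothetical 8-bit universe; it finds the branching node by maximizing the LCP, then applies the two cases (assuming q itself is not in S and has a predecessor):

```python
W = 8  # hypothetical word length, small enough to read the bits

def lcp_len(a: int, b: int) -> int:
    """Number of leading bits shared by two W-bit values."""
    return W - (a ^ b).bit_length()

S = sorted([0b00001010, 0b10010000, 0b10110101, 0b11100001])
q = 0b10110010  # assumed not in S itself

# the node p: the deepest trie node on q's path with descendants in S
best = max(S, key=lambda v: lcp_len(q, v))
l = lcp_len(q, best)
prefix = q >> (W - l)

if (q >> (W - l - 1)) & 1:
    # q continues with a one: predecessor = max value in S with prefix p
    pred = max(v for v in S if v >> (W - l) == prefix)
else:
    # q continues with a zero: find the successor (min value with prefix p),
    # then step back in sorted order -- the "linked list" from the text
    succ = min(v for v in S if v >> (W - l) == prefix)
    pred = S[S.index(succ) - 1]  # assumes a predecessor exists

assert pred == 0b10010000  # the largest element of S below q
```

The brute-force `max` over S is of course not the real algorithm; the whole point of what follows is to find the LCP fast.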
Here is my artful depiction of the first case (query continues to the right of the longest common prefix):
Now, we're looking for the (length of the) longest common prefix. What strategy do we know for finding a number? Binary search. So we're going to binary search for the length of the longest common prefix, i.e. binary search on the vertical of the trie.
To do that, we construct a dictionary (aka hash table) in which we store every prefix of every value in S. If we conjecture the longest common prefix has length l, we take the first l bits of q, and see if this prefix exists in the dictionary. If it does, the longest common prefix has length at least l; otherwise, it is strictly shorter than l. Either way the search range halves, so O(lg w) dictionary lookups suffice.
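A Python sketch of this step, over a hypothetical 16-bit universe; a set of (length, prefix) pairs stands in for the hash table:

```python
W = 16  # hypothetical word length

def build_prefix_table(S):
    """Store every prefix of every value in S; this is the O(n w)-space
    version (bucketing brings it down to O(n))."""
    table = set()  # (length, prefix) pairs stand in for the hash table
    for v in S:
        for l in range(W + 1):
            table.add((l, v >> (W - l)))
    return table

def lcp_length(q, table):
    """Binary search for the length of the longest common prefix of q
    with the set: one dictionary lookup per step, O(lg w) in total."""
    lo, hi = 0, W  # invariant: a common prefix of length lo exists
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if (mid, q >> (W - mid)) in table:
            lo = mid       # some value shares the first mid bits of q
        else:
            hi = mid - 1   # no value does, so the answer is below mid
    return lo

S = [3, 500, 40000]
table = build_prefix_table(S)
assert lcp_length(41000, table) == 2  # 41000 and 40000 share only the top bits "10"
```

With min/max values stored at each prefix (as in the trie picture), the LCP length immediately yields the predecessor.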
The space. What I described above takes space O(nw), because we store every prefix of every number. We can fix that with a simple, yet very effective idea: bucketing.
Sort S and group every w consecutive values into a bucket, and let S' be the set containing the smallest value of each bucket, so |S'| = n/w. We first build the above data structure to search for predecessors in S'. The space is O((n/w) w) = O(n). After finding the predecessor in S', we know the correct bucket where the answer is, and we can binary search inside that bucket in additional time O(lg w).
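The bucketing bookkeeping, as a Python sketch; a plain sorted list stands in for the O(lg w) prefix structure over S', just to show how the two levels fit together:

```python
from bisect import bisect_right

W = 16
BUCKET = W  # bucket size w, so the top structure holds n/w values

def build(S):
    S = sorted(S)
    buckets = [S[i:i + BUCKET] for i in range(0, len(S), BUCKET)]
    mins = [b[0] for b in buckets]  # S': the smallest value of each bucket
    # the O(lg w) prefix structure would be built over `mins`; a plain
    # sorted list stands in for it in this sketch
    return buckets, mins

def predecessor(q, buckets, mins):
    i = bisect_right(mins, q) - 1  # stand-in for the predecessor search in S'
    if i < 0:
        return None                # q is smaller than everything in S
    b = buckets[i]
    j = bisect_right(b, q) - 1     # binary search inside one bucket: O(lg w)
    return b[j]                    # j >= 0 since b[0] = mins[i] <= q

S = list(range(0, 1000, 7))
buckets, mins = build(S)
assert predecessor(100, buckets, mins) == 98
```

Note why the bucket found is always the right one: mins[i] <= q, and every element of later buckets is at least mins[i+1] > q, so the true predecessor lives in bucket i.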