Thursday, August 9, 2007

Binary Search Trees (IV): Splay Trees

Though we can't prove splay trees are anywhere close to dynamic optimality, they are undoubtedly the star of the game, and their "cool factor" has driven most of the early research on BSTs.

The splay tree is an amazing algorithm that does something extremely simple, and seems to get everything right without trying at all. If you haven't seen splay trees, go read what they do. The algorithm is described entirely by 2 drawings.
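To make the two drawings concrete, here is a minimal sketch of bottom-up splaying in C (the names, struct layout, and the toy main are mine, purely for illustration): an access is a plain BST search, after which the node found is rotated to the root by zig-zig and zig-zag steps.

    /* A sketch of bottom-up splaying with parent pointers. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct node {
        int key;
        struct node *left, *right, *parent;
    } node;

    /* rotate x above its parent, preserving symmetric order */
    static void rotate(node *x) {
        node *p = x->parent, *g = p->parent;
        if (p->left == x) {                  /* right rotation */
            p->left = x->right;
            if (x->right) x->right->parent = p;
            x->right = p;
        } else {                             /* left rotation (mirror image) */
            p->right = x->left;
            if (x->left) x->left->parent = p;
            x->left = p;
        }
        p->parent = x;
        x->parent = g;
        if (g) { if (g->left == p) g->left = x; else g->right = x; }
    }

    /* splay: repeat the two drawings until x is the root */
    static void splay(node *x) {
        while (x->parent) {
            node *p = x->parent, *g = p->parent;
            if (!g)
                rotate(x);                    /* zig: p is the root */
            else if ((g->left == p) == (p->left == x))
                { rotate(p); rotate(x); }     /* zig-zig: rotate p first */
            else
                { rotate(x); rotate(x); }     /* zig-zag: rotate x twice */
        }
    }

    /* an access = ordinary BST search + splaying the last node touched */
    static node *access(node *root, int key) {
        node *x = root, *last = NULL;
        while (x) {
            last = x;
            if (key == x->key) break;
            x = (key < x->key) ? x->left : x->right;
        }
        if (last) splay(last);
        return last;     /* last is now the root (NULL if tree is empty) */
    }

    static node *insert(node *root, int key) {
        node **slot = &root, *p = NULL;
        while (*slot) { p = *slot; slot = (key < p->key) ? &p->left : &p->right; }
        node *x = calloc(1, sizeof *x);
        x->key = key; x->parent = p; *slot = x;
        splay(x);        /* splaying after insert keeps the amortized bounds */
        return x;        /* x is now the root */
    }

    int main(void) {
        node *root = NULL;
        for (int k = 1; k <= 7; k++) root = insert(root, k);
        root = access(root, 3);
        printf("root after accessing 3: %d\n", root->key);
        return 0;
    }

Note there is no bookkeeping at all: no balance fields, no frequency counts. Everything below falls out of these rotations alone.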

Top reasons to get excited about splay trees (these are all proven theorems):

1. static optimality: they run in O(first-order entropy of the access sequence).

This means splay trees are as good as the best static tree (see post I). If we knew the access frequencies in advance, we could build the optimal static tree by Knuth's dynamic program. But splay trees don't know the frequencies, don't try to learn them in any explicit way, and still match the entropy.
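In symbols: if element x is accessed q_x times out of m accesses in total, the total cost of the splay tree is

    O( m + sum_x q_x lg(m/q_x) ),

which is m times the empirical entropy (this is the statement as I remember it from the paper, under the assumption that every element is accessed at least once).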

This has profound consequences in applications, since it means we can assume trees are "biased" in whatever way is best for us. Example: for dynamic connectivity in forests (a data structure used by flow algorithms), replacing red-black trees by splay trees automatically reduces the bounds from O(lg² n) to O(lg n).

2. dynamic finger: the cost is O(sum_i lg(|x_i - x_{i-1}| + 2)), where x_i is the rank of the i-th access.

So the smaller the jump between consecutive accesses, the faster you run. The practical consequence: you can forget about linked lists. Even if you tend to access consecutive elements, the splay tree is just as good (and of course better if you occasionally want to jump a longer distance).
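As a sanity check, take a sequential scan x_i = i: every jump is 1, so every term in the sum is lg(1+2) = lg 3, a constant, and the total cost of n accesses is O(n) -- that is, O(1) amortized per access, exactly what a linked-list traversal costs.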

Now, it is possible to build trees with the express purpose of achieving dynamic finger bounds, but they are rather nontrivial to implement. Proving that splay trees achieve dynamic finger gives a simple algorithm, and moves the entire burden onto theory. Of course, it is quite a burden... The proof is one of those famous long-and-difficult proofs of Computer Science [Cole: On the Dynamic Finger Conjecture for Splay Trees, SICOMP 2000; STOC'90]. Notice the gap between the conference and journal versions to see what I mean by long and difficult.

3. working set: Say you are going to access some x. If, since the last time you accessed x, you have only accessed k distinct elements, the amortized running time of this operation is O(lg k). This should be a practical scenario -- it is exactly the reason computers have caches (to improve the case when a few elements are accessed frequently).
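In the same style as before: if w_i is the number of distinct elements accessed since the previous access to the i-th element requested, the total cost is O( sum_i lg(w_i + 2) ), with first-ever accesses costing O(lg n). (I am paraphrasing; see the paper for the exact accounting.)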

Proofs for 1. and 3. are in the original paper of [Sleator, Tarjan] and are surprisingly cute and simple. NB: this paper is famous for frustrating students by combining something really cool with lots of uncool things. You probably want to ignore everything outside pages 4-9.

Splay-tree theory. Going beyond these results, there is a whole industry of getting excited about splay trees. Maybe a landmark which is somewhere in sight, but not too close, is to prove a 2-finger bound: take two access sequences with a small finger bound, and interleave them arbitrarily; now show that splay trees are roughly as fast as on the individual sequences. You can generalize this to k fingers, combining with the working-set property, along the lines of "the running time is small if you access elements close to something in cache". I think the correct generalization is in [Iacono: Alternatives to splay trees, SODA'01], but it is not yet known whether there exists a dynamic tree that achieves the bound. (Open problem!)
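For the record, the bound I have in mind -- quoting from memory, so check Iacono's paper for the exact form -- says the i-th access should cost amortized

    O( lg min_y ( w_i(y) + |x_i - y| + 2 ) ),

where w_i(y) is the working-set number of y and |x_i - y| is the distance in key space.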

To get back to the splay industry, almost all work is technically very difficult; in addition, some of it is quite insightful, and appeals to people who like combinatorics. A nice example is Seth Pettie's recent proof [arXiv:0707.2160] of an O(lg α*(n)) running time for the 2-finger bound in the special case when each finger only moves by ±1 in each step. (This is called the deque bound, for obvious reasons.) Yep, that is really the inverse of Ackermann's function, starred.

My own take on the industry is that, while splay trees are cool and such, we must remember they are still just one particular algorithm. We can have fun proving these results, but I don't think they will make it into Encyclopedia Informatica. [[ But they will of course be referenced in wikipedia :)... someone should update the page given Seth's result. ]]

7 comments:

Anonymous said...

Hello.
Is there anything analogous to an optimal dynamic BST in the integer of cache-oblivious models?

Anonymous said...

Hello.
Sorry, I meant "or cache-oblivious models".

Mihai said...

Anonymous, you are asking a fabulous question that I plan to address in a post coming very soon.

Basically, it depends on what problem you want to use BSTs for. If you are using them for searching, say, you can always use hashing and get a result in constant time. Even if you want to use comparisons, there is no notion of optimality outside the BST model. (Even if you consider an augmented BST with nonlocal pointers, no algorithm can be competitive.)

However, it turns out that for other problems, optimality makes sense even in realistic models like the RAM or cache-oblivious. Stay tuned for a couple of days and I promise to discuss this in detail.

rgrig said...

Personally, I think that looking at particular algorithms like splay trees is good because (1) splay trees are useful in practice and because (2) looking at particular problems provides insight for attacking general ones. Point (1) is related to my dislike of the (too) wide separation between the academic community and the industry.

Mihai said...

If anybody knows the status of splay trees in practice, I would love to know. I am not convinced that they are so useful and widely used.

The only "success" story I know is that they were used in the Windows NT kernel, until they were taken out due to their inefficiency. Apparently, changing your tree is much more expensive today than when splay trees were invented -- (1) the continuous changes cause too many synchronization locks when the system is running on multicore CPUs; (2) modern L1+L2 caches deal much better with reads than writes.

Anonymous said...

rgrig, there are very few algorithms/data structures that are interesting enough to be studied at all. Nearly all algorithms are designed specifically to accommodate their worst-case analysis, and are usually more complicated than they need to be. A few exceptions to this rule are union-find, the simplex algorithm, and splay trees. There are others, but the list is pretty short.

rgrig said...

Mihai, it's true that I don't know of a place where splay trees are used in practice, and I have not experimented with them myself. I guess I was under the impression that they are used from the (rather theory-oriented) presentations I've read. After your comment I searched for some articles. At least this one suggests they are not great.

Seth, perhaps we have different notions of interestingness. I do not think the analysis of an algorithm designed for its analysis is interesting. On the other hand, the analysis of an algorithm that solves a problem inspired by practice is interesting.