At the request of Sanjeev Arora, I prepared the following lecture notes about O(lglg n)-competitive binary search trees (our Tango trees).

The main innovation is the proof of the Wilber-1 bound. As opposed to the somewhat hairy proof in Wilber's paper and in our paper's appendix, I show Wilber-1 by arguing about the problem geometry and maintaining partial sums (forget trees!). Hopefully this gives a more intuitive proof. In any case, it shows the students something very important in its own right.

If you read these notes, I am interested in anything you have to say, including things you didn't understand, or suggestions for clarifications.

## 5 comments:

I spent about half an hour reading the first two pages and here are some notes.

1. When computing OPT can I pick any initial state of the tree? It seems so.

2. I can see that OPT(S(1), ..., S(n)) is Theta(n), because any OPT for n elements is Omega(n), and you can start with n->left==NULL for all nodes and rotate to the left for each search. But why is E[OPT(X)] = Omega(m lg n)? I don't know why, but it seems to indicate that there are 'many' 'hard' instances.

3. I guess O(lg n) competitive is trivial using a static tree.

4. What does "approx/competitive ratio" mean? [and 5 minutes later] Ah, he's talking about the complexity of the problems, not the algorithms, as I was thinking.

At the beginning of page 2: I guess we want to prove that the approx problem is not (asymptotically) harder than the competitive problem and we'll look at upper bounds for approx and lower bounds for competitive... but I'm not really sure if this is the plan.

5. "lower bound on computation" as opposed to "lower bound on memory" or what?

6. Since the Wilber I section mentions "lower bound" and I believe the plan is what I said above, then it's probably a lower bound for the competitive (on-line) problem.

7. I don't understand what "switching left to right" means.
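On point 7: if it helps, here is a small Python sketch (my own reconstruction, not code from the notes) of the kind of "switching" that Wilber-1 counts: build the balanced lower-bound tree over the keys, and at each node count how often consecutive accesses through it alternate between its left and right subtrees. The tie-breaking at the node's own key is my choice here and only affects constants.

```python
def wilber1(seq, lo, hi):
    """Wilber-1 ("alternation") bound for access sequence seq over keys lo..hi,
    using the balanced lower-bound tree rooted at the midpoint."""
    if lo > hi or not seq:
        return 0
    mid = (lo + hi) // 2
    # Label each access by the side of mid it falls on; accesses to mid
    # itself stop at this node (a tie-breaking choice, constants only).
    sides = ['L' if x < mid else 'R' for x in seq if x != mid]
    switches = sum(1 for a, b in zip(sides, sides[1:]) if a != b)
    return (switches
            + wilber1([x for x in seq if x < mid], lo, mid - 1)
            + wilber1([x for x in seq if x > mid], mid + 1, hi))

# Sequential access barely switches; an interleaved sequence switches a lot.
assert wilber1(list(range(1, 9)), 1, 8) < wilber1([1, 5, 2, 6, 3, 7, 4, 8], 1, 8)
```

Summing the switches over all nodes of the lower-bound tree gives the Wilber-1 lower bound for the sequence.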

Before getting to "Implementation details" I thought that Tango trees work like this: They are represented by a set of 'paths', each path being an ordered map from values to the path starting in the black sheep (that is, non-preferred child). You keep a pointer to the path starting in the 'root' of the 'lowerbound tree'. To search for x you start with the 'root path' and go through it sequentially until you realize that you need to visit a black sheep. At that point you reorganize the paths and continue. The phrase (split; concatenate; concatenate) suggests the same kind of 'reorganization' as I had in mind. Other than that, I can't really understand the other parts of the picture, and it makes me think that perhaps Tango trees work in a completely different way than I thought before getting to the picture.

Hmm, now that I think about it, what I just said isn't O(lg lg n). You have to replace the 'sequential' part with something that better exploits the structure of the path. I guess that's why "impl notes" mentions "successor(x)". Aha.. I think I got it. Here's what I got wrong initially: paths are NOT ordered by position in the lowerbound tree but by VALUE. Each element in the path keeps TWO pointers (not one as I thought) just like in the lowerbound tree. Now the picture makes more sense.

----

OK. Now I'll stop thinking out loud.

> 1. When computing OPT can I pick any initial state of the tree?

Sure, it only affects the bound by O(n), which is a lower-order term.

> 2. But why is E[OPT(X)]=Omega(m lg n)?

Basic information theory... This is a comparison-based data structure, so to find a random element you need Omega(lg n) comparisons.
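A quick toy check of that claim (my own sketch, not part of the notes): even a perfectly balanced comparison search spends about lg n comparisons on the *average* key, not just in the worst case, so m uniformly random accesses cost Omega(m lg n) in total.

```python
import math

def comparisons(arr, target):
    """Count the three-way comparisons a binary search spends on `target`."""
    lo, hi, c = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        c += 1
        if arr[mid] == target:
            break
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return c

n = 1024
arr = list(range(n))
avg = sum(comparisons(arr, x) for x in arr) / n
# The average over a uniformly random key is already ~lg n = 10 here.
assert avg >= math.log2(n) - 2
```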

> 3. I guess O(lg n) competitive is trivial using a static tree.

Yes.

> At the beginning of page 2: I guess we want to prove that the approx problem is not (asymptotically) harder than the competitive problem and we'll look at upper bounds for approx and lower bounds for competitive... but I'm not really sure if this is the plan.

Not really... We look at lower bounds for the running time of any algorithm (even in the offline version), and then try to get an upper bound for the online version (an actual online tree).

> 5. "lower bound on computation" as opposed to "lower bound on memory" or what?

As opposed to lower bounds on the optimum of some approximation problem. These bounds are obtained as the optimum of an LP in many cases, but also by ad hoc observations about the problem.

> 6. Since the Wilber I section mentions "lower bound" and I believe the plan is what I said above, then it's probably a lower bound for the competitive (on-line) problem.

It's a lower bound for any algorithm in the BST model, including offline.

> 7. I don't understand what "switching left to right" means.

As you look at the accesses, one on the left side is followed by one on the right side :)

> Here's what I got wrong initially: paths are NOT ordered by position in the lowerbound tree but by VALUE.

Correct. This is a requirement for any binary search tree, so we have to stick to it. (If you ordered by position, how would you search for your element by comparisons?)

> This is a comparison-based data structure, so to find a random element you need Omega(lg n) comparisons.

I thought that was true for the worst case only; I see now that it's true on average too.

> We look at lower bounds for the running time of any algorithm (even in the offline version), and then try to get an upper bound for the online version (an actual online tree).

Then I'd rephrase the top of page 2 to say this explicitly: "We start with a lower bound on the offline problem (which is also a lower bound on the online problem) and then show a solution to the online problem that is only O(lg lg n) worse than the lower bound."

> As opposed to lower bounds on the optimum of some approximation problem.

OK, I get it. But then it's not "as opposed to" but rather "here, equal", as you write in the notes. For some reason I find the notes confusing (even if they are correct). I'd rather continue with: "note that a lower bound on the offline problem is a lower bound on the approximation optimum". It's probably because "lower bound = lower bound on computation" made me think "so he's saying that 'on computation' will be implicit from now on; but what else than computation could it be? Memory?" and this led me astray.

> As you look at the accesses, one on the left side is followed by one on the right side :)

My fault: I didn't look at figure 1 (because it said it's on the "next page" and after a quick glance I didn't see it there: it's on page 4).

> If you ordered by position, how would you search for your element by comparisons?

The point where I went astray was when I thought that you don't keep the "lowerbound tree" explicitly but only implicitly in the paths. I'd say "We store: 1. (the static) lower bound tree, 2. (the dynamic) preferred paths, as dynamic BSTs..." to make it clear that both are stored.

I wonder what happens if you keep paths in Tango trees. It seems you get O(m lg lg n lg lg lg n) for random sequences but may get O(lg lg lg n) for S(1), ..., S(n).

Anyway, back to the notes. They seem very personal to me. They remind me of my notes in college: they made perfect sense to me but no one wanted to copy them because they were almost incomprehensible to anyone else. And I was almost offended when I was told that they were hard to understand :)

To summarize my suggestions:

1. Give a better plan of what will follow at the beginning of page 2. (Here: better = more explicit)

2. Make it clear that you store both one static tree and a few dynamic paths.

3. Perhaps mention why you can concat paths fast. (If I'm not mistaken the reason is that elements of one are entirely between two elements of the other.) [warning: i didn't think about point 3 carefully]
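On point 3, here is a tiny check (my own sketch, not from the notes) of the structural fact that makes the reorganization fast: along any root-to-leaf walk, the nodes below a given cut depth all sit in one subtree, so their keys form one contiguous run inside the path's key order. Two splits therefore carve the deep part out in one piece.

```python
def path_to(key, lo=1, hi=15):
    """Root-to-key walk in the perfect BST over lo..hi, as (key, depth) pairs."""
    out, d = [], 0
    while lo <= hi:
        mid = (lo + hi) // 2
        out.append((mid, d))
        if key == mid:
            break
        if key < mid:
            hi = mid - 1
        else:
            lo = mid + 1
        d += 1
    return out

path = path_to(11)                # (8,0), (12,1), (10,2), (11,3)
cut = 1                           # cut the path below depth `cut`
deep = sorted(k for k, d in path if d > cut)
shallow = [k for k, d in path if d <= cut]
# Every shallow key lies strictly outside the key interval spanned by the
# deep nodes, so the deep part is one contiguous block in key order.
assert all(k < deep[0] or k > deep[-1] for k in shallow)
```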

I meant O(n lg lg lg n) for search(1), ..., search(n).

> The point where I went astray was when I thought that you don't keep the "lowerbound tree" explicitly but only implicitly in the paths.

We never keep the lowerbound tree explicitly, though it is easier to think that we do. (In the end, everything is a proper BST, so we can't go on keeping arbitrary stuff on the side.)

> I wonder what happens if you keep paths in Tango trees.

You do not beat the Omega(lg lg n) competitive ratio, unfortunately.
