Friday, March 20, 2009

The LSD Proof

From my colleagues among Israeli faculty, I hear strange tales about dealing with people who want to be your PhD students. It seems that so many students are enthusiastic about doing a PhD (with theory faculty!) that hazing them becomes a good idea. As the stories go, you should tell each candidate "Go read this paper, and come back once you've understood it."

The trick, of course, lies in your choice of the paper. If you ever want to do this to somebody who dreams of working in communication complexity, you might try this lower bound for lopsided set disjointness. The proof is straightforward to me, but it is fairly long and diverse, throwing all the tricks in the book at this problem. If you want to work in communication complexity, it makes sense to first reach the level where the proof is straightforward to you.

The story of set disjointness stretches back to the dawn of communication complexity. An optimal deterministic lower bound is easy to prove (see my earlier post). From what I understand, in the late 80's, proving a randomized lower bound was a major problem. The issue is that you cannot use independent distributions for Alice's and Bob's inputs, since two random sets will intersect with overwhelming probability. 
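To see concretely why independent distributions fail, here is a minimal Python sketch (the parameters and function name are my own, for illustration). Each element joins each set independently with probability 1/2, so two random subsets of an n-element universe are disjoint with probability (3/4)^n, which vanishes exponentially:

```python
import random

def sample_disjoint_fraction(n, trials=10000, seed=0):
    """Estimate how often two uniformly random subsets of [n] are disjoint.
    Each element lands in each set independently with probability 1/2, so
    P(disjoint) = (3/4)^n -- exponentially small in n."""
    rng = random.Random(seed)
    disjoint = 0
    for _ in range(trials):
        a = {i for i in range(n) if rng.random() < 0.5}
        b = {i for i in range(n) if rng.random() < 0.5}
        if not (a & b):
            disjoint += 1
    return disjoint / trials

# (3/4)^10 ≈ 0.056 and (3/4)^20 ≈ 0.003: under independent inputs,
# the answer to disjointness is almost always "no".
```

So a hard distribution for the randomized lower bound must correlate Alice's and Bob's inputs.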

A randomized lower bound was obtained in 1992 by [Kalyanasundaram, Schnitger] and by [Razborov]. The technique was explained in an information-theory view by [Bar-Yossef, Jayram, Kumar, Sivakumar], who also found new interesting applications.

The basic idea is to consider a distribution in which the two sets never intersect. Let the universe be split into n blocks. For each block, we will either choose Bob's element publicly, or choose Alice's element publicly. The other player gets a private random input that is different from the public choice, making the sets disjoint. Under this distribution, the blocks are independent. Thus, if the protocol communicates o(n) bits in total, I can find a block about which it communicates o(1) bits of information. I will now alter the distribution, making it independent on that block alone. The protocol is still the same, so it will have large error because it cannot spot the difference.
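The sampling procedure described above can be sketched in a few lines of Python (the block size and all names are my own illustrative choices, not from any of the papers):

```python
import random

def sample_hard_input(n, block_size=4, seed=None):
    """Sample (A, B) from the no-intersection distribution: the universe is
    n blocks of block_size elements, and each player holds one element per
    block. In each block, a public coin fixes one player's element; the other
    player privately picks a *different* element of the same block, so A and B
    are disjoint by construction, and the blocks are mutually independent."""
    rng = random.Random(seed)
    A, B = set(), set()
    for blk in range(n):
        base = blk * block_size
        elems = list(range(base, base + block_size))
        public = rng.choice(elems)                              # publicly chosen element
        private = rng.choice([e for e in elems if e != public]) # private, avoids the public choice
        if rng.random() < 0.5:           # public coin: whose element is fixed?
            A.add(public); B.add(private)
        else:
            B.add(public); A.add(private)
    return A, B

A, B = sample_hard_input(8)
assert not (A & B)  # disjoint by construction
```

The lower-bound argument then swaps this conditional sampling for genuinely independent choices on the single low-information block, which the protocol cannot detect.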

In STOC'95, [Miltersen, Nisan, Safra, Wigderson] introduced asymmetric communication complexity (in which Alice and Bob have inputs of significantly different sizes), and showed how asymmetric lower bounds imply interesting data structure lower bounds. They proved an optimal deterministic lower bound for asymmetric set disjointness, and left the randomized case as an "interesting" open problem. I suspect, however, that they simply did not have the time or motivation to work out how the previous techniques applied in the asymmetric case.

In my FOCS'06 paper with Alex Andoni and Piotr Indyk, we showed the first exciting application of lopsided set disjointness (LSD) to data structures: it implied that data structures with O(1) query time cannot solve (1+ε)-approximate nearest neighbor with less than n^Ω(1/ε²) space. This application only required a weaker version of an LSD lower bound, which could be proved with independent inputs given to Alice and Bob.

The really exciting development was my FOCS'08 paper, which showed that many interesting lower bounds could be obtained by just the right reductions from LSD. At this stage, I needed the optimal LSD lower bound --- so I simply claimed that the proof would appear in the journal version.

Ridden with guilt (and reminders from Jeff, acting as editor of the special issue), I finally had to write down the proof. Once you really, really understand the old symmetric technique, and you're comfortable with the standard tricks from communication lower bounds, this proof is not too exciting. I'm glad to be done with it.
