Tuesday, December 22, 2009

Blog happenings

As you may have noticed, this blog has been on a bit of a hiatus. On the one hand, my expectations for politics-by-blog have dropped to a minimum, and my boredom with the theory community has grown. On the other hand, I have not had enough time for exposition-by-blog --- or, rather, I discovered that my thought process is too obsessive and erratic to leave room for regular expository interruptions.


In an attempt to pull myself out of this hiatus, let me respond to two recent blog posts: a guest post by Mikkel on My Biased Coin, and a post by Lipton.

My own reading of Mikkel's post goes something like this: "Hey, remember that big fuss from a year or two ago about simplicity as a virtue? How come it got hijacked by people wanting to introduce new models all the time, and never helped the papers on real algorithmic problems, where simplicity matters the most? Do we need to start our own «Innovations in Real Computer Science» conference, or can we have a session at SODA instead?"

My own favorite example of a book algorithm is the "mod-3" suffix tree construction of [Kärkkäinen, Sanders ICALP'03]. This is an amazingly simple algorithm (much simpler than anything before), and truly feels like the "ultimate" algorithm for suffix tree construction. It also demonstrates a rare property: dividing into 3 parts in a recursive construction can work much better than dividing into 2. This algorithm should (and probably will) be in any Intro to Algorithms course.
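For concreteness, here is a minimal Python sketch of the mod-3 structure. Two liberties are taken to keep it short: it produces the suffix array (from which the suffix tree follows), and the ranks of the sample suffixes are obtained by a direct comparison sort instead of the paper's recursion on triple names, so this version runs in O(n log n) rather than linear time. The names are mine; the merge step is the part worth staring at.

def suffix_array_mod3(s):
    # Rank the "sample" suffixes (positions i with i % 3 != 0), then sort and
    # merge in the mod-0 suffixes using one or two characters plus those ranks.
    # Assumes s is a str that does not contain the sentinel character '\0'.
    n = len(s)
    pad = s + "\0\0\0"                       # sentinels for safe indexing

    # 1. Rank the sample suffixes.  In the real algorithm this is done by
    #    recursing on a string of 2n/3 "triple names"; the direct sort below
    #    is the only liberty taken with the structure.
    sample = sorted((i for i in range(n) if i % 3 != 0), key=lambda i: s[i:])
    rank = [0] * (n + 3)                     # rank 0 = "past the end"
    for r, i in enumerate(sample, start=1):
        rank[i] = r

    # 2. Sort mod-0 suffixes by (first character, rank of the following suffix).
    mod0 = sorted(range(0, n, 3), key=lambda i: (pad[i], rank[i + 1]))

    # 3. Merge: a sample suffix j and a mod-0 suffix i compare in O(1) time.
    def sample_smaller(j, i):
        if j % 3 == 1:                       # j+1 and i+1 are both sample positions
            return (pad[j], rank[j + 1]) < (pad[i], rank[i + 1])
        # j % 3 == 2: use two characters, then ranks (j+2 and i+2 are sample positions)
        return (pad[j], pad[j + 1], rank[j + 2]) < (pad[i], pad[i + 1], rank[i + 2])

    out, a, b = [], 0, 0
    while a < len(sample) and b < len(mod0):
        if sample_smaller(sample[a], mod0[b]):
            out.append(sample[a])
            a += 1
        else:
            out.append(mod0[b])
            b += 1
    return out + sample[a:] + mod0[b:]

# suffix_array_mod3("banana") == [5, 3, 1, 0, 4, 2]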

So, was this algorithm the Best Paper at STOC/FOCS? Not even close --- in fact, most STOC/FOCS people have probably never heard of it. I also know from inside sources that the paper was viewed as somewhat borderline at ICALP (which is not too surprising: as screwed up as US conferences are, we still don't compare to the European ones).

***

Moving on to Lipton's blog, we notice that most of the discussion is confused about the difference between running time and some other (problem-dependent) cost function that is being optimized.

Some people claim the two are different because of Moore's law (your slower algorithm will be fast enough soon), or because CPU cost is not as real as, say, the cost of shipping goods around by truck in your optimization problem. I disagree with both views.

It's irrelevant whether Moore's law will make your algorithm fast enough in 2 years; by then, the competition may have bankrupted you. And if you really believe in thinking long term, you should probably be working on quantum computers and worrying about the speed of light (we all know that a memory of N bits has diameter at least N^{1/3}, so memory accesses cannot be "constant time"). Sure, you may not say anything relevant for 50+ years, but then you will finally be avenged --- and you may be as influential as Babbage's computer was in the design of Java. My response to this long-run thinking is a well-known quote from economics: "People don't eat in the long run." (Harry Hopkins)
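For the record, the N^{1/3} claim is just a packing argument; a back-of-the-envelope version, assuming each bit occupies at least some constant volume c > 0:

\[
  \text{volume} \;\ge\; c N
  \quad\Longrightarrow\quad
  \text{diameter} \;\ge\; \Big(\tfrac{6 c N}{\pi}\Big)^{1/3} \;=\; \Omega(N^{1/3}),
\]

since a region of diameter d fits inside a ball of volume (π/6)·d^3. A signal traveling at the speed of light therefore needs Ω(N^{1/3}) time to reach the farthest memory cell.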

How "real" CPU cost is will vary. If you're compressing Google's logs by 10%, you're likely to save 10% of a pretty big cost, a worthy goal. In time-constrained systems (like ATT routers), this cost is not smooth, but exhibits a phase transition. If you make this algorithm 30% faster, we can provide this extra functionality that competitors don't have, and convince some companies to switch to our network. If not, not. Adding 30% more computing power is essentially not an option.

So, no, the difference between running time and other optimization criteria is not that running time is less of a cost. The difference is that we understand running time a whole lot better. Look, we spent decades understanding the usefulness and the shortcomings of asymptotic analysis, refining our machine model (word RAM, plus external-memory issues), and so on. When we talk about running time, we usually know what we're talking about.

On the other hand, every combinatorial optimization problem is a modelling challenge: the constraints are not quite like this, the graph is not quite general, this cost function is not quite right, and so on. Each important problem needs (and deserves) considerable modelling work before our theoretical predictions are as good as we might want.

But what do we do at STOC/FOCS? We model the problem in the simplest way, ignoring the applications (which we are largely unaware of). Then we prove Θ(lg n) approximation bounds (complete with PCP lower bounds) for the dollar cost of shipping goods by truck (a problem now rebranded with a fancy name). Now we just wait for those Amazon guys to come asking how to improve their 5% approximation, and we can tell them the problem is perfectly understood!

Basically, a lower bound greater than 2 on a dollar cost is a formal proof that your formalization of the problem is wrong.

So, should we get our hands really dirty and look at the problems in such detail? I think that if you're going to be passionate about approximation algorithms, you should do it at least a few times. (I, for one, did a lot of performance-oriented programming before my current work on the theory of running times...)

I hope that if our approximation algorithms community gets a lot closer to the OR community, good things will happen. On the one hand, they might give us a break with their "understanding the nature of computation" claims once they realize they have only a vague idea of what computation really is. On the other hand, some nice theory might yet be developed that can talk about 3% improvements...

Friday, December 4, 2009

Talks

Update: The second talk is happening 6-8pm in Amfiteatrul Pompeiu (the amphitheater on the 2nd floor).

I am giving two talks in Bucharest this coming week:
  • a more practical one at Poly on Monday (December 7) at 6pm in room EC 101 (Fac. Automatică şi Calculatoare)
  • a more theoretical one at UniBuc on Friday (December 11). Room and time to be announced here, but it will be earlier in the afternoon.
Abstracts below. Don't forget to vote on Sunday.

  1. Tabulation hash functions

    Implementing hash tables with linear probing is more efficient than other implementations, provided the hash function is sufficiently random. Unfortunately, linear probing behaves very badly in conjunction with multiplication-based hash functions (the ones most used in practice), and for that reason this algorithm is often avoided.

    I will describe a very simple hash function, which is as fast as multiplication on current processors, but is based on indexing into precomputed tables. A mathematical analysis shows that this function guarantees constant running time for hash tables. (A minimal code sketch of such a function appears after the abstracts.)

  2. Negative results for data structures

    How do we prove that certain algorithmic results are impossible to obtain? For example, how do we prove that no linear-space data structure can support range queries in constant time? In this lecture, I will describe a complete proof of this result, going through several simple but interesting steps.
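For the curious, here is a minimal Python sketch of the kind of hash function the first abstract describes (presumably simple tabulation hashing): view the key as a few short characters and XOR together one random precomputed table entry per character. The parameters (32-bit keys viewed as 4 bytes, 256-entry tables) and all names are mine, chosen for illustration; the talk's actual function and analysis may differ.

import random

def make_tabulation_hash(num_chars=4, char_bits=8, seed=None):
    # Simple tabulation: view a key as num_chars characters of char_bits bits
    # each, and keep one table of 2^char_bits random 32-bit words per character.
    rng = random.Random(seed)
    tables = [[rng.getrandbits(32) for _ in range(1 << char_bits)]
              for _ in range(num_chars)]
    mask = (1 << char_bits) - 1

    def h(x):
        # XOR together one table lookup per character of the key.
        out = 0
        for i in range(num_chars):
            out ^= tables[i][(x >> (char_bits * i)) & mask]
        return out

    return h

# Usage: hash 32-bit keys into the buckets of a linear-probing table.
h = make_tabulation_hash(seed=42)
bucket = h(123456789) % 1024

The point of the abstract, as I read it, is that these few cache-resident lookups are competitive with a multiplication, while, unlike multiplicative hashing, being random enough for linear probing to work.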