Tuesday, September 4, 2007

Chernoff about STOC/FOCS

There is a constant stream of attacks on the CS publication model -- especially by Math purists, I feel. I have certainly seen a lot of commentary after the recent Koblitz nonsense. Though I have many qualms about our publication model, I strongly believe Math is a bad example of an ideal. Here's my analysis of it.

In Math, it's mainly about the results. Math is old, it knows exactly what it's doing, it has these problems that it's been obsessing about for ages, and its painfully slow intake of any real-world influence makes it very stationary. Thus, what matters most is that they get some new results. The best strategy is to stress young people out, and make them come up with one new and hard contribution. Sure, many fail and go to Wall Street. But rationally for "the good of Mathematics", the results matter and people don't. (There is of course a lot of revering of old Mathematicians with some amazing result, which is again rational because that's exactly how you motivate young people to try to have a similar achievement).

In CS, I believe, it's about the people. We:

  • are a new field, and still sorting out fundamental aspects from nonsense.
  • need people with strong influence and personality to shape and guide a large community in good research directions.
  • are very closely related to one of the world's prime industries. Thus, the reality of this industry is our reality -- think of how theoretical research in parallel computers died with the temporary explosion in performance of single-processor machines, and how research in streaming flourished with dot-com businesses. We cannot and will not be a stationary field, and our intake of real-world desiderata is high.
Thus, we are not in the business of squeezing one major publication out of PhD students and junior faculty. We are in the business of selecting the best people to stay in the field, and continue to do brilliant and relevant work, by a changing standard of relevance.

And how do you select the best people? Well, Chernoff bounds tell you that you should not rely on one trial. You should rely on several works (publications) to assess how fit a person is for this type of research. If we expect students to publish 3-5 STOC/FOCS papers by the time they graduate, and maybe 5 more as junior faculty, that's already enough to have reasonable confidence in the measurement. We then have a pretty good idea of how a person thinks, and what his skill set is.
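To make the Chernoff intuition concrete, here is a toy sketch (my own illustration, not anything from the post) using the two-sided Hoeffding/Chernoff bound: if each paper is treated as an independent noisy Bernoulli signal of a person's underlying "fitness" p, the chance that the empirical average over n papers is off by more than eps is at most 2·exp(-2·n·eps²). The specific numbers (eps = 0.4, n = 1, 3, 8) are arbitrary choices for the demo.

```python
import math

def misjudge_bound(n, eps):
    """Hoeffding/Chernoff upper bound on the probability that the
    empirical success rate over n independent Bernoulli trials
    deviates from the true rate by more than eps (two-sided).
    Capped at 1.0, since a probability bound above 1 is vacuous."""
    return min(1.0, 2 * math.exp(-2 * n * eps ** 2))

# One paper tells you essentially nothing (the bound is vacuous);
# by 8 papers the misjudgment bound has dropped well below 1.
for n in (1, 3, 8):
    print(n, round(misjudge_bound(n, 0.4), 3))
```

With one trial the bound is trivially 1.0; with three it is about 0.766, and with eight it is about 0.155 -- the exponential decay in n is the whole point of judging people on a body of work rather than a single paper.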

4 comments:

Anonymous said...

One can see a shift in U.S. politics over the last few decades from an institution where the main arguments were over the merits of ideas to one today where the main arguments are more concerned with the merits of people. I think we're worse off for it.

Cheap shot aside, even good people are ephemeral. Good ideas and results are everlasting.

MiP said...

even good people are ephemeral. Good ideas and results are everlasting.

I agree (though not in a strictly absolutist way). I'm just saying that in the current setup in CS, we expect our tall mountains to see much more than one sunrise in their careers. That's why we are optimizing by keeping around the tallest mountains through a decently reliable process.

BTW, the fact that we have a more reliable and friendly process leads to incomparably lower levels of frustration among grad students.

Anonymous said...

"BTW, the fact that we have a more reliable and friendly process leads to incomparably lower levels of frustration among grad students."

I'm not sure about this. I think this is true for the "problem-solver" types because they can publish many papers. Things are different for the "theory-builder" types, who cannot publish as much because our community usually considers something publishable only if it solves a very concrete problem.

Anonymous said...

Theory-builders do face extra challenges, but it seems to me this has more to do with the lack of established practices for evaluating a new high-level proposal. Currently we have well-established guidelines for evaluating a paper that consists of a sequence of theorem-proof results. The community (including myself) wouldn't even know how to begin evaluating a paper titled, say, "Why we should replace complexity classes with language superfamilies". Furthermore, as it is, there is no natural forum for such a paper.