Since I am asked to clarify my opinion on conferences, here goes...
1. Efficient dissemination
We have many, many researchers, who publish many, many papers. We don't write good surveys for most things, and certainly we don't keep them up to date. How do we navigate the ensuing madness in a somewhat efficient manner?
Well, we decide to rank some directions as more central to our pursuits, and we start judging some works as more important in those directions. We open some publication venues where we try to select only top contributions in the top directions, and create incentives for people to publish there (so we can ensure the venue gets the top quality stuff).
Then, if you want to stay up to date on some important directions, you only have to follow the big publication venues. If you're trying to stay up to date on directions not judged important, it is still hard, but remember, we're kind of trying to dissuade you from working on those directions! (God forbid, we're not stopping you, but you can't optimize in all directions simultaneously.)
Of course, what we think is cool and important is very open to interpretation, and essentially a social phenomenon. When two poles are forming in a community, each will designate a different venue as the cool place to publish, and the community splits. It seems this is happening with STOC/FOCS versus SODA. To some extent, this is unfortunate, and to some extent, it is unavoidable.
Purists who reject rankings of conferences (what makes you think SWAT is less cool than SODA!) are the equivalent of anarchists in political thinking, who think we don't need to try to generate social efficiency by some central mechanism. In many cases, they have valid objections (and I have often been labeled an anarchist, in the political sense). But I think it is hard to take the dogma entirely seriously.
If you've submitted a cool paper to a non-top conference, don't worry too much. It will "make it" eventually (e.g., our data-structures course gave some good coverage to a very cool paper from SWAT, by Rasmus).
But remember that you've just generated some inefficiency for everybody. We will hear about your result later, feel less motivated to read it, and your recognition for having come up with the cool idea will be slow in coming.
If you regularly submit cool papers to non-top conferences, you should realize you're making your own life harder. Really, you're not going to single-handedly change the perception of the conference! Don't fool yourself. What you may be doing is allowing the community to forget about your favorite research directions (so it kind of "drops out of status"). You're also making life very hard on your students, a few years down the road.
Like I said before, how we evaluate each other's work is in large part a social phenomenon. Make an effort to keep your favorite area "in the news," even if this means taking some occasional rejections from big conferences. Presumably, you've spent a lot of effort solving something you care about. Spend a bit of effort in making it known, even if it's out of the way.
People don't think your results are cool? Start giving some talks. Write a survey. Write better introductions. Blog. Implement and show them how cool it is. Whatever, just don't hide behind statements like "a SWAT paper can be even better than a SODA paper." Yes, nobody said it can't. That wasn't the point.
2. Conferences as a measure of your quality
The root of all evil is that conferences are being used to measure your quality as a researcher. This really tends to piss people off, and trigger emotional reactions like "wait, but SWAT is a really good conference, my paper there is great." If you've read the above, you'll notice nobody said your paper is not great. It's just that it's great for reasons other than the label.
But while you're trying to shoot down ranking researchers by conferences, understand that at some point you just absolutely have to be ranked (hiring, promotion, getting a grant, getting a prize, students deciding which university to go to, etc. etc. etc.), and consider the alternatives for such a ranking:
- the idiot who speaks most, is affable, and sucks up to the big shots, wins despite the fact that he's not producing science at the same level you are.
- the dean has to rank you by reading your papers. The guy who writes the boldest unsubstantiated and ridiculous claims in the abstract wins. Your deep technical idea is not appreciated because the dean doesn't have the context of your subfield, nor the time to read your paper in detail.
- the guy who can get the best recommendation wins. The guy whose adviser writes on every other letter that this is the best student to come out of his university in the past 20 years wins. (Oh wait, that might not be so far from actual fact...)
- we rank people by number of publications, appearing anywhere. Science stops because we all stop thinking and start writing.
- we rank people by journal publications. Journals are, and must be, slow in theory (we're presumably checking correctness). This makes them unreliable for comparing graduating students, who presumably have 1-2 journal publications at best. Even worse, remember that journal acceptance is determined by one dude (the editor), who's not even anonymous, based on some inconclusive statement from another dude. Conferences have several people looking at your paper. Even better, they get to look at a large sample of papers, and have a constraint on the number of papers accepted, which makes the decision better grounded.
The purist objection is that we give credit for unrefereed work. But remember:
- journal refereeing can fail, and has failed, to spot errors;
- if your paper turns out to be wrong, you lose a whole lot more than the credit you temporarily got;
- if there are doubts about correctness and you haven't published a journal paper in X years, credit kind of vanishes;
- if nobody cares enough to read your paper and spot a mistake, you weren't getting any credit to begin with.
If you're at a point where ranking is irrelevant to you, congratulations! You are free to concentrate on science, and ignore this nonsense. But you should still make some effort to submit papers to the right tier in the conference hierarchy, to make it easier for the rest of us (see above).
3. Publishing papers "somewhere"
There are a few good reasons to submit papers to a non-top conference:
- you have an intriguing idea, but you don't know how to achieve anything great with it. You want to tell it to people --- maybe somebody is inspired, and you work together to do something greater.
- you have a topic that is not making it to top conferences, but you think is cool and deserves more attention. You hope that people in the audience will be impressed, and once the paper is published, it will slowly gain attention and signal a return of your cool topic. And you've already done the other things, like give talks, write surveys etc.
- you need a paper to travel to the conference, but your reason to travel is actually to meet the other people there (a very worthy goal). Maybe your grant agency doesn't let you travel otherwise, or maybe you need an excuse for your class / family / etc.
- you've worked on something due to irresistible curiosity, but it didn't turn out to be great. At the same time, you're kind of aware that you need to pad your CV.
I have a few papers in this last category. Well, I did work on them (sometimes quite a bit), so I felt the need to write them up and put them somewhere.
The alternative was putting them in a journal or on arXiv. But a journal takes too much effort for something non-great --- not worth it. On the other hand, arXiv makes people nervous. The fact that I have a bunch of papers in "refereed" and non-bogus venues makes hiring committees more relaxed. I expect I will be submitting a whole lot more to arXiv when I get more senior.
But the wrong thing to do is fool yourself that you will build a great career by generating enough papers at medium-level conferences (or even worse, pressure people into believing that you've had a great career doing that). You may get some mileage out of aiming for medium-quality results, but it won't be a great career.
An even worse (and dangerous) thing to do is to try to develop a bad conference into a "community" centered around mediocrity and making its own members feel good about themselves. Fortunately, we've at least kept the algorithmic conferences safe in this regard. I think essentially everybody agrees that SWAT/WADS/etc are not communities distinct from SODA.
4. Yeah, but how do you rank conferences?
I'll post about this in a few days, since commentators seem to care enough to ask. Yes, I know I'll get flamed, and I don't care all that much. If you're honest with yourself, you know my opinions are not too far off the mainstream; it's just that I tend to express them more directly.