Tuesday, March 4, 2008

Conferences

Since I've been asked to clarify my opinion on conferences, here goes...

1. Efficient dissemination

We have many, many researchers, who publish many, many papers. We don't write good surveys for most things, and certainly we don't keep them up to date. How do we navigate the ensuing madness in a somewhat efficient manner?

Well, we decide to rank some directions as more central to our pursuits, and we start judging some works as more important in those directions. We open some publication venues where we try to select only top contributions in the top directions, and create incentives for people to publish there (so we can ensure the venue gets the top quality stuff).

Then, if you want to stay up to date on some important directions, you only have to follow the big publication venues. If you're trying to stay up to date on directions not judged important, it is still hard, but remember, we're kind of trying to dissuade you from working on those directions! (God forbid, we're not stopping you, but you can't optimize in all directions simultaneously.)

Of course, what we think is cool and important is very open to interpretation, and essentially a social phenomenon. When two poles are forming in a community, they will each designate a different venue as the cool place to publish, and the community splits. It seems this is happening with STOC/FOCS versus SODA. To some extent, this is unfortunate, and to some extent, it is unavoidable.

Purists who reject rankings of conferences (what makes you think SWAT is less cool than SODA!) are the equivalent of anarchists in political thinking, who think we don't need to try to generate social efficiency by some central mechanism. In many cases, they have valid objections (and I have often been labeled an anarchist, in the political sense). But I think it is hard to take the dogma entirely seriously.

If you've submitted a cool paper to a non-top conference, don't worry too much. It will "make it" eventually (e.g., our data-structures course gave some good coverage to a very cool paper from SWAT, by Rasmus).

But remember that you've just generated some inefficiency for everybody. We will hear about your result later, feel less motivated to read it, and your recognition for having come up with the cool idea will be slow in coming.

If you regularly submit cool papers to non-top conferences, you should realize you're making your own life harder. Really, you're not going to single-handedly change the perception of the conference! Don't fool yourself. What you may be doing is allowing the community to forget about your favorite research directions (so it kind of "drops out of status"). You're also making life very hard on your students, a few years down the road.

Like I said before, how we evaluate each other's work is in large part a social phenomenon. Make an effort to keep your favorite area "in the news," even if this means taking some occasional rejections from big conferences. Presumably, you've spent a lot of effort solving something you care about. Spend a bit of effort in making it known, even if it's out of the way.

People don't think your results are cool? Start giving some talks. Write a survey. Write better introductions. Blog. Implement and show them how cool it is. Whatever, just don't hide behind statements like "a SWAT paper can be even better than a SODA paper." Yes, nobody said it can't. That wasn't the point.

2. Conferences as a measure of your quality

The root of all evil is that conferences are being used to measure your quality as a researcher. This really tends to piss people off, and trigger emotional reactions like "wait, but SWAT is a really good conference, my paper there is great." If you've read the above, you'll notice nobody said your paper is not great. It's just that it's great for reasons other than the label.

But while you're trying to shoot down ranking researchers by conferences, understand that at some point you absolutely have to be ranked (hiring, promotion, getting a grant, getting a prize, students deciding which university to go to, etc.), and consider the alternatives for such a ranking:

  • the idiot who speaks most, is affable, and sucks up to the big shots, wins despite the fact that he's not producing science at the same level you are.

  • the dean has to rank you by reading your papers. The guy who writes the boldest unsubstantiated and ridiculous claims in the abstract wins. Your deep technical idea is not appreciated because the dean doesn't have the context of your subfield, nor the time to read your paper in detail.

  • the guy who can get the best recommendation wins. The guy whose adviser writes on every other letter that this is the best student to come out of his university in the past 20 years wins. (Oh wait, that might not be so far from actual fact...)

  • we rank people by number of publications, appearing anywhere. Science stops because we all stop thinking and start writing.

  • we rank people by journal publications. Journals are, and must be, slow in theory (we're presumably checking correctness). This makes them unreliable for comparing graduating students, who presumably have 1-2 journal publications at best. Even worse, remember that journal acceptance is determined by one dude (the editor), who's not even anonymous, based on some inconclusive statement from another dude. Conferences have several people looking at your paper. Even better, they get to look at a large sample of papers, and have a constraint on the number of papers accepted, which makes the decision better grounded.

    The purist objection is that we give credit for unrefereed work. But remember:
  1. journal refereeing can fail, and has failed, to spot errors;
  2. if your paper turns out to be wrong, you lose a whole lot more than the credit you temporarily got;
  3. if there are doubts about correctness and you haven't published a journal paper in X years, credit kind of vanishes;
  4. if nobody cares enough to read your paper and spot a mistake, you weren't getting any credit to begin with.

If you're at a point where ranking is irrelevant to you, congratulations! You are free to concentrate on science, and ignore this nonsense. But you should still make some effort to submit papers to the right tier in the conference hierarchy, to make it easier for the rest of us (see above).


3. Publishing papers "somewhere"

There are a few good reasons to submit papers to a non-top conference:
  • you have an intriguing idea, but you don't know how to achieve anything great with it. You want to tell it to people --- maybe somebody is inspired, and you work together to do something greater.

  • you have a topic that is not making it to top conferences, but you think is cool and deserves more attention. You hope that people in the audience will be impressed, and once the paper is published, it will slowly gain attention and signal a return of your cool topic. And you've already done the other things, like give talks, write surveys etc.

There are a few acceptable reasons, but we should really work to minimize those:
  • you need a paper to travel to the conference, but your reason to travel is actually to meet the other people there (a very worthy goal). Maybe your grant agency doesn't let you travel otherwise, or maybe you need an excuse for your class / family / etc.

  • you've worked on something due to irresistible curiosity, but it didn't turn out to be great. At the same time, you're kind of aware that you need to pad your CV.

I admit that I totally understand this last point. My papers outside STOC / FOCS / SODA / SoCG / ICALP were not great (not saying you should believe my papers in STOC /... were all great; I liked them, but hey, I wrote them). Why did I publish the others, in non-top conferences?

Well, I did work on them (sometimes quite a bit), so I felt the need to write them, and put them somewhere.

The alternative was putting them in a journal or on arXiv. But a journal takes too much effort for something non-great --- not worth it. On the other hand, arXiv makes people nervous. The fact that I have a bunch of papers in "refereed" and non-bogus venues makes hiring committees more relaxed. I expect I will be submitting a whole lot more to arXiv when I get more senior.

But the wrong thing to do is fool yourself that you will build a great career by generating enough papers at medium-level conferences (or even worse, pressure people into believing that you've had a great career doing that). You may get some mileage out of aiming for medium-quality results, but it won't be a great career.

An even worse (and dangerous) thing to do is to try to develop a bad conference into a "community" centered around mediocrity and making its own members feel good about themselves. Fortunately, we've at least kept the algorithmic conferences safe in this regard. I think essentially everybody agrees that SWAT/WADS/etc are not communities distinct from SODA.

4. Yeah, but how do you rank conferences?

I'll post about this in a few days, since commentators seem to care enough to ask. Yes, I know I'll be flamed, and don't care all that much. If you're honest with yourself, you know my opinions are not too far off the mainstream; it's just that I tend to express them more directly.

14 comments:

Anonymous said...

Almost everything said here could apply to journals as well.
The other thing is that everyone draws the quality cutoff wherever she finds it most useful. Should I only work for JACM, or should I lower my standards slightly to SICOMP, or lower them slightly more to Algorithmica & ACMToA, or lower them to include blah, blah...
It is just a matter of taste.

11011110 said...

An important criterion for me, that you've left out, is the audience. I stopped sending computational geometry papers to STOC/FOCS, and started sending them primarily to SODA and SoCG instead, because I noticed that the STOC/FOCS ones weren't getting the attention I thought they deserved (in terms of citations, say). The other people whom I wanted to know about the results weren't going to STOC/FOCS and so didn't pay attention to my papers there.

For the same reason, I'll send papers to Graph Drawing or the Meshing Roundtable even though those are in absolute quality arguably worse than SoCG: by sending a paper to those more specialized conferences, I can reach a different audience, one more likely to be interested in my work.

Of course, that doesn't explain good-but-not-the-best algorithms conferences such as, say, SWAT. My feeling there is that the papers there may not be on trendy problems, or they may be using more standard techniques, or the authors may have been on the SODA committee, or they may have had personal reasons for being able to go to one conference and not the other — if you're trying to limit how much you have to read to keep up with the field, SODA is a better choice, but there's still plenty of far-from-worthless research being published at those other places.

Anonymous said...

Purists who reject rankings of conferences (what makes you think SWAT is less cool than SODA!) are the equivalent of anarchists in political thinking, who think we don't need to try to generate social efficiency by some central mechanism. In many cases, they have valid objections (and I have often been labeled an anarchist, in the political sense). But I think it is hard to take the dogma entirely seriously.

?? Who are you talking about? Nobody said let's not rank things, or that one is not better than the other. It's just that sending your papers to the highest-ranking conference that will accept them is not the best strategy for everyone.


Whatever, just don't hide behind statements like "a SWAT paper can be even better than a SODA paper." Yes, nobody said it can't. That wasn't the point.


The point is you didn't call SWAT second-tier or a non-top conference; you called it worthless.

MiP said...

If you read carefully what I'm saying, SWAT is a worthless label. It doesn't mean all papers there are bad, but putting a paper in SWAT gives it little visibility (compared to other places), and little credit (you're certainly not building a career this way). It simply means "published somewhere non-bogus." Alright, maybe that's not entirely worthless as a label, I suppose.

Paul said...

The way I see it, there are many topics of interest, and the community prioritizes amongst them. Some topics are more interesting, and that work makes it to STOC/FOCS/SODA. Some work is less interesting, and makes it to SWAT. (I am not even bringing up the point that this "interesting" is subjective and just the majority opinion. That is for another discussion.) It's not that the topics of S/F/S are interesting and those of SWAT are not... it's just that those of SWAT are not AS interesting.

Now, you are saying that everyone should work only on what the community deems interesting... in that case, there would be too much overlap and too many people working on the same thing. So the community needs some people to work on the less interesting subjects --- because, in the end, they are still interesting.

It is true, that the more interesting topics have more skilled people working on them, and therefore, achieving something on those topics says something about how skilled you are. So it is OK to judge people (at least partially, as a rough quick estimate) on the number of STOC/FOCS/SODA pubs.

BUT this does not make the second-tier conferences worthless. They just have results that are not considered as important as STOC/FOCS/SODA by a lot of people. Not everyone is a genius, and some people are not at a level where they can compete with the geniuses. There are plenty of topics, and these people work on some other topics. The community wouldn't function without it.

Anonymous said...

They just have results that are not considered as important as STOC/FOCS/SODA by a lot of people.

And in several instances, subfields that were once considered very relevant by the community are now considered rather marginal as a whole.

Look up on DBLP the table of contents of any decade-old FOCS/STOC proceedings and on average you'll find 5-10 papers that today would be rejected solely on relevance grounds (i.e., nice result, but not relevant enough for the community to care about).

Thanks to second tier conferences, the other results that were displaced by those now not-so-relevant papers were not forever lost, but got to see the light of day elsewhere.

Paul B said...

Your analysis completely ignores another important criterion and a significant part of the raison d'etre for conferences, including ones such as SWAT, STACS, FSTTCS, and COCOON: Geographic proximity and accessibility.

Not everyone, even authors, can get a visa or afford the cost or inconvenience to travel to an expensive major city in the US. Certainly, the local/regional research community for these conferences will have many more members than can do so. Having a nearby conference gives them the opportunity to get the benefits of research interaction - both with each other and with researchers from outside their immediate geographic community.

There are many regional conferences and they have their appropriate role in this context. That does not mean that they are necessarily worthwhile venues for you to travel half-way around the world to attend and present your paper, though our research community as a whole would lose some vitality and interchange of ideas if nobody chose them.

Anonymous said...

putting a paper in SWAT gives it little visibility (compared to other places), and little credit (you're certainly not building a career this way).

One shouldn't ignore the opinion of the field about what is or isn't relevant. On the other hand, trend-chasing is almost a sure way *not* to build a long-lasting reputation.

Good people find areas they consider relevant and work on them, STOC/FOCS be damned. Look around you at MIT and see how many of your professors spend time actively chasing STOC/FOCS as compared to the ones who simply do excellent work, and then submit to whichever conference is most appropriate.

Anonymous said...

Guys, guys, think a bit before you post. Writing things about trend chasing (and hot topics at STOC/FOCS etc) on Mip's blog is remarkably stupid. This is the guy who solves problems where nobody made progress in 20 years. And he always complains that there's zero activity in his field at STOC/FOCS (which he's generally right about).

Anonymous said...

Guys, guys, think a bit before you post. Writing things about trend chasing (and hot topics at STOC/FOCS etc) on Mip's blog is remarkably stupid. This is the guy who solves problems where nobody made progress in 20 years.

This makes it sound as if Mihai could do no wrong just because he cracked a few toughies. Many who read his SWAT comments clearly disagree.

Anonymous said...

This makes it sound as if Mihai could do no wrong just because he cracked a few toughies.

This makes it sound as if you're trying to pick a fight out of thin air. The point was that Mihai is the opposite of a trend-chaser, which is obvious, and unrelated to how right or wrong his opinions can be.

Anonymous said...

This makes it sound as if Mihai could do no wrong just because he cracked a few toughies.

This makes it sound as if you're trying to pick a fight out of thin air. The point was that Mihai is the opposite of a trend-chaser, which is obvious, and unrelated to how right or wrong his opinions can be.


This makes it sound as if you are Mihai himself.

Anonymous said...

About your comments on journals: there's no good reason why it should take a year to check correctness of (most!) CS papers; this seems to be a social artifact.

For example, in the physics community it's standard to get referee reports (often several) in a matter of weeks.

Also, I don't see how an accept/reject decision made by a conference is more "well-grounded": it just means that your work is being ranked relative to the other people who happened to submit to that conference.

Surely we should be looking for a more objective measure of quality?

MiP said...

On the contrary, comparing to a large (somewhat random) sample of papers is an extremely stable/objective measure, according to our favorite Chernoff bound. The decision by an editor whether a paper is "good enough" is 100% subjective.
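The concentration claim in this last comment can be illustrated with a quick simulation (a rough sketch, not from the post; the uniform scoring model and the 30% acceptance rate are assumptions made purely for illustration): the acceptance cutoff estimated from a large pool of submissions fluctuates far less, trial to trial, than one estimated from a small pool.

```python
import random
import statistics

def empirical_cutoff(pool_size, accept_rate=0.3):
    # Stand-in review scores: one uniform score per submitted paper.
    scores = sorted((random.random() for _ in range(pool_size)), reverse=True)
    # Score of the borderline (last accepted) paper.
    return scores[int(accept_rate * pool_size)]

def cutoff_stddev(pool_size, trials=3000):
    # How much does the cutoff wobble from one random pool to the next?
    return statistics.pstdev(empirical_cutoff(pool_size) for _ in range(trials))

random.seed(42)
small, large = cutoff_stddev(20), cutoff_stddev(500)
print(f"pool of 20: cutoff stddev {small:.3f}; pool of 500: {large:.3f}")
```

The gap follows the 1/sqrt(n) scaling that Chernoff/Hoeffding-style bounds predict for empirical quantiles, which is the sense in which a decision made against a large sample is "stable."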