Sunday, September 14, 2008

SODA (II)

The SODA accepted papers, with abstracts, can be found in this text file, or as a PDF file (courtesy of Suresh).


Each year, looking at the SODA list of papers makes me depressed. It is hard to see --- among all the quantum dogs and ponies exhibiting Braess's paradox in stochastic networks with piecewise linear costs --- how we are working to understand the fundamental questions about computation.

What we're missing badly is innovation in solution space, not innovation in problem space. Remember this when you sit down to work on your next paper.

19 comments:

JeffE said...

Why must we all be "working to understand the fundamental questions about computation"? What makes one question about computation more "fundamental" than another?

Knowledge is built slowly, over long periods of time, out of many small, scattered pieces. Of course, we all celebrate madly when someone brings order out of the chaos, but first there must be chaos. Glorious, fertile chaos.

Anonymous said...

Which papers do you object to?

Michael Mitzenmacher said...

First, what Jeff said.

Second, there are lots of reasons to work on problems. Certainly one might be because you feel it's a "fundamental question about computation". But there are other good reasons. (Potential practical utility, for one.)

This is not to say you shouldn't have opinions on what a good paper is. It sounds (although it's not clear to me) as if you think SODA focuses too much on areas outside the main focus of TCS (into "side areas" like quantum and economics), and/or that too many SODA papers are incremental. Both are potentially valid points, but I don't think your post argues them effectively. (Indeed, as I'm saying, I'm not even clear that that's your actual point.)

Mihai said...

Jeff:

1. If there were a simple answer to what makes one question more fundamental than another, we would put it in the call for papers and have the PC use it :) There's no rule for this, though I think we agree most of the time on what's fundamental (we know it when we see it).

2. Nope, not everyone has to be working on fundamental problems. But more should, and the social mechanism should be made to encourage it. Right now it's just too easy to go "can't solve the problem? let's consider a slightly different one".

Anon:

I don't object to any papers in particular. It's just a trend in the theory community that I'm unhappy with. It would be much easier to give you specific examples of really cool papers from this SODA.

Michael:

1. The side areas are one thing, yes. But, on first look, this SODA seems much more balanced than some previous ones, and the PC deserves congratulations for that.

2. I don't quite see how I mentioned "too incremental" (which is of course a valid point to raise). Typical problems with blogs, everybody hears what they want to hear.

3. What I meant is simply that people are inventing too many problems. Yes, practice requires many problems. But are we doing anything helpful by considering these infinite small variations? They will not get used in practice, since practitioners never hear about them, and we're not the right community to invent the problems and evaluate the solutions. In some sense, I agree with your point on experiments: if you're studying a problem claimed to be practical (and without great theoretical value), you should actually have experiments.

Suresh Venkatasubramanian said...

"We know it when we see it": ah, the porno-riffic definition of quality. Frankly, after attending as many SODA business meetings as I have, I think the only thing we can actually agree on is how crappy the current incarnation of the conference is compared to "before". Where for "current" you insert whichever year it happens to be, and "before" is whichever year you happened to like.

I think the problem, Mihai, is the undercurrent (aha, I'm reading into this what I want!) that if only we cared enough, we'd work on more fundamental problems.

I'm not sure that's the case, and I think it does a disservice to the community. We can only do what we are able to: I certainly couldn't (at this point in time) drop everything I was doing to work on P vs NP (and actually produce results).

I think it's important not to confuse the papers that appear in conferences with the problems people set out to solve. I view papers the way people talk about journalism: as a "first draft of history". Over time, we can connect the dots between papers and realize how important some pieces of work were, but I doubt that this was clear at the time. For a simple example in an area I have some familiarity with, who would have thought that the Munro-Paterson work on selection with limited space would be as cited as it is now?

Less modestly, I have an internal vision in my head of why I'm solving a set of problems, and I do think that they ask fundamental questions about the area I'm currently interested in (data mining): but if you read the actual papers I write, they fall short of this ideal, because no one paper really can match up to any vision worth its name.

Anonymous said...

"What we're missing badly is innovation in solution space, not innovation in problem space." I strongly disagree. The formalization of real-world issues into concrete TCS problems is at least as important. (Including the "quantum Braess paradox".)

cantdance said...

I don't think Mihai is saying that the "quantum Braess paradox" is not important; what is not important is the umpteenth similar paper studying slight modifications of the same problem and using virtually the same (or else non-innovative) tricks to get to the point where we have already been, so to say. It just seems that it is a lot easier to sell a worthless paper under an "innovation in the problem space" label...

Good post, btw, I totally agree, feel the same when reading "accepted" lists most of the time. This year's SODA somehow seemed more to the point, but it's probably very subjective...

Anonymous said...

I'll agree with Mihai on this one.

There are a lot of papers published because the authors felt the need to publish a paper, not because the paper advances any "science" or addresses any pressing practical question.

This is true in all fields, not just SODA-type algorithms.

Mihai said...

Suresh, your comment is somehow oblivious to what is being said. I, in fact, volunteered the opinion that this SODA is better than some previous ones (and Marcin agrees).

If P vs NP were the only fundamental problem in the field, we should go searching for some field with more depth to it. For example, we decided a long time ago that APSP is an important problem, and I'm happy to see papers that:

... obtain a truly sub-cubic algorithm for the APSP problem in real-weighted directed graphs, where the number of distinct weights emanating from each vertex is O(n^{0.338}).
[Raphael Yuster, SODA'09]

Even if this is just a theoretical improvement, and it's not quite solving the general problem, we're focusing and making progress, and hopefully one day something nice will happen.
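(For readers outside the area: the classical baseline for APSP on dense graphs is the cubic Floyd-Warshall algorithm, and "truly sub-cubic" means beating n^3 by a polynomial factor. Below is a minimal sketch of that cubic baseline, purely for illustration; it is not taken from the post or from the cited paper.)

```python
# Minimal illustrative sketch: the classical O(n^3) Floyd-Warshall algorithm,
# the baseline that "truly sub-cubic" APSP results improve on.
# The graph is an n x n matrix of edge weights, with float('inf') for missing edges.

def floyd_warshall(weights):
    """All-pairs shortest-path distances in O(n^3) time."""
    n = len(weights)
    # dist[i][j] starts as the direct edge weight (0 on the diagonal).
    dist = [row[:] for row in weights]
    for i in range(n):
        dist[i][i] = min(dist[i][i], 0)
    # Allow intermediate vertices 0..k, one k at a time.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example: a small directed graph.
INF = float('inf')
w = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(w))  # shortest-path distances between all pairs
```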

If you have a grand vision (be it with many gaps and pieces), you should try hard to publicize it, and allow people to work on those problems. The alternative of letting history connect the dots in your work seems inferior (from the perspective of the community).

Josh said...

I am surely reading into this more than Mihai intended, but it seems to me the phenomenon he's bemoaning is a direct result of the must-publish-in-a-conference-every-6-months culture of CS. I think theoretical math suffers much less from this problem, and part of the reason is that people are not judged by number of conference publications, but by the depth and quality of their work. Even the whole notion of a "conference publication" carrying any weight at all is foreign to many mathematicians.

Mihai said...

Josh, I am not sure that the 6-month cycle is the main cause of it. If we had journals and a culture in which inventing a new problem each time is cool, people would behave the same.

Josh said...

Mihai, I'm not sure that the 6-month cycle is the problem either. But I imagine that the pressure to publish so frequently leads people to consider problems that they can solve more quickly. These problems tend to be the small increments and modifications you mentioned, rather than new fundamental research.

(BTW: I just saw that Michael Mitzenmacher has posted a response on his blog, and Jonathan Katz gave a similar response to this one.)

Anonymous said...

But I imagine that the pressure to publish so frequently leads people to consider problems that they can solve more quickly. These problems tend to be the small increments and modifications you mentioned, rather than new fundamental research.

Can you really get hired at a decent place writing unmemorable papers? I would think this would put some pressure on to produce quality. (And once you get hired, and have tenure, isn't there even less incentive to write weak papers?)

Anonymous said...

I would say that people are primarily judged by the quality and influence of their work.

That is not primarily quantity, nor is it depth, though both quantity and depth can be good attributes for one to have.

Part of TCS culture is to have highly collaborative papers with multiple authors and credit widely shared. The regular conference deadlines provide just the right incentive to get these group projects written up in a timely manner. (Otherwise it is too easy to leave it for someone else to do.)

Such collaborative work is a good sign for future success because it demonstrates ability to work well with others. Given the selectivity of the very top TCS conferences, the quantity of papers there is a factor. However, a large quantity of work at minor conferences doesn't carry much weight.

Also, quality and depth are not the same thing. Just because something is hard doesn't mean that it is important. It regularly happens that some of the best TCS ideas are characterized more by cleverness and elegance than by depth. (A complaint that PCs sometimes incorrectly conflate quality and depth is part of the argument in the "conceptual contributions" manifesto distributed earlier this year.) This isn't to demean depth - some of the best results are also deep - but rather to say that depth shouldn't be the main factor in assessing quality.

The TCS conferences particularly seem to work well in amplifying the influence of selected work. Because of the wide coverage of subject areas and the selectivity and imprimatur of these conferences, results are disseminated to a broad spectrum of TCS researchers rather than the narrow audience that might read them in a specialized journal or find them in the sea of submissions on the arXiv.

Josh said...

Paul,

Thanks for such a detailed and thoughtful response. Yours is one of the first arguments I've seen that has even come close to convincing me of the value of the current TCS conference culture.

Just to clear something up: when I said "depth" I was not referring to technical difficulty. My understanding of the "conceptual contributions manifesto" was that PCs often conflate technical difficulty with quality. But there are in fact at least three different aspects here: quality, technical difficulty, and depth. Results can be elegant and easy yet still contain deep new ideas; conversely, results can be technically very difficult yet still be incredibly shallow. Overall quality of a work might include depth, difficulty, elegance, exposition, and presentation, if not also some other "je ne sais quoi."

But coming to a deep idea may take more than 6 months, even if in retrospect it looks easy and elegant.

alex said...

"For example, we decided a long time ago that APSP is an important problem and I'm happy to see papers that..."

If you're happy with this as a justification, I really don't see why you are complaining about "quantum dogs and ponies exhibiting Braess's paradox." If "we" decided that APSP is important, "we" can decide that quantum computing, Braess's paradox, and so on, are even more important. Why should we prefer the decisions "we" made a long time ago rather than recently?

Mihai said...

Anon, there are at least two separate answers:

1. When I declare a new problem important, it hasn't been vetted by the community. Older problems like APSP certainly have (as a bonus, practical people know them well and claim to use them).

2. Our field is overly trendy. To me, an important problem is one that remains important after all the trend chasers quiet down.

alex said...

"When I declare a new problem important, it hasn't been vetted by the community. Older problems like APSP certainly have "

Quantum computing, Braess's paradox, game theory and networks - all of these have received a lot of attention over a considerable time span, so I think it's fair to declare them vetted by the community.

Mihai said...

Quantum computing and computational game theory are still very hot trends, and so far have no connection to any computing reality. It's hard to find too many people in theory who are currently objective enough to vet them (it's always easy to be objective about a past trend though, and you'll hear lots about the PRAM decade, for instance).