Being on the SODA PC is excruciating. Too many submissions are at the level of a hard exercise – things that David Karger would assign in Advanced Algorithms or Randomized Algorithms as homework. And since I'm a problem solver by nature, I cannot resist solving the problems before (in lieu of) reading the papers...
I am left wondering what life on the ICS PC is like. The fact that the PC members can solve the problem in half an hour is promised not to count as a negative – so is there any point in engaging the technical side of your brain during the review process?
Which brings me to graph algorithms, a field I have engaged with sporadically. This field strikes me as problem solving par excellence. You think about the problem, you pull a rabbit out of the hat, and you run around naked screaming "Εὕρηκα!" Yet most reviewers (who, I will assume, write graph-theory papers in the same way) cannot help commenting on the lack of new techniques in the solution. I interpret this as code for "We didn't care to think about the problem; we just read over your solution and remarked that it solved the problem by looking at some edges in some order, then at some trees and some vertices."
(I should say I have no sour grapes about this... Graph algorithms is not my main field, and all my papers on the topic are in STOC/FOCS. Yet I am amazed by how much the field sabotages itself with this "techniques thing.")
By contrast, I once came across a reviewer who actually wore his problem-solver hat while reading my paper. The review read: "I spent days to figure out the interplay and indispensability of the data structures and techniques used. But after fully understanding the entire picture and the interplay of all the data structures, I felt very satisfied. It is indeed a very beautiful piece of algorithmic art. I congratulate the authors for such a novel algorithm." This is possibly the best review I ever got – complemented, of course, by some "no new techniques but important result" reviews.
Which brings me to the main trouble with TCS, in my opinion. If only we could stop inventing a ton of artificial problems every year and instead cared about solving some age-old, clear-cut problems – then it would be no surprise to find reviewers who have actually thought carefully about your hard problem.
25 comments:
It's always been my policy to spend at least a few hours trying to prove the main result of a paper I'm reviewing before I read the authors' proof outline. There are obviously many dimensions in which papers should be evaluated, but one of them in my opinion is originality of the proof (a paper with unoriginal proofs could still have high impact, but then the other dimensions need to be just that much better). I personally cannot appreciate a proof until I've thought about the problem myself.
As a reader of papers (as opposed to a reviewer), spending a few hours to think about possible approaches is a good idea.
As a conference reviewer there is a tension. One ought to think about things enough to go through the obvious ideas (both to avoid the problem of rating a paper as obvious when it is obvious only in retrospect, and the problem of an alternative trivial approach that did not occur to the authors), but one should also avoid spending so much time that it sets one up as competing with the authors. A savvy enough reviewer dealing with a number of papers has some potential for anticipating the authors of some of the papers they are reviewing, even when what the authors are doing is novel.
Hi Paul,
to avoid the problem of rating a paper as obvious when it is obvious only in retrospect
I think spending some time thinking about the problem (after reading the theorem statement, but before reading anything else) actually helps avoid this problem. A good exposition of a proof organizes itself in such a way that the next step is always obvious. Thus, I feel that if I read such a proof without having thought about the problem myself a bit first, it will be harder to judge, and even worse, I may be tricked into overvaluing unnecessarily technical proofs of easy theorems ("the proof seems hard, so the theorem must be really profound").
On the other hand, if I can prove the main theorem having read nothing of the paper but the theorem statement itself, then I'd argue that, by definition, the proof really is obvious and not just obvious in retrospect. At this point, I switch to judging other dimensions of the paper (for example, maybe just finding the right theorem statement in and of itself is an important step that merits acceptance).
About competing with the authors, I've always turned this into (what I think is) a non-issue by writing proofs I've found in my reviews, and letting the authors do what they want with them. I've never published a proof found during the review process, and I don't plan on doing so in the future either. Though in a case where the authors eventually publish their paper without any mention of the existence of an easier proof, I'd consider that an injustice against the community, at which point I'd probably put the proof I found on the arXiv (note: I haven't had this happen yet).
>If only we could stop inventing a ton of artificial problems every year and instead cared about solving some age-old, clear-cut problems – then it would be no surprise to find reviewers who have actually thought carefully about your hard problem.
Well, the problem is we have not been able to, so we keep "inventing" new problems that we can solve.
Overvaluing the technical aspect has caused authors to feel that they have to give proofs that are not easy; I have seen people obfuscate their proofs to get them accepted. And this in turn forces reviewers to spend time checking whether the proof has been obfuscated. When I see people present proofs at conferences that are much simpler than those in the accepted paper, I cannot stop myself from suspecting that the author(s) intentionally obfuscated the proof.
What is the point of a difficult proof if the proof does not contain any new idea or technique? We should stop counting difficulty as a positive point and instead count it as a negative one.
If the result is not interesting, the proof does not introduce any new techniques, and there is no new interesting idea in the paper, the paper should be rejected, even if the proof is extremely difficult.
"knowing" that the theorem has a proof also helps quite a bit, I think
I like inventing new problems motivated either by practical scenarios or by the importance of understanding a theoretical object (I work in Cryptography). There is so much the field HAS gained from this exercise, and there is a lot more yet to be gained from it.
I am not spending too much time constructing a one-way function as hard to break as solving NP-complete problems. But that's just me...
I agree with the previous commenter. Leaving aside the issue of facilitating reviewing, there can be plenty of value in creating new (perhaps, at first glance, 'artificial') problems. Examples: new complexity classes, the unique games conjecture, computational aspects of game theory.
The following remarks are not entirely serious.
1. The authors who read "it took me 67 minutes to solve the problem stated in the introduction" in the review have a good hint of who the reviewer is.
2. It seems that results that end up in courses (as lectures or homeworks) are either (a) very cool or (b) too easy to be published.
Our main goal is to solve problems, not invent new techniques. When a problem is important, I am most happy with a 5-line proof that contains no new "technique", yet was somehow missed for many years.
In fact, this is exactly the research I like to do. But it will become increasingly hard if this community buys more into the "new techniques" idea, which is coupled with us not agreeing on any important problem and inventing new fields all the time.
The proper way to invent a field is to sit down and write a book about it, in which you carefully apply known techniques and see what is implicitly known.
The way we do things, 3 years after a field is really in vogue, most papers are still figuring out how to recycle old ideas into a new area. When we hit the open problems, the field all but dies.
Solving the problem *after* reading the abstract and introduction, which hopefully contains 1) a clear statement of the problem and 2) some high level ideas about the solution, is not the same as solving the problem from scratch.
Mihai:
There are quite a few statements made in your post and comments that I assume are exaggerated for effect. I would appreciate seeing you take the other position, and discuss when you think it is a good idea to try to invent a new model or formulate a new question. Presumably you think that under certain circumstances this is a good idea.
It must be obvious to you that the open problems you solve came from somewhere, and that the fields you work in at one time did not exist. Someone had to invent the field and state the problem. Many new fields become faddish and then die, taking the new problems and models with them. Even in a subfield that stands the test of time, many models and problems are artificial, and in retrospect were perhaps not fruitful to study. These statements are true and, to me, completely unremarkable. Uncertainty is everywhere in research, including uncertainty about which questions are important to ask. We cannot simply state that the only important questions are those that already exist. And we cannot state that, because asking a new question has a 99% chance of failing to identify something interesting, asking new questions is a bad idea and no one should do it.
I'm sure you don't literally believe this, but your post (and comments, and other related posts) come across that way, and it would help me understand your point of view if you stated more explicitly how researchers should go about discovering new areas that need to be explored.
The problems you care so much about are actually toy problems proxying for problems that we can't solve.
Mihai, where are the personal insults? Without them the post is almost as boring as 'my biased coin' or 'computational complexity'. Come on, dance for us a little!
New techniques can be helpful: they can help others solve other problems. But by a technical paper we usually mean a paper with hard proofs in which there is nothing new in the techniques used; the proof is just a long composition of known techniques.
New models or problems can also be helpful, but it seems that people prefer to invent problems of no practical use and no future interest and solve them, instead of making progress toward solving the open problems. There are a number of researchers who write interesting papers from time to time, the kind we see blog posts about. TCS needs more of these.
In one sentence: if a paper is of no interest to others, it shouldn't be accepted at top-tier conferences, even if the proof is REALLY hard. There are other places to publish it.
No new *interesting* result, no new *interesting* problem, no new *interesting* technique, no new *interesting* idea -> reject.
anonymous from #4
But by a technical paper we usually mean a paper with hard proofs in which there is nothing new in the techniques used; the proof is just a long composition of known techniques.
This sort of paper gets batted about a lot: what do you mean by it? Can you give an example?
An algorithm is "just" a long composition of known commands. But figuring out how to put those commands in that order may take months of work and failed attempts. I don't understand the difference between a "new technique" and a "(new) composition of old techniques". If written in standard mathematical prose, then at a fine enough level, all proofs are long compositions of known techniques.
I think that sometimes, reviewers want one particular step to "feel especially clever". Certainly I feel that way about certain proofs, and if most of the proof surrounding that clever-feeling step is short and easy, then I might deem the proof elegant. Cantor's diagonal argument qualifies, for instance. But why is it any less innovative or difficult to be the first to discover how to string known techniques together to solve a problem, even if the result is written in such a way that no individual step seems clever or magical?
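(To make "one clever step" concrete, here is a minimal sketch of the diagonal argument, included only as an illustration. Given any list $f_1, f_2, f_3, \ldots$ of infinite binary sequences, define
$$d(n) = 1 - f_n(n).$$
Then $d$ differs from $f_n$ at position $n$ for every $n$, so $d$ appears nowhere on the list. The single clever move is the definition of $d$; everything surrounding it is routine.)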
I agree with Mihai on one point: this obsession with "new techniques" seems undefined and potentially unhealthy. Sure, it's great if a paper introduces techniques that are useful elsewhere, but focusing on this obsessively is like always asking how well a paper solves some other unnamed problems that are not even in the paper. Why can't we just judge how well it solves the problems it claims to solve, and on the inherent importance of those problems?
Dave, what I said is that if there is nothing interesting and new in a paper, then it shouldn't be at a top-tier conference. It is a conjunction; it is not just "no new techniques". I am not arguing for technical papers, nor against them. IMO, being technical or not is irrelevant.
Nor am I against a proof that combines known techniques in a new and interesting way. If there is something new and interesting in the combination, accept it. But if it is just a technical heavy-lifting paper, i.e. one using known methods in known ways to solve a problem no one (other than the author) cares about, why should it be at a top-tier conference?
If there is something interesting that other people will learn from listening to the talk, accept it; if not, there is no point in accepting it. It is not techniques vs. ideas vs. results vs. ... . It is new/interesting vs. known/uninteresting.
>Which brings me to the main trouble with TCS, in my opinion. If only we could stop inventing a ton of artificial problems every year and instead cared about solving some age-old, clear-cut problems – then it would be no surprise to find reviewers who have actually thought carefully about your hard problem.
Did you not once say that TCS is a highly rewarding field, as opposed to Math? Do you think TCS is going to remain the highly rewarding field it is today if we lock ourselves into age-old problems instead of looking for exciting new problems that will change the world?
So what if in this process we produce a lot of artificial problems? The important problems are not going to fall on our heads like that apple did for Newton. We will have to look for them, and in the process we will hit a lot of junk. That's ok!
Lots of BS in this post.
I also feel that you are a bit of a crybaby. No doubt about your strength as a researcher, but you continuously make noise about solving age-old problems, writing the best possible papers, and so on... trying to show off much like those heavyweight WWE titles.
It is somewhat understandable since the world has been a bit unfair to you (best thesis nomination case at MIT, for example). But you probably need to get over it.
Anonymous at 1:18:00
> No new *interesting* result, no new *interesting* problem, no new *interesting* technique, no new *interesting* idea -> reject.
Is it possible that what one finds "interesting," another would find boring?
I suggest reading the epilogue of "Observations about the Development of Theoretical Computer Science", by Juris Hartmanis, FOCS 1979. It reminds me of the Statement on Conceptual Contributions in Theory. Mihai can claim that the current drive behind ICS is doomed to end in research failure, and no one can prove him wrong because we won't know for 20 years. But I think it is tough to disagree with the statement that following Hartmanis' lead in 1979 led to better results than if we had instead concentrated all of our efforts only on the problems existing in 1979:
"Frequently we have practiced 'intellectual counterpunching' by staying with small, previously defined (and possibly) irrelevant problems instead of searching for new formulations and development of theories more directly related to computing."
He also makes the exact same point I made above, that unintentionally irrelevant results are the unavoidable consequence of taking chances, but this does not imply we should not take chances:
"On the other hand, I am also struck by how much of what we have done has been of no lasting value. The field is littered with innumerable papers of dubious quality, with papers which hide their shallowness (probably also from the authors) behind obsure mathematical formalizations and deal with minute problems of no overall importance.
At the same time, we have to understand that much of the thinking and research in this area is just the clearing out of the 'intellectual underbrush', the testing and exploring of ideas and models until we see where the important ideas, problems, and results are hidden. Our research and publications are furthermore a communal learning and educational process which shapes future research. It is a complicated process of probing the unknown, trying to grasp the right concepts, to formulate the right models and gain the necessary results and insights."
I am constantly trying to work out, after reading this blog, whether we hate ourselves or each other.
When is the notification date for SODA? Thanks a lot!
In about a week. We're working very actively on it.
Great, thanks for your answer. In a few days, then, I will let you know, depending on the outcome, whether I am looking forward to them or not :)