Problem solving versus new techniques
Wednesday, September 29, 2010

This is a guest post by Mikkel Thorup:
--------------------------
I think there is nothing more inhibiting for problem solving than referees looking for new general techniques.
When I go to STOC/FOCS, I hope to see some nice solutions to important problems and some new general techniques. I am not interested in semi-new techniques for semi-important problems. A paper winning both categories is a wonderful but rare event.
Thus I propose a max-evaluation rather than a sum. If the strength of a paper is that it solves an important problem, then speculations on the generality of the approach are of secondary importance. Conversely, if the strength of the paper is some new general techniques, then I can forgive that it doesn't solve anything new and important.
One of the nice things about TCS is that we have problems that are important, not just as internal technical challenges, but because of their relation to computing. At the end of the day, we hope that our techniques will end up solving important problems.
Important problems should be solved in whatever way comes naturally. The solution may come from deep problem-specific understanding, or it may build on previous techniques. Why would we be disappointed if an old problem got solved by a surprising reuse of an old technique?
Elegant reuse of old techniques is not a bad thing. After all, when we develop new techniques, we are hoping they will be reused, and from a teaching perspective, it is great to discover that the same general technique applies to more diverse problems. The last thing we want is to encourage authors to hide reuse. Of course, reuse should be highly non-obvious for a strong conference, but for established open problems, non-obviousness will normally be self-evident.
Posted by Mihai at 4:47 PM
11 comments:
I'm surprised more people in our community don't think this way. I once had a paper rejected where the sole negative reviewer wrote that we made progress on an important question, then carried on to criticize the lack of new techniques. Clearly, it wasn't just him -- the PC agreed. From what I've seen, "We've developed extremely novel technique X, though we don't have any real applications for it at the moment" is viewed much more positively in our community than "We've solved extremely important problem Y, using recently developed technique X".
What good is a new technique if we can't solve our important problems with it?
I disagree with the part about "reuse should be highly non-obvious". Why? Isn't that just creating incentives to make reuse look non-obvious through obfuscation?
--kunal
How do you define an important open problem? Defining it more precisely would help shape people's understanding of what kind of problems are important, rather than leaving it to a few individuals' feeling of importance.
Some candidates:
- Open questions posed in previous STOC/FOCS papers.
- Open questions formulated by a famous researcher.
- Open questions that many famous researchers have tried to solve.
- Any others?
Among these, I find the third to make the most sense. The problem is that it still does not say how or why such a problem is important.
While #3 on your list can convince me to treat a problem as important, I am happier with Mikkel's definition:
"One of the nice things about TCS is that we have problems that are important, not just as internal technical challenges, but because of their relation to computing"
Having an external point of reference is great, in my opinion -- our problems are worth more than abstract flags created to make a competition.
The field seems obsessed with technical difficulty. Recently we had a chance to make progress on a well studied problem using known techniques in a slightly new way.
The one critical comment in the reviews for each of the n+1 times the paper has been rejected is: there are no new ideas.
No discussion of the importance of the contribution, or of the fact that it is still considered a central problem in computer science after many decades. The proof is not complicated, ergo the paper must be rejected, and on to the next paper in the review pile.
"Thus I propose a max-evaluation rather than a sum."
But 'sum' approximates 'max' within a factor of 2, so what you assume to be the current system is already a good approximation of what you are looking for.
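To spell out that factor-2 claim, assuming both scores are nonnegative numbers $a$ and $b$:

\[
\max(a,b) \;\le\; a + b \;\le\; 2\,\max(a,b),
\]

so ranking papers by the sum can never disagree with ranking by the max by more than a factor of 2.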
Some commenters, however, seem to say that some reviewers use 'min' instead of 'max' in their evaluation
"Some commenters, however, seem to say that some reviewers use 'min' instead of 'max' in their evaluation."
The min would suggest that a high-technical-score, low-importance paper would be rejected under the current system, which is generally not the case.
From my experience as a PC member over the years, the observed overall behavior of the evaluation function is:
Technical score | Relevance score | Decision
high            | high            | accept
high            | low             | accept
low             | high            | reject
low             | low             | reject
any             | zero            | reject
zero            | any             | reject
This means that under the current system, in most cases technical difficulty can make up for a lack of relevance, while relevance does not overcome a lack of technical difficulty except in the most extraordinary of circumstances.
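As a minimal sketch, the table above can be read as the following hypothetical decision rule (the function name, score labels, and rule are made up for illustration, not an actual review policy):

def decision(technical, relevance):
    # Hypothetical model of the observed PC behavior in the table above.
    # Scores are "zero", "low", or "high"; this is only an illustration.
    if technical == "zero" or relevance == "zero":
        return "reject"   # no merit at all on one axis sinks the paper
    if technical == "high":
        return "accept"   # technical strength carries the paper regardless of relevance
    return "reject"       # relevance alone rarely overcomes low technical difficulty

print(decision("high", "low"))   # accept
print(decision("low", "high"))   # reject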
Obsession with new techniques is injurious to research. First, what seem to be new techniques to TCS people may have appeared in some form in mathematics, and so are not really new. Second, using old techniques in new and creative ways is very beneficial to research, as that seems to be the only way to really understand a technique and its relations to others. Further, abstraction and relating different techniques seem much more respectable than coming up with "new" techniques, as it shows that many of the "new" techniques floating around are just corollaries of known things. In fact, what one should want is to be able to say: there are really only 5 techniques, and they are explained well in that textbook. That is how we can hope future generations will have any motivation to learn the advanced material. Boasting that we have 20000 techniques only scares people away.
I agree with the last anonymous. Can someone please define "new technique"? I have had papers where I might claim that such and such a technique is new, but you could also argue they were just clever tricks. I doubt that many truly new techniques have been developed in the last 10 years. Can anyone cite examples?
A new technique is a clever trick that gets applied to many independent problems.
For instance, we didn't know how to separate linear space from polynomial space in data structures. In my paper with Mikkel from STOC'06, we gave the first such separation by looking at many queries simultaneously and effectively saving in the communication complexity.
By now there are dozens of papers that prove interesting lower bounds on many different problems, starting with this trick. I would certainly call it a "technique" at this stage.
How about the approach of Seth Pettie's SODA 2010 paper, to analyze data structures via forbidden submatrices? The bounds existed before, and analyses of things with forbidden substructures existed before, but doesn't it become a new data structures technique now that it's applied to data structures?