That is something that we should all be armed with, yet I cannot find a good list. I was hoping to find something on TheoryMatters, but that project seems to have been hijacked by the lens-to-sciences view, and has very few concrete things about what we have already done. While that may be good for the NSF and the US administration, we should also be prepared to show that Theory Matters for Computer Science.
(Incidentally, this reminds me of a great quote by Shang-Hua Teng: "When I came to the US, I learned about the notion of discrimination. But before I discovered discrimination against Blacks, Hispanics, etc, I discovered discrimination against theory in CS departments.")
So either:
- post a link to some similar lists out there
- or suggest your favorite entries for such a list. Ideally, the top 10 list would be compelling enough to be more like 10 lines than 10 pages with explanations, but maybe I'm asking for too much...
Thanks, Mihai, for raising this question on the blog. My reasons for focusing on the last 25 years are:
- We should be able to present examples that are not too old, to argue that what we are doing now matters.
- To avoid the counter-argument: "of course, many fundamental things were pointed out in the first years of computers, but these were things that other people would have realized soon enough".
Some candidates for the list:
- Elliptic curve cryptography?
- Streaming algorithms?
- Interior point methods?
This is what the Kanellakis prize is for, btw.
* public-key cryptography, specifically Diffie-Hellman and RSA (technically it's 30 years, though)
* Hashing (FKS, etc etc)
* Splay trees (now I'm just going through the Kanellakis prize list :))
* PCPs and approximations: not directly, but indirectly, by bringing the study of approximation algorithms front and center.
* Nearest neighbour methods (LSH and the like)
* Interior point methods: maybe, although I always thought that people still preferred to use the simplex method
* The data streaming model: certainly.
* Hubs and Authorities/PageRank! Or, 'eigenvalues of matrices/random walks'. You'd be surprised how often methods based on random walks/hubs-and-authorities are used.
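Since random-walk methods come up so often, here is a minimal sketch of the idea behind PageRank: power iteration on the link graph, mixing each step with a uniform random jump. The toy graph and the damping factor 0.85 are just illustrative defaults, not anyone's production settings.

```python
# Minimal PageRank by power iteration: repeatedly redistribute scores
# along out-links, mixing with a uniform jump (damping factor 0.85 is
# the conventional choice; the toy graph below is only for illustration).
def pagerank(graph, damping=0.85, iterations=50):
    n = len(graph)
    rank = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in graph}
        for node, out_links in graph.items():
            if out_links:
                share = damping * rank[node] / len(out_links)
                for target in out_links:
                    new_rank[target] += share
            else:  # dangling node: spread its score uniformly
                for target in graph:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))  # "c" ends up with the highest score
```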
Some candidates [with a heavy bias :-)]--
(*) Tornado codes (Luby, Mitzenmacher, Shokrollahi, and Spielman) and, more generally, their work on irregular LDPC codes. This work was one of the prime reasons for the resurgence of the study of LDPC codes in coding theory and practice (including the birth of Digital Fountain).
(*) List decoding algorithm for Reed-Solomon codes by Sudan (and the improvements to it by Guruswami-Sudan). As far as I know, there are no commercial implementations (yet), but these papers have been highly influential in coding theory.
* text compression and indexing: Lempel-Ziv compression, and more recently Burrows-Wheeler compression and text indices like the FM-index (the Burrows-Wheeler transform is sketched after this comment)
* Sequencing algorithms for biology: developed by theorists, and based on classical combinatorial-algorithms thinking.
* Google/Eigenvalue methods
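For the text compression/indexing entry above, a tiny sketch of the Burrows-Wheeler transform: sort all rotations of the input and keep the last column, which groups similar characters together so downstream compressors (and FM-indexes) can exploit the runs. The quadratic rotation sort below is for exposition only; real implementations build the transform via suffix arrays.

```python
# Burrows-Wheeler transform, the textbook way: sort all rotations of the
# input (with a unique sentinel appended) and read off the last column.
def bwt(text, sentinel="$"):
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("banana"))  # 'annb$aa' -- runs of equal characters compress well
```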
Probabilistic roadmaps for robotics.
Skip lists.
Obviously, I'd argue for the "power of two choices", or Balanced Allocations, which is having more impact than people realize (and now its "cousin" cuckoo hashing as well), though this fits more generally into the hashing theme.
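For readers who haven't seen it, the whole "power of two choices" trick fits in a few lines: probe two random bins and place the ball in the less loaded one, and the maximum load drops from roughly log n / log log n to roughly log log n. The experiment below is just a demo with made-up sizes.

```python
# Balls into bins with d random choices per ball: each ball goes into
# the least loaded of its candidate bins. Two choices already give a
# dramatically more balanced allocation than one.
import random

def max_load(n_balls, n_bins, choices):
    bins = [0] * n_bins
    for _ in range(n_balls):
        candidates = random.sample(range(n_bins), choices)
        best = min(candidates, key=lambda b: bins[b])
        bins[best] += 1
    return max(bins)

random.seed(0)
print("one choice :", max_load(100_000, 100_000, 1))  # typically around 7
print("two choices:", max_load(100_000, 100_000, 2))  # typically around 4
```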
Along these lines, I'd certainly argue for universal hash functions and Bloom filters; although they don't fit into the "past 25 years" restriction, their power has only really been understood quite recently. (It took a while for practice to catch up with theory there!)
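And since Bloom filters are so compact, a sketch: k hash functions set k bits per inserted key, so membership queries can return false positives but never false negatives. The salted-SHA-1 hashing here is just an easy-to-write stand-in for the fast hash functions used in practice, and the sizes are arbitrary.

```python
# A minimal Bloom filter: k hash functions set k bits per inserted key;
# lookups can give false positives but never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits)  # one byte per bit, for clarity

    def _positions(self, key):
        for salt in range(self.num_hashes):
            digest = hashlib.sha1(f"{salt}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def __contains__(self, key):
        return all(self.bits[pos] for pos in self._positions(key))

bf = BloomFilter()
bf.add("splay trees")
print("splay trees" in bf)  # True
print("simplex" in bf)      # False (with high probability)
```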
Looking at my syllabus for CS 222, I'd add:
Eigenvalue methods, certainly. (PageRank, HITS, SALSA, EigenTrust, PicASHOW...)
Dimensionality reduction/sketching.
(Min-wise independent hash families.)
Streaming algorithms. (In particular, algorithms for finding "heavy hitters" or elephants; one such algorithm is sketched after this list.)
Network coding (well, maybe, give it 5 years).
All the basic theoretically designed peer-to-peer network constructions.
Can we take any credit for machine learning algorithms? (Adaboost?)
Range trees/R-trees.
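As promised after the streaming entry above, a sketch of one classical heavy-hitters algorithm (Misra-Gries): with k-1 counters, a single pass over a stream of length n finds every item occurring more than n/k times, up to false positives that a second counting pass can remove.

```python
# Misra-Gries "heavy hitters" sketch: keep at most k-1 counters; when an
# unseen item arrives and the counters are full, decrement them all.
# Every item with frequency above n/k is guaranteed to survive.
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:  # decrement all counters; drop those that reach zero
            counters = {key: c - 1 for key, c in counters.items() if c > 1}
    return counters

packets = ["a", "b", "a", "c", "a", "a", "d", "a", "b", "a"]
print(misra_gries(packets, k=3))  # 'a' (6 of 10 items) survives
```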
Text indexing advances over the last two decades which have made massive search engines possible.
Quantum cryptography.
> Can we take any credit for machine learning algorithms? (Adaboost?)
We'd better: the first boosting paper appeared in FOCS'89.
I would definitely add quantum computing to the list. Despite the fact that building decent quantum computers is still on the "to do" list, the impact of quantum computing outside of theory is hard to beat.
Piotr
> Despite the fact that building decent quantum computers is still on the "to do" list, the impact of quantum computing outside of theory is hard to beat.
What does "impact" mean? If you mean that lots of non-theoreticians think quantum computing is really cool and want to hear more about it, then I agree that it has had a big impact. On the other hand, I know of no non-theoretical intellectual/research impact, except for attempts to build a quantum computer. They aren't theoretical, but they aren't practical either.