Through an unexpected turn of events, I am teaching CS 172 at Berkeley. This is the standard course on Finite Automata, Regular Grammars, Turing Machines, undecidability, NP-completeness, etc. At Berkeley it is a course aimed primarily at seniors (i.e. it comes after Algorithms, Discrete Math, and usually other "advanced" courses).
Bill Steiger taught the first weeks (finite automata), but unfortunately he has had to return to New Jersey unexpectedly, and I was asked to take over.
So why did I accept? On the one hand, I can cite Feynman on why teaching is important: if your research fails for a while, you can at least find comfort in your constant teaching. But since said comfort is so easily found in wine, this does not quite explain it.
The main reason I accepted was that I remember taking this course with Mike Sipser in my sophomore year: it was all so much fun. No student needs to go through this! This course should be significantly redesigned, and by teaching it, I can take a stab at the new version.
The main reasons I dislike the course are: (1) the mental masturbation: teaching kids clean but unimportant topics (precisely because they are clean and elegant); and (2) the very outdated perspective, which might have seemed elegant at the time, but has since proven misleading. Here are some examples:
- the pumping lemma. In an era in which limited state matters (think streaming algorithms), and in which we have many exciting developments in understanding limited state via lower bounds, we have students memorize a mechanical way to show that {w w^R} is not regular (that mechanical argument is sketched after this list).
- push-down automata. In case you don't remember this: push-down automata are not a nice limited class of machines that can parse grammars --- they are nondeterministic! Grammars that need the whole nondeterministic shebang are parsed via dynamic programs (most famously, Earley's algorithm); see the parsing sketch after this list. More restricted classes are parsed by algorithms with cryptic names, like the LALR parser.
- the lack of exciting stuff in the undecidability section. People in programming languages and databases go on and on about decidable and undecidable logics. Surely there has to be a more exciting example than the Post correspondence problem (and, no, all rights to use "PCP" for this problem have been revoked as of 1990).
- Turing machines. All computer-looking models of computation are equivalent within logarithmic (often constant) factors. Telling students that we cannot fix a model of computation with less than polynomial imprecision in the running times can only produce a new generation of seriously misguided theorists.
- the P vs NP fetish. I have too much to say about this, so let's keep it for another post. Suffice it to say that the P vs NP fetish has led to people ignoring major topics like fixed-parameter tractability or SAT sparsification.
- a dry approach to space complexity. While things like NL = coNL are reasonably interesting, they seem to me far less interesting than space/time trade-offs. The issue of space/time trade-offs (which has some connection to what computation means in the real world) is completely buried by looking at space complexity alone, leading to some unexciting language theory.
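For reference, here is the "mechanical way" from the first bullet, written out as a short sketch; the particular witness string is my choice, but any version of the argument runs the same way.

```latex
% Sketch: L = { w w^R : w in {0,1}^* } is not regular, via the pumping lemma.
Suppose $L = \{\, w w^R : w \in \{0,1\}^* \,\}$ were regular, with pumping
length $p$. Consider $s = 0^p 1 1 0^p \in L$. The pumping lemma splits
$s = xyz$ with $|xy| \le p$ and $|y| \ge 1$, so $y$ lies entirely in the
first block of $0$'s. Then $x y^2 z = 0^{p+|y|} 1 1 0^p$, whose reverse is
$0^p 1 1 0^{p+|y|} \ne x y^2 z$; it is not a palindrome, hence not in $L$,
contradicting the lemma. So $L$ is not regular.
```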
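And here is the point of the parsing bullet in code: a minimal CYK-style dynamic program for a grammar in Chomsky normal form. This is a sketch, and deliberately not Earley's algorithm (which handles arbitrary grammars with more bookkeeping), but it shows that general context-free parsing is a small dynamic program, with no nondeterministic stack machine in sight. The palindrome grammar in the example is a toy of my own choosing.

```python
# A CYK-style dynamic program: recognize a string against a grammar in
# Chomsky normal form. table[i][j] holds the nonterminals deriving tokens[i:j+1].

def cyk_parse(tokens, unary_rules, binary_rules, start="S"):
    """tokens: list of terminals.
    unary_rules:  {terminal: set of nonterminals A with A -> terminal}
    binary_rules: {(B, C): set of nonterminals A with A -> B C}
    Returns True iff `start` derives the token sequence."""
    n = len(tokens)
    if n == 0:
        return False
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i] = set(unary_rules.get(tok, ()))
    for span in range(2, n + 1):          # length of the substring
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):         # split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary_rules.get((B, C), set())
    return start in table[0][n - 1]

# Toy example: even-length palindromes over {a, b}, i.e. exactly {w w^R},
# in CNF: S -> AA | BB | AX | BY, X -> SA, Y -> SB, A -> a, B -> b.
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "A"): {"S"}, ("B", "B"): {"S"},
          ("A", "X"): {"S"}, ("B", "Y"): {"S"},
          ("S", "A"): {"X"}, ("S", "B"): {"Y"}}
print(cyk_parse(list("abba"), unary, binary))   # True
print(cyk_parse(list("abab"), unary, binary))   # False
```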
My attempts to "fix" the course will be documented on this blog. In the meantime, please suggest your favorite topic that you think should be covered in introductory complexity courses.