Do waterfalls play chess? and other stories

13 August 2011

I finally took the time to read Scott Aaronson’s new paper “Why Philosophers Should Care About Computational Complexity” (there’s a post about it, with lots of comments, on his blog).

Summary: Aaronson, knowing that I’m writing my thesis at the moment, took the liberty of preparing an introduction for me. Thanks! I’m going to include it verbatim. (Just kidding, of course, though I’m actually going to cite this paper extensively.)

The early work on computability had a lot of philosophical significance; in fact, we can say that computability theory was born to answer a philosophical question, Hilbert’s Entscheidungsproblem. We all know that the answer turned out to be negative (or do we? See Section 3.1). Computational complexity theory, the theory of computing under limited resources, has apparently not proved as popular among philosophers. That’s obviously a problem, and this paper tries to do something about it.
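
For concreteness, the negative answer rests on Turing’s diagonal argument, which can be sketched in a few lines of Python (the decider `halts` is hypothetical, and the point of the sketch is precisely that it cannot exist):

```python
# Turing's diagonal argument, as a sketch: suppose halts(prog, inp) decided
# whether prog(inp) terminates. Then the following self-referential program
# is contradictory, so no such decider can exist.

def diagonalize(halts):
    def d(program):
        if halts(program, program):
            while True:          # run forever iff `halts` says we halt
                pass
    return d(d)                  # does d(d) halt? either answer refutes `halts`
```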

After a brief introduction to complexity theory (Section 2), Aaronson turns his attention to one of the cornerstones of the field, and also one of its most frequently criticised points: the relevance of polynomial time, as opposed to exponential time. Here he argues that this distinction is at least as interesting as the one between computable and uncomputable. Section 3.3 contains an interesting question that can be answered by a complexity-theoretic argument: why would we call 2^43112609 − 1 (together with a proof of its primality) a “known” prime, while “the first prime larger than 2^43112609 − 1” feels somehow “unknown”?
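
To make the point concrete, here is a minimal Python sketch (with tiny exponents, of course, not the record one): the Lucas–Lehmer test settles the primality of a Mersenne number 2^p − 1 with roughly p modular squarings, i.e., in time polynomial in the number of digits of the candidate, so the number arrives together with an efficiently checkable certificate. No comparably cheap recipe is known for writing down the first prime beyond it.

```python
# A minimal sketch (not the record computation!) of why 2^p - 1 plus a
# primality proof counts as "known": the Lucas-Lehmer test needs only
# about p modular squarings, polynomial in the digit count.

def lucas_lehmer(p):
    """Return True iff 2^p - 1 is prime, for an odd prime exponent p."""
    m = (1 << p) - 1              # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

for p in [3, 5, 7, 11, 13, 17]:
    print(p, lucas_lehmer(p))     # 11 fails: 2047 = 23 * 89
```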

Section 4 is about the Turing test. This is the first time I’ve seen in print what Aaronson calls the “lookup-table argument” (though he credits Ned Block and others for it): namely, that there’s no doubt whatsoever that an abstract program P able to pass the Turing test exists (you just need a huge but finite table of replies, indexed by the previous history of the conversation). So the only way to argue that no machine can pass the Turing test, short of metaphysical arguments, is to address concerns such as the efficiency of P, or the availability of enough space in the universe to store it. That is, complexity-theoretic concerns.
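
A toy rendering of the lookup-table argument in Python (the table entries are invented, and the gap between this toy and a real P is exactly the paper’s point):

```python
# A toy version of the lookup-table argument. The real P would need one
# entry for every possible conversation history up to the length bound of
# the test: finite, but astronomically large.

LOOKUP = {
    (): "Hello! Ask me anything.",
    ("Are you a machine?",): "Funny you should ask. Are you?",
    # ... roughly exponentially many further entries for a bounded test ...
}

def reply(history):
    """history: tuple of the interrogator's messages so far."""
    return LOOKUP.get(history, "Let's change the subject.")

print(reply(()))
print(reply(("Are you a machine?",)))
```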

Section 5, possibly the most interesting one, addresses the problem of logical omniscience. It is usually assumed that knowledge is closed under deduction rules: for instance, if I know A, and I know B, then I certainly know A ∧ B. But consider this: while I know the basic axioms of mathematics well enough, I surely don’t know that Fermat’s Last Theorem holds (except by trusting the mathematical community, which I usually do), even though it’s a mere deductive consequence of those axioms. We can argue that human beings simply do not possess the computational resources needed to achieve logical omniscience.
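
A minimal illustration of where the cost hides: even in propositional logic, recognising a deductive consequence subsumes tautology checking, a coNP-complete problem, and the obvious algorithm below takes 2^n evaluations for n variables.

```python
# Brute-force tautology checking: exponential in the number of variables.
from itertools import product

def is_tautology(formula, variables):
    """formula: a function from a {var: bool} assignment to bool."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# (p -> q) or (q -> p) is a tautology; verifying it costs 2^2 evaluations.
print(is_tautology(lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]),
                   ["p", "q"]))
```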

Section 6 is about a “waterfall argument” against computationalism. The idea is that the meaning of what a supposed computer actually computes is always imposed by someone looking at it; that is, it’s always relative to some external observer. I freely admit I am (was?) a fan of this argument, though I don’t necessarily see it as an attack on computationalism (I like to call this view “computational relativism”, a term whose invention I claim, since Google returns zero hits for it :-)). According to some proponents, it leads to some “strange” consequences.

For instance, suppose we encode a chess position in the physical state of a waterfall, let the waterfall evolve, and then interpret some “final” state of the waterfall as a chess position again. Can the waterfall be said to play chess? Aaronson argues that it cannot, unless the encoding (which is just a reduction from chess to computing the state of the waterfall) can be carried out by a procedure requiring fewer resources than those needed to actually compute the state of the waterfall. But then a question emerges: does DNA compute Hamiltonian paths?
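
The worry can be caricatured in a few lines of Python (all names here are invented for illustration): if the “decoding” may be arbitrarily powerful, it can ignore the waterfall entirely and do the computation itself, which is exactly the kind of reading Aaronson’s resource criterion is designed to exclude.

```python
import random

def waterfall_state(seed):
    """Arbitrary 'physics': a pseudo-random state with no chess inside."""
    return random.Random(seed).getrandbits(32)

def best_chess_move(position):
    """Stand-in for an expensive computation (e.g. exhaustive game-tree
    search); in this caricature it is where all the real work happens."""
    return "e2e4"

def interpret(state, position):
    # A dishonest "decoding": it ignores the waterfall's state and solves
    # the problem itself. Aaronson's criterion rules such readings out by
    # requiring the reduction to be cheaper than the computation the
    # waterfall is being credited with.
    _ = state
    return best_chess_move(position)

print(interpret(waterfall_state(42), "initial position"))
```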

Section 7 describes how computational complexity can help to define what inductive inference is about, including Occam’s razor and the “new riddle of induction”.

Then, in Section 8, Aaronson argues that complexity theory might inspire some debate about the interpretation of quantum physics, particularly about the many-worlds interpretation.

Section 9 describes new notions of proof, quite distinct from the usual “formal” and “informal” proofs. These are all based on complexity-theoretic and cryptographic results: interactive proof systems, zero-knowledge proofs, probabilistically checkable proofs.
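
As a taste, here is a minimal Python simulation of the classic zero-knowledge protocol for graph isomorphism, due to Goldreich, Micali and Wigderson: an honest prover and verifier, run for twenty rounds. A prover unable to answer both possible challenges would be caught with probability 1/2 per round.

```python
import random

n = 5
perm = list(range(n)); random.shuffle(perm)        # prover's secret isomorphism
G1 = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}      # public graph

def apply_perm(p, G):
    return {tuple(sorted((p[u], p[v]))) for (u, v) in G}

G2 = apply_perm(perm, G1)                          # public graph, isomorphic to G1

def invert(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return inv

def compose(p, q):                                 # (p . q)[i] = p[q[i]]
    return [p[q[i]] for i in range(len(p))]

for _ in range(20):
    # Prover commits to a fresh random copy H of G1.
    sigma = list(range(n)); random.shuffle(sigma)
    H = apply_perm(sigma, G1)
    # Verifier flips a coin and asks for the isomorphism from G1 or from G2.
    challenge = random.choice([1, 2])
    if challenge == 1:
        tau = sigma                                # H = tau(G1)
        assert apply_perm(tau, G1) == H
    else:
        tau = compose(sigma, invert(perm))         # H = tau(G2)
        assert apply_perm(tau, G2) == H
    # Either way the verifier sees only a random isomorphic copy of a
    # public graph, and learns nothing about the secret `perm`.

print("verifier accepts after 20 rounds")
```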

Section 10 is about the difference between time and space, usually summarised as “space can be reused, time cannot”. But what if we allow time travel (i.e., closed timelike curves)? The assumption that P ≠ NP might then suggest an argument for the impossibility of time travel, even assuming the validity of quantum physics.

Section 11 is about economics and game theory; here complexity theory can be useful to describe the behaviour of agents under bounded rationality assumptions.

Finally, in Section 12, Aaronson concludes that lots of philosophical problems have interesting complexity-theoretic aspects. Several questions remain open, and the most important of them is still “If P ≠ NP, then how have humans managed to make such enormous mathematical progress, even in the face of the general intractability of theorem-proving?”.

I really hope that Aaronson’s paper spurs a lot of philosophical discussion, both involving and concerning complexity theory; I too believe there’s much to write on the subject.