Undecidability in terms of complexity

13 December 2010

In his classic book Computational Complexity, Papadimitriou writes (page 59) that

Undecidability is in some sense the most lethal form of complexity.

I’ve just come across a paper with a very interesting remark along the same lines: Making proofs without Modus Ponens: An introduction to the combinatorics and complexity of cut elimination by A. Carbone and S. Semmes, published in the Bulletin of the American Mathematical Society. The paper is about the length of proofs without cut elimination (the length of proofs is a subject I’ve already touched upon here and here).

On page 113, the authors write

By contrast [with propositional logic] in predicate logic one typically faces issues of algorithmic decidability or undecidability. That is, whether there is an algorithm at all that always gives the right answer, never mind how long it takes. The problem of determining whether a formula in predicate logic is a tautology is algorithmically undecidable. One can think of this as a matter of complexity, as follows. The tautologies in predicate logic of length at most $n$ form a finite set for which one can choose a finite set of proofs. Let $f(n)$ denote the maximum of the lengths of the shortest proofs of tautologies of length $\le n$. The fact that there is no algorithm for determining whether or not a formula is a tautology means that $f(n)$ grows very fast, faster than any recursive function.

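Spelled out in symbols (my own paraphrase of the authors’ definition, not their notation), the function in question is

$$f(n) \;=\; \max_{\substack{\tau \text{ a tautology} \\ |\tau| \le n}} \; \min\{\, |\pi| : \pi \text{ is a proof of } \tau \,\},$$

and the claim is that no total recursive function $g$ satisfies $f(n) \le g(n)$ for all $n$.
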
Essentially, this means that we can’t design an algorithm for an undecidable problem because the search space is too large: the only bound we can put on it is a function that grows faster than any computable function. Compare this with the search space of the “witnesses” for an NP-complete problem: its size is exponential, which is too large to explore efficiently (at least, as far as we know at the moment), but still small enough to keep the problem decidable.
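
To make the connection explicit, here is the standard contrapositive argument, sketched in my own words rather than taken from the paper:

$$
\begin{aligned}
&\text{Assume } f(n) \le g(n) \text{ for some total recursive } g. \text{ Given a formula } \varphi \text{ with } |\varphi| = n:\\
&\quad \text{(1) enumerate all strings } \pi \text{ of length at most } g(n) \text{ over the proof alphabet (a finite, computable set);}\\
&\quad \text{(2) check each candidate } \pi \text{ (proof checking is decidable) and accept iff some } \pi \text{ proves } \varphi.\\
&\text{If } \varphi \text{ is a tautology, its shortest proof has length at most } f(n) \le g(n), \text{ so the search finds it;}\\
&\text{if not, no proof exists at all. This would decide tautologyhood in predicate logic, which is impossible,}\\
&\text{so no such } g \text{ exists.}
\end{aligned}
$$

The same brute-force template is exactly what works for an NP problem: the witness space is exponential, but it is bounded by a computable function of the input length, so exhaustive search, however slow, still decides the problem.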