Undecidability in terms of complexity

In his classic book Computational Complexity, Papadimitriou writes (page 59) that

Undecidability is in some sense the most lethal form of complexity.

I’ve just come across a paper with a very interesting remark along the same lines: Making proofs without Modus Ponens: An introduction to the combinatorics and complexity of cut elimination by A. Carbone and S. Semmes, published in the Bulletin of the American Mathematical Society. The paper is about the length of proofs (a subject I’ve already touched upon here and here), and in particular about what happens to proof length when the cut rule, the sequent-calculus counterpart of modus ponens, is not available.

On page 113, the authors write

By contrast [with propositional logic] in predicate logic one typically faces issues of algorithmic decidability or undecidability. That is, whether there is an algorithm at all that always gives the right answer, never mind how long it takes. The problem of determining whether a formula in predicate logic is a tautology is algorithmically undecidable. One can think of this as a matter of complexity, as follows. The tautologies in predicate logic of length at most n form a finite set for which one can choose a finite set of proofs. Let f(n) denote the maximum of the lengths of the shortest proofs of tautologies of length ≤ n. The fact that there is no algorithm for determining whether or not a formula is a tautology means that f(n) grows very fast, faster than any recursive function.
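To make the quoted definition explicit (the notation below is mine, not the authors’): writing |π| for the length of a proof π and Taut_n for the (finite) set of predicate-logic tautologies of length at most n, the function is

$$ f(n) \;=\; \max_{\varphi \,\in\, \mathrm{Taut}_n} \; \min \{\, |\pi| \;:\; \pi \text{ is a proof of } \varphi \,\}, $$

and “grows faster than any recursive function” means that f is not bounded above by any total recursive function g: for every such g there are infinitely many n with f(n) > g(n).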

Essentially, this means that we can’t design an algorithm to solve an undecidable problem because the search space is too large: the only way to bound it is via a function that grows faster than any computable function. Indeed, if f were bounded above by some recursive function g, we could decide whether a formula of length n is a tautology by exhaustively checking every proof of length at most g(n). Compare this with the search space of the “witnesses” for an NP-complete problem: its size is exponential, which is too large to be explored efficiently (at least, as far as we know at the moment), but still small enough to keep the problem decidable.
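To make the contrast concrete, here is a deliberately naive Python sketch of witness search for an NP-complete problem, Subset Sum (the function name and the example instances are mine, chosen only for illustration). The search space has 2^n elements: exponentially large, but finite and computably bounded, so exhaustive search always terminates and the problem is decidable. For an undecidable problem, no analogous computable bound on the search exists.

```python
from itertools import chain, combinations

def subset_sum(numbers, target):
    """Brute-force decision procedure for Subset Sum.

    Enumerates all 2^n candidate witnesses (subsets of `numbers`):
    hopelessly inefficient, but guaranteed to terminate, because the
    search space is finite and bounded by a computable function of n.
    """
    subsets = chain.from_iterable(
        combinations(numbers, k) for k in range(len(numbers) + 1)
    )
    return any(sum(s) == target for s in subsets)

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # True: 3 + 8 + 4 = 15
print(subset_sum([1, 2, 5], 4))            # False: no subset sums to 4
```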

Position paper: Computing with LEGO

There is a LEGO Turing machine, constructed at Aarhus University (see here for more information):

There is also a LEGO Babbage difference engine built by Andrew Carol:

Now we also have a LEGO Antikythera mechanism (an ancient Greek mechanical computer used to predict astronomical positions and eclipses), again by Andrew Carol:

(Thanks to CompSciFact on Twitter for the last one.)

I hereby propose that LEGO brick constructions be considered serious unconventional computing devices, and that theoretical computer scientists investigate them in order to establish their computational and complexity-theoretic properties. (What about LEGO bricks as a non-uniform computing model?)

Update (22 June 2012)

Here is another LEGO Turing machine, built at Centrum Wiskunde & Informatica (Amsterdam) “in honor of the Alan Turing year 2012”:

And what about circuits? Here is how to construct mechanical (“push-pull”) logic gates. For instance, this is an AND gate:
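Functionally, of course, such gates are just Boolean functions, whatever the bricks do mechanically. As a toy sketch (my own Python, modelling only the truth tables and not the mechanics), here is how gates like these compose into a small circuit such as a half adder:

```python
# Truth-table models of the basic gates; the mechanical "push-pull"
# constructions realise these same functions with bricks.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

def XOR(a, b):
    """Exclusive or, built by composing the gates above."""
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """A tiny circuit: returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", half_adder(a, b))
```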

On a philosophical quest

This was originally posted as an answer to the question Why go to theoretical computer science/research? on the Theoretical Computer Science Stack Exchange website.

One of the main reasons why I find the theory of computation (“my” branch of theoretical computer science) fascinating and worth studying is the following: it provides us with a way to investigate some deep (and sometimes puzzling) philosophical questions.

One of the founders of the theory of computation, Alan Turing, tried to pin down the meaning of “computing a function” for a human being equipped with a piece of paper, by giving a mathematical description of the process. I’m not the only one to think he was extremely successful, and Turing machines proved to be an accurate model of many other computing processes.
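As a toy illustration of how little machinery the definition needs (this is my own minimal sketch, not Turing’s original formulation), a machine is just a finite table of transitions acting on a tape; the example machine below flips the bits of a binary string and halts at the first blank.

```python
def run_turing_machine(transitions, tape, state="start", blank="_", halt="halt"):
    """Simulate a one-tape Turing machine.

    `transitions` maps (state, read symbol) to (new state, written symbol,
    move), where move is "L" or "R". The run stops when `halt` is reached.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A machine that inverts a binary string, halting on the first blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine(invert, "101100"))  # -> 010011
```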

Now that we possess a class of mathematical objects describing computations, we can actually prove theorems about them, and thus try to uncover what can be computed, and how. It immediately turned out that lots of perfectly legitimate functions cannot be computed at all, and that they can be classified according to their degree of uncomputability (some functions are just “more uncomputable” than others).
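The textbook example is the halting problem, and the diagonal argument behind its uncomputability fits in a few lines of (hypothetical) Python; `halts` and `diagonal` are names I made up, and the point is precisely that `halts` cannot be implemented.

```python
def halts(program_source, input_data):
    """Hypothetical total decider: True iff `program_source`, run on
    `input_data`, eventually halts. Assumed to exist only in order to
    derive a contradiction; no such implementation is possible."""
    raise NotImplementedError("assumed for the sake of contradiction")

def diagonal(program_source):
    """Halts exactly when `program_source` does NOT halt on itself."""
    if halts(program_source, program_source):
        while True:   # loop forever
            pass
    # otherwise: halt immediately

# Running `diagonal` on its own source would halt if and only if it
# does not halt, so the assumed `halts` cannot exist.
```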

Some other researchers, the first of whom are usually identified as Juris Hartmanis and Richard E. Stearns, tried to describe mathematically what it means for a function (resp., a problem) to be hard or easy to compute (resp., to solve). There are several complexity measures according to which the hardness of problems can be described; the most common one is just how much time we need to solve them. Alan Cobham and Jack Edmonds were quite successful in identifying a reasonable notion of “efficient computation”.
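Their proposal, now usually called the Cobham–Edmonds thesis, identifies “efficiently solvable” with “solvable in time polynomial in the input size”, i.e. with membership in the class

$$ \mathrm{P} \;=\; \bigcup_{k \ge 1} \mathrm{DTIME}\!\left(n^{k}\right). $$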

Within the computational complexity framework, we can now prove some results that are consistent with our intuitive notion of computation. My favourite example is the time hierarchy theorem: if we are given more time to compute, we can solve harder problems.
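In one common formulation (for deterministic time): if g is time-constructible and f(n) log f(n) = o(g(n)), then

$$ \mathrm{DTIME}\big(f(n)\big) \;\subsetneq\; \mathrm{DTIME}\big(g(n)\big), $$

so even a modest increase in the time bound strictly enlarges the class of solvable problems.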

The central open problem of complexity theory, P vs NP, is just a formalisation of another philosophically significant question: is it really harder to solve a problem than to check whether an alleged solution to it is correct? I believe this question is worth asking, and answering, independently of its practical significance.
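In symbols (one standard way to state it): a language L belongs to NP if its solutions, or “witnesses”, can be checked in polynomial time, i.e. if there are a polynomial-time verifier V and a polynomial p such that

$$ x \in L \;\iff\; \exists\, w,\ |w| \le p(|x|),\ V(x, w) = 1, $$

and the P vs NP question asks whether every such L is already in P, that is, whether being able to check solutions efficiently implies being able to find them efficiently.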