Archive for the ‘Logic’ category

Undecidability in terms of complexity

13 December 2010

In his classic book Computational Complexity, Papadimitriou writes (page 59) that

Undecidability is in some sense the most lethal form of complexity.

I’ve just come across a paper with a very interesting remark along the same lines: Making proofs without Modus Ponens: An introduction to the combinatorics and complexity of cut elimination by A. Carbone and S. Semmes, published in the Bulletin of the American Mathematical Society. This paper is about the length of proofs (a subject I’ve already touched upon here and here), in particular about what happens to proof length when the cut rule is not available.

On page 113, the authors write

By contrast [with propositional logic] in predicate logic one typically faces issues of algorithmic decidability or undecidability. That is, whether there is an algorithm at all that always gives the right answer, never mind how long it takes. The problem of determining whether a formula in predicate logic is a tautology is algorithmically undecidable. One can think of this as a matter of complexity, as follows. The tautologies in predicate logic of length at most n form a finite set for which one can choose a finite set of proofs. Let f(n) denote the maximum of the lengths of the shortest proofs of tautologies of length ≤ n. The fact that there is no algorithm for determining whether or not a formula is a tautology means that f(n) grows very fast, faster than any recursive function.

Essentially, this means that we can’t design an algorithm to solve an undecidable problem because the search space cannot be computably bounded: the only bound on the length of the proofs we would have to search through is a function that grows faster than any computable function, so exhaustive search never turns into a terminating decision procedure. Compare this with the search space of the “witnesses” for an NP-complete problem: its size is exponential, which is too large to be explored efficiently (at least, as far as we know at the moment), but still small enough, and computably bounded, to make the problem decidable.
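To make the contrast concrete, here is a minimal Python sketch of the decidable side: brute-force SAT solving, where the witness space has size 2^n. The function name and the clause encoding (lists of signed integers, DIMACS style) are my own illustration, not anything taken from the papers cited above.

```python
from itertools import product

def sat_by_search(clauses, n_vars):
    """Brute-force SAT: try all 2**n_vars assignments.

    `clauses` is a list of clauses, each a list of nonzero integers;
    the literal k means "variable k is true" and -k means "variable k
    is false" (DIMACS-style encoding, assumed here for illustration).
    """
    for bits in product([False, True], repeat=n_vars):
        # An assignment satisfies the formula iff every clause
        # contains at least one literal it makes true.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2): satisfiable by x1 = x2 = true.
print(sat_by_search([[1, -2], [2]], 2))  # True
```

The loop is bounded by 2^n, an enormous but perfectly computable function of the input size, so the search always terminates. For provability in predicate logic, the analogous bound f(n) on proof length grows faster than every computable function, so no such terminating loop can be written.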

Karl Popper and Turing’s thesis

4 August 2010

Those of you who have been following me on Twitter may have noticed that, lately, I seem to be tormented by scientifico-philosophical questions related to Turing machines and Turing’s thesis.

Indeed I am, because I’ve chosen this as the subject of an exam paper, a decision that I’m beginning to regret: the matter is so deep and fascinating, and I’m so little qualified to discuss it, that I’m spending an awful lot of time perusing unfamiliar literature without actually writing anything. One could spend a lifetime just getting familiar with the amazing history of computability, even limiting oneself to the first half of the 20th century, and we haven’t even begun with the philosophy of science yet!

I decided to attack the big ugly question directly: consider (one particular version of) Turing’s thesis, the claim that every arithmetical function computable by a human being, provided with enough ink and paper, can also be computed by one of Turing’s machines. Is this a scientific statement?

The problem of establishing whether something is science or not is called the demarcation problem. Having, unfortunately, no time to get a philosophy degree at the moment, I chose a particular point of view, that of Karl Popper (the philosopher of critical rationalism), who was one of the main names in the course I attended. Popper says that what makes science science is not the verifiability of our claims (verifying them is an inevitably hopeless task) but their falsifiability.

Black Swan

Popper rejects induction as a valid inference principle in science, essentially on purely logical grounds. Having seen enough white swans doesn’t allow us to infer that all swans must be white, as somewhere else a black one might be hiding and mocking us. However, this must not stop us from making the claim that all swans are white: it is a perfectly reasonable hypothesis; we should just not deceive ourselves into thinking that our claim is somehow justified by our previous observations.

What makes the white swan hypothesis a good hypothesis is that it can be falsified. Tomorrow a friend of ours could come to us and produce a black swan: this falsifies our claim, and we should (in principle) abandon it and make science advance by formulating another.

Popper says: that’s what science is made of! Formulate bold statements, using whatever means you find convenient: observations, deduction from metaphysical principles, imagination; it doesn’t really matter. And the bolder you are, the better: a more falsifiable statement is more informative, as it excludes more possible worlds. Then you must fiercely attack your theory, making lots and lots of empirical observations, with the goal not of proving it, but of making it fail.

And the testing never ends. It’s only a matter of convention to stop trying to falsify our theory for a moment and do something else, because we can always resume our experiments later, when doubts start troubling us again. In his Logik der Forschung (translated as The Logic of Scientific Discovery), Popper makes this point in a striking way.

The game of science is, in principle, without end. He who decides one day that scientific statements do not call for any further test, and that they can be regarded as finally verified, retires from the game.

Subscribing to Popper’s view, I’ll make my own bold (philosophical) claim: that Turing’s thesis is a scientific statement. Details in the next episodes.

In the meantime, here is a nice short introduction to Popper’s philosophy of science: Popper’s account of scientific method by J. A. Passmore.

Hilbert’s troubles

21 July 2010

While digging through the early computability literature, I found a precious and long-forgotten account of the history of Hilbert’s problems. I’m proud to present my discovery to the scientific community.

*** david (daboss@uni-göttingen.de) has joined #logic
<david> plz, solve integer eqns and prove completness of teh arithmetics
<kurt> just 1 sec, brb
* kurt is away: proving some stuff
* kurt is back
<kurt> lol dave, taek a look at this: http://is.gd/dAo7d
<david> FFFUUUUUUUUUUUU
<david> at least solve teh entsk^Wentschi^W entscheidungsproblem...
<alonzo> cant do that, lambada calculus proves it
<kurt> oh stfu alonzo
<alan> my machiens are liek people and cant do it
<alonzo> alan: your right, and so am i (told ya)
<kurt> lol dave, epic fail!!1!
<alan> lmao
<david> wtf is wrong with you ppl??
<gottlob> i kno the feeling bro
*** david (daboss@uni-göttingen.de) has left #logic (pwnd)

After a few pages (omitted here) there’s a final remark on the subject:

<yuri> oh btw, cant solve integer eqns either

On the length of proofs (episode II)

12 July 2010

Last week I wrote a post about arithmetical theorems having very long proofs, and linked to an article on the SEP for the details. Today, my colleague Luca Manzoni and I realised that there is a much simpler proof; it is essentially the same proof described by Scott Aaronson for the uncomputability of the busy beaver function, and it holds for any undecidable, recursively axiomatisable theory T (that is, any theory having a recursive set of axioms from which its sentences can be derived, but no decision procedure for checking whether a sentence is a theorem).

Let L(φ) be the length in symbols of the shortest proof of the sentence φ, using a standard set of inference rules together with any recursive set of axioms for T; set L(φ) = 0 if φ is not a theorem. Also, let L(n) be the maximum L(φ) among all sentences φ of length at most n.

Theorem. L grows faster than any computable function.

Proof. Otherwise, given a sentence φ, we can first compute an integer f(|φ|) ≥ L(|φ|), then enumerate all proofs of length at most f(|φ|) and check them. If at least one of these is a proof of φ, we answer “yes”, otherwise “no”. But this is a decision procedure for T, since we know that if φ is a theorem, then it has a proof of length at most f(|φ|); contradiction. □
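To see the contradiction concretely, here is a minimal Python sketch of the decision procedure that would exist if such a computable f were available. All the names here (`f`, `proofs_up_to`, `is_proof_of`) are hypothetical stand-ins: the whole point of the theorem is that no computable f of this kind can exist.

```python
def decide(phi, f, proofs_up_to, is_proof_of):
    """Hypothetical decision procedure for the theory T.

    Assumes a computable f with f(n) >= L(n) for all n, an enumerator
    `proofs_up_to(k)` yielding the finitely many candidate proofs of
    length at most k, and a checker `is_proof_of(p, phi)`; checking a
    single proof is decidable for a recursively axiomatised theory.
    """
    bound = f(len(phi))
    for proof in proofs_up_to(bound):
        if is_proof_of(proof, phi):
            return "yes"  # found a proof within the computable bound
    # If phi were a theorem, it would have a proof of length at most
    # L(|phi|) <= f(|phi|), so finding none means phi is not a theorem.
    return "no"
```

Since T is undecidable, no such procedure exists, and so the assumed bound f cannot be computable.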

In particular, the theorem holds for Peano arithmetic and friends.

On the length of proofs

4 July 2010

One of the most amazing facts about (un)computability is the existence of functions f : ℕ → ℕ that grow faster than any recursive function, that is, for all computable g : ℕ → ℕ we have

\displaystyle \lim_{n \to \infty} \frac{g(n)}{f(n)} = 0.

The most common function of this kind is the busy beaver function, described by Tibor Radó in his paper On non-computable functions; for an introduction to this topic, you can read the excellent paper Who can name the bigger number? by Scott Aaronson (that’s where I first read about it).
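As a quick illustration of why the busy beaver function BB grows faster than any computable function (BB(n) being the maximum number of steps a halting n-state Turing machine can make on an initially blank tape): a computable upper bound on BB would let us decide the halting problem. The names `f` and `simulate` in this sketch are hypothetical stand-ins, not real library calls.

```python
def halts_on_blank_tape(machine, n_states, f, simulate):
    """Hypothetically decide halting, assuming a computable f >= BB.

    `simulate(machine, max_steps)` is assumed to run `machine` on a
    blank tape for at most `max_steps` steps and return True iff it
    halted within them.
    """
    bound = f(n_states)
    # If the machine is still running after f(n) >= BB(n) steps, it
    # will never halt, because BB(n) bounds the running time of every
    # halting n-state machine.
    return simulate(machine, bound)
```

Since the halting problem is undecidable, no such computable f can exist.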

Radó’s paper was published in the Bell System Technical Journal in 1962 but, as often happens, related work had been done earlier by Kurt Gödel; see, for instance, the paper bearing the same title as this post, whose translation can be found in the classic collection The Undecidable (edited by Martin Davis).

A cool result that can be proved is that the length of the shortest proofs of arithmetical statements also grows faster than any recursive function. Quoting Juliette Kennedy’s article about Gödel on the Stanford Encyclopedia of Philosophy:

Theorem 5. Given any recursive function f there are provable sentences φ of arithmetic such that the shortest proof is greater than f(⌜φ⌝) in length.

In other words, some theorems just have really freaking long proofs. See the SEP article for the details.

