What did Alan Turing have in mind when he conceived his universal computing machine?
One could speculate that his train of thought went something like this:
- I can simulate any of my machines (there’s experimental evidence, and of course I defined them to work like people doing maths on a piece of paper).
- I’ve already formulated a computability thesis that says: if I can do it, then a computing machine also can.
- But then, there must be a universal computing machine! Let’s fill in the details…
Or maybe he came to that realisation in some completely different way. Possibly directly from Gödel’s work?
Who knows, maybe there even exists something written by Turing himself on this subject.
While digging through the early computability literature, I found a precious and long-forgotten account of the history of Hilbert’s problems. I’m proud to present my discovery to the scientific community.
*** david (daboss@uni-göttingen.de) has joined #logic
<david> plz, solve integer eqns and prove completness of teh arithmetics
<kurt> just 1 sec, brb
* kurt is away: proving some stuff
* kurt is back
<kurt> lol dave, taek a look at this: http://is.gd/dAo7d
<david> at least solve teh entsk^Wentschi^W entscheidungsproblem...
<alonzo> cant do that, lambada calculus proves it
<kurt> oh stfu alonzo
<alan> my machiens are liek people and cant do it
<alonzo> alan: your right, and so am i (told ya)
<kurt> lol dave, epic fail!!1!
<david> wtf is wrong with you ppl??
<gottlob> i kno the feeling bro
*** david (daboss@uni-göttingen.de) has left #logic (pwnd)
After a few pages (omitted here) there’s a final remark on the subject:
<yuri> oh btw, cant solve integer eqns either
Last week I wrote a post about arithmetical theorems having very long proofs, and linked to an article on the SEP for the details. Today my colleague Luca Manzoni and I realised that there is a much simpler proof; it is essentially the same proof described by Scott Aaronson for the uncomputability of the busy beaver function, and it holds for any undecidable, recursively axiomatisable theory T (that is, a theory whose theorems are generated by some recursive set of axioms, but which admits no decision procedure for checking whether a sentence is a theorem).
Let L(φ) be the length in symbols of the shortest proof of the sentence φ, using a standard set of inference rules together with any recursive set of axioms for T; set L(φ) = 0 if φ is not a theorem. Also, let L(n) be the maximum L(φ) among all sentences φ of length at most n.
Theorem. L grows faster than any computable function.
Proof. Suppose otherwise, i.e., that some computable f satisfies f(n) ≥ L(n) for all n. Then, given a sentence φ, we can first compute f(|φ|) ≥ L(|φ|), then enumerate all proofs of length at most f(|φ|) and check them. If at least one of these is a proof of φ, we answer “yes”, otherwise “no”. But this is a decision procedure for T, since we know that if φ is a theorem, then it has a proof of length at most f(|φ|); this contradicts the undecidability of T. □
In particular, the theorem holds for Peano arithmetic and friends.
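To make the structure of the argument concrete, here is a toy sketch in Python. The “theory”, its single axiom, and its single inference rule are entirely made up (and this toy theory is of course decidable, unlike the T in the theorem); the point is only to show how a computable bound f on proof length would turn proof enumeration into a decision procedure.

```python
# Toy stand-in for T: the only axiom is "A", and the only inference rule
# derives s + "B" from s, so the theorems are exactly "A", "AB", "ABB", ...
AXIOMS = {"A"}

def rule(s):
    """The single inference rule of the toy system."""
    return s + "B"

def proofs_up_to(budget):
    """Enumerate every proof whose total length in symbols is <= budget.

    A proof is a list of sentences, starting from an axiom, where each
    line follows from the previous one by the inference rule.
    """
    def extend(proof, used):
        yield proof
        nxt = rule(proof[-1])
        if used + len(nxt) <= budget:
            yield from extend(proof + [nxt], used + len(nxt))
    for ax in AXIOMS:
        if len(ax) <= budget:
            yield from extend([ax], len(ax))

def decides(phi, f):
    """The hypothetical decision procedure from the proof: if a computable
    f bounded L(n), checking all proofs of length <= f(|phi|) would
    decide theoremhood."""
    return any(proof[-1] == phi for proof in proofs_up_to(f(len(phi))))

# In the toy system the shortest proof of "A" + "B"*k has total length
# 1 + 2 + ... + (k+1), so f(n) = n*(n+1)/2 bounds L here.
f = lambda n: n * (n + 1) // 2
```

For instance, `decides("ABB", f)` finds the proof `["A", "AB", "ABB"]` within the budget and answers “yes”, while `decides("BA", f)` exhausts the budget and answers “no”. The contradiction in the real theorem is precisely that no computable `f` can play this role for an undecidable T.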
The primitive recursive functions are a class of computable arithmetic functions with a reasonably natural definition. They do not exhaust all computable functions (e.g., Ackermann’s function grows faster than any PR function). Here is a list of reasons to like them anyway.
- The PR functions are exactly the functions that can be computed by using only for loops (i.e., without unbounded kinds of loop such as while and repeat-until).
- Each time you apply the primitive recursion schema, you get new functions. Meaning: with n + 1 nested for loops you can compute functions that you can’t compute using only n nested for loops.
- Any computable function can be obtained by using PR functions plus a single application of the μ (unbounded search) operator. In other words, you just need a single while loop (on top of the for loops) to reach the full power of Turing machines. This is called Kleene’s normal form theorem.
- Every recursively enumerable set is also primitively recursively enumerable. See this question on MathOverflow, where I recently discovered this fact.
- The PR functions are exactly the functions that can be computed by Turing machines operating within a PR time bound. Isn’t this amazing? See, e.g., this paper by A.R. Meyer and D.M. Ritchie (yes, that D.M. Ritchie).
- Any function of practical use is primitive recursive. This is a consequence of the previous point: polynomials, but also exponentials (and much, much worse functions) are all PR. If a function is not PR, then you don’t have enough time to compute it.
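As a small illustration of the first and fourth points, here is a sketch in Python (the function names are mine, not standard): basic arithmetic built from for loops with precomputed bounds only, plus the single unbounded search of Kleene’s normal form.

```python
# Bounded loops only: each definition below uses for loops whose bounds
# are computed before the loop starts, mimicking the loop-program
# characterisation of the PR functions. Each extra level of nesting
# buys strictly more functions.

def add(x, y):
    for _ in range(y):          # one loop level
        x += 1
    return x

def mul(x, y):
    r = 0
    for _ in range(y):          # two nested levels (mul iterates add)
        r = add(r, x)
    return r

def power(x, y):
    r = 1
    for _ in range(y):          # three nested levels
        r = mul(r, x)
    return r

def mu(pred):
    """The mu operator: the single unbounded while loop that, added to
    the PR functions, yields all computable functions (Kleene's normal
    form theorem)."""
    n = 0
    while not pred(n):          # may diverge if pred is never true
        n += 1
    return n
```

For instance, `mu(lambda n: power(2, n) > 1000)` performs an unbounded search and returns 10, the least exponent for which 2ⁿ exceeds 1000.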
I strongly urge you to like the primitive recursive functions too.
Today we submitted the final version of our paper P systems with elementary active membranes: Beyond NP and coNP, which has been accepted for the Eleventh International Conference on Membrane Computing, taking place on 24–27 August.
The venue of CMC11 is Friedrich-Schiller-Universität in Jena, Germany. This university has some slightly notable alumni, such as Rudolf Carnap, Gottlob Frege, Gottfried “Calculemus” Leibniz, Karl Marx, and Arthur Schopenhauer among others; the teaching staff was marginally interesting too, including modest names such as Johann Gottlieb Fichte, Georg Wilhelm Friedrich Hegel, Friedrich Schelling, and Friedrich Schiller himself. In short, rest assured that I certainly won’t be going around hunting for busts or statues of these guys, and that I won’t get someone to take pictures of me beside them to post on this blog.
Anyway, in our paper we solve a PP-complete problem via P systems with active membranes, using only division for elementary membranes (that is, membranes that don’t contain other membranes). The problem is a variant of Majority-SAT.
P systems with elementary active membranes are known for their ability to generate exponentially many membranes in linear time by using membrane division. These membranes can then explore, in parallel, the whole space of candidate solutions of an NP-complete problem, thus solving it in polynomial time.
In the paper we extend this algorithm (which is pretty standard in our literature) by checking whether the number of membranes finding a satisfying assignment exceeds a certain threshold, instead of just checking whether there is at least one. Lo and behold, not only can we solve NP-complete problems in polynomial time, but PP-complete ones as well.
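Here is a sequential Python sketch of the counting idea, for plain Majority-SAT rather than the exact variant from the paper (and of course real P systems work quite differently): each loop iteration plays the role of one membrane checking one assignment, and the final count is compared against a threshold instead of merely being tested for non-emptiness.

```python
from itertools import product

def majority_sat(clauses, n_vars):
    """Does more than half of all assignments satisfy the CNF formula?

    clauses: list of clauses, each a list of non-zero ints; literal k
    means variable k, literal -k means its negation (DIMACS style).
    """
    satisfying = 0
    for bits in product([False, True], repeat=n_vars):
        # one "membrane" per assignment: explored in parallel by the
        # P system, sequentially here
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            satisfying += 1
    # counting instead of existence: this is what pushes the problem
    # from NP to PP
    return satisfying > 2 ** (n_vars - 1)
```

For example, (x₁ ∨ x₂) is satisfied by three of the four assignments, so `majority_sat([[1, 2]], 2)` is true, while a single unit clause is satisfied by exactly half of them, so `majority_sat([[1]], 1)` is false.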
One of the most amazing facts about (un)computability is the existence of functions f : ℕ → ℕ that grow faster than any recursive function, that is, for all computable g : ℕ → ℕ we have f(n) > g(n) for all sufficiently large n.
The most common function of this kind is the busy beaver function, described by Tibor Radó in his paper On non-computable functions; for an introduction to this topic, you can read the excellent paper Who can name the bigger number? by Scott Aaronson (that’s where I first read about it).
Radó’s paper was published in the Bell System Technical Journal in 1962 but, as often happens, related work was done before by Kurt Gödel; see, for instance, the paper bearing the same title as this post, whose translation can be found in the classic collection The Undecidable (edited by Martin Davis).
A cool related result is that the length of the proofs of arithmetical statements also grows faster than any recursive function. Quoting Juliette Kennedy’s article about Gödel on the Stanford Encyclopedia of Philosophy:
Theorem 5. Given any recursive function f there are provable sentences φ of arithmetic such that the shortest proof is greater than f(⌜φ⌝) in length.
In other words, some theorems just have really freaking long proofs. See the SEP article for the details.
Bob is a programmer and software engineer, developing automated software testing utilities for his company. However, his manager Eve wants to fire him and, in order to get an excuse, she assigns him a very hard task: developing a tool to check the termination of programs. Eve, for some reason, seems to know that he’s doomed to failure.
Fortunately, the exhausted Bob receives a visit in his dreams from someone important, who explains the problem to him and helps him get his revenge.
Thanks to Lee Graham for this cartoon.