Developments in Language Theory 2010: Where tetration has lease

Tomorrow I’m leaving for Canada, to attend the 14th International Conference on Developments in Language Theory, held at the University of Western Ontario in London, Ontario. You can find the programme here.

Our DLT 2010 paper, authored by yours truly, Alberto Leporati and Claudio Zandron, is titled On a powerful class of non-universal P systems with active membranes. In it, we describe a restricted variant of P systems with active membranes (computing devices inspired by biological cells) that is not Turing-equivalent, but nonetheless decides a surprisingly large class of languages. This class also seems to be somewhat “new” (we haven’t been able to find any reference to it in the literature).

Consider the operation of tetration (iterated exponentiation), defined as

{}^{n}b = \underbrace{b^{b^{b^{\cdot^{\cdot^{\cdot^{b}}}}}}}_{n\text{ times}}

and call PTETRA the class of languages decided by Turing machines working in time {}^{p(n)}2 for some polynomial p (the P in PTETRA signifies the fact that exponentiation is iterated only polynomially many times instead of, say, exponentially many). This class is trivially closed under union, intersection, complement, and polytime reductions. Furthermore, it’s easy to see that it can also be defined in terms of space {}^{p(n)}2 (you can also throw in nondeterminism, if you really want to).
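
To get a feel for how fast these bounds grow, here is a tiny Python sketch of the tetration function (purely illustrative, not part of the paper); even {}^{5}2 is already far beyond anything a merely exponential-time machine could write down.

```python
# Purely illustrative sketch (not from the paper): tetration and how fast
# {}^{n}2 grows. Python's arbitrary-precision integers handle the small cases.

def tetration(b, n):
    """Return {}^n b, a tower of n copies of b (with {}^0 b = 1 by convention)."""
    result = 1
    for _ in range(n):
        result = b ** result
    return result

print(tetration(2, 1))            # 2
print(tetration(2, 2))            # 2^2 = 4
print(tetration(2, 3))            # 2^4 = 16
print(tetration(2, 4))            # 2^16 = 65536
print(len(str(tetration(2, 5))))  # 2^65536 has 19,729 decimal digits
```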

Well, the main result of our paper is the following funny theorem.

Theorem. The class of languages decided by our variant of P systems with active membranes is exactly PTETRA.

The proof works like this:

  • The upper bound is proved by a few calculations showing that our P systems can never use more than tetrational space, because the process of membrane division (which generates lots of new membranes) must stop after a finite number of steps.
  • To prove the lower bound, we show that our P systems can simulate all tetrational-space counter machines (a minimal sketch of the model follows this list), and hence tetrational-space Turing machines.
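
For readers who haven’t met the model, here is a minimal counter machine interpreter in Python. This is my own illustrative sketch of what a counter machine is (a finite program of increments, decrements and jump-if-zero tests over a few counters), not the simulation constructed in the paper.

```python
# Illustrative only: a minimal interpreter for a counter (Minsky) machine,
# the abstract model mentioned above. Not the paper's construction.

def run_counter_machine(program, counters, max_steps=10_000):
    """program: list of instructions, each one of
         ('inc', c)        -- increment counter c
         ('dec', c)        -- decrement counter c (no effect if already 0)
         ('jz', c, target) -- jump to instruction `target` if counter c is 0
         ('halt',)
       counters: dict mapping counter names to initial non-negative values."""
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == 'inc':
            counters[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':
            counters[op[1]] = max(0, counters[op[1]] - 1)
            pc += 1
        elif op[0] == 'jz':
            pc = op[2] if counters[op[1]] == 0 else pc + 1
        elif op[0] == 'halt':
            return counters
    raise RuntimeError("step limit reached")

# Example: transfer counter 'a' into counter 'b' (b := a + b, a := 0).
program = [
    ('jz', 'a', 4),   # 0: if a == 0, we are done
    ('dec', 'a'),     # 1: a -= 1
    ('inc', 'b'),     # 2: b += 1
    ('jz', 'z', 0),   # 3: unconditional jump back ('z' stays 0)
    ('halt',),        # 4
]
print(run_counter_machine(program, {'a': 3, 'b': 2, 'z': 0}))  # {'a': 0, 'b': 5, 'z': 0}
```

The example program transfers one counter into another, which is the flavour of elementary step any simulation has to reproduce.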

The result is somewhat unexpected, as I was under the impression that all we could do with these P systems was EXPSPACE, and indeed I wasted a few days trying to prove that. Fortunately, reality turned out to be much more interesting.

A relatively serious proof that P ≠ NP?

Note: I’ve stopped updating this post, now that I’ve collected quite a lot of comments. For the latest news, I suggest checking my tweets and, in particular, the Polymath wiki page (edited by more people than just yours truly). Maybe I’ll have another post in the next few days, if something shocking happens.

The news seems to have originated from a post on Greg Baker’s blog. The author of the claimed proof of P ≠ NP is Vinay Deolalikar, principal research scientist at HP Labs (though “[t]his work was pursued independently of [his] duties as a HP Labs researcher, and without the knowledge of others”). Judging from his past publication history, we can’t afford to immediately dismiss the author as a crank this time.

It seems that a preliminary version of the paper, dated 6 August (note: there’s a new version with “minor updates” here), was circulated by Deolalikar to “several leading researchers in various areas”, rather amusingly in both “10pt and 12pt fonts” (apparently, the proof does not relativise to 11pt). According to Baker, Stephen Cook commented “This appears to be a relatively serious claim to have solved P vs NP”. The story started spreading when someone uploaded the paper to Scribd without the author’s knowledge.

The paper is very long (107 pages in the 12pt version, up from 103 and then 104 pages in earlier revisions) and involves “several fields spanning logic, statistics, graphical models, random ensembles, and statistical physics”. So, before engaging in a serious reading, I’m going to wait for someone more qualified than me to make a preliminary evaluation. In particular, the word “physics” sounds pretty scary to me in this context.

What do computer scientists and related people on the web think about it? Here is a list of people who commented on Deolalikar’s paper (until 10 August).

  • Richard Lipton thinks this is a “well written paper, by a serious researcher” and “who knows—perhaps this is the solution we have all been waiting for”.
  • Lance Fortnow is laconically cautious: “Much ado about an unverified proof”.
  • Dave Bacon thinks “this a fairly serious attack by a serious researcher”.
  • Scott Aaronson is willing to supplement the million dollar Clay Millennium prize by $200,000 if the proof turns out to be correct, though he remarks that “even if this attempt fails, it seems to introduce some thought-provoking new ideas”.
  • Daniel Lemire doesn’t comment directly on the contents of the paper, but on the reasons why Deolalikar gets so much attention: a well-written paper, and enough reputation to be considered a peer.
  • Suresh Venkatasubramanian seems to think that understanding this paper does indeed require knowledge of many fields. He’s also proposing a collaborative effort to identify any issue with the current version of the proof.
  • William Gasarch would bet that Deolalikar “proved something very interesting, but not P ≠ NP”: we just haven’t made enough progress on that problem yet.
  • Bruce Schneier also bets that the proof is flawed, despite the author being a “respected researcher”.
  • Ryan Williams is quite sceptical (his technical comments here, here, here and here).
  • Richard Lipton and Ken Regan highlight some technical issues with the proof.
  • In the mean time, Geomblog reports that the analysis of Deolalikar’s proof moved to a new wiki, in collaboration with the Polymath project.
  • Terence Tao says “[t]his is a serious effort, but one which sits somewhat orthogonally to the existing work on this problem”, so a sensible evaluation might require quite some time.
  • András Salamon is also “sceptical”, as “the technique used is unlikely to work to prove the desired result”. However, “the paper is definitely worth reading” anyway.
  • More detailed comments by Ryan Williams on the Polymath wiki.
  • In another post, Dave Bacon comments on the reasons behind the interest drawn by the paper (it’s very broad in scope and contains a lot of review). And, by the way, he and his colleagues are sceptical too.
  • Charanjit Jutla thinks there’s a “huge (unfixable) gap”.
  • As of 10 August, 22:58 UTC there isn’t any mention of the P ≠ NP paper on Deolalikar’s home page anymore (though the paper itself is still available in its third version).

The proof also made it to Gerhard Woeginger’s P-versus-NP page and, somehow, to BitTorrent (?). Nature news has an article on this story: Million-dollar problem cracked?.

On a side note, I’m genuinely surprised by the amount of buzz on Twitter generated by this story (mostly deriving from Slashdot and Hacker News, I think). Independently of the actual correctness of the proof, I believe this is a good thing, as it suggests that there’s plenty of public interest in the most important open problem in computer science.

Karl Popper and Turing’s thesis

Those of you who have been following me on Twitter may have noticed that, lately, I seem to be tormented by scientifico-philosophical questions related to Turing machines and Turing’s thesis.

Indeed I am, because I’ve chosen this as the subject of an exam paper, a decision that I’m beginning to regret: the matter is so deep and fascinating, and I’m so little qualified to discuss it, that I’m spending an awful lot of time perusing unfamiliar literature without actually writing anything. One could spend a lifetime just familiarising oneself with the amazing history of computability, even limiting ourselves to the first half of the 20th century, and we haven’t even begun with the philosophy of science yet!

I decided to directly attack the big ugly question: consider (one particular version of) Turing’s thesis, the claim that every arithmetical function computable by a human being, provided with enough ink and paper, can also be computed by one of Turing’s machines. Is this a scientific statement?

The problem of establishing whether something counts as science or not is known as the demarcation problem. Having, unfortunately, no time to get a philosophy degree at the moment, I chose a particular point of view, that of Karl Popper (the philosopher of critical rationalism), one of the main figures in the course I attended. Popper says that what makes science science is not the verifiability of our claims, which is an inevitably hopeless task, but their falsifiability.

Black Swan

Popper rejects induction as a valid inference principle in science, essentially for purely logical reasons. Having seen enough white swans doesn’t allow us to infer that all swans must be white, as somewhere else a black one might be hiding and mocking us. However, this must not stop us from making the claim that all swans are white: this is a perfectly reasonable hypothesis, only we should not deceive ourselves and pretend that our claim is somehow justified by our previous observations.

What makes the white swan hypothesis a good hypothesis is that it can be falsified. Tomorrow, a friend of ours could come to us and produce a black swan: this falsifies our claim, and we should (in principle) leave it behind and advance science by formulating another.

Popper says: that’s what science is made of! Formulate bold statements, using whatever means you find convenient: observations, deduction from metaphysical principles, imagination; it doesn’t really matter. And the bolder you are, the better: a more falsifiable statement is more informative, as it excludes a lot of possible worlds. Then you must fiercely attack your theory, by making lots and lots of empirical observations, with the goal not of proving it, but of making it fail.

And the testing never ends. It’s only a matter of convention to stop trying to falsify our theory for a moment, and do something else. Because we can always resume our experiments later, when doubts start troubling us again. In his Logik der Forschung (translated as The Logic of Scientific Discovery) Popper makes his point in a striking way.

The game of science is, in principle, without end. He who decides one day that scientific statements do not call for any further test, and that they can be regarded as finally verified, retires from the game.

Subscribing to Popper’s view, I’ll make my own bold (philosophical) claim, i.e., that Turing’s thesis is a scientific statement. Details in the next episodes.

In the mean time, here is a nice short introduction to Popper’s philosophy of science: Popper’s account of scientific method by J. A. Passmore.