Nash beats Gödel: On the history of complexity and cryptography

Posted 17 February 2012 by Antonio E. Porreca
Categories: Complexity, Cryptography, History


We know that Kurt Gödel, unhappy with having only completeness and incompleteness theorems named after him, also essentially invented the P vs NP question in a 1956 letter to John von Neumann. Unfortunately, there were no blogs back then, so we had to wait until the 1980s to read it, and Cook, Karp & co. had to reinvent the question from scratch.

What we didn’t know until a few days ago (I personally discovered it today via Aaron Roth) is that in 1955, one year before Gödel’s letter, John Nash also had something to say on the topic, in a few letters to NSA. Gödel and Nash independently made strikingly similar remarks, starting from completely different points of view. While Gödel’s perspective was that of a logician (his letter was about the length of proofs in first-order logic), Nash was interested in cryptography, and wrote about the security of cryptographic systems.

NSA’s press release about the declassification of Nash’s letters was apparently made on 27 January 2012; here is a PDF containing Nash’s letters and the replies he got from NSA. I’ll only focus on the second letter here, leaving a cryptanalysis of the system he proposed in the third one to someone more knowledgeable than me on the topic.

I assume Nash was familiar with Shannon’s work Communication theory of secrecy systems (1949), so he already knew that the only mathematically unbreakable cipher is the one-time pad. Hence, in order to discuss practical encryption systems, he invented the principles of modern cryptography about 20 years in advance:

If [the computation required in order to recover the key from the ciphertext], although possible in principle, were sufficiently long at best then the process could still be secure in a practical sense.

Here is some computational complexity, 8 years before Hartmanis and Stearns:

The most direct computation procedure would be for the enemy to try all 2^r possible keys, one by one. Obviously this is easily made impractical for the enemy by simply choosing r large enough.

Apparently Nash already knew quite well (“obviously”, “easily”) that exponential time is too much.

And here is how one can classify cryptosystems (but also general computation problems) according to him. I think you can read essentially the same paragraph in all modern complexity theory and cryptography books.

So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably a relatively small power of r, ar^2 or ar^3, as in substitution ciphers.
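
To get a concrete feeling for the gap Nash is describing, here is a back-of-the-envelope comparison of my own (not anything from the letters), written as a small Python sketch; the rate of 10^9 key trials per second is an arbitrary assumption:

    # Nash's two kinds of "computation length": a small power of the key
    # length r versus exponential in r. The assumed attacker speed of 10**9
    # key trials per second is arbitrary, chosen only for illustration.

    OPS_PER_SECOND = 10**9
    SECONDS_PER_YEAR = 3600 * 24 * 365

    for r in (32, 64, 128, 256):
        poly_years = r**3 / (OPS_PER_SECOND * SECONDS_PER_YEAR)  # an "ar^3" cipher
        expo_years = 2**r / (OPS_PER_SECOND * SECONDS_PER_YEAR)  # trying all 2^r keys
        print(f"r = {r:3d}: r^3 search = {poly_years:.1e} years, "
              f"2^r search = {expo_years:.1e} years")

Already at r = 128 the polynomial search is over in a fraction of a second, while exhaustive key search would take on the order of 10^22 years.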

Nash also conjectures that many problems cannot be solved in polynomial time. In hindsight, there is some naivety here about the design of good ciphers, but this is still very interesting:

Now my general conjecture is as follows: For almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering, the mean key computation length increases exponentially with the length of the key, or in other words, with the information content of the key.

The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable. As ciphers become more sophisticated the game of cipher breaking by skilled teams, etc., should become a thing of the past.

All in all, this seems to me a truly magnificent historical document, possibly on par with the aforementioned letter by Gödel. I’d really like to know what people working in cryptography think about this.

These “Lost Letter” discoveries also make me wonder what further treasures may be hidden in NSA’s cardboard boxes, or in private correspondence collections. What if there exists a “Fermat’s Lost (Last?) Letter” along the lines of “Here’s the proof that didn’t fit in the margin”, or a “Russell’s Lost Letter” that he sent to Whitehead to tell him “I found a proposition that can’t be proved nor refuted in Principia Mathematica”?

Imagine a world without free knowledge

Posted 18 January 2012 by Antonio E. Porreca
Categories: Politics


Learn more.

Do waterfalls play chess? and other stories

Posted 13 August 2011 by Antonio E. Porreca
Categories: Complexity, Computability, Philosophy


I finally took the time to read Scott Aaronson’s new paper Why philosophers should care about computational complexity (there’s a post about it with lots of comments on his blog).

Summary: Aaronson, knowing that I’m writing my thesis at the moment, took the liberty of preparing an introduction for me. Thanks! I’m going to include it verbatim. (Just kidding of course, though I’m actually going to cite this paper extensively.)

Red Queen

The early work on computability had a lot of philosophical significance; actually, we can say that computability was born to answer a philosophical question, Hilbert’s Entscheidungsproblem. We all know that the answer turned out to be negative (or do we? See Section 3.1). Apparently computational complexity theory, the theory of computing under limited resources, didn’t turn out to be so popular among philosophers. That’s obviously a problem, and this paper tries to do something about it.

After a brief introduction to complexity theory (Section 2), Aaronson turns his attention to one of the main cornerstones of this field, which is also one of the points that are usually criticised: the relevance of polynomial time, as opposed to exponential time. Here he argues that this distinction is at least as interesting as the distinction between computable and uncomputable. Section 3.3 contains an interesting question that can be answered using a complexity-theoretic argument: why would we call 2^43112609 − 1 (together with a proof of its primality) a “known” prime, while “the first prime larger than 2^43112609 − 1” feels somehow “unknown”?

Section 4 is about the Turing test. This is the first time I’ve seen in print what Aaronson calls the “lookup-table argument” (though he cites Ned Block and others for it): namely, that there’s no doubt whatsoever that an abstract program P able to pass the Turing test does actually exist (you just need a huge but finite table of replies dependent on the previous history of the conversation). The only way to claim the impossibility of a machine passing the Turing test, apart from metaphysical arguments, is to address concerns such as the efficiency of P or the availability of enough space in the universe to store it. That is, complexity-theoretic questions.
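
Just to make the lookup-table idea concrete, here is a toy sketch of my own in Python; the two-entry table is a hypothetical stand-in for the real one, which would cover every conversation up to some maximum length and be astronomically large, but still finite:

    # The abstract program P is nothing but a finite table mapping the
    # conversation history so far to the next reply.

    LOOKUP_TABLE = {
        (): "Hello! How are you?",
        ("Hello! How are you?", "Fine, thanks. Are you a machine?"):
            "That's exactly what a human would ask.",
    }

    def reply(history):
        # Look up the whole history; fall back to a canned answer if it's missing.
        return LOOKUP_TABLE.get(tuple(history), "Interesting. Tell me more.")

    print(reply(()))   # opening line
    print(reply(("Hello! How are you?", "Fine, thanks. Are you a machine?")))

The point is not that this is a good conversationalist, but that such a table exists in principle; the obstacle to building it is purely one of size and efficiency.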

Section 5, possibly the most interesting one, addresses the problem of logical omniscience. It is usually assumed that knowledge is closed under deduction rules; for instance, if I know A, and I know B, then I certainly know A ∧ B. But consider this: while I know enough basic axioms of maths, I surely don’t know that Fermat’s last theorem holds (except by trusting the mathematical community, which I usually do), even though it’s a mere deductive consequence of said axioms. We can argue that human beings do not possess the computational resources needed to actually achieve omniscience.

Section 6 is about a “waterfall argument” against computationalism. The idea is that the meaning of what a supposed computer actually computes is always imposed by someone looking at it, that is, it’s always relative to some external observer. I freely admit I am (was?) a fan of this argument, though I don’t necessarily see it as an attack on computationalism (I used to call this “computational relativism”, a term whose invention I claim, since Google returns zero hits for it :-)). According to some proponents, this leads to some “strange” consequences.

Niagara Falls

For instance, assume that we encode chess positions in the physical states of a waterfall, then take a look at some “final” state of the waterfall, and once again interpret that as a chess position. Can the waterfall be said to play chess? Aaronson argues that it is not so, unless the encoding (which is just a reduction from chess to computing the state of the waterfall) can be computed by a procedure requiring less resources than those needed to actually compute the state of the waterfall. But then a question emerges: does DNA compute Hamilton paths?

Section 7 describes how computational complexity can help to define what inductive inference is about, including Occam’s razor and the “new riddle of induction”.

Then, in Section 8, Aaronson argues that complexity theory might inspire some debate about the interpretation of quantum physics, particularly about the many-worlds interpretation.

In Section 9 new notions of proof, quite distinct from the usual notion of “formal” and “informal” proofs, are described. These are all based on complexity-theoretic and cryptographic results: interactive proof systems, zero-knowledge proofs, probabilistically checkable proofs.

Section 10 is about the difference between time and space. This is usually stated as “space can be reused, time cannot”. What if we allow time travel (i.e., closed timelike curves)? The assumption that P ≠ NP might suggest an argument for the impossibility of time travel, even assuming the validity of quantum physics.

Section 11 is about economics and game theory; here complexity theory can be useful to describe the behaviour of agents under bounded rationality assumptions.

Finally, in Section 12, Aaronson concludes that lots of philosophical problems seem to have interesting complexity-theoretic aspects. Several questions remain open, and the most important of those is still “If P ≠ NP, then how have humans managed to make such enormous mathematical progress, even in the face of the general intractability of theorem-proving?”.

I really hope that Aaronson’s paper spurs a lot of philosophical discussion involving and concerning complexity theory; I too believe there’s much to write about that.

Time is unpredictable

Posted 19 February 2011 by Antonio E. Porreca
Categories: Complexity, Computability, Favourite theorems


This post is inspired by the question Are runtime bounds in P decidable? asked by John Sidles on the CSTheory website, and the answer given by Emanuele Viola.

Is there an automatic procedure to determine whether a given Turing machine, known to be halting, operates within time bound O(f) (assuming f is a computable function)?

Predictably, the answer turns out to be negative.

Let’s start by formalising the problem. Assume M is the Turing machine whose runtime we’re interested in, and F is another TM computing the time bound f; then

L = {(M, F) : M halts within O(f) steps}.

Also let H be the halting problem:

H = {(M’, x) : M’ halts on input x}.

We can now define the function R(M’, x) = (M, F), where F computes n^2 and M, on input y, behaves as follows (see the code sketch after the list):

  • Simulate M’ on x for n = |y| steps.
  • If M’ has already halted, then loop for n^2 steps.
  • Otherwise, loop for n^3 steps.
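
Here is a minimal Python sketch of the construction, entirely my own rendering (it is not code from the original question or answer); the machine M’ is modelled as a generator that yields once per simulated step, a hypothetical stand-in for a real Turing-machine simulator:

    def reduction(m_prime, x):
        """The map R: build the pair (M, F) from the instance (M', x)."""

        def F(n):
            # F computes the candidate time bound f(n) = n^2.
            return n * n

        def M(y):
            n = len(y)
            steps = m_prime(x)         # step-by-step simulation of M' on input x
            halted = False
            for _ in range(n):         # simulate M' on x for n = |y| steps
                try:
                    next(steps)
                except StopIteration:  # M' halted within n steps
                    halted = True
                    break
            # Pad the running time: n^2 extra steps if M' already halted,
            # n^3 extra steps otherwise.
            for _ in range(n**2 if halted else n**3):
                pass

        return M, F

    # Toy usage: an "M'" that halts after exactly five steps, whatever its input.
    def halts_after_five(x):
        for _ in range(5):
            yield

    M, F = reduction(halts_after_five, "some input")
    M("yyyyyyyy")  # |y| = 8 > 5, so the simulation halts and M pads for 8^2 steps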

We’re finally ready to prove our undecidability result.

Theorem. R is a many-one reduction from H to L.

Proof. Clearly R is a computable function, so we just need to show that

(M’, x) ∈ H ⇔ (M, F) ∈ L.

If (M’, x) ∈ H, that is, if M’ halts on input x, then it does so in k steps (for some k ∈ ℕ). Hence M runs in O(n^2) time (notice that it runs in n^3 time for |y| < k, but it’s only the asymptotic behaviour that matters for us). Thus (M, F) ∈ L.

On the other hand, if M’ does not halt on x, then M never completes its simulation, and its runtime is Θ(n^3), which is not O(n^2). Thus (M, F) ∉ L. □

Remark. We’ve actually proved a stronger statement, i.e., that the set of Turing machines running in O(n^2) time is undecidable, even if we restrict the domain to machines halting in polynomial time. Similar undecidability results hold for an infinite class of time bounds.

Merry Christmath!

Posted 4 January 2011 by Antonio E. Porreca
Categories: Maths


On the second day of Christmas my true love gave to me…
The only even prime and the absolute value of e to the iπ.

A short post, just because I want this on my blog: a beautiful video and song by Vi Hart (take a look at the rest of her website, it’s amazing). Epsilon greater-than to you too.

Also, thanks to Dave Richeson for sharing it in the first place.

Undecidability in terms of complexity

Posted 13 December 2010 by Antonio E. Porreca
Categories: Complexity, Computability, Logic


In his classic book Computational Complexity, Papadimitriou writes (page 59) that

Undecidability is in some sense the most lethal form of complexity.

I’ve just come across a paper with a very interesting remark along the same lines: Making proofs without Modus Ponens: An introduction to the combinatorics and complexity of cut elimination by A. Carbone and S. Semmes, published in the Bulletin of the American Mathematical Society. This paper is about the length of proofs (a subject I’ve already touched upon here and here) without cut elimination.

On page 113, the authors write

By contrast [with propositional logic] in predicate logic one typically faces issues of algorithmic decidability or undecidability. That is, whether there is an algorithm at all that always gives the right answer, never mind how long it takes. The problem of determining whether a formula in predicate logic is a tautology is algorithmically undecidable. One can think of this as a matter of complexity, as follows. The tautologies in predicate logic of length at most n form a finite set for which one can choose a finite set of proofs. Let f(n) denote the maximum of the lengths of the shortest proofs of tautologies of length ≤ n. The fact that there is no algorithm for determining whether or not a formula is a tautology means that f(n) grows very fast, faster than any recursive function.

Essentially, this means that we can’t design an algorithm to solve an undecidable problem because the search space is too large: any bound on it must grow faster than every computable function. Indeed, if the length of the shortest proof of any tautology of length n were bounded by a computable function g(n), we could decide tautology-hood by exhaustively searching all proofs of length at most g(n). Compare this with the search space of the “witnesses” for an NP-complete problem: its size is exponential, which is too large to be explored efficiently (at least, as far as we know at the moment), but still small enough to make the problem decidable.
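
To spell the argument out in code: if f were dominated by some computable function g, the brute-force search below would decide whether a predicate-logic formula is a tautology, contradicting undecidability. This is only a sketch of my own; g, is_valid_proof and the proof alphabet are hypothetical placeholders.

    from itertools import product

    ALPHABET = "abxy()&|~>AE."  # hypothetical finite alphabet for writing proofs

    def decide_tautology(formula, g, is_valid_proof):
        """Hypothetical decision procedure for predicate-logic tautologies.

        It checks every candidate proof of length at most g(len(formula)).
        This would be a correct algorithm if the shortest proof of any
        tautology of length n had length at most g(n) for a computable g,
        which is exactly what the observation above rules out."""
        bound = g(len(formula))
        for length in range(1, bound + 1):
            for symbols in product(ALPHABET, repeat=length):
                if is_valid_proof("".join(symbols), formula):
                    return True   # found a proof: the formula is a tautology
        return False              # no proof up to the bound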

Position paper: Computing with LEGO

Posted 10 December 2010 by Antonio E. Porreca
Categories: History, Unconventional computing


There is a LEGO Turing machine, constructed at Aarhus University (see here for more information):

There is also a LEGO Babbage difference engine built by Andrew Carol:

Now we also have an Antikythera mechanism (an ancient Greek computer for predicting eclipses) made of LEGO, again by Andrew Carol:

(Thanks to CompSciFact on Twitter for the last one.)

I hereby propose that LEGO brick constructions are to be considered serious unconventional computing devices, and should be investigated by theoretical computer scientists in order to establish their computational and complexity theoretic properties. (What about LEGO bricks as a non-uniform computing model?)

Update (22 June 2012)

Here is another Turing machine built at Centrum Wiskunde & Informatica (Amsterdam) “in honor of the Alan Turing year 2012”:

And what about circuits? Here is how to construct mechanical (“push-pull”) logic gates. For instance, this is an AND gate:

