Archive for the ‘History’ category

Nash beats Gödel: On the history of complexity and cryptography

17 February 2012

We know that Kurt Gödel, unhappy with having only completeness and incompleteness theorems named after him, also essentially invented the P vs NP question in a 1956 letter to John von Neumann. Unfortunately, there were no blogs back then, so we had to wait until the 1980s to read it, and Cook, Karp & co. had to reinvent the question from scratch.

What we didn’t know until a few days ago (I personally discovered it today via Aaron Roth) is that in 1955, one year before Gödel’s letter, John Nash also had something to say on the topic, in a few letters to the NSA. Gödel and Nash independently made strikingly similar remarks, starting from completely different points of view. While Gödel’s perspective was that of a logician (his letter was about the length of proofs in first-order logic), Nash was interested in cryptography, and wrote about the security of cryptographic systems.

The NSA’s press release about the declassification of Nash’s letters was apparently made on 27 January 2012; here is a PDF containing Nash’s letters and the replies he got from the NSA. I’ll only focus on the second letter here, leaving a cryptanalysis of the system he proposed in the third one to someone more knowledgeable than me on the topic.

I assume Nash was familiar with Shannon’s work Communication theory of secrecy systems (1949), so he already knew that the only mathematically unbreakable cipher is the one-time pad. Hence, in order to discuss practical encryption systems, he invented the principles of modern cryptography about 20 years in advance:

If [the computation required in order to recover the key from the ciphertext], although possible in principle, were sufficiently long at best then the process could still be secure in a practical sense.

Here is some computational complexity, 8 years before Hartmanis and Stearns:

The most direct computation procedure would be for the enemy to try all 2^r possible keys, one by one. Obviously this is easily made impractical for the enemy by simply choosing r large enough.

Apparently Nash already knew quite well (“obviously”, “easily”) that exponential time is too much.
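
To get a feel for how quickly “choosing r large enough” pays off, here is a minimal Python sketch of the exhaustive search Nash describes. The toy XOR cipher and all the names in it are my own illustration (they have nothing to do with the system in his third letter); the only point is that the attacker’s work doubles with every extra key bit.

    from itertools import product

    # Toy illustration (mine, not Nash's system): the "cipher" XORs each byte of the
    # message with a repeating r-bit key, so the only obstacle for the attacker is
    # the size of the key space, 2^r.  (r is assumed to be a multiple of 8 here.)

    def encrypt(message: bytes, key_bits: tuple) -> bytes:
        key_bytes = bytes(int("".join(map(str, key_bits[i:i + 8])), 2)
                          for i in range(0, len(key_bits), 8))
        return bytes(b ^ key_bytes[i % len(key_bytes)] for i, b in enumerate(message))

    def brute_force(ciphertext: bytes, known_plaintext: bytes, r: int):
        """Nash's 'most direct computation procedure': try all 2^r keys, one by one."""
        for candidate in product((0, 1), repeat=r):          # 2^r iterations
            # For an XOR cipher, decrypting is the same as re-encrypting.
            if encrypt(ciphertext, candidate) == known_plaintext:
                return candidate
        return None

    if __name__ == "__main__":
        r = 16                                # 65,536 keys: easy.  r = 128: hopeless.
        key = tuple(int(b) for b in format(0xBEEF, "016b"))
        ciphertext = encrypt(b"attack at dawn", key)
        print(brute_force(ciphertext, b"attack at dawn", r))

With r = 16 the search is over almost immediately; each additional key bit doubles the work, which is exactly the lever behind Nash’s “easily made impractical”.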

And here is how one can classify cryptosystems (but also general computation problems) according to him. I think you can read essentially the same paragraph in all modern complexity theory and cryptography books.

So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably a relatively small power of r, ar^2 or ar^3, as in substitution ciphers.
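
To put numbers on the gap Nash is pointing at (my own back-of-the-envelope arithmetic, not anything in the letter), take a key of length r = 128:

    r^3 = 128^3 \approx 2.1 \times 10^{6}
    \qquad \text{versus} \qquad
    2^r = 2^{128} \approx 3.4 \times 10^{38}.

A cipher that falls to an r^3 attack is broken in a blink; one that genuinely requires 2^r work is beyond any conceivable amount of computation.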

Nash also conjectures that many problems cannot be solved in polynomial time. In hindsight, there is some naivety here about the design of good ciphers, but this is still very interesting:

Now my general conjecture is as follows: For almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering, the mean key computation length increases exponentially with the length of the key, or in other words, with the information content of the key.

The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable. As ciphers became more sophisticated the game of cipher breaking by skilled teams, etc., should become a thing of the past.

All in all, this seems to me a truly magnificent historical document, possibly on par with the aforementioned letter by Gödel. I’d really like to know what people working in cryptography think about this.

These “Lost Letter” discoveries also make me wonder what further treasures may be hidden in the NSA’s cardboard boxes, or in private correspondence collections. What if there exists a “Fermat’s Lost (Last?) Letter” along the lines of “Here’s the proof that didn’t fit in the margin”, or a “Russell’s Lost Letter” that he sent to Whitehead to tell him “I found a proposition that can neither be proved nor refuted in Principia Mathematica”?

Position paper: Computing with LEGO

10 December 2010

There is a LEGO Turing machine, constructed at Aarhus University (see here for more information):

There is also a LEGO Babbage difference engine built by Andrew Carol:

Now we also have an Antikythera mechanism (an ancient Greek computer for predicting eclipses) made of LEGO, again by Andrew Carol:

(Thanks to CompSciFact on Twitter for the last one.)

I hereby propose that LEGO brick constructions are to be considered serious unconventional computing devices, and should be investigated by theoretical computer scientists in order to establish their computational and complexity theoretic properties. (What about LEGO bricks as a non-uniform computing model?)

Update (22 June 2012)

Here is another LEGO Turing machine, built at Centrum Wiskunde & Informatica (Amsterdam) “in honor of the Alan Turing year 2012”:

And what about circuits? Here is how to construct mechanical (“push-pull”) logic gates. For instance, this is an AND gate:

On a philosophical quest

2 December 2010

This was originally posted as an answer to the question Why go to theoretical computer science/research? on the Theoretical Computer Science Stack Exchange website.

One of the main reasons why I find the theory of computation (“my” branch of theoretical computer science) fascinating and worth studying is the following one: it provides us with a way to investigate some deep (and sometimes puzzling) philosophical questions.

One of the founders of the theory of computation, Alan Turing, tried to pin down the meaning of “computing a function” for a human being equipped with a piece of paper, by giving a mathematical description of the process. I’m not the only one to think he was extremely successful, and Turing machines proved to be an accurate model of many other computing processes.

Now that we possess a class of mathematical objects describing computations, we can actually prove theorems about them, thus trying to uncover what can be computed, and how it can be computed. It immediately turned out that lots of perfectly legitimate functions cannot be computed at all, and that they can be classified according to a degree of uncomputability (some functions are just “more uncomputable” than others).
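
The canonical example is Turing’s halting function: H(P, x) = 1 if program P halts on input x and 0 otherwise, and no Turing machine computes it. The classification can be made precise via Turing reducibility; iterating the halting problem (the Turing jump) already gives a strictly increasing chain of degrees of uncomputability:

    \emptyset \;<_T\; \emptyset' \;<_T\; \emptyset'' \;<_T\; \cdots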

Some other guys, the first ones usually identified with Juris Hartmanis and Richard E. Stearns, tried to describe mathematically what it means for a function (resp., a problem) to be hard or easy to compute (resp., to solve). There are several complexity measures according to which the hardness of problems can be described; the most common one is just how much time we need to solve them. Alan Cobham and Jack Edmonds were quite successful in identifying a reasonable notion of “efficient computation”.

Within the computational complexity framework, we can now prove some results that are consistent with our intuitive notion of computation. My favourite example is the time hierarchy theorem: if we are given more time to compute, we can solve harder problems.
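
For the record, here is the usual modern form of the deterministic version of the theorem: if f is time-constructible and g(n) \log g(n) = o(f(n)), then

    \mathrm{DTIME}\bigl(g(n)\bigr) \subsetneq \mathrm{DTIME}\bigl(f(n)\bigr),

so, for example, DTIME(n^2) is strictly contained in DTIME(n^3): give a machine strictly more time and it can solve strictly more problems.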

The central open problem of complexity theory, P vs NP, is just a formalisation of another philosophically significant question: is it really harder to solve a problem than to check if an alleged solution of it is indeed correct? I believe this question is worth asking, and answering, independently of its practical significance.
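
In symbols, using the standard verifier formulation:

    \mathrm{P} = \bigcup_{k \ge 1} \mathrm{DTIME}(n^k), \qquad
    \mathrm{NP} = \{\, L \mid \exists\ \text{polynomial-time } V \text{ and constant } c:\
                  x \in L \iff \exists w,\ |w| \le |x|^c,\ V(x, w) = 1 \,\},

and the open question is simply whether P = NP, i.e. whether every problem whose solutions can be checked efficiently can also be solved efficiently.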

The universal Turing machine

23 July 2010

What did Alan Turing have in mind when he conceived his universal computing machine?

One could speculate that his train of thought was like this:

  1. I can simulate any of my machines (there’s experimental evidence, and of course I defined them to work like people doing maths on a piece of paper).
  2. I’ve already formulated a computability thesis that says: if I can do it, then a computing machine also can.
  3. But then, there must be a universal computing machine! Let’s fill in the details…

Or maybe he came to that realisation in some completely different way. Possibly, directly from Gödel’s work?

Who knows, maybe there even exists something written by Turing himself on this subject.

Hilbert’s troubles

21 July 2010

While digging in the early computability literature, I found a precious and long forgotten account of the history of Hilbert’s problems. I’m proud to present my discovery to the scientific community.

*** david (daboss@uni-göttingen.de) has joined #logic
<david> plz, solve integer eqns and prove completness of teh arithmetics
<kurt> just 1 sec, brb
* kurt is away: proving some stuff
* kurt is back
<kurt> lol dave, taek a look at this: http://is.gd/dAo7d
<david> FFFUUUUUUUUUUUU
<david> at least solve teh entsk^Wentschi^W entscheidungsproblem...
<alonzo> cant do that, lambada calculus proves it
<kurt> oh stfu alonzo
<alan> my machiens are liek people and cant do it
<alonzo> alan: your right, and so am i (told ya)
<kurt> lol dave, epic fail!!1!
<alan> lmao
<david> wtf is wrong with you ppl??
<gottlob> i kno the feeling bro
*** david (daboss@uni-göttingen.de) has left #logic (pwnd)

After a few pages (omitted here) there’s a final remark on the subject:

<yuri> oh btw, cant solve integer eqns either

