Crypto-Gram

October 15, 2002

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Copyright (c) 2002 by Counterpane Internet Security, Inc.


In this issue:
      National Strategy to Secure Cyberspace
      More on AES Cryptanalysis
      Crypto-Gram Reprints
      The Doghouse: GreatEncryption
      News
      Counterpane News
      One-Time Pads
      Comments from Readers


National Strategy to Secure Cyberspace

On 18 September, the White House officially released its National Strategy to Secure Cyberspace. Well, it didn’t really release it on that date; versions had been leaking here and there for a while. And it really isn’t a national strategy; it’s just a draft for comment. But still, it’s something.

No, it isn’t. The week it was released I got all sorts of calls from reporters asking me what I thought of the report, whether the recommendations made sense, and why certain things were omitted. My primary reaction was: “Who cares? It doesn’t matter what the report says.”

For some reason, Richard Clarke continues to believe that he can increase cybersecurity in this country by asking nicely. This government has tried this sort of thing again and again, and it never works. This National Strategy document isn’t law, and it doesn’t contain any mandates to government agencies. It has lots of recommendations. It has all sorts of processes. It has yet another list of suggested best practices. It’s simply another document in my increasingly tall pile of recommendations to make everything better. (The Clinton Administration had theirs, the “National Plan for Information Systems Protection.” And both the GAO and the OMB have published cyber-strategy documents.) But plans, no matter how detailed and how accurate they are, don’t secure anything; action does.

And consensus doesn’t secure anything. Preliminary drafts of the plan included strong words about wireless insecurity, which were removed because the wireless industry didn’t want to look bad for not doing anything about it. Preliminary drafts included a suggestion that ISPs provide all their users with personal firewalls; that was taken out because ISPs didn’t want to look bad for not already doing something like that.

And so on. This is what you get with a PR document. You get lots of varying input from all sorts of special interests, and you end up with a document that offends no one because it demands nothing.

The worst part of it is that some of the people involved in writing the document were high-powered, sincere security practitioners. It must have been a hard wake-up call for them to learn how things work in Washington. You can tell that a lot of thought and effort went into this document, and the fact that it was gutted at the behest of special interests is shameful…but typical.

So now everyone gets to feel good about doing his or her part for security, and nothing changes.

Security is a commons. Like air and water and radio spectrum, any individual’s use of it affects us all. The way to prevent people from abusing a commons is to regulate it. Companies didn’t stop dumping toxic wastes into rivers because the government asked them nicely. Companies stopped because the government made it illegal to do so.

In his essay on the topic, Marcus Ranum pointed out that consensus doesn’t work in security design. Consensus security results in some good decisions, but mostly bad ones. By itself consensus isn’t harmful; it is the compromises that are almost always harmful, because the more parties you have in the discussion, the more interests there are that conflict with security. Consensus doesn’t work because the one crucial party in these negotiations—the attackers—aren’t sitting around the negotiating table with everyone else. “And the hackers don’t negotiate anyhow. In other words, it doesn’t matter if you achieve consensus…; whether it works or not is subject to a different set of rules, ones over which your wishes exercise zero control.”

If the U.S. government wants something done, they should pass a law. That’s what governments do. It’s like pollution; don’t mandate specific technologies, legislate results. Make companies liable for insecurities, and you’ll be surprised how quickly things get more secure. Leave the feel-good PR activities to the various industry trade organizations; that’s what they’re supposed to do.

The draft report:
<http://www.whitehouse.gov/pcipb/>

News articles:
<http://www.bangkokpost.com/021002_Database/…>
<http://www.infoworld.com/articles/hn/xml/02/09/18/…>
<http://www.computerworld.com/securitytopics/…>
<http://www.computerworld.com/governmenttopics/…>
<http://www.news.com.com/2102-1023-958545.html>

Marcus Ranum’s essay:
<http://www.tisc2002.com/newsletters/414.html>

Other essays:
<http://www.infowarrior.org/articles/2002-11.html>
<http://online.securityfocus.com/columnists/110>
<http://online.securityfocus.com/news/677>
<http://www.zdnet.com/anchordesk/stories/story/…>
<http://www.avolio.com/columns/…>


More on AES Cryptanalysis

I can say with certainty that no one knows for certain if XSL can break Rijndael or Serpent or anything else. Actually, I can say something stronger: no one has produced an actual demonstration of XSL breaking even a simplified version of Rijndael or Serpent or anything else. This makes a lot of people skeptical.

Demonstrations are important. When differential cryptanalysis finally broke the full 16-round DES, the authors did not demonstrate the attack. Even though the attack was faster than brute force, it was still too complicated to demonstrate practically. But the authors did demonstrate the attack against reduced-round variants of DES, and against other algorithms. The community believed that the attack worked because the techniques had been demonstrated multiple times and the theory behind them was well understood.

The XSL techniques have not been demonstrated yet. A number of respectable cryptographers, whose opinions I value highly, don’t think the techniques work. Don Coppersmith has published a note on the topic. And T. Moh has a Web page about this. (To be fair, T. Moh and Nicolas Courtois have an ongoing disagreement about another crypto-related topic. But while that certainly affects the motivations, it doesn’t necessarily invalidate the math.)

I know that several groups are working on the techniques, and if they work one of those groups should be able to demonstrate something, on something, soon. I’ll provide additional information when I learn of it.

Coppersmith’s comment:
<http://aes.nist.gov/aes/FMPro?…>
Sorry about the ridiculous link. The substance of the note is in the “Comments from Readers” section below, or here’s a shorter link.
<http://makeashorterlink.com/?K27C515E1>

Moh’s site:
<http://www.usdsi.com/aes.html>

My essay on XSL from last month:
<http://www.schneier.com/crypto-gram-0209.html#1>


Crypto-Gram Reprints

Crypto-Gram is currently in its fifth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. Here is a selection of articles that appeared in this calendar month in other years.

Cyberterrorism:
<http://www.schneier.com/crypto-gram-0110.html#1>

Dangers of Port 80:
<http://www.schneier.com/crypto-gram-0110.html#9>

Semantic Attacks:
<http://www.schneier.com/crypto-gram-0010.html#1>

NSA on Security:
<http://www.schneier.com/crypto-gram-0010.html#7>

So, You Want to be a Cryptographer:
<http://www.schneier.com/…>

Key Length and Security:
<http://www.schneier.com/…>

Steganography: Truths and Fictions:
<http://www.schneier.com/…>

Memo to the Amateur Cipher Designer:
<http://www.schneier.com/…>


The Doghouse: GreatEncryption

It’s got all the snake-oil warning signs: a novel encryption algorithm that isn’t discussed, an obvious ignorance of cryptography, a patent in progress, and a bogus contest. Sample sentences from the Web site: “Keys 2,000-4,000 characters long are recommended for key strength that is far greater than that of other software programs now sold.” And: “Software with a key strength of 109^4000 + 109^3999 + … 109^1.” Egads.

The funniest bit is when they claim that their encryption is fast, “encrypting about 5,000 plaintext characters/second on an average PC.” Assume the average PC is 500 MHz; that translates to about 100,000 clock cycles per byte (ASCII character) encrypted. AES encrypts at 20 clock cycles per byte; there are stream ciphers that are over twice as fast. That means AES is 5,000 times faster than GreatEncryption.
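
If you want to redo the arithmetic, here is a back-of-the-envelope sketch in Python. The 500 MHz clock speed and the 20-cycles-per-byte AES figure are simply the numbers quoted above; they are illustrative estimates, not benchmarks.

    # Rough check of the GreatEncryption speed claim, using the figures above.
    cpu_hz = 500_000_000                 # assumed "average PC": 500 MHz
    claimed_rate = 5_000                 # claimed throughput, characters per second
    cycles_per_byte = cpu_hz / claimed_rate
    print(cycles_per_byte)               # 100000.0 clock cycles per byte

    aes_cycles_per_byte = 20             # rough figure for AES cited above
    print(cycles_per_byte / aes_cycles_per_byte)   # 5000.0: AES is ~5,000 times faster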

The Web site says: “Permission to export Great Encryption to the rest of the world, except for terrorist states, is being sought.” If we’re lucky, they’ll get permission to export it ONLY to terrorist states.

<http://www.greatencryption.com/>


News

Good article on the myth of cyberterrorism:
<http://online.securityfocus.com/columnists/111>
And more silly hype:
<http://www.theregus.com/content/6/26414.html>

64-bit key brute-forced:
<http://slashdot.org/article.pl?sid=02/09/26/…>

Interesting Q&A with Whitfield Diffie, conducted by Richard Thieme:
<http://www.cisomagazine.com/2002/aug/qa.shtml>

Security vs. Open Society:
<http://www.osopinion.com/perl/story/19416.html>

Can Software be Certified?
<http://www.businessweek.com/technology/content/…>

This is about as pathetic as you can get. The Federal Trade Commission has decided that computer security needs a mascot, kind of like Smokey the Bear. So we now have Dewey the Turtle, who’s here to promote secure computing for everyone. “When you see the ping of death, duck and cover.”
<http://www.ftc.gov/bcp/conline/edcams/infosecurity/index.html>

A Russian hacker was sentenced to three years in prison in the United States for breaking U.S. computer crime laws. It’s an interesting story. He was in Russia at the time, and broke no laws in his own country. However, the U.S. prosecution broke Russian laws to collect evidence against him. The judge agreed with the FBI’s assertion that Russian law didn’t apply to them. Isn’t international jurisprudence fun?
<http://in.tech.yahoo.com/021005/137/1w2bq.html>

Secure software: will we ever see it?
<http://www.computerworld.com/securitytopics/…>

Insiders are the biggest computer security threat:
<http://www.pcworld.com/news/article/0,aid,105528,00.asp>


Counterpane News

It was an excellent quarter for Counterpane. Sales up 100% over last year, a bunch of new resellers, way more monitoring, that sort of thing. We’ll have a press release with the details real soon now.

Schneier is speaking at SMAU 2002 in Milan on 25 October:
<http://www.smau.it/smau2002/english/docs/flash.html>

Schneier is speaking at the Symposium on Privacy & Security in Zurich on 30 and 31 October:
<http://www.privacy-security.ch/english/programm/…>

Schneier is speaking at Comdex in Las Vegas on 18 November:
<http://www.comdex.com/fall/>


One-Time Pads

It’s a meme that never seems to go away. Every time I write about this cryptanalytic result, or the insecurity of that system, someone starts crowing about one-time pads. “Every other cryptographic algorithm is based on some assumption, and one-time pads are the only provably secure system,” they say. “They’re the only safe algorithm,” they say. “They’re the future,” they say.

Well, they’re wrong. And, step by step, I will explain why. (Parts of this essay are taken from my book “Secrets and Lies.”)

One-time pads are the simplest of all algorithms, and were invented early in the 20th century. The basic idea is that you have a pad of paper with a bunch of randomly chosen key letters on it, the same length as the message. You add one key letter to each plaintext letter, and never repeat the key letters. (That’s the “one-time” part.) For example, assume the message is IT and the pad letters are CM. You add I (9) to C (3) to get L (12), and T (20) to M (13) to get G (7), since 20 + 13 = 33, which is 7 mod 26. Then you burn the paper afterwards. The receiver reverses the process using his own pad, and then burns the key letters when he’s done. This system works with any alphabet, including a binary one.
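
For the curious, here is a minimal sketch of that letter arithmetic in Python. The function names and the pad generator are mine, not part of any real system; the A = 1 through Z = 26 numbering matches the example above.

    import secrets
    import string

    def letter_to_num(c: str) -> int:
        """A=1, B=2, ..., Z=26, matching the numbering in the example."""
        return ord(c) - ord("A") + 1

    def num_to_letter(n: int) -> str:
        """Map a number back to a letter, wrapping around mod 26."""
        return chr((n - 1) % 26 + ord("A"))

    def make_pad(length: int) -> str:
        """Key letters must be truly random, as long as the message, never reused."""
        return "".join(secrets.choice(string.ascii_uppercase) for _ in range(length))

    def one_time_pad(text: str, pad: str, decrypt: bool = False) -> str:
        """Add pad letters to encrypt; subtract them to decrypt."""
        sign = -1 if decrypt else 1
        return "".join(
            num_to_letter(letter_to_num(m) + sign * letter_to_num(k))
            for m, k in zip(text, pad)
        )

    # The worked example from the text: message IT, pad CM, ciphertext LG.
    assert one_time_pad("IT", "CM") == "LG"
    assert one_time_pad("LG", "CM", decrypt=True) == "IT"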

One-time pads are the only provably secure cryptosystem. Because the key is the same size as the plaintext, every possible plaintext is equally likely. With different keys, the ciphertext DKHS could decrypt to SELL, STOP, BLUE, or WFSH. With a normal algorithm, such as DES or AES or even RSA, you can tell which key is correct because only one key can produce a reasonable plaintext. (Formally, the message size needed is called the “unicity distance.” It’s about 19 ASCII bytes for an English message encrypted with a cipher with a 128-bit block. With a one-time pad, the unicity distance approaches infinity and it becomes impossible to recognize plaintext. This is the security proof.) Because a one-time pad’s key is the same size as the message, it’s impossible to tell when you have the correct decryption.
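
As a rough sanity check on that “19 ASCII bytes” figure, here is the textbook unicity-distance estimate in Python. The 128 bits of key entropy and the assumption that English carries only about 1.2 bits of real information per 8-bit ASCII character are my inputs, not numbers taken from the paragraph above.

    # Unicity distance ~= H(K) / D: how much ciphertext an attacker needs
    # before only one key yields plausible English plaintext.
    key_entropy = 128.0        # assumed key entropy, in bits
    redundancy = 8.0 - 1.2     # ~6.8 bits of redundancy per ASCII character

    print(round(key_entropy / redundancy))   # ~19 characters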

This is the only provably secure cryptosystem we know of.

It’s also pretty much useless. Because the key has to be as long as the message, it doesn’t solve the security problem. One way to look at encryption is that it takes very long secrets—the message—and turns them into very short secrets: the key. With a one-time pad, you haven’t shrunk the secret any. It’s just as hard to courier the pad to the recipient as it is to courier the message itself. Modern cryptography encrypts large things—Internet connections, digital audio and video, telephone conversations, etc.—and dealing with one-time pads for those applications is just impracticable.

If you think you know how to do key management, but you don’t have much confidence in your ability to design good ciphers, a one-time pad might make sense. We’re in precisely the opposite situation, however: we have a hard time getting the key management right (partly because most applications won’t really support couriers with briefcases handcuffed to their wrists, Marines with rifles guarding the room with the encryption equipment in it, or thermite charges available for physically destroying storage media before the bad guys get past the Marines with rifles guarding the encryption equipment), but we’re pretty confident in our ability to build reasonably strong algorithms. It’s just not the weak point in our systems.

What a one-time pad system does is take a difficult message security problem—that’s why you need encryption in the first place—and turn it into a just-as-difficult key distribution problem. It’s a “solution” that doesn’t scale well, doesn’t lend itself to mass-market distribution, is singularly ill-suited to computer networks, and just plain doesn’t work.

The exceptions to this are generally in specialized situations where simple key management is a solvable problem and the security requirement is timeshifting. In these situations, the problem isn’t transporting the bits securely; it’s transporting them securely at the time the message is generated. Exchanging the pad beforehand, when it is safe to do so, is easy. And there are historical examples of one-time pads being used successfully, in specialized circumstances. Russian spies used pencil-and-paper one-time pads to communicate. (The NSA broke the system because the Russians reused the same one-time pads. Oops.) An early Teletype hotline between Washington and Moscow was encrypted using a one-time pad system. One-time pads were also used successfully in WWII by the English: spies in locations with radios but no other encoding equipment were given pads printed on silk, and were able to encode messages for transmission faster and more securely than by previous methods involving memorized poetry.

Those examples used real one-time pads. Generally, products that claim to use a one-time pad actually don’t. My guess is that the engineers quickly realize that they can’t possibly implement a one-time pad, so they use the output of a stream cipher and call that a one-time-pad generator, or a virtual one-time pad, or almost a one-time pad, or some other marketing-speak. It’s not a one-time pad. The security proof completely fails when you use a stream cipher.

On the other hand, if you ever find a product that actually uses a one-time pad, it is almost certainly unusable and/or insecure.

So, let me summarize. One-time pads are useless for all but very specialized applications, primarily historical and non-computer. And almost any system that uses a one-time pad is insecure. It will claim to use a one-time pad, but actually use a two-time pad (oops). Or it will claim to use a one-time pad, but actually use a stream cipher. Or it will use a one-time pad, but won’t deal with message re-synchronization and re-transmission attacks. Or it will ignore message authentication, and be susceptible to bit-flipping attacks and the like. Or it will fall prey to keystream reuse attacks. Etc., etc., etc.
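
A short sketch shows why pad (or keystream) reuse is fatal, using the binary form of the pad mentioned earlier: XOR the two ciphertexts and the pad cancels out, leaving the XOR of the two plaintexts, which classic crib-dragging techniques can then unravel. The messages here are invented for illustration.

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"ATTACK AT DAWN"
    p2 = b"RETREAT AT TEN"                # hypothetical second message
    pad = secrets.token_bytes(len(p1))    # the "one-time" pad...

    c1 = xor(p1, pad)
    c2 = xor(p2, pad)                     # ...mistakenly used a second time

    # The attacker never sees the pad, yet c1 XOR c2 equals p1 XOR p2.
    assert xor(c1, c2) == xor(p1, p2)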

One-time pads may be theoretically secure, but they are not secure in a practical sense. They replace a cryptographic problem that we know a lot about solving—how to design secure algorithms—with an implementation problem we have very little hope of solving. They’re not the future. And you should look at anyone who says otherwise with deep and profound suspicion.


Comments from Readers

From: “Christian Hampson” <champson hampsonservices.com>
Subject: Your name on Reveal’s list

Regarding the reason for the inclusion of your name and Rabbi Schneerson on the list for Reveal, the term “crypt” is considered to be an occult word. Your name is highly associated with cryptography. Also, Avi Schneier is associated with Tai Chi in New York and Arthur Schneier is part of the International Center for Religion and Diplomacy. As for Schneerson, I also noticed such words as “Judeo,” “Hasidi,” and “Kaballah” as being occult. It appears that anything other than Civil Religion is to be considered occult, as “Allah,” “Chant,” “Mahayana,” “Sabat,” “Ritual,” “Prophet,” and “Resurrection” are also included on the list. Perhaps you should feel honored by your inclusion.

From: Douglas Davidson <drd alumni.princeton.edu>
Subject: Your name on Reveal’s list

I just wanted to point out that this might not necessarily be illegitimate. If this organization is using some form of statistical filtering (something along the lines of that described for spam filtering in <http://www.paulgraham.com/spam.html>), then it is quite possible that their word list is derived entirely automatically from the analysis of some corpus. In that case, there may not be any way for a human to explain the presence of a particular word; it is there simply because it occurs in the corpus—not necessarily frequently, either. In Graham’s case, for example, the resulting word lists were a surprise even to Graham.

Unfortunately, if AntiChildPorn is using some technique of this sort, it becomes difficult to validate their filters. In the case of spam filtering, every user naturally has a sufficiently large corpus of spam and non-spam e-mail available to construct their own filters. However, not everyone has a large corpus of pornographic, racist, or similar material available. Unless AntiChildPorn makes their corpus available for examination—which they probably are not willing to do—it would be difficult to evaluate their techniques without assembling a large corpus yourself and seeing what their software says about it.

If AntiChildPorn is doing what they say they are doing, then one might make a guess that anti-Semitic writings occasionally include the names of rabbis. If they are not doing what they say they are doing, then perhaps they have fed Phrack or something similar into the mix. Without further evidence there is no way to tell.

From: “Don Coppersmith” <dcopper us.ibm.com>
Subject: XSL Against Rijndael

Your recent “Crypto-gram” leads people to believe that Courtois and Pieprzyk’s XSL work breaks Rijndael.

I believe that the Courtois-Pieprzyk work is flawed. They overcount the number of linearly independent equations. The result is that they do not in fact have enough linear equations to solve the system, and the method does not break Rijndael.

The details: The problem is evident in the “T’ method” of section 6.3 of their IACR eprint #2002/044. They generate $T' = t' \cdot t^{P-1} \cdot \binom{S-1}{P-1}$ terms that can be multiplied by $x_1$ and still remain in their set of $T$ monomials, and then seem to claim to have that many new equations. But in fact, any of the $t' [ t^{P-1} - (t-r)^{P-1} ] \cdot \binom{S-1}{P-1}$ equations that come from multiplying a basic equation by a monomial have already been counted among their $R$ equations, and so they can’t count them again.

The method has some merit, and is worth investigating, but it does not break Rijndael as it stands.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of “Secrets and Lies” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide.

<http://www.counterpane.com/>
