Crypto-Gram

April 15, 1999

by Bruce Schneier
President
Counterpane Systems

schneier@schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on cryptography and computer security.

Copyright (c) 1999 by Bruce Schneier


In this issue:
     Cryptography: The Importance of Not Being Different
     News
     Counterpane Systems—Featured Research
     Threats Against Smart Cards
     Attacking Certificates with Computer Viruses
     Counterpane Systems News
     Comments from Readers

Cryptography: The Importance of Not Being Different

Suppose your doctor said, “I realize we have antibiotics that are good at treating your kind of infection without harmful side effects, and that there are decades of research to support this treatment. But I’m going to give you tortilla-chip powder instead, because, uh, it might work.” You’d get a new doctor.

Practicing medicine is difficult. The profession doesn’t rush to embrace new drugs; it takes years of testing before benefits can be proven, dosages established, and side effects cataloged. A good doctor won’t treat a bacterial infection with a medicine he just invented when proven antibiotics are available. And a smart patient wants the same drug that cured the last person, not something different.

Cryptography is difficult, too. It combines mathematics, computer science, sometimes electrical engineering, and a twisted mindset that can figure out how to get around rules, break systems, and subvert the designers’ intentions. Even very smart, knowledgeable, experienced people invent bad cryptography. In the crypto community, people aren’t even all that embarrassed when their algorithms and protocols are broken. That’s how hard it is.

Reusing Secure Components

Building cryptography into products is hard, too. Most cryptography products on the market are insecure. Some don’t work as advertised. Some are obviously flawed. Others are more subtly flawed. Sometimes people discover the flaws quickly, while other times it takes years (usually because no one bothered to look for them). Sometimes a decade goes by before someone invents new mathematics to break something.

This difficulty is compounded by several factors. First, flaws can appear anywhere. They can be in the trust model, the system design, the algorithms and protocols, the implementations, the source code, the human-computer interface, the procedures, the underlying computer system. Anywhere.

Second, these flaws cannot be found through normal beta testing. Security has nothing to do with functionality. A cryptography product can function normally and be completely insecure. Flaws remain undiscovered until someone looks for them explicitly.

Third, and most importantly, a single flaw breaks the security of the entire system. If you think of cryptography as a chain, the system is only as secure as its weakest link. This means that everything has to be secure. It’s not enough to make the algorithms and protocols perfect if the implementation has problems. And a great product with a broken algorithm is useless. And a great algorithm, protocol, and implementation can be ruined by a flawed random number generator. And if there is a security flaw in the code, the rest of it doesn’t matter.

Given this harsh reality, the most rational design decision is to use as few links as possible, and as high a percentage of strong links as possible. Since it is impractical for a system designer (or even a design team) to analyze a completely new system, a smart designer reuses components that are generally believed to be secure, and only invents new cryptography where absolutely necessary.

Trusting the Known

Consider IPSec, the Internet IP security protocol. Beginning in 1992, it was designed in the open by committee and was the subject of considerable public scrutiny from the start. Everyone knew it was an important protocol and people spent a lot of effort trying to get it right. Security technologies were proposed, broken, and then modified. Versions were codified and analyzed. The first draft of the standard was published in 1995. Aspects were debated on security merits and on performance, ease of implementation, upgradability, and use.

In November 1998, the committee published a pile of RFCs—one in a series of steps to make IPSec an Internet standard. And it is still being studied. Cryptographers at the Naval Research Laboratory recently discovered a minor implementation flaw. The work continues, in public, by anyone and everyone who is interested.

On the other hand, Microsoft developed its own Point-to-Point Tunneling Protocol (PPTP) to do much the same thing. They invented their own authentication protocol, their own hash functions, and their own key-generation algorithm. Every one of these items was badly flawed. They used a known encryption algorithm, but they used it in such a way as to negate its security. They made implementation mistakes that weakened the system even further. But since they did all this work internally, no one knew that their PPTP was weak.

Microsoft fielded PPTP in Windows NT and 95, and used it in their virtual private network (VPN) products. It wasn’t until summer of 1998 that Counterpane Systems published a paper describing the flaws we found. Microsoft quickly posted a series of fixes, which we have since evaluated and found wanting. They don’t fix things nearly as well as Microsoft would like people to believe.

And then there is a company like TriStrata, which claims to have a proprietary security solution but won’t tell anyone how it works (because it’s patent pending). You just have to trust them. They claim to have a new algorithm and a new set of protocols that are much better than any that exist today. And even if they make their system public, the fact that they’ve patented it and retain proprietary control means that many cryptographers won’t bother analyzing their claims.

Leveraging the Collective Strength

You can choose any of these three systems to secure your virtual private network. Although it’s possible for any of them to be flawed, you want to minimize your risk. If you go with IPSec, you have a much greater assurance that the algorithms and protocols are strong. Of course, the product could still be flawed—there could be an implementation bug or a bug in any of the odd little corners of the code not covered in the IPSec standards—but at least you know that the algorithms and protocols have withstood a level of analysis and review that the Microsoft and TriStrata options have not.

Choosing the TriStrata system is like going to a doctor who has no medical degree and whose novel treatments (which he refuses to explain) have no support from the AMA. Sure, it’s possible (although highly unlikely) that he’s discovered a totally new branch of medicine, but do you want to be the guinea pig?

The point here is that the best security methods leverage the collective analytical ability of the cryptographic community. No single company (outside the military) has the financial resources necessary to evaluate a new cryptographic algorithm or shake the design flaws out of a complex protocol. The same holds true in cryptographic libraries. If you write your own, you will probably make mistakes. If you use one that’s public and has been around for a while, some of the mistakes will have been found and corrected.

It’s hard enough making strong cryptography work in a new system; it’s just plain lunacy to use new cryptography when viable, long-studied alternatives exist. Yet most security companies, and even otherwise smart and sensible people, exhibit acute neophilia and are easily blinded by shiny new pieces of cryptography.

Following the Crowd

At Counterpane Systems, we analyze dozens of products a year. We review all sorts of cryptography, from new algorithms to new implementations. We break the vast majority of proprietary systems, and, without exception, the best products are the ones that use existing cryptography as much as possible.

Not only are the conservative choices generally smarter, but they mean we can actually analyze the system. We can review a simple cryptography product in a couple of days if it reuses existing algorithms and protocols, in a week or two if it uses newish protocols and existing algorithms. If it uses new algorithms, a week is barely enough time to get started.

This doesn’t mean that everything new is lousy. What it does mean is that everything new is suspect. New cryptography belongs in academic papers, and then in demonstration systems. If it is truly better, then eventually cryptographers will come to trust it. And only then does it make sense to use it in real products. This process can take five to ten years for an algorithm, less for protocols or source-code libraries. Look at the length of time it is taking elliptic curve systems to be accepted, and even now they are only accepted when more trusted alternatives can’t meet performance requirements.

In cryptography, there is security in following the crowd. A homegrown algorithm can’t possibly be subjected to the hundreds of thousands of hours of cryptanalysis that DES and RSA have seen. A company, or even an industry association, can’t begin to mobilize the resources that have been brought to bear against the Kerberos authentication protocol, for example. No one can duplicate the confidence that PGP offers, after years of people going over the code, line by line, looking for implementation flaws. By following the crowd, you can leverage the cryptanalytic expertise of the worldwide community, not just a few weeks of some analyst’s time.

And beware the doctor who says, “I invented and patented this totally new treatment that consists of tortilla-chip powder. It has never been tried before, but I just know it is much better and I’m going to give it to you.” There’s a good reason we call new cryptography “snake oil.”

Acknowledgments

Thanks to Matt Blaze for the analogy that opened this column. This originally appeared in the March 1999 issue of IEEE Computer.


News

A new Pentagon study acknowledges that U.S. Defense Computers are highly vulnerable to attack. Big surprise.
http://abcnews.go.com/sections/tech/dailyNews/…

The Security and Freedom through Encryption (SAFE) Act (H.R. 850) passed the House Judiciary Committee in late March. It was a victory, since an amendment proposed by Rep. McCollum (R-FL), which would have mandated key escrow as a condition for export, was blocked by Rep. Goodlatte (R-VA) on jurisdictional grounds. Rep. Lofgren (D-CA), a co-author of the bill, compared the amendment to the Administration’s failed Clipper Chip. Things will get tougher for the bill in other committees; it now moves to the International Relations Committee, where an intense debate on the foreign availability of encryption products is expected.
http://abcnews.go.com/sections/tech/CNET/…
http://www.wired.com/news/politics/0,1283,18708,00.html
Background materials on the SAFE bill: http://www.cdt.org/crypto/legis_106/SAFE/
Proposed McCollum Amendment:
http://www.cdt.org/legislation/106th/encryption/…
CDT’s testimony in support of SAFE: http://www.cdt.org/crypto/alantestimony.shtml

The Wassenaar Arrangement is an attempt to enforce on other countries the same kinds of limits on strong cryptography that the U.S. has. This only makes sense if the U.S. is the only source of strong cryptography. But it isn’t—overseas security software is now just as good as work done by U.S. programmers. And some of the signatories are taking advantage of the wiggle room to liberalize their policies.
http://www.wired.com/news/politics/0,1283,19018,00.html


Counterpane Systems—Featured Research

“The Solitaire Encryption Algorithm”

Bruce Schneier, appendix to CRYPTONOMICON, by Neal Stephenson, Avon Books, 1999.

Computers have revolutionized the field of cryptography. It is relatively easy to design a computer algorithm that is secure against adversaries with unimaginable computing power. Less attention has been paid to pencil and paper algorithms, suitable for people who don’t have access to a computer but still want to exchange secret messages. Solitaire is an OFB stream cipher that encrypts and decrypts using an ordinary deck of playing cards. Even so, it is secure against computer cryptanalysis.

http://www.schneier.com/solitaire.html
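
For readers who want to see the overall stream-cipher shape in code, here is a minimal Python sketch of the combine step: letters are numbered 1 through 26, and keystream values are added modulo 26 to encrypt and subtracted to decrypt. The keystream generator below is an arbitrary placeholder for illustration only; the real Solitaire keystream comes from manipulating the deck of cards, as described in the paper.

    # Sketch of an OFB-style pencil-and-paper cipher's combine step.  The
    # keystream generator is a PLACEHOLDER; Solitaire's real keystream is
    # produced by shuffling a deck of cards (see the paper for the schedule).

    def placeholder_keystream(seed, length):
        """Stand-in keystream generator, NOT Solitaire's card-based one."""
        value = seed
        for _ in range(length):
            value = (value * 37 + 11) % 26 + 1   # arbitrary demo recurrence
            yield value

    def combine(text, keystream, decrypt=False):
        sign = -1 if decrypt else 1
        out = []
        for ch, k in zip(text, keystream):
            n = ord(ch) - ord('A') + 1           # A=1 ... Z=26
            n = (n + sign * k - 1) % 26 + 1      # add/subtract keystream mod 26
            out.append(chr(n - 1 + ord('A')))
        return ''.join(out)

    message    = "SOLITAIRE"
    ciphertext = combine(message, placeholder_keystream(7, len(message)))
    plaintext  = combine(ciphertext, placeholder_keystream(7, len(message)), decrypt=True)
    assert plaintext == message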


Threats Against Smart Cards

Smart cards are viewed by some as the “magic bullets” of computer security, multipurpose tools that can be used for access control, e-commerce, authentication, privacy protection and a variety of other applications. While the flexibility of smart cards makes them an attractive option for numerous business uses, it also multiplies the number of threats to their overall security. To date, there has been little analysis of these wide-ranging security risks.

Because of the large number of parties involved in any smart card-based system, there are many classes of attacks to which smart cards are susceptible. Most of these attacks are not possible in conventional, self-contained computer systems, since they would take place within a traditional computer’s security boundary. But in the smart card world, the following attacks all pose a legitimate threat.

Attacks by the Terminal Against the Cardholder or Data Owner

These are the easiest attacks to understand. When a cardholder puts his card into a terminal, he trusts the terminal to relay any input and output from the card accurately. Prevention mechanisms in most smart card systems center around the fact that the terminal only has access to a card for a short period of time. The real prevention mechanisms, though, have nothing to do with the smart card/terminal exchange; they are the back-end processing systems that monitor the cards and terminals and flag suspicious behavior.

Attacks by the Cardholder Against the Terminal

More subtle are attacks by the cardholder against the terminal. These involve fake or modified cards running rogue software with the intent of subverting the protocol between the card and the terminal. Good protocol design mitigates the risk of these kinds of attacks. The threat is further reduced when the card contains hard-to-forge physical characteristics (e.g., the hologram on a Visa card) that can be manually checked by the terminal owner.

Attacks by the Cardholder Against the Data Owner

In many smart card-based commerce systems, data stored on a card must be protected from the cardholder. In some cases, the cardholder is not allowed to know that data. If the card is a stored-value card, and the user can change the value, he can effectively mint money. There have been many successful attacks against the data inside a card, such as fault analysis, reverse-engineering, and side-channel attacks such as power and timing analysis.

Attacks by the Cardholder Against the Issuer

There are many financial attacks that appear to be targeting the issuer, but in fact are targeting the integrity and authenticity of data or programs stored on the card. If card issuers choose to put bits that authorize use of the system in a card, they should not be surprised when those bits are attacked. These systems rest on the questionable assumption that the security perimeter of a smart card is sufficient for their purposes.

Attacks by the Cardholder Against the Software Manufacturer

Generally, in systems where the card is issued to an assumed hostile user, the assumption is that the card will not have new software loaded onto it. The underlying assumption may be that the split between card owner and software owner is unassailable. However, attackers have shown a remarkable ability to get the appropriate hardware sent to them, often gratis, to aid in launching an attack.

Attacks by the Terminal Owner Against the Issuer

In some systems, the terminal owner and card issuer are different parties. This split introduces several new attack possibilities. The terminal controls all communication between the card and card issuer, and can always falsify records or fail to complete one or more steps of a transaction in an attempt to facilitate fraud or create customer service difficulties for the issuer.

Attacks by the Issuer Against the Cardholder

In general, most systems presuppose that the card issuer has the best interests of the cardholder at heart. But this is not necessarily the case. These attacks are typically privacy invasions of one kind or another. Smart card systems that serve as a substitute for cash must be designed very carefully to maintain the essential properties of cash money: anonymity and unlinkability.

Attacks by the Manufacturer Against the Data Owner

Certain designs by manufacturers may have substantial and detrimental effects on the data owners in a system. By providing an operating system that allows (or even encourages) multiple users to run programs on the same card, a number of new security issues are opened up, such as subversion of the operating system, intentionally poor random number generators, or one application on a smart card subverting another application running on the same card.

Securing smart-card systems means recognizing these attacks and designing the system with them in mind. In the best systems, it doesn’t matter if (for example) the user can hack the card. It’s very Zen: work with the security model, not against it.

Adam Shostack is director of technology at Netect Inc.
Full paper: http://www.schneier.com/paper-smart-card-threats.html


Attacking Certificates with Computer Viruses

How do you know an e-mail is authentic? You verify the digital signature, of course. This means that you verify that the message was correctly signed, using the sender’s public key. How do you know that the sender’s (call her Alice) public key is valid? You check the signature on *that* public key.

What you’re checking is called a certificate. Someone else, call him Bob, signs Alice’s public key and confirms that it is valid. So you verify Bob’s signature on Alice’s certificate, so you can verify Alice’s signature on her e-mail.

Okay, how do you know that Bob’s signature is valid? Maybe Carol signs Bob’s key (creating another certificate). That doesn’t actually solve the problem; it just moves it up another layer. Or maybe you signed Bob’s key, so you know to trust him. Or maybe someone else whose key you signed has signed Carol’s key. In the end, you have to trust someone.
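
In code, the reasoning of the last few paragraphs amounts to walking a chain of signatures until you reach a key you already trust. Here is a minimal sketch under simplifying assumptions; the Certificate structure and the signature_is_valid() stub are hypothetical stand-ins, not any real library's API.

    # Minimal sketch of certificate-chain checking.  Each certificate names
    # its issuer and carries a public key; we walk issuer by issuer until we
    # reach a key already in our trust store.  signature_is_valid() is a
    # placeholder for real cryptographic verification.

    from collections import namedtuple

    Certificate = namedtuple("Certificate", "subject issuer public_key signature")

    def signature_is_valid(cert, issuer_public_key):
        # Placeholder: a real implementation verifies cert.signature with the
        # issuer's public key.
        return True

    def chain_is_trusted(cert, certs_by_subject, trusted_keys, max_depth=10):
        for _ in range(max_depth):
            issuer = certs_by_subject.get(cert.issuer)
            if issuer is None:
                return False              # unknown issuer: cannot verify
            if not signature_is_valid(cert, issuer.public_key):
                return False              # one bad signature breaks the chain
            if issuer.public_key in trusted_keys:
                return True               # reached someone we already trust
            cert = issuer                 # otherwise move up one link
        return False                      # chain too long or circular

    # Alice's key, signed by Bob, whose key you trust directly:
    bob   = Certificate("Bob",   "Bob", "bob-key",   "self-signature")
    alice = Certificate("Alice", "Bob", "alice-key", "signature-by-bob")
    print(chain_is_trusted(alice, {"Bob": bob, "Alice": alice}, {"bob-key"}))

Note that the loop fails the moment any single signature does, which is exactly why tampering with one link can break, or subvert, the whole chain.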

This notion of a certificate chain is one of the biggest problems with public-key cryptography, and one that isn’t talked about very much. PGP uses the notion of “trusted introducers”; Bob signs Alice’s key because Bob knows Alice and is her friend. You signed Bob’s key for the same reason. So when Alice sends you an e-mail you can note that her public key is signed by Bob, and you trust Bob to introduce you to people. (Much like Bob bringing Alice along to your party.)

Other Internet protocols—S/MIME, SSL, etc.—take a more hierarchical approach. You probably got your public key signed by a company like Verisign. A Web site’s SSL public key might have been signed by Netscape. Microsoft signs public keys used to sign pieces of ActiveX code you might download from the net.

These so-called “root-level certificates” come hard-wired into your browser. So when you try to establish an SSL connection with some Web site, that Web site sends you its public-key certificate. You check to see if that certificate is signed (using the public key in your browser); if it is, you’re happy. The you-have-to-trust-someone public keys are the ones that come with your software. You trust them implicitly, with no outside verification.

So if you’re a paranoid computer-security professional, the obvious question to ask is: can a rogue piece of software replace the root-level certificates in my browser and trick me into trusting someone? Of course it can.

It’s even weirder than that. Researchers Adi Shamir and Nico van Someren looked at writing programs that automatically search for public-key certificates and replace them with phony ones. It turns out that the randomness characteristics of public keys make them stick out like sore thumbs, so they’re easy to find.
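
The underlying idea is simple to illustrate: key material is far more “random” than code or text, so a sliding-window entropy scan will flag it. The sketch below shows that general approach only; the window size, step, and threshold are arbitrary choices, not the specific tests used in the Shamir/van Someren paper.

    # Simplified illustration of hunting for key material by its randomness:
    # slide a window across the data and flag regions whose byte entropy is
    # unusually high (random keys approach 8 bits/byte; text and code do not).

    import math
    import os
    from collections import Counter

    def entropy(block):
        counts = Counter(block)
        total = len(block)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def find_key_candidates(data, window=256, step=64, threshold=6.5):
        for offset in range(0, max(len(data) - window, 0) + 1, step):
            if entropy(data[offset:offset + window]) >= threshold:
                yield offset

    # Example: 256 random "key-like" bytes stand out against repetitive filler.
    data = b"A" * 4096 + os.urandom(256) + b"A" * 4096
    print(list(find_key_candidates(data)))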

This attack isn’t without problems. If a virus replaces the root Netscape certificate with a phony one, it can trick you into believing a fake certificate is valid. But that replacement certificate can’t verify any real certificates, so you’ll also believe that every real certificate is invalid. (Hopefully, you’ll notice this.) But the attack works well against Microsoft’s Authenticode. Microsoft had the foresight to include two root-level Authenticode certificates, presumably in case one ever gets compromised. But the software is designed to authenticate code if either one checks out. So a virus can replace the spare Authenticode certificate with a phony one. Now rogue software signed with the phony certificate verifies as valid, and real software signed by legitimate Microsoft-approved companies still checks out as valid.
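
The flaw is in the acceptance policy, not the mathematics. Here is a toy sketch of an “accept if any root vouches for it” check, with roots and signatures modeled as plain strings purely for illustration (real Authenticode obviously uses actual certificates and signature verification); it shows why overwriting only the unused spare is enough.

    # Toy model of the acceptance policy described above: code is trusted if
    # it verifies under ANY built-in root.  Roots and signatures are plain
    # strings here; this illustrates the policy, not Authenticode internals.

    def verifies_under(signature, root):
        # Placeholder check: the signature "verifies" if it was made by this root.
        return signature.endswith(root)

    def code_is_trusted(signature, roots):
        return any(verifies_under(signature, root) for root in roots)

    roots = ["ms-root-1", "ms-root-2"]        # primary root plus the unused spare

    print(code_is_trusted("office-signed-by-ms-root-1", roots))  # True
    print(code_is_trusted("virus-signed-by-evil-root", roots))   # False

    roots[1] = "evil-root"                    # a virus overwrites only the spare

    print(code_is_trusted("office-signed-by-ms-root-1", roots))  # still True
    print(code_is_trusted("virus-signed-by-evil-root", roots))   # now True as well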

This virus doesn’t exist yet, but it could be written.

An okay story on the topic:
http://www.techweb.com/wire/story/TWB19990315S0001

The actual research paper:
http://www.ncipher.com/products/files/papers/… [link dead; see http://www.ncipher.com/products/rscs/downloads/…]


Counterpane Systems News

John Wiley & Sons has published a book about the Twofish encryption algorithm. The book contains all the information in the initial Twofish submission and the first three Twofish tech reports, expanded and corrected. It lists for $50, but I am offering it at a 20% discount.
http://www.schneier.com/book-twofish.html

CardTech/SecureTech ’99. Bruce Schneier will host and speak at the Cryptography Technology panel at CardTech/SecureTech, on Wednesday afternoon, May 14, in Chicago.
http://www.ctst.com [link moved to http://www.ct-ctst.com/]

NetSec ’99. Bruce Schneier will give the keynote speech at NetSec ’99, a computer security conference (June 14-16, in St. Louis). Even though Bruce’s talk will be at 8:00 in the morning on Tuesday, it will be interesting. Schneier will also be speaking about securing legacy applications at 2:00 that afternoon.
http://www.gocsi.com/conf.htm

The Black Hat Briefings ’99 is a computer security conference scheduled for July 7 and 8 in Las Vegas, Nevada. DefCon is a hacker convention held the weekend after. Bruce Schneier will be speaking at both.
http://www.blackhat.com/
http://www.defcon.org/


Comments from Readers

From: Paul Shields <shields@passport.ca>
Subject: Home-made Cryptographic Algorithms

But it is not trying to develop new algorithms that is the danger so much as the belief that it is safe to deploy those algorithms. And the designers are not always the decision makers: while the designer of an algorithm can see its academic, learning-by-doing value, the decision maker may see only property to sell. Sadly, those decision makers are business people who are not often well-acquainted with mathematical proofs, and instead focus on risk and profit.

If no one ever tries to break it, the algorithm may not be secure; but if, even after limited deployment, no competent person ever again tries to break it, then to the business mind that risk is covered.

I have to admit that in my youth I wrote a crypto application that, while horribly insecure even to my untrained eyes, was unfortunately put into service by just such a decision maker. The story goes that it was actually used to protect highly sensitive information and that it withstood a determined attempt to break it; but this did not help me sleep at night.

To be convinced, I think such people need really compelling stories of systems like these that have fallen, at enormous cost to those who trusted them.

From: George Stults <gstults vixel.com>
Subject: Peer Review vs. Secrecy

I was thinking about a point that you have made many times, namely that if you don’t use a published, peer reviewed, and analysed cryptographic method, you are taking a big chance; it could be a good method, but the odds are against it.

It occurs to me that there is another calculation you could make here. Namely, that a well-known and widely adopted method such as DES is worth a great deal more effort to break. Enough so that special-purpose hardware becomes justifiable to break it, as in the recently publicized machine that broke 56-bit DES.
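
A back-of-the-envelope calculation makes the point; the search rate below is a hypothetical round number chosen for illustration, not the specification of any real machine.

    # Back-of-the-envelope brute-force arithmetic.  The key-search rate is a
    # hypothetical round number, not the specification of any real machine.

    keyspace     = 2 ** 56        # size of the DES key space
    keys_per_sec = 1e11           # assumed rate for dedicated hardware
    seconds      = keyspace / keys_per_sec
    print(f"exhaustive search: {seconds / 86400:.1f} days, "
          f"about {seconds / (2 * 86400):.1f} days on average")
    # Such an investment only pays off against a cipher that protects a large
    # volume of valuable traffic -- which is the point about obscure methods
    # attracting less effort.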

And the opposite would seem to be true of obscure methods.

Any agency that must deal with a large volume of cryptographic material would surely prioritize its efforts. An unknown method requiring extra time and effort to attack would seem less likely to get the required attention. In effect, security by obscurity. The writers of the UK security document (referenced in a previous issue) seemed to say as much.

From: Dave Emery <die die.com>
Subject: Insecure Mobile Communications

Some of us whose hobby is poking around the RF spectrum to explore what is out there have been trying to make this point about a number of widely used RF links for many, many years. Nice to see the issue hit something a bit more mainstream.

Most of the MDTs used by police departments are not encrypted in any meaningful sense; all the complexity of their signal formats is there to provide error correction and detection and efficient use of the radio spectrum. VHF and UHF RF links to moving vehicles are prone to both high error rates and bursts of errors (fades), and both MDT and pager protocols were designed to use very heavy (rate 1/2 or more) forward error correction and interleaving to help cope with this. This and the synchronous transmission mode used to reduce overall bandwidth make MDT signals complex and non-standard by comparison with simple ASCII async start-stop stuff.

And some of the providers were silly enough to sell customers on the “security” of their systems by trying to convince them that the error correcting, interleaved, randomized (scrambled, for better signal statistics, not security), signals were so complex that nobody would be able to figure them out.

But if you think that police MDTs running in the open are somewhat shocking, you should also realize that virtually all pager traffic, including email sent via pager systems, is also completely unencrypted and transmitted using protocols designed for robustness in high error environments rather than security. And software for monitoring pagers with a scanner has also been widely circulated around the net for several years. Included is the capability to target particular pagers or monitor everything on the channel and grep for interesting traffic.

And the Mobitex protocol used by ARDIS and RAM Mobile for wireless email is another example of something that is complex for error correction and robustness but has essentially no security. And software for monitoring this circulates around the net as well. ARDIS does use XORing with a 32-bit constant of the day to provide some fig leaf of security, but obviously determining the constant is trivial…
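
To see why recovering such a constant is trivial, here is a minimal sketch. It assumes the traffic is simply XORed with one repeating 32-bit value and that a few plaintext bytes (say, a fixed header) are known; the message framing and the constant are made up for illustration.

    # Why a fixed XOR "constant of the day" is only a fig leaf: four bytes of
    # known plaintext recover the whole constant.  The header and message are
    # hypothetical, purely for illustration.

    def xor_mask(data, constant):
        mask = constant.to_bytes(4, "big")
        return bytes(b ^ mask[i % 4] for i, b in enumerate(data))

    SECRET_CONSTANT = 0xDEADBEEF                    # the "constant of the day"
    message = b"HDR:" + b"meet at the usual place"  # first four bytes are known
    ciphertext = xor_mask(message, SECRET_CONSTANT)

    # XOR the known plaintext against the matching ciphertext bytes:
    recovered = int.from_bytes(
        bytes(c ^ p for c, p in zip(ciphertext[:4], b"HDR:")), "big")

    assert recovered == SECRET_CONSTANT
    print(xor_mask(ciphertext, recovered))          # decrypts the whole message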

And this only scratches the surface of what can be found out in the ether and intercepted with relatively simple gear and some software ingenuity…


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on cryptography and computer security.

To subscribe, visit http://www.schneier.com/crypto-gram.html or send a blank message to crypto-gram-subscribe@chaparraltree.com. Back issues are available at http://www.schneier.com. To unsubscribe, visit http://www.schneier.com/crypto-gram-faq.html.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is president of Counterpane Systems, the author of Applied Cryptography, and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on cryptography.

Counterpane Systems is a six-person consulting firm specializing in cryptography and computer security. Counterpane provides expert consulting in: design and analysis, implementation and testing, threat modeling, product research and forecasting, classes and training, intellectual property, and export consulting. Contracts range from short-term design evaluations and expert opinions to multi-year development efforts.
