Managing Network Security

The Limits of Cryptography

by Fred Cohen



Series Introduction

Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.


50 More Ways

The 50 Ways series of articles on vulnerabilities in information security started as a joke and has become one of my most successful ventures in getting the word out about the limits of security. So when Richard Power of the Computer Security Institute asked me to do a job on public key cryptography, I spent the requisite 60 minutes and pushed out a short piece on the emerging crypto-industry. Take a look at it (/journal/50/index.html) if you want the details.

Former NSA researcher Blaine Burnham, who is now a fairly high muckety-muck at Georgia Tech spearheading their information security program, has told me for many years of his concerns about the level of trust people place in cryptography, and I guess I share his views. The basic notion is that, like most other aspects of our information infrastructure, it's easier to copy than to think. Whenever someone comes up with a good idea for one application, everybody and their brother tries to leverage it for all applications. The low cost of copying in the information age means that there is a lot of reward for the copycat and less reward for the more detailed thinker. Despite all the talk about being agile, we are in fact still using industrial-age notions of mass production in the way we build our systems.

This means that Microsoft doesn't really have a notion of how its software works, or even of its data formats, because much of its software is just a copy of some other software that was 'close enough' for the purpose. They license it in, put it in their product, and shove it out the door. The lack of quality in much of our software is closely linked, in my view, to this phenomenon. It means that we are building very tall information buildings on inappropriate foundations, and we are starting to see the effect in the form of more down time, higher costs, enormous computing resources required for even simple tasks, and so on. Our graphical interfaces allow pop-up smiley faces with moving smiles to ask us if we want help, but this doesn't really cover up the fact that the computer can't stay up long enough to get the answer.


Back to Blaine

If you review some of my past articles you will find that I am more than a bit intolerant of this way of thinking. In particular, I wrote a piece called "Change Your Password - Doe See Doe" in September of 1997 in this series that identified the foolish way we propagate policies about changing cryptographic keys into policies about changing passwords, without thinking about the consequences or the rationale. This mindless application of rules of thumb where they don't apply is what we are doing today with cryptography on a massive scale.

My regular readers probably knew I would return eventually to the subject at hand, but I wasn't so sure. At any rate, the subject is the foolish ways in which we use cryptography, so let's dig right in with the notion that cryptography can protect secrets for an extended period of time. It cannot.

Any decent scientist must be a student of history. Please note the term MUST as opposed to should or some other word. I choose the word because, by its very nature, science depends on experimental confirmation and refutation to throw out things that don't work. If you don't know the history of the confirmations and refutations of the theories you are working on, you can hardly consider yourself a scientist. The reason the last few Hughes satellite launches failed is that they ignored the experimental basis of science. They thought that they could build and test parts with computer models without doing the necessary real testing of the real parts. In case you missed it, we no longer do nuclear weapons testing - which is intended, I guess, to lead to the elimination of nuclear weapons. I figure that the credibility of deterrence cannot last long without testing, especially when we demonstrate on a regular basis the lack of science in our modern religious cult of computing.

The history of cryptography tells us that every cryptographic system ever built has eventually been defeated. In the 1940s, Shannon's efforts defeated ALL previous systems, and he defined the 'one-time pad' system, which is theoretically undefeatable - but which is also impractical for the vast majority of current cryptographic applications. Since that time, we have seen several public key systems defeated, the creation of the DES and its defeat, the defeat of RSA for key sizes thought only 10 years earlier to be secure over the long run, and so on. Even the CIA's cryptographic sculpture, Kryptos, has been broken recently - and not by a government cryptographer!
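
The one-time pad is simple enough to sketch in a few lines. Here is a minimal illustration in Python (the names are mine) that also shows exactly why it is impractical:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The guarantee holds only if the key is truly random, at least as
    # long as the message, and used exactly once - break any of those
    # rules and the theoretical security evaporates.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the identical XOR operation with the same key.
otp_decrypt = otp_encrypt

message = b"move the tanks at dawn"
key = secrets.token_bytes(len(message))  # a fresh key for every message

ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

The impracticality is right there in the sketch: every message needs a secret key as long as itself, delivered in advance over a channel you already trust, so for most applications you have merely moved the secrecy problem rather than solved it.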

The notion that we can build a practical cryptographic system that cannot be defeated over time is refuted by the historical evidence, and the notion that some new kid on the block will change this situation is equally well refuted by a long list of new-age cryptographers who have repeated the mistakes of history by ignoring them. It is an easy pit to fall into - I have fallen into it myself and been pulled out by those wiser than me - but it is a pit nonetheless. Do not trust cryptography for long-term secrecy of information. It just won't work.


Personal Decisions Work Better

The military has a long history of using cryptography for tactical operations. The idea is to encrypt things like where a tank should go next. If the bad guys break it in a few minutes, you might lose a tank, but if they break it in a few days, the tank will be long gone. They understand that, in many situations, moving the tank is more important than keeping the movement secret. As a result, if the cryptography has a technical problem that keeps the tanks from moving, the commander can make a command decision to turn off the cryptography and get the tanks moving without it. This is called risk management. It is real-time, and it is done in the military all of the time. The commander understands the risks associated with the move because he or she has driven a tank, and has perhaps been shot at while driving a tank whose movement was ordered without cryptography turned on, in a hostile - if only simulated - environment. The risks are very real to the commander - both the risk of getting killed because the enemy knows where the tanks are going and the risk of not moving the tanks.

In business, such decisions are typically made by a different sort of commander, and the notion of a tactical decision to turn off cryptography is rarely considered. Let's talk, for example, about my credit card processing company. Like most such companies that accept (or more often reject) credit cards over the Internet, there is no risk management decision made today about items like the address used for the card. This presents enormous problems in that the bank's version of my address is not quite identical to the versions on the other computers in the credit processing system. With nominal judgment, they could decide that MS9011 is the same as M.S. 9011 and Mail Stop 9011 - but this sort of judgment is not made by the automation. Instead, we generate errors by having inconsistent data entry and data type information on different systems. So when one person tries to enter M. S. 9011 they may find that it is not valid on their system because of format restrictions. The net effect is that, in trying to mitigate the risk of sending my order to the wrong place, they cannot tell that I am sending it to the right place, and they cost me time and money - which reduces the number of things I am able to buy from them.
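
For what it's worth, the kind of nominal judgment I am talking about is not hard to automate. Here is a hypothetical sketch in Python - my own illustration, not any processor's actual logic - that treats the obvious mail-stop variants as equivalent before comparing records:

```python
import re

# Hypothetical rule: collapse 'Mail Stop 9011', 'M.S. 9011', 'M. S. 9011',
# 'MS 9011', and 'MS9011' to one canonical spelling before comparing.
MAIL_STOP = re.compile(r'\b(?:mail\s*stop|m\.?\s*s\.?)\s*(\d+)\b',
                       re.IGNORECASE)

def normalize_mail_stop(address_line: str) -> str:
    return MAIL_STOP.sub(lambda m: f'MS{m.group(1)}', address_line).strip()

variants = ['MS9011', 'MS 9011', 'M.S. 9011', 'M. S. 9011', 'Mail Stop 9011']
assert len({normalize_mail_stop(v) for v in variants}) == 1  # all the same
```

A few rules like this at the point of comparison would let the automation make the same judgment a human clerk would, instead of rejecting valid orders over formatting.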

To me, this represents the inadequacy of risk management in industry, and it reflects the lack of a clear tie between risks to the organization and risks to the individual. When the military commander issues an order and decides to do it in the clear, there is a very personal risk involved, both from the standpoint of the military review process and from the standpoint of seeing your friends and comrades, whose lives you are responsible for, die if you are wrong. In today's business environment, the tie between risks and consequences to management is weak at best. If your credit card information is stolen, the manager at the ISP that was supposedly holding it securely just doesn't care. If Amazon.Com sells books to someone in Africa or the Pacific Rim and takes the money from my bank account to do it, the manager at Amazon probably makes more money because more books were sold. How is this related to cryptography? Simple - these are examples of places that use cryptography to protect my information - or claim to. Of course, the cryptography doesn't protect you, but it may sell you on their system as being secure - which brings me to a definition.

If you have ever looked up the word 'secure' in the dictionary, you are likely to find that it means the feeling of safety. It has nothing to do with the reality of safety. So in a very real sense, cryptography does provide security to those who do not understand it - because it can be marketed to make you feel safe. I tend to use the term protection, which means 'keep from harm' - a very different word with a very different meaning. Of course this series is called 'Managing Network Security' - NOT 'Managing Information Protection'. That's because it's written for "Network Security Magazine" and I had to find a title that would convince the editors and the casual readers to read it - which is to say, I am using the term security to market this article to you, but I am actually telling you about information protection. That's another form of cryptography - the art of secret writing (by definition). I am using an encrypted message to improve protection. It's a public key system, since I am publishing the translation in this article. Don't you feel safer because of it? But don't feel too safe...


Microsoft and the NSA - Made to Order!

Now normally, when I write these articles a month or two before the publication date, I am relying on historical information as my guide, but for some reason I have an uncanny knack for coming up with things that become relevant just after I go to press. This month the force seems to be with me... and allies come in the strangest forms: a press release announcing the discovery of an extra cryptographic key embedded in Microsoft's Windows software.

The key is actually called "_NSAKEY", but there is - so far - no proof that it belongs to the National Security Agency or that, if it did, it was their idea to put it there. Perhaps it was simply an NSA key that Microsoft adopted as its own. Who knows. But the point I am trying to make is not really about the massive potential for abuse by the NSA - or Microsoft - or any of the employees who have access to the underlying mechanisms - although these are certainly valid points, and they apply not just to the cryptography but to all of the other Trojan horses in Microsoft software and in the software of the other companies they and we rely on.

The point I am trying to make is that the notion of secretly using cryptography is fundamentally flawed, the notion of a single crypto-key that is used for millions of systems is fundamentally flawed, and the notion that cryptography will protect you from attacks against your insecure operating system is fundamentally flawed. It is an ill-conceived idea based on poor assumptions and excessive trust in public key cryptosystems. Secret mechanisms are eventually discovered and cannot be reviewed in the meantime; a single key shared by millions of systems means one compromise defeats them all at once; and cryptography layered over an insecure operating system is simply bypassed by attackers who take the keys - or the plaintext - from the system itself.
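
To see why a single crypto-key for millions of systems is flawed, consider a toy sketch - textbook RSA with absurdly small numbers, my own illustration and not Microsoft's actual mechanism. Every shipped system embeds the same public key and trusts whatever verifies against it, so whoever holds the one matching private key can sign components that every installed system will accept:

```python
import hashlib

# The one key pair 'baked into' every shipped system (toy values:
# textbook RSA with modulus 61 * 53 - never use numbers like these).
N, E = 3233, 17   # public key, embedded in millions of installations
D = 2753          # the single matching private exponent

def digest(module: bytes) -> int:
    # Reduce the module to a number small enough for the toy modulus.
    return int.from_bytes(hashlib.sha256(module).digest(), 'big') % N

def sign(module: bytes) -> int:
    # Whoever holds D can sign modules that EVERY system will accept.
    return pow(digest(module), D, N)

def verify(module: bytes, signature: int) -> bool:
    # Every installed system runs this same check against the same key.
    return pow(signature, E, N) == digest(module)

legit = b"crypto service provider v1"
rogue = b"backdoored service provider"

assert verify(legit, sign(legit)) and verify(rogue, sign(rogue))
```

Compromise, theft, or coercion of that one private key defeats the check everywhere at once, and there is no way to revoke it short of updating every installed system.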


What Good Is It, Then?

Cryptography is a very useful tool, and I use it myself. It reduces risks associated with observation and modification of information, particularly in real-time transactions where lasting secrecy is not important but momentary integrity is. Secure shell and similar methods to prevent session takeover are good examples of reasonable uses of cryptography - given that proper precautions are taken against trusting it too much.
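
The 'momentary integrity' point can be made concrete with a small sketch (mine, not any particular product's protocol): a fresh per-session key authenticates each message, and that key only has to stay secret for the life of the session, not for years:

```python
import hashlib
import hmac
import secrets

# A fresh key negotiated per session - it only has to stay secret for
# the life of the session, not for years afterward.
session_key = secrets.token_bytes(32)

def protect(message: bytes):
    # Attach an authenticator so tampering in transit is detectable.
    tag = hmac.new(session_key, message, hashlib.sha256).digest()
    return message, tag

def accept(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(session_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"transfer $100 to account 42")
assert accept(msg, tag)                                   # genuine: passes
assert not accept(b"transfer $9999 to account 666", tag)  # tampered: fails
```

If the key leaks a week later, the attacker learns nothing useful - the session is over. That is exactly the kind of risk cryptography handles well.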

In truth, I don't try to encrypt things with the intent of keeping them secret for a long time. I also don't trust cryptography as my only protection mechanism for limiting access to my systems. I use many other techniques in combination with cryptography to lower the risks to a level I am willing - but not anxious - to accept.


Conclusions

This article is about cryptography and the excessive trust we are unnecessarily and unwisely placing in it. But, as such, it is also about risk management and the poor job we are doing of applying it to cryptography. As I see it, there are really only a few reasons that we make poor risk management decisions.

The biggest reason we make poor decisions is that we don't know enough to make better ones. I hope this article will start some people thinking about things they weren't thinking about before, and asking questions they didn't ask before.

The second biggest reason we make poor risk management decisions is that our stake in the outcome is not personal. I hope that some top-level managers read this and decide to personalize the effect of poor decisions on their upper-level decision-makers. Of course, since responsibility rolls downhill, the top-level managers had better make sure the buck stops where it will do the most good - firing one person closer to the top usually saves you more and has a larger effect. I have a little list...


About The Author:

Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories; Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection; and a practitioner in residence in the University of New Haven's Forensic Science Program, where he educates cybercops on digital forensics. He can be reached by sending email to fred at all.net or visiting /