Managing Network Security

The Real Y2K Issue

by Fred Cohen



Series Introduction

Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.


Why 2K?

I have been reading a lot of articles about the slow national and global realization that there is going to be some sort of problem relating to our dependency on computers surrounding the transition from December 31, 1999 to January 1, 2000 - the so-called Y2K problem. They inevitably start with a simple explanation for why the Y2K problem exists - and every single one of them that I have read so far is dead wrong.

Here's an example explanation taken from a seemingly knowledgeable author:

Let's start at the beginning. Humans have been using 2 digits to write dates ever since some time in the early 20th century - and possibly even before that. I seem to recall that in "The Music Man" the male lead says something about graduating from a music conservatory in 'aught 6' ('06). That was 1906, but in a few years we will be talking about 2006 - and hopefully the movie will come back in the 2020s. The real problem started not with computers but with people.

People use '06 because it's senseless to write 19 before every year. And that's the core of the problem with computers. If people think there might be a misunderstanding, they will write 1906 or 2006 or 1206. If there is a misunderstanding about a date, a person in charge of a power plant won't turn off the power because of it. If you are getting Social Security checks and you were born in 1894 - as my grandmother was - a human being might accidentally decide you didn't deserve a check. But a simple phone call would take care of it - if people answered the phones anymore - and almost certainly, the Social Security people wouldn't stop sending checks to everyone on Social Security.
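
To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical names and values) of what the computer's side of this looks like. A person reading '94 and '00 infers 1894 or 1994 and 2000 from context; a program just does arithmetic on the digits it was given:

    # A minimal, hypothetical sketch of the classic two-digit-year failure.
    def age_in_years(birth_yy, current_yy):
        # A human infers the century from context; the program cannot.
        return current_yy - birth_yy

    print(age_in_years(94, 99))   # 5   - looks fine right up through 1999
    print(age_in_years(94, 0))    # -94 - on 2000-01-01 the person born in '94
                                  #       suddenly has a negative age, and the
                                  #       program has no judgment to fall back on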

So that's the core of it. To slightly misquote Rob Armstrong, a well-known researcher in his field: "Computers don't have the brains of a piece of celery." The Y2K problem comes from the fact that people trust idiot-savant computers with their lives. Which brings me to my second point.


Who Says People Are Intelligent?

I have heard people make artificial intelligence claims for years and years. Some day, they imagine, computers will be as intelligent as people - maybe even more so! But this is an oxymoron. Think! Would any really intelligent life form invent something more intelligent than itself? Knowing - as we seem to - that information is the key to money, power, control, and so forth - you would have to be an idiot to want to create a machine that would ultimately turn you into a slave. But that's just what we've done for the last 10 years in the United States and in much of the so-called civilized world.

More and more workers sit in front of computers more and more of the time - acting as their eyes and ears and fingers - working days and often nights - to keep the computers working properly. Now we are paying billions and billions of dollars to fix the stupid computers - so they can keep enslaving us for years to come. How dumb can we be?

Don't answer that - if you think about it too long you may come up with a very large magnitude indeed. Consider this. We are enslaving ourselves to machines without the brains of a piece of celery AND when we have the chance to move away from our obviously misplaced and extremely over-extended trust in computers, we decide as a community to trust them even more by trying to fix them to keep them going.


There's Got To Be Another Way!

Eventually, I always get back to the subject at hand - managing network security. To me, the Y2K issue is just a microcosm of the network security issue. Now it may seem suspect to claim that network security - an area that is funded for only a few billion dollars a year globally - is a superset of the Y2K issue - which is being funded at 10 billion dollars a year in the US alone. But I never said people were smart. In fact, the Y2K issue is only a tiny little - almost insignificant - part of the overall network security problem.

Now think for a minute. If someone told you that you were building a skyscraper on a marsh, what would you do? If you were in New York City, you would probably put very deep pylons into the silt and it would probably work. How about along a fault line? If you were on the West Coast of the U.S., you would probably not be able to get a building permit and would be forced to move to an area with fewer earthquakes. Both of these are sensible moves - and both are risk management moves that involve eliminating the conditions that cause the risk. The pylons go deep enough that the instability of the marsh doesn't affect the building. The change in location moves the building to an area where shaking is less likely.

If you are in the computing field, you don't do it this way. Instead, you build the building and then go sifting through the marsh, with the building up above you, trying to remove all of the water so as to make the ground stable. Of course we know that this doesn't work because, beyond a certain point, every bug you fix introduces another bug. In the analogy, the tunnels you dig through the marsh introduce yet more instability into the ground, leaving you with a subsidence problem.

Suppose you went and built the building on the marsh or in the earthquake zone, ignoring all the warnings, and you found that it was indeed unstable. In the county where I live, unsafe buildings are torn down. If you're dumb enough to build one that doesn't meet the safety requirements, it comes down and you lose your investment. But in the computing field, we don't do it that way.

In computer security - and, since the introduction of large-scale computer networks, in network security - we have been telling designers that to get security, you have to build it in from the start. Adding security on at the end - just like trying to get the water out of the marsh under the building - is a futile effort. Just as with the poorly built and ill-conceived building, to get a secure network you need to design it with security in mind.


Back to the Future:

I want to get back to the Y2K issue - and since Y2K is in the future - I get to go back to the future. We have some choices to make, and one of them - the one we probably won't make - is to lessen our dependency on computers for things like our survival.

Now don't take me wrong. I agree strongly with the notion that computers have helped improve our lives. I am not a Luddite. But just because computers can, in many cases, do things more efficiently than people does not mean that computers are the silver bullet for making our lives better. Computers are useful tools, but do we really need to gain some minuscule added false efficiency by building systems so that they cannot be operated manually?

Do we really believe that computers answering telephones are more efficient overall than people answering telephones? Sure - they are more efficient for the organization that routes callers to a computerized answering system - but they are less efficient for the people who have to talk to those answering systems. If we all go that way, it becomes less efficient for us all. This advantage only works if you are the only one who has it! Does the term Mutually Assured Destruction (MAD) come to mind?

I like voice mail - it's like email - it's more efficient because it eliminates the need to call back again and again AND it doesn't waste the caller's time. In an emergency, a decent system gives you a way to talk to a human being - which is the only real solution to anything more complex than eating celery. By the way - I like celery too.


Forward to the Past

Informational efficiency at the expense of surety (certainty that it will work as desired) only wins if you have something that an opponent doesn't have. Which brings us to the concept of time-to-market. The time-to-market push is so heavy in today's information environment that surety has taken a back seat. This is local efficiency bought at the price of global inefficiency. For example, the MADness in the United States today over getting news stories out on the Clinton scandal is driven by time-to-market - not surety. Who cares if the story is wrong - get it out fast and you get more market share. Who cares if the software doesn't really work right - we'll fix it in the next release (NOT). Without the shortest time to market, you cannot survive in business today. Just-in-time delivery is a symptom of the same thing. Get that little bit of efficiency out of inventory reduction but pay for it a hundredfold when there is a strike or a fire at a plant in the Pacific Rim. This is not efficiency - it's MADness.

Our global stupidity has led us all to the brink of global collapse because of our desire for local efficiency. The human race seems to be full of people who can't see past the end of their noses. The Y2K problem is starting to grind away at our noses, so we are now reacting to it in a fervor - hopefully in time to prevent real calamity - but even more hopefully with the hard-won new understanding that integrity and availability are critical elements of computing in any system that has any significant value.

I recall a Star Trek episode in which a computer was put in charge of the Starship Enterprise and given control of the ship. Partway through, Captain Kirk found that the computer could not be shut down without destroying the ship. He eventually outthought it into collapse; the ship sat nearly dead, but they got out of it.

In the early days of machine-assisted computing, nobody who understood computers thought for a moment that we would trust them this far. The first mainframes had emergency shutoffs to make sure that if somehow the computer got out of control we could shut it down. "You can always pull the plug" was the ancient saying. But we are now at the point where we just about can't pull the plug - at least not without pulling the plug on ourselves.

Over time, the designers of systems - based largely on the notions put forth by von Neumann in a posthumous publication and E. F. Moore and C. E. Shannon in "Reliable Circuits Using Less Reliable Relays" (1956) - decided that in systems like space shuttles, we could risk lives and large amounts of money with dynamically unstable aircraft - given enough redundancy. In those systems, real specialists spent years designing, testing, and debugging special systems to assure their safety. But no more.
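
The underlying idea is simple enough to sketch. What follows is a minimal, hypothetical illustration (not anyone's actual flight code) of the Moore-and-Shannon notion of getting a more reliable result out of less reliable parts by voting among redundant channels:

    # A hypothetical sketch of majority voting over redundant channels.
    def majority_vote(readings):
        # With 2k+1 channels, up to k can fail arbitrarily and the vote
        # still returns the value the majority agrees on.
        return max(set(readings), key=readings.count)

    channel_a, channel_b, channel_c = 1, 1, 0   # one channel has failed
    print(majority_vote([channel_a, channel_b, channel_c]))   # prints 1

The value is not in the half-dozen lines of code but in the discipline around them - choosing how many channels, proving the voter itself, testing the failure modes - which is exactly the work those specialists spent years doing.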

Today, in the great age of the personal computer, a gifted high school student with no proper training in the engineering fundamentals on which computers and information surety are based can become the whiz-kid programmer in charge of a system that puts human life at risk, make millions of dollars overnight, and be hailed by the ever-rushing media along the way. We have a recipe for disaster, and Y2K is only one of the disasters we may soon face.


Somewhere In There Is A Moral

In most of my articles, after I make the problem painfully clear, I provide a solution of some sort. This article is no exception. The moral of the story is coming - but it takes a little background to get there.

My references to MAD are not accidental. "Trust but verify" was a well-known saying in the Reagan era. We trusted other countries to stick to their arms agreements, but we needed to verify that they did. It's the same way with computers - or it should be. It's all right to trust them to some extent, but we still need to verify that they work as they are supposed to. Clearly, our lack of adequate verification has led to our current Y2K situation and our ongoing network insecurity situation. As a society, we rush executable code into our "trusted" computers from the Internet and foolishly fail to verify even the simplest things about that executable code before throwing it into critical applications.
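
Even the simplest verification would be better than none. Here is a minimal sketch, with a hypothetical file name and a hypothetical published digest, of refusing to run downloaded code unless its cryptographic hash matches a value obtained through a separate, trusted channel:

    # A minimal, hypothetical sketch of "verify before you trust".
    import hashlib

    # Digest the vendor publishes out-of-band (hypothetical placeholder value).
    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def is_unmodified(path, expected_digest):
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        return actual == expected_digest

    if not is_unmodified("update.exe", EXPECTED_SHA256):
        raise SystemExit("Verification failed - refusing to run untrusted code.")

This is nowhere near a complete answer - it says nothing about whether the published digest itself can be trusted - but it is exactly the kind of simplest thing we routinely fail to do.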

I hear it coming - "in our critical applications we don't do that!" - but we do indeed. In the early 1980s, we were told about computer viruses and the transitive nature of corruption. Yet we still build most critical networks with a thin outer shell and a gooey center. The lack of defense in depth means that any little security hole can quickly be turned into a huge one. It means that if anyone in our global internal network - which often includes vendors, customers, and others - makes a single bad decision and lets in malicious code, the whole network becomes susceptible. So much for trust.

In the MAD era (which we still live in, by the way), our weapons of mass destruction were (and are) designed with the notion of failsafe. By design, when they fail, they are supposed to fail in a safe mode - where "safe" is defined as the failure mode with the best overall consequences among the alternatives available in the given situation.

I don't want to get into the notion that the best failsafe for all weapons is to never go off - except to say that this is another case of local optimization not aligning well with the global optimum. The global optimum would be to never have any weapon ever go off anywhere. But the time-to-market notion says that when you have a slicker weapon sooner, you can win. Study the Gulf War if you don't believe this notion. So even though the global optimum is no weapons anywhere, the local optimum for almost everyone is to build better weapons more quickly - the arms race.

So we are in an information arms race with no failsafes. Guess what: if we had been in the nuclear arms race with no failsafes, we would all be dead by now. In case you haven't guessed it by now:

I think we need to start creating failsafes for our information technology.
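
What a failsafe might look like in information systems can be sketched very simply. The following is a minimal, hypothetical illustration (the decision function and the safe action are placeholders, not anyone's real control code): if the computer's answer cannot be produced, fall back to a predefined safe action rather than guessing.

    # A hypothetical sketch of a failsafe wrapper around an automated decision.
    SAFE_ACTION = "hold the current state and alert a human operator"

    def failsafe(decide, *args):
        try:
            action = decide(*args)
        except Exception:
            return SAFE_ACTION          # any failure drops us into the safe mode
        return action if action is not None else SAFE_ACTION

    def buggy_date_logic(two_digit_year):
        # Hypothetical stand-in for a program that chokes on the year '00.
        return 100 // two_digit_year

    print(failsafe(buggy_date_logic, 0))   # falls back to the safe action

The few lines of code are the easy part. The hard part is deciding, for every critical system, what the safe mode actually is and making sure the system lands there when the computer fails.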


Right On Time

Frankly, we don't have enough time to really fix the Y2K problem, and even more frankly, we are probably not smart enough to build systems worthy of the trust we currently place in them. But we do have a historic opportunity today to go in a different direction. Instead of increasing our dependency on untrustworthy information systems and networks, we can treat the Y2K challenge as an opportunity to move toward the universal adoption of failsafe conditions in all critical systems.

I'll give you even more incentives. It's cheaper, faster to implement, more certain to work, and more likely to work in more places than any other technique we have available today. Given the lack of time, lack of funds, and complexity of the Y2K issue, going to a system of failsafes may be the last best hope we have, not only for the coming date change, but for the future of the information age.


Summary and Conclusions:

Summary: Computers don't have the brains of a piece of celery, and people who trust computers for critical functions without proper failsafes and verification are not much smarter. The Y2K problem is not a 2-digit year problem - it is a problem of people putting too much trust in technology and being in too much of a rush for local optima to do things right.

If we keep going this direction - building a society on a house of cards - it will all fall down - and our society will go with it. Y2K is a symptom, not the disease. If we ignore the disease and simply treat the symptom, we will pay the price in our long term health - and eventually with our lives. The disease has been diagnosed and we have a viable - but not ultimate - cure. Let's start curing the disease with what we have today and use preventive measures to keep the disease from emerging again.


About The Author:

Fred Cohen is a Principal Member of Technical Staff at Sandia National Laboratories and a Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net or by visiting http://all.net/.