Managing Network Security

Collaborative Defense

by Fred Cohen



Series Introduction

Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.


The Distributed Computing Environment

In December, I attended a conference centered on the future of computing relative to the implications of Moore's law. Now for those of you who don't know about Moore's law, it states something to the effect that the speed, size, and cost of storage and processing improve by a factor of 2 every 18 months. This has been true for the last 30 years or so in computing based on silicon technology, and will likely continue for the next 10 years...
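
Just to make that doubling rate concrete, here is a little arithmetic sketch (my own illustration, not something presented at the conference): at a factor of 2 every 18 months, ten years buys you roughly a hundred-fold improvement, and thirty years roughly a million-fold.

    # Rough Moore's-law arithmetic: a factor-of-2 improvement every
    # 18 months, compounded over a span of years.
    def moore_factor(years, doubling_months=18):
        """Return the cumulative improvement factor after `years`."""
        return 2 ** (years * 12 / doubling_months)

    for years in (1.5, 10, 30):
        print(f"{years:>4} years -> roughly {moore_factor(years):,.0f}x")
    # 1.5 years -> 2x, 10 years -> ~100x, 30 years -> ~1,000,000x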

But then... it will end. And when it ends, unless we run out of the desire for more processing power and storage, the next step in performance will come from distributed computing - which is already up to about 100,000 processors for special-purpose functions and 500,000,000 computers in the Internet (give or take who knows how much). We may get an order of magnitude or more out of this growth, but that should happen in very short order after we reach the end of the Moore's law curve. The next frontier is in software - qualitative improvements in quality, efficiency, and so forth. Perhaps we will go back to 2-digit years to reduce storage requirements - but I doubt it.

To me, this means that we will be in a situation where we need to protect widely distributed networks for a long time to come - and it also means that cooperation and automation of the protection mechanisms within these numerous processors will need to overtake piecemeal human control. Otherwise, we will continue down the path we are on today, where weaknesses in user systems dominate the vulnerability space and the network allows attackers to easily gain access to users' systems and use them as entry points into the internal network.

Network security management techniques of today and those that we continue to develop will likely play a vital role, but scaling these controls to tens of thousands of systems is already problematic and will become far worse with time. The problem becomes far less tractable when we consider the inter-organizational issues involved in most network attacks today. Even if we could get a fantastic capability for network management within a given administrative domain, the cross-domain boundaries are where the defender faces tremendous difficulty today.


How Does it Really Work?

At the same conference, we had folks like former secretaries of defense and high muckety-mucks in big government research labs, and so forth. At one point, we heard about the new central command for cyberwarfare defense in the Department of Defense. The discussion got really interesting at that point because it seemed clear to me that they didn't understand at all how defense is really done in information infrastructure today. I explained it something like this: most incidents today are handled person-to-person - the defender who detects an attack contacts trusted colleagues at the sites involved, and they deal with it among themselves.

Now there are also cases where a phenomenon is more widespread, such as the virus that used ftp to send PGP private key-rings to the codebreakers.org ftp server. In this case, the notification works differently. A larger-scale networking mode can be used, where information channels such as the Risks forum (risks@csl.sri.com) or the HTCIA mailing list are used to communicate with thousands of people who work in information security. If an incident is really severe, it might even be publicized via the press, but this runs the risk of creating and disseminating large-scale misinformation and disinformation and exposes you to the potential for a lot of bad publicity. Mark Graff of Sun brought up examples of other collaborative responses, such as one that he helped to generate through the Forum of Incident Response and Security Teams (FIRST).

Just to reiterate and clarify, all of these cases are examples of collaborative defenses. We built the Internet and we are its antibodies. Note the biological analogy - the gene machines that built our bodies also provide the antibodies that keep them alive. Tony Bartoletti pointed out that he doesn't consciously think about handling diseases using his central nervous system. The response is distributed and coordinated at the level of the gene machines. Another attendee noted later on, and I agree with him on this, that if we depend on the biological analogy too much, we may have to live with a network that, like a body, dies of old age. Note also that it is very easy to kill a person starting with a very small quantity of disease agent. We don't want our networks to have similar weaknesses if we can avoid it.


How Do We Band Together?

One of the reasons that people in government seem to have a hard time dealing with the way the Internet works is that their organizations are so very hierarchical. And of course they are not the only organizations with this property. In the global and 'levelized' age that the elite among us (including everyone who is reading this article) now seem to be living in, it is increasingly necessary to be decentralized and distributed in order to survive. Decisions are pushed as 'low' down the structure as possible because, in an organization with many different things to do, there are too many important decisions for a small group of people to make once it grows beyond a certain size. Instead, we need to have good people and help them collaborate in making good decisions.

Somehow, the organizational challenge lies in getting the culture, knowledge, and wisdom necessary to make good decisions distributed to the people who make the systems work. We are very good at passing data around over networks, but we are not so good at passing culture and knowledge and wisdom. This too may come with time.

Of course, even an organization that is extremely hierarchical and somehow manages to be successful on its own will not be able to prosper this way in the current networking environment. So systems administrators and those responsible for security in such organizations must be able to live successfully in both worlds. They must be independent collaborative thinkers in the networked world and hierarchical folks in their internal world. Otherwise one or the other master will not be served, and they will have to leave one of the two behind.

An excellent example of this is the person who first discovered and tried to deal with the attack on US military secrets in the case where printer output was routed to Russia before getting to the local printer. He tried to report the issues and get action from the hierarchy: he told his 'superiors' (a strange name considering their inability to get things done in this case) and theirs and theirs, and he told the FBI, which decided not to investigate at that point. In the end, action was finally taken, but in the process, the US military lost one of its strongest defenders. In the future, they will not have this problem because they will not detect such attacks.

Now it is, of course, an exaggeration to say that hierarchical systems are dead and gone, or cannot survive, or even that they use 'lower downs' as tools of the minds of 'higher ups'. In fact, I view it the other way. It is the real job of the workers at the 'bottom' to control themselves and their 'superiors' so as to get their jobs done effectively. So the technologists who don't know how to work the human system and are not trained to do so by their organizations will suffer the fate of outrageous fortune, as will their organizations. I know of far more directors and VPs of information security who are looking for other jobs than who are pleased with their organizations, and if the top-level people are trying to find a way out, the people who work for them are, by and large, most certainly in the same situation.


I Figured This Was a Good Section Break

I have a lot more to say on the organizational issues here, but that last section was getting too long anyway, and I figured this would be a good place to change subjects entirely. So this is where I want to talk about distributed coordinated attacks (DCAs).

For those of you who haven't read about this sort of attack, you will be hearing about it soon. As of this writing, a few initial tools have been widely distributed over the Internet to allow attackers to break into site after site and plant Trojan Horses there which are designed to be remotely controlled. These tools build up a collection of remotely controlled hosts and then provide the means to use them to attack targets of choice as commanded by their originator. By the time this article hits paper form, you will likely have heard a lot more about them, and you may even be one of those affected by them.
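
To make the structure of these tools concrete, here is a deliberately toothless sketch (an in-process simulation of my own, not code from any actual tool - the Handler, Agent, and Target names are just illustrative): one command from the attacker's controller fans out to many compromised hosts, and each host independently directs traffic at the chosen target, so no single source accounts for much of the flood.

    # A toy, in-process simulation of the handler/agent structure of a
    # distributed coordinated attack. Illustrative only - real tools
    # spread the agents across thousands of compromised hosts.
    class Target:
        def __init__(self, name):
            self.name = name
            self.requests_seen = 0

        def receive(self, source):
            self.requests_seen += 1

    class Agent:
        """Stands in for a Trojan Horse planted on a compromised host."""
        def __init__(self, host):
            self.host = host

        def flood(self, target, count):
            for _ in range(count):
                target.receive(self.host)

    class Handler:
        """The attacker's controller: one command fans out to every agent."""
        def __init__(self, agents):
            self.agents = agents

        def command(self, target, count_per_agent):
            for agent in self.agents:
                agent.flood(target, count_per_agent)

    victim = Target("www.example.com")
    handler = Handler([Agent(f"compromised-{i}") for i in range(1000)])
    handler.command(victim, count_per_agent=10)
    print(victim.requests_seen)  # 10,000 requests from 1,000 distinct sources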

Now the important thing to note about DCAs from the standpoint of this discussion is that defense against them requires a distributed coordinated defense (DCD). I am not saying this just to blow smoke, and I don't generally use the term 'requires' unless I mean it. As far as I can tell, there is no way to defend against a DCA except with a DCD. No central defense can be effective because the attack happens and the damage is often done in the globally distributed infrastructure, not on site at the victim's location. In fact, the only real role for the victim is often as the sensor of the attack and the de facto coordinator for the defense. Most victims have no control over any other part of the process.

It is important for the victim to coordinate the defense for two reasons. One reason is that nobody else will likely notice or understand the importance of the attack. From other viewpoints, a DCA may just seem like normal - or perhaps somewhat abnormal but not necessarily malicious - traffic. Since only the victim can really know if there is an attack, nobody else can really tell when it comes or goes. They may be able to determine that the particular behavior explained to them by the victim ends, but if the attack changes location, path, content, and other aspects of its function over time, the characteristics of what to look for will change, and those in the infrastructure who try to lend assistance will not know what to look for next. What is malicious is a function of the victim's perspective; in other words, the meaning of the term 'attack' is in the mind of the victim. Here's a good example. Suppose you are getting large volumes of web requests - enough to overflow your web server's capacity. Depending on the specifics of the traffic, it may be legitimate and you may need a more capable server - or it might be a malicious denial of service attack. In the general sense, nobody can really know the difference between these two except the person who owns and operates the web site.
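
To illustrate the victim's role as the sensor, here is a minimal sketch of the only measurement the victim can actually make - request volume against known capacity (the threshold and field names are my assumptions). Notice that a popular page and a DCA produce exactly the same numbers, which is why only the site's owner can call it an attack.

    from collections import Counter

    # A minimal victim-side sensor: count requests per source over a
    # one-minute window and compare the total against the server's
    # known capacity. The threshold is an illustrative assumption.
    CAPACITY_PER_MINUTE = 5000

    def inspect_window(requests):
        """`requests` is a list of source addresses seen in one minute."""
        per_source = Counter(requests)
        # The telemetry can say "too much traffic, spread across many
        # sources" - it cannot say whether the cause is popularity or
        # malice. That judgment belongs to the site's owner.
        return {
            "total": len(requests),
            "distinct_sources": len(per_source),
            "busiest_source": per_source.most_common(1),
            "overloaded": len(requests) > CAPACITY_PER_MINUTE,
        }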

So this defense has to be coordinated by the specific victim - and not somebody at a higher level of the victim's chain of command - and it has to be distributed, potentially across a large segment of the Internet, because no one actor in the infrastructure can generate on their own all of the needed controls to trace the attack back to its, possibly distributed, sources and cut it off close to those sources. The investigation and attribution processes are also necessary and also need to be distributed (for the same reasons) and coordinated (typically by a globally distributed collection of law enforcement officials). Now it is also possible for a large enough actor with a lot of highly distributed resources to do a substantial job of attribution by using a covert distributed intelligence network, but real proof requires in-depth examination of the content of systems in many different administrative domains, and this is only available with the permission of the owners (in many cases an act of war if not properly done). Furthermore, the work associated with a large-scale attack of this sort would be far too great for even the largest current information security and investigative agencies to do alone.


Some of the Limits Today

It is amazing just how far we are from reaching the necessary level of protection in this environment today. People at conferences tell us about the bright new future - and then they can't even get the video projector to work. I just created a mailing list related to Moore's law on www.onelist.com - and within the first 15 minutes a mail loop developed that the onelist service was unable to stop, even after I removed the offending autoresponding user from the list. Both the autoresponder and the mailing system were to blame for this one - either one could have easily stopped the problem. Now consider the impact of a mail loop with autoresponders when we have 1,000 times the bandwidth and computing power. If I get several messages per minute now, how many thousands of them will I get in ten years? Clearly anything that requires human intervention will become a burden to the human who lacks the automation to help handle it.
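
Either party could have broken that loop with a few lines of code. Here is a minimal sketch of the kind of guard I have in mind (the header conventions - Precedence, Auto-Submitted, X-Loop - are in common use; the exact checks and the address are my assumptions): decline to auto-respond to mail that is itself automated, and tag your own responses so you recognize them if they come back.

    # A minimal loop guard for an autoresponder or list server: decline
    # to respond automatically to mail that is itself automated, and tag
    # outgoing responses so our own mail is recognized if it comes back.
    LOOP_MARKER = "autoresponder@example.com"  # illustrative address

    def should_autorespond(headers):
        """`headers` is a dict of message header fields."""
        if headers.get("Precedence", "").lower() in ("bulk", "list", "junk"):
            return False              # from a list or another responder
        if headers.get("Auto-Submitted", "no").lower() != "no":
            return False              # explicitly machine-generated
        if "X-Loop" in headers:
            return False              # our own marker came back: a loop
        return True

    def tag_response(headers):
        headers["X-Loop"] = LOOP_MARKER
        headers["Auto-Submitted"] = "auto-replied"
        return headers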

Today, and I think most of my readers will back me up on this, less than half of all systems have backups. Only 20 years ago, most systems were mainframes or large computers run by professionals, and most of them had some form of backup - if only the cards from the last time the program was run. The backup technology has not kept up with the storage technology - in the sense that the total bytes of disk storage available today probably far exceed the total volume of manufactured backup storage. Of course backups made only a few years ago are all but useless today because of data loss in the media and a loss of available technology to read it. And CD-ROMs are not helping this problem - and likely neither will DVDs. There is of course some hope in that most of the information we generate today is not important over a long period of time, so there is no great loss from the lack of backups, but I still think that integrity and availability issues such as these are critical, that we have no way to address them effectively in the distributed computing environments of today, and that we have no path to getting them addressed in the future. As an aside, some of the students I work with are just about to get netRAID working - a network-based RAID storage system that will let us treat networked computers as a giant RAID storage array. You will be able to take computers in and out of the network, and they will automatically update each other to retain the integrity and availability of terabytes of storage. The network will be the backup.
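
The underlying idea is ordinary RAID parity applied across machines instead of disks. Here is a minimal sketch of the recovery property (my own illustration of the general technique, not the students' netRAID code): with one XOR parity block per stripe, the data held on any single lost node can be rebuilt from the survivors.

    from functools import reduce

    # RAID-style parity across network nodes rather than local disks:
    # each stripe stores data blocks on n-1 nodes and their XOR on the
    # last. Any single lost node can be rebuilt from the others.
    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

    data_blocks = [b"node0-data", b"node1-data", b"node2-data"]
    parity = xor_blocks(data_blocks)           # stored on a fourth node

    lost = 1                                   # say node 1 drops off the net
    survivors = [blk for i, blk in enumerate(data_blocks) if i != lost]
    rebuilt = xor_blocks(survivors + [parity])
    assert rebuilt == data_blocks[lost]        # the lost block is recovered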

Today, human communication dominates the issues, and human communication does not appear to be fast enough to react in time to stop the rapid growth of packets formed by UDP viruses or email viruses. (The mail loop described in the last paragraph was a virus because it reproduced in the environment it existed in. I eventually killed the virus by changing the environment to prevent unmoderated messages, but this will substantially reduce the value of the mailing list until such time as I am able to mitigate the problem on a more permanent basis.) Imagine what would have happened if I had gone to sleep (it was almost midnight) instead of coming back to my computer for one last email check before bed... By morning there would have been something like 200 email messages sent to 100 people. That's 20,000 messages I saved by being addicted to computers - or, to be nicer to myself, by checking my work. I should not have to be addicted for the systems to do the right thing, and while checking your work is fundamental to high-quality work in any field, if the folks at either site had checked their work, the environment for the virus would not have existed.


Conclusions

I could go on and on with this list, but my point would not be better served. My point is that in distributed computing environments we face a whole host of new issues that, as a community, those of us in information protection have not really considered. Scale makes a big difference, and we are increasingly dependent on people and systems over which we have little or no control except for the long-term controls of the free market. Because the time frame for harm from information technology is so rapid and the time frame for changing organizations and contracts is so slow by comparison, the only hope for effective protection is the web of people we trust in the distributed global information infrastructure. As the scale of that web increases, the number of people we need to collaborate with makes this human networking increasingly critical.

Today, people are the network's antibodies, but just as our bodies die when the antibodies are not good enough, so do the networks of today. Unless we do a better job of it than evolution has done, the network will be killed again and again. This is both the great benefit and the great detriment of the biological model. It is wonderful because it is clearly a great analogy, but it is terrible because we do not want a global infrastructure that keeps dying.

I don't know where it will all go, but I am increasingly convinced that the brightest hope for the future of information protection lies in building strong on-line communities of professionals with good working relationships. In some ways, to some people, it may seem strange that the Internet will drive us toward closer human relationships over vast distances, but it should not be a surprise to anyone who was on the Internet from its inception. I remember the first night I spent on what was then the ARPAnet, in the early 1970s. I was on the graveyard shift and looked at the ten or so places I could visit - and there was a site in Germany. So I started an on-line chat with a fellow in Germany. I was very impressed with the notion that I could collaborate in the middle of the night with someone across the world in real time - without big phone bills, without a lot of barriers to getting there, and with relative mutual trust. Almost 30 years later, the same is more or less true.


About The Author:

Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection, and a practitioner in residence in the University of New Haven's Forensic Sciences Program, where he educates cybercops on digital forensics. He can be reached by sending email to fred at all.net or visiting http://all.net/