Managing Network Security

Prevent, Detect, and Respond

by Fred Cohen

Series Introduction

Over the last several years, computing has changed to an almost purely networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.

The Three Dimensions of Protection

A common way of viewing protection involves three activities - prevention, detection, and response. Thinking about these three activities is one way of focusing attention on the things we have to do to get effective protection - and, as a free bonus, it lets us consider some of the issues behind proactive and reactive defense and security through obscurity. I'll start at the end.

For a long time, many of the world's foremost computer security pundits (whoever they may be from week to week) have told the world two things - that proactive defense is far better than reactive defense - and that security through obscurity is a bad thing. I admit it - I have been one of the people doing this. But as always, nothing is as simple as good and bad in this field.

We'll start with security through obscurity. I once had an on-line debate on this subject and was intrigued to find out that what I meant by obscurity was different from what some others meant by it. I figured obscurity meant that we used things that we didn't tell people about because, if they found out our little secret, they might be able to get past our security scheme. I cited as an example of something we keep secret - our passwords. We obscure these using one-way encryption in password files and keep them secret because, once obtained, they often let the bad guys get past our security measures.
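To make the password example concrete, here is a minimal sketch of the "one-way encryption in password files" idea - the system stores only a salted one-way hash, so learning the file does not directly reveal the secret. The function names and parameter choices here are illustrative, not taken from any particular system:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store only a salted, one-way hash - never the password itself.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    # Recompute the hash and compare in constant time; the stored
    # digest alone does not reveal the password.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Note the property the column relies on: if this particular secret leaks, the fix is cheap - the user simply changes the password and a new salt and digest are stored.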

It was pointed out by the other party that - in their view - security through obscurity only applied to things that could not be easily changed. Unlike passwords, if we rely on obscuring the fundamental flaws behind ActiveX as our means of securing our browsers when cruising the Web, we cannot simply change a constant and be safe again. Rather, we have to restructure our whole way of doing business because ActiveX is so deeply flawed that no amount of minor modification can save us - not even temporarily.

Which brings us to the issue of proactive vs. reactive defenses. With a password, we can - at least potentially - react to a breakin by changing passwords. Of course we then have the substantial task of verifying that nothing else has been changed and/or changing it back and the ensuing battle with the attacker, but fundamentally, we have the hope of responding in a timely fashion so as to limit damage. With ActiveX, all we can really do is permanently disable the function, because there is no quick change that can defend against malicious code introduced whenever we use the system as it was intended to be used.

I don't really mean to pick on ActiveX - the ability to load executable programs from untrusted sources over the Internet into memory and run them is particularly dangerous, but there are plenty of other risky things we do in the way we use the Internet and our own intranets. The reason I select this as my example is that it's hard to reactively protect against the arbitrary sorts of things that can be done by ActiveX. The not-very-obscure problem with ActiveX is that it assumes that programs taken from over the Internet are trusted - a poor assumption at best. The only real solution is to not use ActiveX as a basis for attaining the desired functionality. Since a substantial investment is typically made in such a technology before such a major flaw is revealed in an attack, the cost of this reaction is substantial - both in sheer dollars spent - and in business impact.

Which brings me back to the basic backdrop for this month's article - the proper mix of prevention, detection, and response.


Prevention

Prevention is sometimes split into deterrence and safeguards. Deterrence includes things like public prosecutions of violators, fences that look hard to penetrate, and other things that tend to keep people from even trying. Safeguards include things like network firewalls, good computer security practices, and adequate training and awareness.

Deterrence tends to force attackers to take other paths. If your company is known for prosecuting employees caught taking company secrets, employees will be less likely to take an active part in this sort of activity. They may help an outsider do the job, but only if their own risk is minimized and the returns are thought to be higher.

Having said that, I am moved to recount an example of an employee at one company who was caught doing something against the rules and warned about getting fired the next time. The employee did it again using a slightly different method and was fired. Shortly thereafter, another employee got caught doing the same thing - was warned - and was told of the other example. The second employee did it again using still another method to try to conceal it - and was fired.

If there is a point here - it is that deterrence doesn't always work. Some people will test the defense even if it looks ominous. Some will test it because it looks ominous. Effective deterrence has to be backed up with a real threat that is occasionally demonstrated.

The other type of prevention - safeguards - consists of means and methods that are effective in preventing an attempted attack from being successful. In other words, they are a form of force against force. The attacker tries to force entry and the defense tries to meet that force with an equal or greater force to prevent that entry.

The concept of leverage applies here. For example, computational leverage in the form of cryptography can make password guessing nearly impossible - while a distributed coordinated attack exploits the computational leverage of intermediary systems to push that much harder against the ramparts in an effort to find and exploit a weak point.
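A rough back-of-envelope sketch shows the leverage on both sides. Every figure below is an assumption chosen for illustration - attacker guessing rates, machine counts, and hash-stretching factors vary enormously in practice:

```python
# Back-of-envelope sketch of computational leverage (all figures assumed).
alphabet = 62                    # upper- and lower-case letters plus digits
length = 10                      # a random 10-character password
keyspace = alphabet ** length    # roughly 8.4e17 possibilities

guesses_per_second = 1e9         # assumed rate for one attacking machine
machines = 10_000                # a distributed coordinated attack

plain_seconds = keyspace / (guesses_per_second * machines)
# About 84,000 seconds: the distributed attack wins in roughly a day.

stretch = 100_000                # iterated hashing multiplies the cost per guess
stretched_years = plain_seconds * stretch / (3600 * 24 * 365)
# Now centuries: the defender pays the stretch once per login,
# while the attacker pays it for every single guess.
```

The asymmetry is the point: the same multiplier costs the legitimate user a fraction of a second and costs the exhaustive attacker the whole factor.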


Detection

Unless we detect attacks, we cannot hope to respond to them. Historically, network-based detection has been poor. For example, according to several published sources, fewer than 1 in 100 Internet attacks are detected by those without a strong detection capability. Similarly, computer viruses are most commonly detected for the first time by people noticing system misbehavior. There are several reasons that detection has been poor.

Perhaps the most daunting challenge in detecting attack is the problem of false positives (detecting intrusions when they do not really exist) and false negatives (not detecting intrusions that do occur). It turns out that, in general, we cannot eliminate all false positives and all false negatives in any practical system that allows sharing and programming. As a result, our detection systems will likely always trade the losses associated with false negatives against the costs associated with false positives.
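The tradeoff can be seen in even the simplest threshold-based detector. The scores and labels below are made up for illustration; the shape of the result is what matters:

```python
# Toy anomaly-score detector: (score, was_really_an_attack) pairs (assumed data).
events = [(0.2, False), (0.9, True), (0.6, False),
          (0.7, True), (0.4, False), (0.3, True)]

def rates(threshold):
    # Alert whenever the anomaly score reaches the threshold.
    false_pos = sum(1 for s, attack in events if s >= threshold and not attack)
    false_neg = sum(1 for s, attack in events if s < threshold and attack)
    return false_pos, false_neg

# Moving the threshold only trades one kind of error for the other:
# rates(0.8)  -> (0, 2)  strict: no false alarms, two missed attacks
# rates(0.25) -> (2, 0)  lax: every attack caught, two false alarms
```

No threshold setting makes both error counts zero here, which is the point of the paragraph above: tuning a detector moves losses between the two columns rather than eliminating them.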


Response

Because we can't be certain that all detected intrusions are actually attacks, we must be careful about our responses. But because attacks can be so highly automated that tens of thousands of attacks per hour can occur against one system, we cannot spend a lot of time on each event. Thus we face the dilemma of automating a response that is effective against real attacks yet does not create side problems for false positives.

Since we have false negatives, we may only detect a small portion of a large attack. This means that if we under-respond to the small part we detect of a larger attack underway, the attack may succeed.

Perhaps the most complex issue in response is reflexive control. In essence, an attacker can treat our response systems as if they were reflexes, and take advantage of our reflexes to cause harm. For example, if we reflexively shut off network access to a particular network address whenever more than twenty failed logins occur from that address, an attacker can disassemble our network by systematically going through each of our addresses and forging twenty failed logins from each one. The attacker who might otherwise not be able to deny services has done so by using our reflexes against us.
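The reflex in that example is easy to state in a few lines, and so is the attack on it. This is a sketch of the naive mechanism only - the addresses and threshold are hypothetical, and real lockout systems add many complications:

```python
from collections import defaultdict

# Naive reflex from the example: block any address after 20 failed logins.
LOCKOUT = 20
failures = defaultdict(int)
blocked = set()

def record_failed_login(source_addr):
    failures[source_addr] += 1
    if failures[source_addr] >= LOCKOUT:
        blocked.add(source_addr)        # the automated "response"

# An attacker who can forge source addresses turns the reflex against us:
# forge twenty failures "from" every legitimate peer, and the defense
# itself cuts the network apart.
for victim in (f"10.0.0.{i}" for i in range(1, 255)):
    for _ in range(LOCKOUT):
        record_failed_login(victim)
```

After the loop, every forged "victim" address is blocked - the attacker has denied service to the whole address range without ever breaking in, purely by triggering our own reflex.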

In order to deal with reflexive control attacks, we need an adaptive system of responses that has characteristic behaviors that properly trade off different aspects of protection and fails in a safe way.

Systemic Issues

Each of these dimensions of protection is quite complex, but when they interact with each other, the complexity climbs still higher. No technical or mathematical solutions exist for telling us how to mix prevention, detection, and response. At present, we don't even have an economic model for how to analyze the tradeoffs. What we do have are some notions of what works from a strategic and tactical standpoint. Here are some of the notions that seem to work today:

There is a saving grace in today's information environment. Most attackers are opportunistic and take the path of least resistance, and we are living in a target-rich information environment. Most of the information systems in use today are relatively easy to attack, most companies have poor defenses or no defenses, and most attackers and defenders are relatively unskilled.

Today's situation is something like a swordfight in which information technology is the weapon. A skilled defender with even a moderate set of home-grown tools can see the thrusts as the attackers use them and react appropriately. But an unskilled defender, even with the best of tools, cannot hope to protect even a small number of systems from off-the-shelf attacks directed against them.

The astute reader may also discern an interesting point. Prevention is often a special case of detection and automated reaction. For example, access controls within a timesharing system consist of detecting an unauthorized access attempt and refusing to allow the access to take place. The real difference is that preventive defenses are in a very tight detect/response loop - so tight that they prevent unauthorized access.
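The "tight detect/respond loop" view of access control can be sketched in a few lines. The users and rights below are made up; the point is only the structure - detection of the unauthorized attempt and the response to it happen in the same instant, which is what makes it feel like prevention:

```python
# Hypothetical access-control table; the names and rights are illustrative.
acl = {"alice": {"read"}, "bob": {"read", "write"}}

def access(user, right):
    if right not in acl.get(user, set()):   # detect the unauthorized attempt...
        return "denied"                      # ...and respond before it has any effect
    return "granted"
```

Seen this way, a reference monitor is just a detect/respond loop whose latency is zero: the "response" completes before the attempted access can do anything.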


When protection is easily adapted, a skilled defender can often detect attacks and respond to them within a short enough time to actively and cost-effectively defend a network. When these conditions are not met, prevention is necessary for an effective defense.

In most real networks, protection is formed from a mixture of proactive and reactive techniques. It is common for preventive defenses to be put in place only after attacks have been detected and responses made. The defenses that were once reactive soon become proactive as the automation reaches a level where attacks are detected and properly responded to before they have any effect. At this point, the defenders normally see only audit trails listing the attempts and how the defenses reacted.

This way of doing business has more to do with demonstrating the need for defenses to management than with the cost effectiveness of it as a defensive method. To date, no economic model has been demonstrated or widely used to analyze what to prevent, what to detect, how to react, and how to move reactive defenses toward proactive ones.

About The Author

Fred Cohen is a Senior Member of Technical Staff at Sandia National Laboratories and a Senior Partner of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at