Managing Network Security

Relativistic Risk Assessment

by Fred Cohen

# Series Introduction

Over the last several years, computing has changed to an almost purely networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.

# Sounds Fancy!

No - it's not rocket science or quantum mechanics - but it is one of the best uses I have found for probabilistic risk assessment in the information protection field.

In previous articles I have discussed the limitations of probabilistic risk assessment in information protection, but within a few days of sending the last article to the publisher, I found myself making a strong case to management for a particular strategy, based solely on a probabilistic risk analysis comparing two alternatives. I'll get into the details a bit later, but for now, I want to define my terms a bit more.

In normal probabilistic risk analysis, we generate expected loss figures based on numbers of some, typically questionable, origin - and use numerical techniques to find the most cost effective methods for reducing expected loss. In relativistic risk analysis - a term I invented for this article - we ignore absolute numbers and make the assumption that all other things are equal, alter one or more select parameters, and see how the results change.

Some would call this a sensitivity analysis - and I wouldn't argue with the term - but I call it relativistic because it is particularly useful in assessing the relative merit of a small number of alternatives. An example will be most instructive.

# A Case In Point

Suppose I want to choose between having ten people who are superusers for all 1,000 of our computers and having five pairs of people, each pair with superuser privileges on its own 200 computers. How do we compare these two alternatives?

Suppose we act as if we were doing a probabilistic risk assessment, but instead of using real numbers, we will make what we think are reasonable guesses - with one caveat - we will make the same guesses for each case and see how they compare.

| Parameter | Guess |
| --- | --- |
| Total number of systems | 1,000 |
| Total number of superusers | 10 |
| Probability that a superuser is a bad guy | 0.01 |
| Value of the information in each system | \$10,000 |
| Added cost per system of split superusers | \$100 |

That will do for now - now we multiply.

In one situation, we have ten people who can cause \$10,000 of harm to each of 1,000 computers with probability 0.01. To combine the probabilities, we raise the likelihood that a given superuser is good (0.99) to the 10th power and subtract from one, giving the likelihood of at least one bad superuser (1 - 0.904 = 0.096). 0.096 * 1,000 * \$10,000 = the expected loss = \$960,000.

In the second case, we have five pairs of people, each pair able to cause \$10,000 of damage to each of 200 computers with probability 0.01. Again we combine the probabilities (1 - 0.980 = 0.02). 0.02 * 200 * \$10,000 = \$40,000 of expected loss for each subsystem of 200 computers, and the total expected loss comes to 5 * \$40,000 = \$200,000.

Since the second scheme costs an extra \$100 per system, we add \$100,000 in added cost to its expected loss, for a total of \$300,000 - compared to \$960,000 for the first scheme, the second case is about three times as good.
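The arithmetic in the two cases can be sketched in a few lines of Python. The function and constant names below are mine, not from the article; the numbers come from the table of guesses, and the unrounded results differ slightly from the rounded figures in the text.

```python
# A minimal sketch of the two-scheme comparison, using the guessed parameters
# from the table above. All names here are illustrative.

def expected_loss(n_superusers, n_systems, p_bad, value_per_system):
    """Expected loss when n_superusers share superuser access to n_systems."""
    # Probability that at least one of the superusers is a bad guy.
    p_any_bad = 1 - (1 - p_bad) ** n_superusers
    return p_any_bad * n_systems * value_per_system

P_BAD = 0.01       # guessed probability that a superuser is a bad guy
VALUE = 10_000     # guessed value of the information in each system
SPLIT_COST = 100   # guessed added cost per system of split superusers

# Case 1: ten superusers, each with access to all 1,000 systems.
case1 = expected_loss(10, 1_000, P_BAD, VALUE)

# Case 2: five pairs, each pair with access to its own 200 systems,
# plus the added per-system cost of running the split scheme.
case2 = 5 * expected_loss(2, 200, P_BAD, VALUE) + SPLIT_COST * 1_000

print(f"case 1: ${case1:,.0f}")  # case 1: $956,179 (the text rounds to $960,000)
print(f"case 2: ${case2:,.0f}")  # case 2: $299,000 (the text rounds to $300,000)
```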

But suppose we were way off in our figures - what's the effect? If a minor change in assumptions makes a major difference in the outcome, our analysis is very sensitive to our assumptions. Otherwise, we could be pretty far off and still select the better of the two alternatives. Here are some sample results from varying one parameter at a time:

• For the cost per system of having split superusers to change the relative results, we would have to be low in our cost estimates by a factor of about 7.5 (the break-even point is roughly \$760 per system against our \$100 guess).
• For the expected loss per system to change the relative results, we would have to be high by a factor of about 8 (since the reduction in expected loss would have to be enough to offset the cost of split administration).
• For the probability of a bad superuser to change the relative results, the probability estimate would have to be high by about a factor of 10. This drops the expected loss for the former case to about \$100,000 and the expected loss for the latter case to about \$20,000.
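One way to produce results like those above is to sweep a parameter until the winner flips. The sketch below (function and variable names are mine) does this for the split-administration cost guess:

```python
# A rough break-even sweep over the split-administration cost guess.
# scheme_totals and its argument names are illustrative.

def scheme_totals(p_bad=0.01, value=10_000, split_cost=100):
    """Total expected cost of the single-pool and split-pool schemes."""
    single = (1 - (1 - p_bad) ** 10) * 1_000 * value
    split = 5 * (1 - (1 - p_bad) ** 2) * 200 * value + split_cost * 1_000
    return single, split

# Scale the $100/system cost guess upward until splitting stops winning.
break_even_factor = None
for tenths in range(10, 200):
    factor = tenths / 10
    single, split = scheme_totals(split_cost=100 * factor)
    if split >= single:
        break_even_factor = factor
        break

print(f"the cost guess could be low by about {break_even_factor}x "
      f"before the split scheme loses")  # about 7.6x
```

The same loop, pointed at the value-per-system or bad-superuser-probability guesses, reproduces the other two bullets.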

# Things Are Never That Simple

Of course, we have ignored a lot of things. For example, a bad superuser might not cause the worst case harm, different systems probably have different values associated with them, and so on. But hopefully, you get the idea.

The idea is NOT to get you to rush out and change your network operations to implement separation of superuser duties across your organization - although that might in fact be a good idea. The idea is to consider relative risks rather than absolute risks and to do a simple sensitivity analysis to see how bad your guesses could be before you'd get the wrong answer.

# Another Example - The Unquantified Risk

Another arena where relativistic risk analysis produces interesting results is when we have an unquantified risk, but a high potential worst case loss. I recently encountered one of these with one of my larger clients - it went something like this:

• We knew that the worst case loss was on the order of tens of millions of dollars per day, but we had no way to assess probabilities of different events or to quantify the potential measures that might be taken to reduce the risks.
• The proposed business function which introduced the unknown risk would produce a savings of several hundred thousand dollars per year.
• We knew of a secure implementation that would cost on the order of ten thousand dollars to implement. This implementation would permit the cost saving business function to operate without impact on other aspects of the corporate information systems.

We did a very simple relativistic risk analysis. It showed that for ten thousand dollars, we could reduce an unquantified risk to virtually zero. In order for this investment to NOT be worthwhile, the likelihood of harm would have to be less than one chance in one thousand per year. Since the cost of a secure implementation was such a small percentage of the annual savings from the business function it enabled, we advised that this protective system was a good investment.
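The break-even figure is just the cost of the safeguard divided by the worst-case loss. A one-line sketch, using the article's order-of-magnitude estimates and taking \$10 million as a floor for "tens of millions":

```python
# Break-even annual probability of harm for the secure implementation.
# The dollar figures are the article's order-of-magnitude estimates.

secure_cost = 10_000          # cost of the secure implementation
worst_case_loss = 10_000_000  # floor on "tens of millions of dollars per day"

# The safeguard fails to pay off only if the yearly chance of harm is below:
break_even_p = secure_cost / worst_case_loss
print(break_even_p)  # 0.001 - one chance in one thousand per year
```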

We augmented this case by pointing out that, for a firm of this size, any investment that could not afford a ten thousand dollar information protection budget was probably not worth making in the first place. We also judged that the return on investment for these protective measures was almost certainly far better than that of most of the investments the company made. Our fairly conservative estimates came to something like a hundred dollars of return for each dollar invested.

# Conclusions

Relativistic risk assessment works and it produces sensible results that can be easily explained, often without a high degree of dependency on uncertain numbers. It is one of the most convincing ways to show the benefits of one protective scheme over another and is particularly useful in cases where risks cannot be properly quantified with any degree of certainty.