Over the last several years, computing has changed to an almost purely networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
No - it's not rocket science or quantum mechanics - but it is one of the best uses I have found for probabilistic risk assessment in the information protection field.
In previous articles I have discussed the limitations of probabilistic risk assessment in information protection, but within a few days of sending the last article to the publisher, I found myself making a strong case to management for a particular strategy, based solely on a probabilistic risk analysis comparing two alternatives. I'll get into the details a bit later, but for now, I want to define my terms a bit more.
In normal probabilistic risk analysis, we generate expected loss figures based on numbers of some, typically questionable, origin - and use numerical techniques to find the most cost effective methods for reducing expected loss. In relativistic risk analysis - a term I invented for this article - we ignore absolute numbers and make the assumption that all other things are equal, alter one or more select parameters, and see how the results change.
Some would call this a sensitivity analysis - and I wouldn't argue with the term - but I call it relativistic because it is particularly useful in assessing the relative merit of a small number of alternatives. An example will be most instructive.
Suppose I want to choose between having ten people who are superusers for all 1,000 of our computers and having five groups of two people, each group with superuser privileges over a different 200 of those computers. How do we compare these two alternatives?
Suppose we act as if we were doing a probabilistic risk assessment, but instead of using real numbers, we will make what we think are reasonable guesses - with one caveat - we will make the same guesses for each case and see how they compare.
	Total number of systems				    1,000
	Total number of superusers			       10
	Probability that a superuser is a bad guy	     0.01
	Value of the information in each system		  $10,000
	Added cost per system of split superusers	     $100

That will do for now - now we multiply.
In the first case, we have ten people who can each cause $10,000 of harm to each of 1,000 computers with probability 0.01. To combine the probabilities, we take the likelihood that a given superuser is good (0.99) to the 10th power and subtract from one to get the likelihood that at least one superuser is bad (1-0.904 = 0.096). 0.096 * 1,000 * $10,000 = the expected loss = $960,000.
In the second case, we have 5 groups of two people, each group able to cause $10,000 of damage to each of 200 computers with probability 0.01. Again we combine the probabilities (1-0.980 = 0.02). 0.02 * 200 * $10,000 = the expected loss for each subsystem of 200 computers = $40,000. The total expected loss then comes to 5 * $40,000 = $200,000.
Since the cost per system of the second scheme increases by $100, we add the $100,000 in added cost ($100 * 1,000 systems) to the expected loss figure of the second scheme, for a total of $300,000 - which makes the second case roughly three times as good as the first case.
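The arithmetic above can be sketched in a few lines of code. All of the figures here are the article's illustrative guesses, not measured values:

```python
def expected_loss(n_superusers, n_systems, p_bad=0.01, value_per_system=10_000):
    """Expected loss when any one bad superuser can harm every system in the group."""
    # Probability that at least one of the group's superusers is a bad guy.
    p_any_bad = 1 - (1 - p_bad) ** n_superusers
    return p_any_bad * n_systems * value_per_system

# Case 1: ten superusers over all 1,000 systems.
loss_single = expected_loss(10, 1000)

# Case 2: five groups of two superusers, each over 200 systems,
# plus the $100 per system in added administrative cost.
loss_split = 5 * expected_loss(2, 200) + 100 * 1000

print(round(loss_single))  # 956179 - the article rounds this to $960,000
print(round(loss_split))   # 299000 - the article rounds this to $300,000
```

The unrounded figures differ slightly from the article's because the article rounds the intermediate probabilities (0.904 and 0.980) before multiplying, but the roughly three-to-one ratio is the same either way.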
But suppose we were way off in our figures. What's the effect? If a minor change in assumptions makes a major difference in the outcome, our analysis is very sensitive to our assumptions. Otherwise, we could be pretty far off and still select the better of the two alternatives. In this example, varying one parameter at a time shows the result is quite robust - each guess can be off by a factor of seven or eight before the ranking of the two schemes flips.
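One such sweep can be computed directly from the model - here, varying the guessed probability that a superuser is a bad guy while holding everything else fixed. The point is not the absolute numbers but whether the ranking of the two schemes changes:

```python
def expected_loss(n_superusers, n_systems, p_bad, value_per_system=10_000):
    # Probability that at least one superuser in the group is bad,
    # times the total value the group can harm.
    return (1 - (1 - p_bad) ** n_superusers) * n_systems * value_per_system

results = {}
for p_bad in (0.001, 0.01, 0.05):
    single = expected_loss(10, 1000, p_bad)
    # Split scheme: five groups of two over 200 systems each,
    # plus the $100-per-system added cost.
    split = 5 * expected_loss(2, 200, p_bad) + 100 * 1000
    results[p_bad] = "split" if split < single else "single"
    print(f"p={p_bad}: single=${single:,.0f}, split=${split:,.0f} -> {results[p_bad]}")
```

The split scheme wins at the guessed probability of 0.01 and at higher probabilities, and only loses when the probability of a bad superuser drops to around one in a thousand - at which point the $100,000 in added cost outweighs the reduced exposure.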
Of course, we have ignored a lot of things. For example, a bad superuser might not cause the worst case harm, different systems probably have different values associated with them, and so on. But hopefully, you get the idea.
The idea is NOT to get you to rush out and change your network operations to implement separation of superuser duties across your organization - although that might in fact be a good idea. The idea is to consider relative risks rather than absolute risks and to do a simple sensitivity analysis to see how bad your guesses could be before you'd get the wrong answer.
Another arena where relativistic risk analysis produces interesting results is when we have an unquantified risk but a high potential worst-case loss. I recently encountered one of these with one of my larger clients - it went something like this:
We did a very simple relativistic risk analysis. It showed that for ten thousand dollars, we could reduce an unquantified risk to virtually zero. In order for this investment to NOT be worthwhile, the likelihood of harm would have to be less than one chance in one thousand per year. Since the cost of a secure implementation was such a small percentage of the annual savings from the business function it enabled, we advised that this protective system was a good investment.
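The break-even arithmetic implied above is simple enough to write down. The $10 million worst-case figure is a hypothetical number chosen here to match the one-in-one-thousand threshold in the text, not a detail from the actual engagement:

```python
safeguard_cost = 10_000        # one-time cost of the secure implementation
worst_case_loss = 10_000_000   # assumed worst-case exposure (hypothetical figure)

# The safeguard pays for itself whenever the annual probability of harm
# exceeds cost / loss.
break_even_probability = safeguard_cost / worst_case_loss
print(break_even_probability)  # 0.001, i.e. one chance in one thousand per year
```

If you believe the chance of harm in any given year is higher than that break-even probability, the safeguard is a good bet even though the risk itself was never quantified.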
We augmented this case by pointing out that, for a firm of this size, any business function that could not afford a ten-thousand-dollar information protection budget was probably not worth the corporation undertaking in the first place. We also judged that the return on investment for these protective measures was almost certainly far better than most of the investments the company made. Our fairly conservative estimates came to something like a hundred dollars of return for each dollar invested.
Relativistic risk assessment works, and it produces sensible results that can be easily explained, often without a high degree of dependency on uncertain numbers. It is one of the most convincing ways to show the benefits of one protective scheme over another and is particularly useful in cases where risks cannot be properly quantified with any degree of certainty.
Fred Cohen is a Senior Member of Technical Staff at Sandia National Laboratories and a Senior Partner of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net.