Managing Network Security

What's Happening Out There

by Fred Cohen



Series Introduction

Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.


The Pace is Picking Up?

The pace of Internet-based attacks is picking up these days. A few years ago, the all.net site was getting an average of one illicit attempt at entry per day, and this was considered high by many of the people I talked to. These days, other folks who place machines on the Internet and have the ability to sense attack attempts find an attempt per day as a matter of course. And individual sites, even though they are typically single IP addresses, not advertised, and without any really valuable content for the serious criminal, are attacked at a blistering pace relative to how prepared their owners usually are for the attacks.

Corporate sites can get a very substantial number of attack attempts per day. For example, one corporation I consult for has 'ownership' of a class B network (64,000+ IP addresses). Of the 10,000 or so actual IP addresses they use, only one is reachable from the Internet. Most of the IP addresses in their range don't even have routing to them, so they are unreachable from the Internet. Their ISP detects attack attempts for them and reports on a weekly basis. Last week, the ISP detected more than 200 attempted attacks of known types. That's more than one attempted break-in per hour for the single IP address they have that is reachable.

With this increase in the pace of Internet attacks, you would expect that we would be getting better defenses in place, but this is not generally what I have found. In fact, the defenses stay about the same - only the reporting improves.


What have you done for me lately?

Several of the major firms have discovered that the way security gets funded and the way security personnel keep their corporate jobs is by being able to demonstrate improvement. Now, of course, it is not important that there is real improvement - only that improvement can be demonstrated to upper management. And this is what guides the latest trend toward 'improvements' in security products. Rather than do a really good job of defending organizations, it is far better from a marketing standpoint to provide better proof of what you are doing for them - regardless of how little or how much that actually is.

The ISP that reports lists of the different probes it fends off is only one of the milder examples of this. Another example, one that I think is horrendous, is the military organization that tunes its intrusion detection system's sensitivity to generate enough follow-on investigation to support the number of billets the commander wants to maintain. It's a simple matter. If you increase the sensitivity of your detection, you detect more, which means you need to investigate more. Other organizations do the opposite. If you don't have enough personnel to keep up with the number of detections, turn down the sensitivity to the point where you can handle the load.
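To make that tradeoff concrete, here is a minimal sketch, in Python, of the 'tune the sensitivity to the workload' logic; the alert scores and the staffing number are invented for the illustration.

    # Pick a detection threshold so that the number of alerts needing
    # follow-up matches what the staff can investigate. Every number here
    # is invented; the point is that staffing, not risk, sets the threshold.
    alert_scores = [0.2, 0.9, 0.4, 0.7, 0.95, 0.3, 0.8, 0.6, 0.85, 0.1]
    investigations_per_week = 3  # staff capacity, not a security requirement

    ranked = sorted(alert_scores, reverse=True)
    threshold = ranked[investigations_per_week - 1]

    investigated = [s for s in alert_scores if s >= threshold]
    ignored = [s for s in alert_scores if s < threshold]
    print(f"threshold {threshold}: {len(investigated)} investigated, "
          f"{len(ignored)} silently dropped")

Nothing in the sketch ever asks how dangerous the dropped alerts were; the threshold simply follows the headcount.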

Back on the reporting theme, the major selling point of the top brands of intrusion detection and network security testing systems today is not the number or kinds of things they detect. It is, in fact, the reporting mechanism they provide to let you generate monthly reports on what they have detected this month compared to last month. The idea is that the defender gets a monthly or quarterly report listing the vulnerabilities detected. By comparing one period to the next, you can demonstrate that you are covering previous holes. Of course, if we did this well, we would eventually have no holes left, so the products add a few new vulnerabilities to their scans each period; you can show improvement on the old holes while there are always enough new holes to justify continuing to use the product. The defender who uses these products can show constant improvement forever and still have more to do all along the way. You get job security along with proof that you are doing a good job.
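The treadmill is easy to see in miniature. Here is a sketch, with made-up vulnerability identifiers, of the period-over-period comparison such products generate.

    # Made-up findings for two scan periods. Two old holes get closed while
    # the vendor ships two new checks, so the report shows steady progress
    # and yet the count of open findings never reaches zero.
    last_month = {"vuln-001", "vuln-002", "vuln-003", "vuln-004"}
    this_month = {"vuln-003", "vuln-004", "vuln-101", "vuln-102"}

    fixed = last_month - this_month
    new = this_month - last_month
    print(f"fixed since last period: {sorted(fixed)}")
    print(f"newly detected:          {sorted(new)}")
    print(f"open findings:           {len(this_month)} (unchanged)")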


Statistics and their value

What you don't get is real improvement in information protection. All right - that's a bit of an overstatement - but only a bit. I think of it like building a mountain to the moon. The goal, in this case, is effective protection, and the technique is fending off one new known attack at a time. The problem is that there is a potentially infinite number of such attacks. So every time we fend off another one, we have made an absolute improvement, but in the relative sense of how well we cover the set of possible attacks, we have gotten nowhere. It's just like shoveling another shovelful of dirt onto the top of the mountain. We are always getting closer to the moon, but we will never reach it this way.

The reason this reporting process works is not that the people doing the reporting are lying. They are not. It is because they are successfully managing the perceptions of their management. The reports are truthful, and there are improvements being made. But if management understood the underlying issues, they would understand that these improvements are far outpaced by the new vulnerabilities and the exploitation of longstanding vulnerabilities. That is part of the reason that, while we are getting lots of improvement in security performance, we are also getting increasing numbers of incidents and losses per incident. In summary, we are doing much better, but the results are much worse.

I recently had the chance to generate a bunch of statistics on a network and was able to definitively prove that the things we knew were happening were not happening - sort of. I'll get to the details in a sentence or two, but first I want to make an important point that you already know well. My point is that statistics are based on assumptions. If the assumptions are not made clear with the statistics, or if they are not really the right assumptions for the purpose, the results will be impressive but misleading.


Usage patterns and network mapping

In this recent statistical exercise, there was an assumption provided by people who were supposed to know. The details are not important, but the assumption in this case was basically that the network worked in a particular way. We gathered a lot of statistics on usage patterns and started to draw conclusions. It didn't take very long to conclude that our statistical results didn't make sense. We spent a lot of time and effort testing and verifying that our methods of gathering statistics were right and that we would find what we were looking for if it existed, but we didn't find it. And yet we had clear and seemingly unambiguous evidence that the phenomenon was real and would be reflected in the sorts of statistics we were taking.

When the conclusions don't make sense, you have to look at your assumptions. So we looked and we looked, and before long we figured it out. What we figured out was that the things we were trying to gather statistics on were not passing through our sensors. In other words, the phenomenon was there, but it was not observable from where we were looking.

Now a lot of people seem to want to know how to map out their networks these days, and I certainly have had more than one request in the last week for this sort of information. What could this possibly have to do with statistics and how does this relate to the increasing attack detections and the reporting process used by the security vendors to show constant improvement? I will get there, but first, I want to talk about the amount of data that we throw away, waste, or ignore in today's information world.


I can smell you

When I talk to people about computer forensics, I tell them that there is so much evidence available in today's networks that almost anything malicious that is ever done leaves a trail of some sort, regardless of how hard the bad guys may try to cover their tracks. One of the things I try to point out is that the information we seek is there if we just know how and where to find it.

The amazing capability of modern information networks leads to a lot of inefficiency. Instead of using what's available, we tend to create anything we want from scratch. Yet I am often able to get all sorts of information from available sources with only nominal amounts of analysis of existing data.

Actually, the proper word is 'sniff', not 'smell'. My statistics were gathered by sniffing packets of various sorts in order to compare my measurements to reported statistics and to determine whether usage patterns made sense. As in most current network sensor situations, building custom sensors is almost never feasible for a short-term task, so you use what's available. TCPdump, for example, is a tool that simply copies all of the bits that show up on a network interface into a file for analysis and presentation. That's what I used to do my statistical survey, and what I found was very different from what I was looking for.
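To give a rough idea of what that survey looked like, here is a sketch that tallies packets by source address from tcpdump's text output. The capture file name is a placeholder, and the parsing assumes the common 'timestamp IP src.port > dst.port: ...' line format, which varies with capture options.

    # Count packets per source address from 'tcpdump -n -r capture.pcap'.
    # 'capture.pcap' is a placeholder; the parsing assumes the usual
    # 'timestamp IP src.port > dst.port: ...' output for TCP/UDP traffic.
    from collections import Counter
    import subprocess

    out = subprocess.run(["tcpdump", "-n", "-r", "capture.pcap"],
                         capture_output=True, text=True).stdout

    sources = Counter()
    for line in out.splitlines():
        fields = line.split()
        # 'a.b.c.d.port' has four dots; skip lines without a port field
        if len(fields) > 2 and fields[1] == "IP" and fields[2].count(".") == 4:
            src = fields[2].rsplit(".", 1)[0]  # strip the trailing port
            sources[src] += 1

    for addr, count in sources.most_common(10):
        print(f"{addr:20s} {count}")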

While I thought that various percentages of traffic should come from various parts of this customer's network, I was surprised to find nothing of the sort. Instead, I found traffic coming from places I didn't even know existed (and that were not on their network diagrams) and not coming from the places I thought it should, in the way I thought it should. I was so confused for a little while that I generated my own data from all the right places, sensed it, and ran the same analysis on my test data that I had run on the real data. The analysis worked on the test data but not on the real data, and the reason was that the real data was not what it should have been.

As I was trying to track down the inconsistency in my results, I started to question the assumption given to me that all of the traffic in this network passed through the firewall. Of course, this is almost always in question in today's environment, and sure enough, it was not true in the network I was examining. I did a characterization of all of the traffic flow and found all sorts of oddities, ranging from traffic from a completely different corporation to traffic from addresses that were not assigned to any company and yet were not in the private address ranges normally reserved for internal use only. When I correlated this to a network diagram, I found that there were whole networks out there that nobody on the networking staff was even aware of. And the deeper I looked, the more I saw that all of this information was available to them in the reports they were already generating but never had the chance to analyze. In the end, we had a whole new map of their network, all derived by accident because my statistics didn't match.
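A sketch of that kind of characterization is below; Python's ipaddress module does the bucketing, and the 'documented' range stands in for whatever the network diagram claims.

    # Bucket observed source addresses into 'on the diagram', 'private but
    # undocumented', and 'public address we cannot account for'. The
    # documented range and the sample addresses are placeholders.
    import ipaddress

    documented = [ipaddress.ip_network("10.1.0.0/16")]  # what the diagram claims
    private = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def classify(addr):
        ip = ipaddress.ip_address(addr)
        if any(ip in net for net in documented):
            return "documented internal"
        if any(ip in net for net in private):
            return "private, but not on the diagram"
        return "public address not accounted for"

    for src in ("10.1.2.3", "192.168.77.5", "203.0.113.9"):  # sample data
        print(f"{src:15s} {classify(src)}")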

Then I started to notice some other things - for example, that the security statistics being gathered apparently missed much of the network they were intended to reflect. In fact, the proxy server that was supposed to require authentication for access was apparently working for portions of the network that it didn't seem to know existed. We noticed that portions of the network never communicated with the Internet through the firewall but clearly communicated with the Internet. Soon we found that there were other connections that were not described when the staff told us that 'all connections to the Internet go through the firewall'.
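One way to catch that kind of discrepancy is to cross-check the internal hosts seen reaching the Internet against the hosts the firewall actually logged. A sketch, with both host sets invented for the example:

    # Any internal host seen at the border that never appears in the
    # firewall's logs got out some other way. Both sets are invented here;
    # in practice they come from a border capture and the firewall logs.
    seen_at_border = {"10.1.2.3", "10.1.2.9", "10.9.8.7"}  # from the sniffer
    seen_by_firewall = {"10.1.2.3", "10.1.2.9"}            # from the logs

    for host in sorted(seen_at_border - seen_by_firewall):
        print(f"{host} reached the Internet without crossing the firewall")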

Pretty soon, the house of cards began to collapse. As assumption after assumption fell, it became apparent that there were all sorts of statistics and reports about security and measurable improvements over time, but that all of these statistics and improvements had nothing whatsoever to do with the real situation within the organization. The ISP was detecting and deflecting hundreds of attempts at exploiting nonexistent vulnerabilities. On paper it looked great, but in reality, it was irrelevant.

And the worst part of it all was that they had the information they needed to figure this out all along and just never had the time, effort, or interest to look at it. Like most companies, they preferred to take what they saw as a statistical win without questioning it.


Conclusions

As the classic line goes, there are lies, damned lies, and statistics. I, for one, am getting very tired of all the advertising hype and statistical static I see relating to network security. How about some meaningful statistics - like what percentage of overall Internet traffic is going through the firewall (if it's not 100 percent, it's not really a firewall), what percentage of insider attacks are detected by the firewall, and how many hours a day some workers spend cruising the Web for personal financial information and sports results?

We gather so many statistics and collate so much information, but in information protection, we seem terribly poor at figuring out how to use the information for improving protection and operational efficiency. When will I see a report that tells me what the skill level was of the attacks hitting my firewall instead of what country the IP address came from? How about a statistic that tells me what portion of my Internet usage is actually for business purposes? How about a statistic that indicates what portion of usage comes from what part of the company? It should be easy to do, but for some reason, we don't seem to be able to get it done.
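The last of those, at least, is nearly trivial to produce from an ordinary access log. Here is a sketch; the subnet-to-department map and the log line format (client address first, byte count last) are assumptions for the illustration.

    # Share of Internet usage by part of the company, from an access log.
    # The department map and the log format are assumed for the example.
    import ipaddress
    from collections import Counter

    departments = {
        ipaddress.ip_network("10.1.0.0/16"): "engineering",
        ipaddress.ip_network("10.2.0.0/16"): "sales",
    }

    log_lines = [                       # stand-in for reading the real log
        "10.1.4.4 GET http://example.com/ 20480",
        "10.2.9.9 GET http://example.com/ 4096",
        "10.1.7.7 GET http://example.com/ 8192",
    ]

    usage = Counter()
    for line in log_lines:
        addr, *_rest, nbytes = line.split()
        ip = ipaddress.ip_address(addr)
        dept = next((d for net, d in departments.items() if ip in net), "unknown")
        usage[dept] += int(nbytes)

    total = sum(usage.values())
    for dept, nbytes in usage.most_common():
        print(f"{dept:12s} {100 * nbytes / total:5.1f}% of bytes")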

Enough complaining, then. If nobody else will do it, I will. In the last week, I have put together a series of statistical analysis capabilities for one of my clients that address many of these issues for their network, and I think it's about time that all of you security managers out there started to demand the same thing of your vendors. Fight the urge for steady statistical improvement relative to a meaningless statistic and start to use your access logs for something of value to your company and your program. Struggle to get a true understanding of the data at hand and to use it to the full extent of its capability. After all, that's the real promise of the computer age - that by using the information available via automation, we can use the automation to help improve itself. If we can do it for manufacturing and advertising, we can certainly do it for security.


About The Author:

Fred Cohen is a Principal Member of Technical Staff at Sandia National Laboratories and a Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net or by visiting all.net.