Good Morning

Copyright (c) 1996, Management Analytics - All Rights Reserved

I got to the office at 6AM that day and found more than 800 messages in my mailbox. A quick examination separated the wheat from the chaff, and within a few minutes I had switched our defensive systems from semi-automatic mode to full automatic mode. In essence, this means that instead of getting email on each attempted entry, I only get near-real-time activity reports in a window on my screen, while the email is sent to a collection point for subsequent processing if necessary.
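
For readers unfamiliar with the distinction, here is a minimal sketch of the two modes. The mode flag, event format, and collection file are hypothetical; the point is only that full automatic mode trades one email per event for a live summary line plus an archive for later processing.

    import time

    FULL_AUTOMATIC = True                    # flip to False for semi-automatic mode
    COLLECTION_FILE = "attempts.archive"     # hypothetical collection point

    def send_mail_to_admin(event):
        """Semi-automatic mode: one email per attempted entry (stubbed here)."""

    def handle_attempt(event):
        """Route one attempted-entry event according to the current mode."""
        if FULL_AUTOMATIC:
            # Near-real-time activity line in a window on the screen...
            print(time.strftime("%H:%M:%S"), event["source"], event["service"])
            # ...while the full record goes to the collection point for later use.
            with open(COLLECTION_FILE, "a") as archive:
                archive.write(repr(event) + "\n")
        else:
            send_mail_to_admin(event)

    handle_attempt({"source": "badguy.example.com", "service": "telnet"})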

It took another 10 minutes or so to get to the important mail for the morning which, among other things, included a reply to one of our automated responses. It read as follows:

I had 4 automated messages this morning about a user here attempting to
telnet to your site. 

I've investigated this as far as I'm able and don't fully understand
what happened.  I'll give you as much information as I can, you may be
able to make sense of it. 

The connections were made from [...] which is a server for ftp, web
pages etc.  No users ever log into it directly.  It runs the W3C web
server in proxy mode for internal clients to access the web. 

The connections to your site were from the web proxy attempting to
access the URL http://all.net:23/ - as the target port is 23 this
explains why it looked like a telnet attempt.  I've looked through the
proxy logs and can see the 4 connection attempts on behalf of a user at
one of our remote sites[...].  I checked all the pages he looked at
around that time and can't see any links to the above URL.  I also spoke
to him and we're both at a loss to explain where this URL came from. 
For what it's worth I'm satisfied he's not deliberately making these
attempts - it's hard to see what good an HTTP connection to your telnetd
would be even if it was successful. 

...
I'd be most interested if you can make sense of any of this.  I hope
this has been helpful, and please contact me if you need any other
information. 
...

---  proxy.log, filtered for all connections made by...
...
[13/Mar/1996:07:57:08 +0000] "GET http://www.c2.org/hacknetscape/n.gif
[13/Mar/1996:07:57:09 +0000] "GET http://www.c2.org/blosser_cy/gifs/...
[13/Mar/1996:07:56:56 +0000] "GET http://www.c2.org/hacknetscape/
...

I immediately visited the listed Web pages and found that the c2.org home page contained code that automatically, and without the knowledge or consent of the user, caused Web browsers to telnet into our site. I secured a copy at once for evidentiary purposes. Just to make this clear, and in case you missed it:

This is interesting because it means that a Web page can contain code that causes your users' browsers to automatically launch attacks against your site or other sites, using your computers as the launch points - and this is the first major real-world incident of that kind. Your user doesn't have to click a special button to trigger it; it's a Trojan Horse embedded within the normal loading of another Web page or image.
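
The exact markup of the offending page isn't reproduced here, but given the description above, something as simple as an inline image whose URL names port 23 - for instance <img src="http://all.net:23/"> (hypothetical) - would do it: every visiting browser, or its proxy, dutifully opens a TCP connection to the target's telnet port to "fetch" the image. A minimal Python sketch of what that fetch puts on the wire:

    import socket

    # What a browser or proxy does when a page embeds the URL http://all.net:23/ -
    # it opens an ordinary TCP connection to port 23 (the telnet port) and speaks
    # HTTP at whatever answers. Illustrative only; don't aim this at a real site.
    target, port = "all.net", 23                  # taken from the URL in the log
    request = b"GET / HTTP/1.0\r\nHost: all.net\r\n\r\n"

    sock = socket.create_connection((target, port), timeout=5)
    sock.sendall(request)       # the telnet daemon sees this as a garbled login
    print(sock.recv(256))       # typically a telnet banner, not an HTTP reply
    sock.close()

On the wire, then, an "HTTP" fetch of that URL is indistinguishable from a clumsy telnet login attempt - which is exactly what the automated responses reported.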

The image that your browser just loaded (the one above containing pictures of three computers) could have included (it didn't) an attempt to telnet into an Internet site - or, if it were directed toward your internal systems, an attack against your company launched from a browser inside your own firewall.

Normally, an attack like this would be very hard to track down, because the attempts come from thousands of different locations even though they are all initiated from one site. In fact, it would be nearly impossible to trace without the sort of automated response and automated audit trails that were in place at our site. For example, if this attack were used against a site that tolerated even one attempted telnet per host without responding, it might never have been tracked down or even detected.
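
That distribution pattern is also something a detector can key on. A minimal sketch, assuming a hypothetical one-line-per-attempt log format: many distinct hosts in a short window, each making only an attempt or two, is the signature of an attack launched from a single page rather than a single machine.

    from collections import defaultdict
    from datetime import datetime

    def scan(log_lines, threshold=20):
        """Count distinct source hosts per 10-minute bucket.

        Each line is assumed to look like (a hypothetical format):
            1996-03-13T07:57:08 proxy.example.co.uk
        One origin page triggering many browsers shows up as many *different*
        sources in a short window, each making only an attempt or two.
        """
        buckets = defaultdict(set)
        for line in log_lines:
            stamp, source = line.split()
            t = datetime.fromisoformat(stamp)
            bucket = t.replace(minute=t.minute - t.minute % 10, second=0)
            buckets[bucket].add(source)
        for when, sources in sorted(buckets.items()):
            if len(sources) >= threshold:
                print(f"{when}: {len(sources)} distinct hosts - likely one origin")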

The difficult we did yesterday - the impossible takes 20 minutes - if we work together. It was the combination of our automated response and the other administrator's audit trails, made at almost the same time, that gave the clues needed to track this down. Without both, the attack could have raged for a long time before anyone traced it. I believe the attacker was counting on this, and I would have paid good money to see the look on his face when he found out that we had caught him.

As soon as the nature of the incident was clear, we changed our automated response message to indicate the source and nature of the attack and tried to contact the site hosting the malicious code. We sent email, called and left a message, and put a message on their beeper. Next, we sent follow-up mail to all 250 of the systems administrators who had gotten mail from us overnight, identifying the nature of this attack and asking them to determine whether the same mechanism was behind their part of the overall incident. We also asked them to contact the offending site (c2.org) and register their distaste at the situation. All of this took about 20 more minutes.

By about 7:45 in the morning I had eaten some breakfast, and our site was handling a peak of about 10 attempts per minute without degraded performance or any human intervention by our administrators. Previous estimates were that we could handle 5,000 attempts per day without impact, but we were now handling attempts at a rate of more than 14,000 per day (10 per minute is 14,400 per day), and I believe we could handle up to about 50,000 per day before running into any substantial performance impact.

Next, I ran our internal audit tool to check the system out. In about 10 minutes it determined that no logins had been successful and no system files had been altered, extracted details on the sequence of events in a form useful for follow-up, and detected that the larger incident contained several sub-incidents, including an automated port scan. Press here to see extracts from the audit output.
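
The audit tool itself isn't reproduced here, but the "no system files were altered" check is, at heart, an ordinary baseline comparison. A minimal sketch, with a hypothetical baseline path and an illustrative file list:

    import hashlib, json, os

    BASELINE = "baseline.json"                       # hypothetical baseline path
    WATCHED = ["/etc/passwd", "/etc/inetd.conf"]     # illustrative file list

    def digest(path):
        """SHA-256 of a file's contents."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def snapshot():
        """Record hashes while the system is known to be good."""
        with open(BASELINE, "w") as f:
            json.dump({p: digest(p) for p in WATCHED if os.path.exists(p)}, f)

    def verify():
        """Compare current hashes against the baseline; report any change."""
        with open(BASELINE) as f:
            baseline = json.load(f)
        for path, old in baseline.items():
            new = digest(path) if os.path.exists(path) else "MISSING"
            if new != old:
                print("ALTERED:", path)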

While the audit analysis was under way, I followed up on the port scan with an additional email to the scanning site's administrator. The audit trails also showed that the same site had made scores of attempted telnets in a short period while the attack was at its peak, so I reported a summary of the attack to that administrator as well.

At 8:00, it was time for my morning walk, and since everything was going so smoothly (even though we still had several new attempts per minute), I turned on the automated telephone attendant and took my walk around the park.

When I got back, I called the FBI, because the incident had now reached a scale where they might be interested in taking part. They said they would have someone get back to me. I also sent logs to the CERT at C-MU and had it (again) clearly explained to me that they don't do anything but keep track of reports and give technical assistance where needed. It wasn't needed. At that time, I had no response from the site where the attack was originating, so I came up with the idea of forwarding the reports of attempted telnets to that site. Since I believed a systems administrator might be involved, I wanted to make certain that the message got through, so I had copies of the responses sent to the root account, the postmaster, and the person listed as the site administrator. Within a few hours, the site responsible for the attack removed the malicious code and the rate of attempted telnets slowed.
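
That forwarding step is easy to automate. A minimal sketch using Python's standard mail library; the sender address, the administrator's mailbox name, and the local relay are assumptions:

    import smtplib
    from email.message import EmailMessage

    # Mailing root and postmaster as well as the listed administrator makes it
    # hard for any one person at the site to quietly discard the reports.
    recipients = ["root@c2.org", "postmaster@c2.org", "admin@c2.org"]  # last one hypothetical

    def forward_report(report_text):
        msg = EmailMessage()
        msg["Subject"] = "Attempted telnet traced to a page on your server"
        msg["From"] = "response@all.net"             # hypothetical sender address
        msg["To"] = ", ".join(recipients)
        msg.set_content(report_text)
        with smtplib.SMTP("localhost") as smtp:      # assumes a local mail relay
            smtp.send_message(msg)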
