The Forensic Challenge


The Honeynet Project's Forensic Challenge was launched on January 15, 2001. This page links to all the information we've assembled about the Challenge, and this index will help you get quickly to what you want.

Introduction

Every day, incident handlers across the globe are faced with compromised systems, running some set of unknown programs, providing some kind of unintended service to an intruder who has taken control of someone else's -- YOUR, or your client's, or customer's -- computers. To most, the response is a matter of "get it back online ASAP and be done with it." This usually leads to an inadequate and ineffective response, not even knowing what hit you, with a high probability of repeated compromise.

On the law enforcement side, investigators are hampered by a flood of incidents and a lack of good data. A victim trying to keep a system running, or doing a "quickie" cleanup job, usually means incidents go underreported, and inadequate handling of the evidence leaves no evidence, or tainted evidence. There has to be a better way to meet the needs of incident handlers and system administrators, as well as law enforcement, if Internet crime is going to be managed and not run amok. One possible answer is effective forensic analysis skills -- widespread knowledge of tools and techniques -- to preserve data, analyze it, and produce meaningful reports and damage estimates for your organization's management, for other incident response teams and system administrators, and for law enforcement.

Enter the Honeynet Project. One of the primary goals of the Honeynet Project is to find order in chaos by letting the attackers do their thing, and allowing the defenders to learn from the experience and improve. The latest challenge, inspired by the Honeynet Project's founder Lance Spitzner, is the Forensic Challenge. Only this time, we're opening it up to anyone who wants to join in.


The Challenge

The Forensic Challenge is an effort to allow incident handlers around the world to all look at the same data -- an image reproduction of the same compromised system -- and to see who can dig the most out of that system and communicate what they've found in a concise manner. This is a nonscientific study of tools, techniques, and procedures applied to post-compromise incident handling. The challenge is to have fun, to solve a common real-world problem, and for everyone to learn from the process. If what I've said already isn't enough to get you interested, Foundstone is generously offering copies of their extremely popular "Hacking Exposed" (Second Edition) book for the 20 best submissions.

To get you started, here are the basic facts about the compromise:

Please be aware that these are new images. This is not a system that the Honeynet Project has previously written about or discussed publicly. (I.e., you won't get any hints from previous Honeynet papers.) The images were edited to anonymize the system. Only the hostname was modified. Everyone is using the same data, so any anomalies caused by this editing will be identical.

You can find the "dd" format disk images at:

http://project.honeynet.org/challenge/images.html

The image files can be mounted on Linux systems using the loopback interface like this:

 # mkdir /t
 # mount -o ro,loop,nodev,noexec honeypot.hda8.dd /t
 # mount -o ro,loop,nodev,noexec honeypot.hda1.dd /t/boot
 [ etc... ]
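Before mounting, it is a good habit to record checksums of the images, so you can later show that the evidence was not altered during analysis. A minimal sketch of the workflow follows; a dummy file stands in for the real image so the commands can be run anywhere (the real images and their published checksums are on the download page above):

```shell
# Record a checksum of each evidence image before analysis, so you can
# later prove the data was not modified.  "honeypot.hda8.dd" here is a
# stand-in file, not the real image.
echo "placeholder image data" > honeypot.hda8.dd
md5sum honeypot.hda8.dd > images.md5   # record the checksum
md5sum -c images.md5                   # verify: prints "honeypot.hda8.dd: OK"
```

Run the verify step again at the end of your analysis; mounting read-only (the "ro" option above) should guarantee it still reports OK.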

It's now your job -- should you choose to accept it! -- to figure out the Who, What, Where, When, How, and maybe even the Why of this compromise. We don't expect that everyone undertaking the challenge can or will address all of the following items, but the list below of questions and deliverables is provided as a guideline for what to produce and what to focus on:

  1. Identify the intrusion method, its date, and time. (Assume the clock on the IDS was synchronized with an NTP reference time source.)
  2. Identify as much as possible about the intruder(s).
  3. List all the files that were added/modified by the intruder. Provide an analysis of these programs (including decompilation or disassembly where necessary to determine their function and role in the incident).
  4. Was there a sniffer or password harvesting program installed? If so, where and what files are associated with it?
  5. Were a "rootkit" or other post-compromise concealment trojan horse programs installed on the system? If so, what operating system programs were replaced, and how could you get around them? Hint: If you don't know what a "rootkit" is, read this:

    http://staff.washington.edu/dittrich/misc/faqs/rootkits.faq
  6. What is publicly known about the source of any programs found on the system? (e.g., their authors, where source code can be found, what exploits or advisories exist about them, etc.)
  7. Build a time line of events and provide a detailed analysis of activity on the system, noting sources of supporting or confirming evidence (elsewhere on the system or compared with a known "clean" system of similar configuration).
  8. Provide a report suitable for management or news media (general aspects of the intrusion without specific identifying data).
  9. Provide an advisory for use within the home organization (a fictitious university, "honeyp.edu", in this case, where I hold an honorary Doctorate, by the way) to explain the key aspects of the vulnerability exploited, how to detect and defend against this vulnerability, and how to determine if other systems were similarly compromised.
  10. Produce a cost-estimate for this incident using the following guidelines and method:

    http://staff.washington.edu/dittrich/misc/faqs/incidentcosts.faq

    To simplify and to normalize the results, assume that your annual salary is $70,000 and that there are no user-related costs. (If you work as a team, break out hours by person, but all members should use the same annual salary. Please also include a brief description of each investigator's number of years of experience in the fields of system administration, programming, and security, just to help us compare the number of hours spent with other entrants).
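For the time line in question 7, The Coroner's Toolkit's "mactime" (by Dan Farmer and Wietse Venema) is the proper tool, since it collects modification, access, and inode-change times together. As a crude starting point, GNU find can at least dump modification times from the mounted image; the sketch below runs against a stand-in directory rather than the real mount point (/t in the example above), so it can be tried anywhere:

```shell
# Crude time-line sketch: list every file on the (stand-in) evidence
# tree with its modification time in seconds since the epoch, oldest
# first.  -xdev keeps find from wandering onto other filesystems.
mkdir -p /tmp/evidence/etc
touch /tmp/evidence/etc/passwd
find /tmp/evidence -xdev -printf '%T@ %p\n' | sort -n
```

Remember that merely listing a directory updates access times, which is one more reason to mount the images read-only and to capture timestamps before doing anything else.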

To summarize (and standardize) the deliverables, please produce the following:

   File                   Contents
   ---------------------------------------------------------------------
   index.txt              Index of files/directories submitted
                          (including any not listed below)
   timestamp.txt          Timestamp of MD5 checksums of all files
                          listed and submitted (dating when produced
                          -- see deadline information below)
   costs.txt              Incident cost-estimate
   evidence.txt           Time line and detailed (technical) analysis.
                          (Use an Appendix, and/or mark answers to
                          questions above with "[Q1]", etc.)
   summary.txt            Management and media (non-technical) summary
   advisory.txt           Advisory for consumption by other system
                          administrators and incident handlers within
                          your organization
   files.tar              Any other files produced during analysis and/or
                          excerpts (e.g., strings output or
                          disassembly listings) from files on the
                          compromised file system, which are referenced in
                          the previous files
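The timestamp.txt deliverable can be produced along these lines; empty stand-in files are created here so the commands can be run anywhere, but for a real submission you would checksum your finished deliverables:

```shell
# Produce timestamp.txt: MD5 checksums of the submitted files, plus
# the date the list was made.  The touch line creates empty stand-ins.
touch index.txt costs.txt evidence.txt summary.txt advisory.txt files.tar
md5sum index.txt costs.txt evidence.txt summary.txt advisory.txt files.tar > timestamp.txt
date >> timestamp.txt
cat timestamp.txt
```

Note that timestamp.txt itself is the one file whose checksum cannot appear in the list, since writing the list changes it.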

The Rules

Submissions will be judged by a panel of experts, with winners selected and announced on Monday, March 19, 2001. All decisions of the judges are final (no recounts or legal challenges by teams of grossly overpaid lawyers will be tolerated!).

After the winners are announced, all entries will be posted for the security community to review. We hope the community can learn from, and improve upon, all the different techniques that different people and organizations use.

Also, we wouldn't be the Honeynet Project if we didn't capture all of the blackhat's keystrokes as he exploited, accessed, and modified the honeypot! We will release the Honeynet Project's analysis of the hacked system, as well as the blackhat's keystrokes, along with the results of the Challenge on March 19.

Good luck, and have fun!

Dave Dittrich

(Thanks to Lance Spitzner, members of the Honeynet Project, Dan Farmer, Wietse Venema, SecurityFocus.com, linuxsecurity.com, Foundstone, Ali Ritter, and anyone else who helped develop or support the Forensic Challenge whose name I may have left out.)


The Honeynet Project