
Suspect system incident verification in incident response.

Most system administrators have experienced that dreaded thought "My system has been hacked!" followed by hours and sometimes days of poking around the suspect file system and log files trying to confirm their suspicions. Unfortunately, hackers are becoming more sophisticated at hiding their tracks, which makes finding what was done to the system very difficult if not impossible. New exploits employing kernel mode rootkits are becoming more prevalent and allow the hacker to "own" your system without you knowing it. This means they can steal your proprietary information, use your system in a denial of service attack, run an Ethernet sniffer to find passwords to break into other systems in your network, or host warez servers to distribute illegal materials--all without being detected. The good news is that a new class of computer forensic security tools is emerging to fight this new threat.

Our security fears in information technology are reinforced on a daily basis by reports in the media. A recent report by iDefense stated that authorities estimate 50,000 servers are infected with the TK Worm, and that number is still growing. In the 2003 Computer Crime and Security Survey published by the Computer Security Institute, fewer than 30% of the more than 500 companies surveyed indicated they had not experienced unauthorized use of their computer systems in the past 12 months--in other words, most had. According to the same study, over 20% of companies reported attacks on their Web servers that came from inside their firewall. This means that even the best perimeter security alone is inadequate. Secure facilities must therefore employ a layered security approach including physical security, firewalls, access management, intrusion detection, server lockdowns, log monitoring, incident response, and regular system auditing to remain secure.

This article discusses technical incident response and system auditing methods that can help you quickly evaluate the status of a suspected system. While examples used here focus on Windows systems, much of the information outlined in this article will pertain to all of today's popular operating systems.

For years, people have talked about incident response planning and the need for a detailed incident response plan to guide you through difficult procedures in times of confusion. Many of today's incident response guides fail to address steps to assist administrators to adequately verify that an incident has occurred. Furthermore, many plans neglect to outline procedures for evaluating an incident in a manner that will properly maintain and preserve evidence for possible future civil or criminal litigation.

You see it over and over again: an administrator suspects a machine has been hacked and starts rifling through the file system looking for anything out of the ordinary. Next, he sifts through local system logs. Unfortunately, the system administrator can't trust what he sees, because the system may have been hacked; but, not knowing what else to do, he does it anyway. It is this distrust of what they are looking at that causes administrators to delve ever deeper into the suspected system in search of anything conclusive; yet that same trust issue prevents conclusive findings.

Over the years, savvy system administrators have developed two methods to help resolve trust issues:

Create cryptographic hashes of important files on the file system. In this approach, the administrator who suspects a compromised host can create new hash values and compare the new hash values to a set of "known good" values.

Use a set of known good applications, sometimes referred to as "trusted binaries," to investigate the suspected host, running the tools from a CD-ROM or a remote disk.

A cryptographic hash is an algorithm used to produce fixed-length character sequences based on input of arbitrary length. Any given input always produces the same output, called a hash. If any input bit changes, the output hash will change significantly and in a random manner. Additionally, there is no way the original input can be derived from the hash. Two of the most commonly used hashing algorithms are MD5 and SHA1.
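The hash-comparison approach described above can be sketched in a few lines of Python using the standard hashlib module. This is only an illustration; the baseline dictionary and file paths are hypothetical, and in practice the "known good" baseline must be created before any compromise and stored offline:

```python
import hashlib

def file_hashes(path, chunk_size=65536):
    """Compute MD5 and SHA-1 digests of a file, reading in chunks."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

def verify_against_baseline(paths, baseline):
    """Compare current MD5 hashes to a 'known good' baseline dict
    mapping path -> hex digest. Returns paths that no longer match."""
    changed = []
    for path in paths:
        md5, _ = file_hashes(path)
        if baseline.get(path) != md5:
            changed.append(path)
    return changed
```

Note that, as the article goes on to explain, hashes computed on the suspect system itself travel through the suspect system's I/O and therefore cannot be trusted on their own.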

In using the techniques outlined above, an important issue to consider is that the investigation on the suspect system (even when using trusted binaries from a CDROM) changes almost every file's last accessed time. If it turns out there has been an incident, tracking hackers' actions becomes more difficult and can raise authenticity issues in legal proceedings.
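At a minimum, an investigator can record the timestamps an examination is about to disturb before touching anything. A minimal sketch (the target path is a stand-in; real forensic tools capture far more metadata than this):

```python
import os
from datetime import datetime, timezone

def snapshot_mac_times(path):
    """Record modified/accessed/created (MAC) timestamps for a file
    before any hands-on examination updates them."""
    st = os.stat(path)
    to_utc = lambda ts: datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return {
        "path": path,
        "modified": to_utc(st.st_mtime),
        "accessed": to_utc(st.st_atime),
        # st_ctime is metadata-change time on Unix, creation time on Windows
        "changed_or_created": to_utc(st.st_ctime),
        "size": st.st_size,
    }
```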

Unfortunately, today's hackers can easily affect a host at a much deeper level than merely replacing files to cover their tracks and set up services. Hackers achieve this deeper infection by installing one of the widely available "Kernel Mode Rootkits". These rootkits are implemented as device drivers in Windows platforms and LKM's (Loadable Kernel Modules) in Linux.

The known development and suspected deployment of kernel mode rootkits is growing at an alarming rate. A relatively new website has become a proving ground for kernel mode rootkits, containing a development discussion list, precompiled rootkits, as well as source code for several different rootkits. Thousands of kernel mode rootkits have been downloaded from the site by its over 5,000 enthusiastic members.

To better understand kernel mode rootkits let's take a look at the basic principles of the security kernel architecture used in Windows NT/2000/XP platform design. Microsoft divides the operating system into two modes:

User Mode: This is where all general applications operate. General applications and subsystems for Win32, Win16 and POSIX (Portable Operating System Interface) all run in this mode.

Kernel Mode: This mode is a trusted mode of operation for system services and device operations or access. All requests by user mode applications are brokered through Windows NT Executive Services within the kernel mode. This includes checking security ACLs (Access Control Lists) and allowing access to file I/O and attached devices.

Early rootkits only replaced user mode applications such as "netstat," "dir," etc. By replacing "dir," a hacker could control the "dir" application output (set to not display certain files); but "dir" would still need to request all file I/O from a protected source in the kernel mode. It was these early rootkits that hashing and trusted binary schemes were designed to overcome.

The current approach to kernel mode rootkits is simple. If the goal is to hide a file or process, rather than replace "dir" or "netstat," why not replace the routine in the kernel that all user mode applications call for that information? In the case of file I/O, that means replacing the kernel mode I/O routine ZwQueryDirectoryFile. With this approach, not only will "dir" hide the hacker's files; any other application that calls ZwQueryDirectoryFile, including today's virus and Trojan scanners, will receive compromised information. Hackers accomplish this by writing a Windows device driver that, through a process called "hooking," replaces the trusted kernel mode I/O routine with their own. Of course, the hackers' routine only returns the information they want users to see. By hooking ZwQueryDirectoryFile, the hacker can hide any file he wants.

At this point, the first question that usually comes to mind is "What about process lists and registry entries? Can they be trusted?" The answer is "no." Hackers can (and do) just as easily "Hook" process and registry query routines to hide running processes and changes to the registry.

The implication of these relatively new hacking techniques is that comparing hash values of files on the system is useless because any hashes created on the system cannot be trusted. The newly created local hashes would use local system I/O and the files seen by user mode applications most likely didn't change anyway. Using trusted binaries running locally would not help for the same reasons.

Resolving Trust in Incident Identification

One accepted way to detect a kernel mode rootkit is to reboot the suspected system in "Safe Mode," then look around for anything that has been hiding. Another is to connect to the suspect system's file shares from a trusted remote system (using its [the trusted remote system's] I/O and trusted binaries), then explore as before. In the first case, taking the server offline on mere suspicion is rarely an option. In both cases, files' last accessed times will be changed, and the question may still remain: are the trusted binaries truly trusted?
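Both techniques boil down to a "cross-view" comparison: list the same directory through the possibly hooked channel and through a trusted channel, and anything visible only in the trusted view has been hidden. A sketch of that comparison (the two listings are hypothetical inputs obtained by whatever trusted and suspect channels are available):

```python
def cross_view_diff(suspect_listing, trusted_listing):
    """Return names visible in the trusted view but missing from the
    suspect view -- the signature of a file-hiding rootkit."""
    return sorted(set(trusted_listing) - set(suspect_listing))
```

For example, if the suspect system reports only "notes.txt" while a trusted sector-level read of the same directory shows "notes.txt" and "rootkit.sys", the diff flags "rootkit.sys" as hidden.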

How do you trust the files you read from a live system and not destroy valuable tracking data?

To answer this need, many corporate security professionals are turning to the growing selection of professional-grade computer forensics products such as ProDiscover IR from Technology Pathways. Professional-grade computer forensics products read disks sector by sector, then implement a read-only file system for analysis of the suspect system. By reading the data at the sector level, the professional-grade computer forensic products avoid the code modified by the kernel mode rootkit and uncover the real data. These products offer core features that provide system administrators the ability to investigate suspected systems in a least-intrusive manner, leaving vital metadata like "last time accessed" intact and preserving evidence for possible criminal or civil litigation if a compromised system is found.
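Reading at the sector level simply means pulling fixed-size blocks straight from the device or an image of it, rather than asking the file system (and therefore the possibly hooked kernel routines) for files. A minimal sketch against a raw disk image file; the 512-byte sector size and the use of a flat image file are assumptions for illustration, not a description of how any particular product works:

```python
import hashlib

SECTOR_SIZE = 512  # traditional sector size; modern disks may use 4096

def read_sectors(image_path, start, count):
    """Read `count` sectors beginning at sector `start` from a raw image."""
    with open(image_path, "rb") as img:
        img.seek(start * SECTOR_SIZE)
        return img.read(count * SECTOR_SIZE)

def hash_image(image_path):
    """MD5 of the whole image, sector by sector, to verify a true copy."""
    md5 = hashlib.md5()
    with open(image_path, "rb") as img:
        while sector := img.read(SECTOR_SIZE):
            md5.update(sector)
    return md5.hexdigest()
```

Because nothing here goes through the suspect file system, a hooked ZwQueryDirectoryFile never gets a chance to filter what the examiner sees, and opening the image read-only leaves last accessed times on the evidence untouched.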

By selecting a network-enabled forensics product, administrators can remotely search for known-bad file hash values, recover deleted files, or search files and disks for keywords. Today's data hiding techniques, such as kernel mode rootkits, are quickly driving professional computer forensics products to become a key auditing component of comprehensive IR investigations. A comprehensive investigation may already include system monitoring, Intrusion Detection Systems, and log analysis.

What to Look For in Professional Grade Computer Forensics Products

Today's professional-grade computer forensics products are essential to finding and fighting cyber crime. As the numbers of these products increase, product selection criteria become more important. When evaluating professional-grade computer forensics products, administrators should consider the following issues in their product selection criteria:

NIST support. Disk Imaging Tool Standards (www.cftt.nist.gov/DI-spec-31-6.doc) discuss mandatory and optional features, which ensure a true and accurate copy of evidence is collected.

Source code availability. In some cases, independent code verification or code escrow may become important to ensure the integrity of the tool and improve the acceptance of the evidence by juries.

Third-party code. How much of the application's code was created by the company and its agents, and how much comes from other vendors' programming libraries? A product that relies on third-party libraries for core functionality may be unable to address customer needs without going back to the third-party software developers.

File signatures. The application should generate and verify cryptographic hashes or checksums for images, disks and files using the MD5 algorithm at a minimum. Versions of the SHA algorithm are highly desirable.

Reporting capability. Automatic generation of reports for analysis findings helps reduce human error.

Live, remote system analysis. Provides the ability to analyze live disks in a nondestructive manner, in addition to imaging. This capability should be available for locally attached evidence disks as well as disks attached to systems running on the network.

Forensic methodology. Support for accepted computer forensics methodologies for the collection, analysis, and production of computer disk evidence. A good reference for such methodologies can be found on the International Association of Computer Investigative Specialists website: www.cops.org/forensic_examination_procedures.htm.

Search capability. Fast and extensive search capability, including file-level, sector-level, and file-header-level searches.

The conduct of Incident Response, by its very nature, may lead to Computer Forensics procedures being employed--if for no other reason than preserving the evidence for internal disciplinary action. With the proper tools and methodologies in place, the incident response goal of quick restoration of services can be achieved while also preserving the evidence.

Conclusion

Incident response and system auditing are important parts of the overall security architecture. The process of verifying if a system has been compromised during an incident has historically been time consuming and required taking critical resources out of service. This impacts overall productivity and, if not done quickly, makes it impossible to capture the data needed to catch the criminal. With new tools employing computer forensics, the processes utilized in identifying if a system has been compromised during an incident can be done quickly and without taking the system off line. This improves productivity as well as security. And if the system has been hacked, these tools will capture evidentiary quality data that can be critical to successful criminal or civil litigation. Network-enabled computer forensics products provide administrators with a solution to quickly identify incidents and properly manage the technical aspects to the corporate Incident Response process.

Christopher L. T. Brown is a CISSP and the founder of Technology Pathways, LLC (Coronado, CA).

www.techpathways.com

www.idefense.com

www.prodiscover.com
COPYRIGHT 2003 West World Productions, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2003, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Article Details
Title Annotation: Disaster Recovery
Author: Brown, Christopher L.T.
Publication: Computer Technology Review
Date: Aug 1, 2003
