
You've Been Hacked... Now What?

"You're owned." That was the content of a bravado-laden e-mail message received by a system administrator of a large financial services provider early one morning. Unbeknownst to the information technology (IT) staff at the time, a system intruder from another country had gained root access to the company's Web server, which was used for electronic commerce. The Web server had been placed in front of the company's firewall but provided any authorized high-level user a connection to corporate systems through the firewall. Thus, by gaining root access on the Web server, the cracker was in a trusted position and easily gained access to network services behind the firewall. Total chaos ensued. "Because this company didn't have a plan or a way of responding, nobody had any idea of who should do what," says Kelly J. Kuchta, CPP, a senior manager with Ernst & Young, whose incident response team was eventually asked to help fix the damage and prevent a recurrence.

SINCE THE FABLED Internet worm was unleashed by Cornell University graduate student Robert Morris in an infamous incident 12 years ago, security professionals have recognized a need for quick response to security incidents on the Internet. It was then that the Computer Emergency Response Team (CERT), at Carnegie Mellon University, a federally funded organization, was instituted as a central clearinghouse of computer security information and technical advisor on incident response for companies and government agencies.

At CERT's inception, only about 60,000 hosts were connected to the Internet. Now, however, more than 36 million hosts use the Net, and the number of security vulnerabilities and attackers has risen correspondingly. Companies are realizing that, with their increasing reliance on networked computers for mission critical operations, it makes sense to have their own incident response capabilities. Doing so entails two tasks: setting up the team and handling the response.

If a company plans to establish a computer incident response team (CIRT), getting management support is critical to ensuring that the team will have adequate funding and authority, says E. Larry Lidz, a network security officer at the University of Chicago. Once support is assured, managers must take several steps to establish an effective incident team. These include: evaluating security, selecting a team, developing a policy, exercising the plan, and handling a response.


Evaluating security. The CIRT is not there to establish the security baseline; that should already have been established by staff network security specialists. However, in forming the team, the security manager or whoever is heading the effort needs to identify the systems most likely to be targeted by attackers. For example, a company that conducts business online knows that its Web server is at high risk of attack. The findings will help determine the composition of the team.


Selecting a team. Team members should have complementary skills, and they should be well suited to working interdependently with coworkers in high-pressure situations. If they are in-house staff with other regular duties, as is often the case, they must be able to drop whatever they are doing and begin assisting in system recovery efforts when an incident occurs.

Employees with the greatest expertise in high-risk functions or business units (as determined during the evaluation phase) should be part of the team. The technical makeup of the information system resources also plays a role in determining who is selected. For example, if a company uses Windows NT and UNIX, the team should have members who have a broad knowledge of these platforms. If the in-house staff lacks some of the requisite expertise, the company may also decide to supplement the team with consultants.

The optimal size of the team varies depending on the resources the company has and its risk profile. Typically, CIRTs consist of several IT staff members, general counsel, a human resources representative, a physical security representative, a media or public relations liaison, and a chief information officer or some other senior-level executive.

Legal concerns. A company's legal representative should be involved at every stage of team formation to advise on legal liability issues. In the event of an incident, the counselor should be consulted about how technical staff actions might affect the company's ability to later prosecute the offender if apprehended. The human resource member of the team will play an important role if the intruder turns out to be an employee or contractor.

Investigations. The security department team member brings, of course, knowledge of what is going on in the company as a whole. He or she is also likely to have the investigative skills the team will need to solve any computer crime.

Public relations. Companies with hacker hassles are finding it increasingly difficult to stay off the nightly news, as system intruders are savvy enough to boast about their exploits in the right places. Therefore, when an emergency arises, the CIRT needs to make some information available to the corporation's public relations personnel, who in turn can get the most accurate information out to interested news media and ensure that what is released will not adversely affect the company by inviting more hackers. This team member might also be called on to keep senior management updated on the progress of recovery operations.

Leadership. The team must have a leader who has sufficient authority to direct each team member, including legal counsel. Chief information officers (CIOs) are a good choice for the post, as they have the technical background, but companies assign the position to a variety of senior managers. The team leader, after being notified of a potential emergency, is the one who activates the team and makes key decisions about how to handle the situation, such as which team members are needed. The team leader will be responsible for ensuring that everyone is where he or she is supposed to be.


Developing a policy. The next step is to develop an incident response plan. This plan will define what constitutes a computer security emergency, so that team members won't be called on to solve routine problems. The plan will spell out the purpose of the team, who is on it, the steps involved in initiating team action, and member duties.

"CIRTs are not for monitoring and enforcement. They respond to emergencies. You don't want to use them too much. If you use them inappropriately, they lose their edge," says Kuchta.

By the same token, CIRTs should not be underused. Employees should not be discouraged from relaying observations regarding strange activity on the network. While it may seem to the employee to be an isolated event, when viewed in the context of other reports, it may indicate to the team that an attack is underway.

The plan should tailor a process for notification. For example, employees could be instructed to report their suspicions to a designated team member (with another named as an alternate).

The plan should also contain the contact information of each team member both at work and at home. Further, it should spell out the decision-making process for terminations and prosecutions in the event that an intruder has been conclusively identified.

Some companies detail business issues that may come up, such as how major events will be handled should they require, for example, taking down the entire network or a single service on the network. The plan may also detail when and how clients or other business partners should be notified of problems that could affect their own networks or dealings with the company. For example, a California company had such extensive data destruction as the result of a system intruder that the situation required rebuilding a high-level network. Since the organization also conducted a substantial amount of electronic business with partners, its response team's plan spelled out such issues as notifying the business clients that their systems, which were linked to the company's systems, might have been breached.

The response plan should also detail when and how internal users will be notified and how they will maintain access to data for business continuity purposes.

The plan might also include an overview of the additional services the CIRT provides. While their main function is to respond to crises, some corporate CIRTs provide workshops on security awareness or consult on security issues with various departments. For example, when a team of hackers broke into Stanford University systems, stealing more than 4,500 passwords, the university's CIRT responded by patching the system and getting passwords changed, but they also counseled students and faculty about good password selection and protection behavior. Now every November, the school recognizes "Security Month," where CIRT members have an expanded awareness campaign on system security.


Exercising the plan. No plan is complete until it has been tested, and no team can be considered reliable unless it practices its procedures. "Without the practice, a real emergency will cause very strained decisions when the time comes," says Kuchta.

Testing reveals how well the team detects and responds to the attack, how quickly it is able to mobilize, and how quickly it is able to resolve the incident. Ideally, the team leader should hold practice sessions every three to six months, but at a minimum they should be held annually, says Kuchta.

Because the testing is designed to uncover problems, managers should not be discouraged when it does just that. Problems that are detected can then be addressed.


Handling a response. Although each organization varies in its approach, policy will generally provide that a CIRT member who becomes aware of an incident, either through his or her own work or by being tipped off by another employee, first confirm that there is a genuine emergency and then notify the team leader.

Next, the team leader should determine the severity of the intrusion and call in the appropriate team members, who should begin a network scan to pinpoint the exact extent of the damage and the entry point. Team members should check file systems for modifications, review audit logs, and examine remote systems. In addition, they should remove any software that hackers have installed on their system. Hackers often do this as part of their springboarding techniques: they install software that makes it possible for them to launch attacks from a different host in order to throw investigators off the trail.
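The file-system check described above is the idea behind integrity tools such as Tripwire: take a cryptographic fingerprint of every file while the system is known to be clean, then compare against it after a suspected intrusion. A minimal sketch of that technique in Python (function and directory names are illustrative, not from any particular product):

```python
import hashlib
import os

def hash_file(path, chunk_size=65536):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(root):
    """Record a digest for every file under `root` (the known-good state)."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[os.path.relpath(path, root)] = hash_file(path)
    return baseline

def find_modifications(root, baseline):
    """Compare the current tree against the baseline and report changes --
    added files may be hacker-installed tools; changed files, tampering."""
    current = build_baseline(root)
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "changed": sorted(p for p in current
                          if p in baseline and current[p] != baseline[p]),
    }
```

The essential point is that the baseline must be taken (and stored off-system) before the incident; a fingerprint computed after the break-in proves nothing.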

Once the diagnostics have been completed, senior management needs to be informed of the intrusion and what the team is doing to recover. The media contact member of the team should also be briefed so that he or she can respond to media inquiries if necessary and assure clients that the situation is being handled (or alert them if delays should be expected).

Dummy networks. If the intruder is still online, a critical decision needs to be made with regard to whether to try to fool the hacker into staying on a part of the system until the team can track him or her down. For a long time, standard procedure was for the system administrator to terminate the session and log the event. But hackers have come to count on that happening. The first time they get in, they install a backdoor so that they may get in again knowing at least part of the company's response procedures.

In response, some system administrators have begun setting up a dummy network--sometimes called "fish bowls" or "sandboxes"--into which the intruder is routed without being tipped off that the company is now watching. Filled with unimportant information, the dummy network gives the team a harmless way to keep the hacker online while the company tries to trace the connection.
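The logging half of such a decoy can be surprisingly small. The sketch below, in Python, listens on a port, greets a connecting intruder with a fake service banner, and records everything typed; the port, banner, and log format are all hypothetical, and a production fish bowl would of course be far more elaborate:

```python
import socket
import threading

def run_decoy(host="127.0.0.1", port=0,
              banner=b"220 ftp.example.com FTP server ready\r\n", log=None):
    """Listen on a decoy port, present a fake banner, and log what arrives.

    Returns (bound_port, log, thread); port 0 lets the OS pick a free port.
    """
    if log is None:
        log = []
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(1)
    bound_port = server.getsockname()[1]

    def serve_one():
        conn, addr = server.accept()
        log.append(("connect", addr[0]))   # record where the intruder came from
        conn.sendall(banner)               # look like a real service
        conn.settimeout(2.0)
        try:
            data = conn.recv(4096)         # capture the intruder's input
            if data:
                log.append(("input", data))
        except socket.timeout:
            pass
        conn.close()
        server.close()

    thread = threading.Thread(target=serve_one, daemon=True)
    thread.start()
    return bound_port, log, thread
```

The captured connection data is what gives the team time to trace the intruder's real origin while he or she pokes at worthless files.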

If the team can find out who the hacker is, that information will be factored into the company's response. For example, says Kuchta, if a pharmaceutical company's research and development system is breached and CIRT team members find out that the offending party is likely a teenage curiosity seeker, prosecution may not be pursued. If, however, the IP address belongs to a competitor or foreign government, a different scenario will play out.

Outside assistance. Some situations may require that CIRT members call for outside help if the problem seems to demand a greater level of computer expertise than the in-house staff possesses. For example, a transportation company was in danger of losing major financial backing and possibly going under because of computer problems. IT staff there noted repeated unauthorized intrusions: an attacker was sending in "rogue programs" that were causing system crashes and data loss. The staff thought the damage level was low enough to keep the job in-house. But after a week of unsuccessful attempts to quash the intrusions, they realized they needed help. Another incident response team was called in and the matter was resolved in three days.

After reviewing the evidence and information from logs, the consulting team discovered that the attacks were most likely originating from an internal employee. Covert surveillance was set up, and the suspect was caught in the act. The employee's computer was also seized for the investigation.

In this case, the in-house staff had too little expertise to deal with an attack of this sophisticated nature, in which the identity of the attacker was well cloaked. Further, no one there had a sufficient amount of experience in computer forensics. The consultants brought this expertise in and made recommendations in their final report about how the company could prevent such an attack in the future. More security products were suggested, but the company's main problem was that the internal attacker had access to an area on the network where he had no business purpose.

A final critical component of the incident response process is to document the attack, what caused it, and what was done to fix it. A number of companies build their own databases and use them for making reports. The data may also be used as training aids in helping to keep the network secure.
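A homegrown incident database of the kind described need not be elaborate. As one sketch of the idea, assuming a simple three-field record (system, cause, remedy), Python's built-in sqlite3 module is enough:

```python
import sqlite3
from datetime import datetime, timezone

def open_incident_log(path=":memory:"):
    """Create (or open) a small incident database for after-action records."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS incidents (
                      id INTEGER PRIMARY KEY,
                      reported_at TEXT NOT NULL,
                      system TEXT NOT NULL,
                      cause TEXT NOT NULL,
                      remedy TEXT NOT NULL)""")
    return db

def record_incident(db, system, cause, remedy):
    """Document what was attacked, what caused it, and what was done to fix it."""
    db.execute("INSERT INTO incidents (reported_at, system, cause, remedy) "
               "VALUES (?, ?, ?, ?)",
               (datetime.now(timezone.utc).isoformat(), system, cause, remedy))
    db.commit()

def incidents_for(db, system):
    """Pull past incidents for a system -- the raw material for reports
    and for training new team members."""
    rows = db.execute("SELECT reported_at, cause, remedy FROM incidents "
                      "WHERE system = ?", (system,))
    return rows.fetchall()
```

Even a record this small answers the questions that matter after the fact: what broke, why, and what was done about it.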

Some computer incident response teams are small, informal bands of technical staff, while others are large units with staff dedicated to only incident response. Size does not matter as much as efficiency, planning, and skill. With the right policies and people in place, response teams can help companies fight back against hacker attacks.

DeQuendre Neeley is staff editor at Security Management. Special thanks to K. J. Kuchta for his technical review of this article.


SEVERAL ORGANIZATIONS EXIST to help companies form incident response teams and stay abreast of news about bugs, viruses, and product problems. Chief among them is CERT. (Although "computer emergency response team" and "computer incident response team" are used interchangeably, CERT is a registered trademark of Carnegie Mellon University.) About 40 organizations, from Australia to Iceland, have CIRT capabilities, working to slow the spread of viruses or pass information on to companies just ahead of the attackers. Their Web sites offer an abundance of information about setting up CIRTs, along with other useful information (@ see SM Online for a list of and links to these sites).

Some companies may wonder why they need an in-house CIRT when these organizations exist to do much of the work their staff CIRTs would do. It's an option, but you'll find these organizations to be like a police station, Kuchta reasons. They can't be everywhere at once. An organization like CERT will "provide a basic amount of assistance. They'll tell you about the problems and give you the fixes, but you still need the technical people on hand to install it properly," says Kuchta. Moreover, CERT does not help with intruder investigations.

TOOLS. Kenneth van Wyk and Chuck Downs of Para-Protect Services say that, while there are few security technology tools made specifically for incident response, CIRT team members still might find some software particularly helpful in quick investigations. Beyond audit logs, intrusion detection systems, and the like, CIRTs can take advantage of such tools as disk forensics tools, playback tools, and page-hack capabilities.

Lazarus is a freeware forensics program written by the same programmers who brought SATAN and SAINT to computer security fame. It is designed to gather evidence from the hard drives of UNIX, BSD, or Linux-based systems. Norton Utilities, by Symantec, is a similar program designed for DOS and Windows systems. Information about Lazarus is available online; look to Symantec's Web site for more information on Norton.
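Tools in this class work by sifting recoverable fragments out of raw disk data, where deleted files often linger. As an illustration of the underlying idea (not of Lazarus itself), a few lines of Python can pull printable-text runs out of a raw image, much as the Unix strings utility does:

```python
import re

def extract_strings(raw: bytes, min_length: int = 4):
    """Return runs of printable ASCII of at least `min_length` bytes
    found in raw disk data; deleted files often survive as such runs."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_length)
    return [m.group().decode("ascii") for m in pattern.finditer(raw)]
```

Real forensics tools go much further, reconstructing file structure rather than just text, but the principle of scanning the raw media rather than the live file system is the same.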

Van Wyk says playback capabilities are where the largest voids in incident response tools exist. These tools allow incident events to be reenacted for staff analysis and for possible presentation of evidence. Some commercial intrusion detection systems have limited playback capabilities, including RealSecure by Internet Security Systems, Dragon by Security Wizards, and NID, a tool exclusive to government agencies and certain contractors.

Page-back utilities, like some alarm systems, are designed to send an alert when network monitoring systems catch a critical event. These utilities can be programmed into existing network monitoring products. Downs recommends Cpager95, a Windows-based tool that pages incident response team staff, and Sendpage, which does the same for UNIX monitors; both can be found online.
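The pattern these utilities implement is simple to reproduce: watch a stream of monitoring events and fire a notification hook when one crosses a severity threshold. A minimal sketch, in Python, where the event format, severity levels, and notify callback are all hypothetical stand-ins for what a real monitoring product would supply:

```python
def watch_events(events, notify, critical_levels=("CRIT", "ALERT")):
    """Scan monitoring events and page the on-call member for critical ones.

    `events` yields (level, message) pairs; `notify` stands in for the
    pager or e-mail hook a real deployment would plug in here.
    """
    paged = []
    for level, message in events:
        if level in critical_levels:
            notify(f"{level}: {message}")   # in production: page the CIRT contact
            paged.append((level, message))
    return paged
```

Keeping the threshold high matters for the same reason Kuchta gives above: a team paged for routine noise loses its edge.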

PUBLICATIONS. Several comprehensive guides have been published to train corporations on how to set up a CIRT.

* Handbook for Computer Security Incident Response Teams (CSIRTs) by CERT/CC

* Security materials from

* Computer Security Incident Handling: Step-by-Step by SANS Institute

(@ All of the Web sites mentioned in this article can be accessed easily via the "Beyond Print" link on SM Online.)
COPYRIGHT 2000 American Society for Industrial Security

Publication: Security Management, Feb 1, 2000.