Vulnerability management technology: a powerful alternative to attack management for networks. (Storage Networking).

In recent years, enterprise network environments have become more complex, with an increasing reliance on digital assets to provide services that meet business demands. While there are many positive aspects for a company that embraces the Internet, the greatest challenge is implementing and managing security solutions that ensure network and data integrity and that map directly to the needs of the business.

The variety, frequency and complexity of attacks used against corporations are also on the rise (see Figure 1). Scripted attack methods automate the process of breaking into a network to the level of point-and-click; very little skill is required to compromise a network and disrupt business, steal proprietary data, or maliciously damage or modify information and data files. The need to deal with intrusions effectively has never been greater.

Contrary to industry expectation, the deployment of intrusion detection systems (IDS) has done little to decrease the incidence of intrusions or the damage caused by intruders. Intrusion detection systems are not designed to prevent attacks or damage, only to alert administrators that there appears to be an attack taking place. Traditional network security approaches, built around aggregated point products and reactive response models, are simply not up to the challenge.

What is required is a shift in the fundamental philosophy of network security from attack management to vulnerability management. The vulnerabilities and exposures inherent within networked systems allow an attacker to gain a foothold within the network. These vulnerabilities and exposures place an organization at risk. The weaknesses found in the operating systems, applications and services needed to run your business must be constantly assessed to identify the extent of exposure and lower the probability of attacks on critical networks. To do this, companies must focus their resources on proactive rather than reactive security operations and stop the potential damage an attack might inflict before it starts. By proactively measuring the exposure of a network to attack, a security administrator can easily quantify and qualify the risk associated with each device and take the preventative steps needed to increase the survivability of the network and to limit the exposure of key business assets.

Reactive Security: Intrusion Detection

While there are many products on the market today providing intrusion detection, none truly solve the problems faced by today's enterprise network environments. Intrusion detection systems (both host- and network-based) are incident or post-incident based controls.

Both network- and host-based IDS solutions do little to improve the overall network security in a proactive sense, as they have to wait for an incident (attack or breach) to occur before they can start to be effective. Once an attack has been detected by an IDS, the attack has already happened and the damage may already have been done. The IDS in this case can only serve as an audit tool of events that might help to reconstruct the attack and indicate the extent of the compromise. IDS vendors realize this shortfall in their technology and are making drastic changes in the way they represent their technology in the marketplace, moving away from IDS to intrusion prevention. Unfortunately, this is still a reactive solution and many customers do not want to implement in-line blocking of traffic as it may block legitimate users due to false positives. However, with more knowledge of the exposures on the network, intrusion prevention could be more effective and less likely to impact the availability of the network as a whole.

All intrusion detection products are prone to false-positive and false-negative alerts, but none more so than network-based IDS solutions. A false-positive alert is the most common condition, in which the IDS flags what it believes is an attack and generates an alert, while the traffic is in fact legitimate. These conditions can be caused by a number of factors (such as a badly written signature or bugs in the IDS software). The large number of false-positive alerts is an industry-wide problem, with little progress being made toward significant reductions. To combat this problem, companies need to implement yet another point product to correlate and aggregate IDS alerts, requiring an even greater investment.
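To illustrate what such a correlation layer does at its simplest, the following Python sketch (with an invented alert tuple format; no particular product's behavior is implied) folds bursts of repeated alerts that share a signature and source into single summarized events:

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=5)

def aggregate(alerts):
    """Collapse alert bursts sharing (signature, source) within WINDOW.

    alerts: iterable of (timestamp, signature, src_ip, dst_ip) tuples,
    where timestamp is a datetime. The format is invented for this sketch.
    """
    groups = defaultdict(list)
    for ts, sig, src, dst in sorted(alerts):
        events = groups[(sig, src)]
        if events and ts - events[-1]["last_seen"] <= WINDOW:
            # Same signature and source, close in time: fold into one event.
            events[-1]["count"] += 1
            events[-1]["last_seen"] = ts
            events[-1]["targets"].add(dst)
        else:
            events.append({"first_seen": ts, "last_seen": ts,
                           "count": 1, "targets": {dst}})
    return groups
```

Aggregation of this kind reduces the volume of alerts an analyst sees, but it cannot tell a valid attack from a false positive; that requires knowledge of the targets themselves, as discussed later.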

False-negative conditions are situations in which there is a valid attack, but the product fails to alert. Signature-based intrusion detection systems use an architecture of limited scalability that must fully inspect every packet against the whole database of attack signatures. The most common cause of a false-negative condition is high bandwidth utilization and the resulting failure of the intrusion detection system to inspect every packet. As more attacks become known, the signature database grows, making the false-negative issue even more prevalent. One very important result of too many false-positive and false-negative alarms is that they can overwhelm an IT staff with "noise" and forensic work that leaves them chasing ghosts, consuming valuable security resources and eventually causing the IT staff to distrust the alerts.
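The scalability problem is easy to see in miniature. The following deliberately naive matcher (the byte-string signatures are made up) checks every packet against the entire database, so its per-packet cost grows with every signature added:

```python
# Deliberately naive signature matching: every packet payload is checked
# against the entire signature database. Real IDS rules also match on
# headers, offsets, and protocol state, making the per-packet cost higher.
SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"\x90\x90\x90\x90"]

def inspect(payload: bytes) -> list[bytes]:
    """Return the signatures found in this payload (full-database scan)."""
    return [sig for sig in SIGNATURES if sig in payload]

# Cost per packet is roughly O(len(SIGNATURES) * len(payload)); on a
# saturated link the sensor cannot keep up and silently skips packets.
```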

Change Control

Management of any IDS solution requires extensive time and effort: Network-based IDS solutions require constant changes, such as adding signatures when new hardware or software is added to a subnet, or constantly updating the system with vendor-issued updates and new attack signatures. Host-based IDS solutions are also resource intensive and protect only the devices on which they are installed. Deploying a host-based IDS solution requires extensive interoperability testing and integration. Each time a host is added, it must be configured, and each time the host-based IDS is upgraded with new features, it must go through change control before redeployment.

Network-based IDS solutions that use signatures to detect attacks typically involve professional security personnel and months of tuning and tweaking to reduce the number of alerts to a manageable level. This tweaking reduces their overall coverage of attacks and an attack must have a signature before it can be detected. New or "Zero Day" attacks will not be detected, leaving systems open to compromise. In turn, many host-based IDS solutions are also limited in the protection they provide. For example, many just monitor the system logs to detect password rattling, registry or file modifications, leaving them open to network exploitation.

Clearly, intrusion detection has its place in the layered security architecture of many organizations, but the costs in time, manpower, and incident recovery associated with reactive controls are prohibitive. Deploying IDS technologies gives some IT staff the ability to play network "cops and robbers." It adds a more interesting twist to the IT role, but it is not proactive in securing the network. The answer is to measure and assess the extent of your company's exposure to threats, then take action to limit that exposure.

Proactive Security: Vulnerability Assessment

Traditional network vulnerability assessment is used to identify a company's existing vulnerabilities and risk profile to make recommendations on improving security practices. Many aspects of a company's network can fall under the scope of a security assessment, but the two key areas are "internal assessment" and "external assessment."

* Internal Assessment: Many organizations implement a security policy that outlines the procedures for business operations based on industry best practices or government regulations. An internal assessment consists of various audits that are conducted throughout the internal network, including all devices and network applications.

* External Assessment: The goal of penetration tests is to try to determine how exposed the company's perimeter is to compromise. Traditionally, an external assessment required outsourcing security services to perform an audit of the company's network from the perspective of an attacker. These audits are usually performed via the Internet, from partner networks, and remote offices using hacking techniques or commercially available vulnerability assessment tools.

Measuring the "Window of Exposure"

Exposures are defined as all information that can be gathered remotely about a device, including vulnerabilities. Examples of exposures include the way an IP-enabled device responds to network connection requests, whether specific ports are open, and how the applications on those ports respond. Devices often have different exposures when evaluated from different perspectives on the network, because transit devices such as proxy servers obscure the true response of an IP-enabled device. Thus it is important to have multiple views of IP-enabled devices on the network to fully understand their security posture. Because enterprise networks are large and constantly changing, with devices and applications being added, removed, activated, and deactivated all the time, the results of network discovery from one point in time are likely to be outdated quickly. The process of identifying these device and application exposures is called "vulnerability assessment."
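To make the notion of an exposure concrete, here is a minimal sketch of the simplest possible probe: a TCP connect test plus banner grab against a handful of well-known ports. A real assessment engine probes far more ports and fingerprints applications; the function name and port list are illustrative assumptions.

```python
import socket

def probe(host: str, ports=(21, 22, 25, 80, 443), timeout=2.0) -> dict:
    """Check a handful of well-known ports and grab any greeting banner.

    Returns {port: banner_or_"open"} for ports reachable from this
    vantage point. Results will differ when run from another location.
    """
    exposures = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                try:
                    banner = s.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    banner = ""  # open, but the service waits for the client
                exposures[port] = banner or "open"
        except OSError:
            pass  # closed, or filtered by a transit device on this path
    return exposures
```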

The Challenges of Deploying Traditional Vulnerability Assessment Tools

Until now, security vendors have offered point-and-shoot, tool-based solutions to address vulnerability assessment, but these have proven difficult to deploy and manage across large, complex enterprise networks. Many of these vulnerability tools were never designed with the enterprise in mind, are run infrequently, and are inaccurate and incomplete in the information they provide. The manner in which traditional vulnerability assessment tools are used varies greatly and they have some common flaws that limit their effectiveness; these are discussed below.

Vulnerability assessment tools provide only a single snapshot of the network devices at a single point in time. Generally, VA tools cannot run continuously in a production environment because they are resource- and bandwidth-intensive and very often invasive.

Most organizations scan for vulnerabilities only rarely, perhaps once a quarter. When they do scan for vulnerabilities, they tend to avoid business hours, which further obscures the results and never really provides a true representation of their network. One of the main drawbacks of traditional vulnerability assessment tools is that they cannot keep up with the constant rate of change across the network environment. Devices and applications are constantly being added and removed; therefore it is important to understand the risk that these new services present to the network security posture. Security 101: If you don't know it's there, how can you defend it?
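One way to make this change-rate problem tangible is to diff successive scan snapshots; anything that appears between runs is a service the last assessment never saw. A minimal sketch, assuming an invented snapshot format that maps (host, port) to a service banner:

```python
def diff_snapshots(old: dict, new: dict):
    """Compare two scan snapshots, each mapping (host, port) -> banner.

    Returns the services that appeared, disappeared, or changed between
    runs. The snapshot format is an assumption of this sketch, not the
    output of any particular tool.
    """
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return added, removed, changed
```

The longer the interval between snapshots, the larger and less actionable these deltas become, which is the argument for continuous rather than quarterly assessment.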

Many vulnerability assessment tools require end-user configurations to select the type of device, operating systems, applications, and vulnerabilities that they want to detect. In many cases, assumptions about exactly what is on the network may lead to compromise if an unauthorized device is not discovered and has vulnerabilities that may be exploited. It's what you don't know that you need to know about.

Another limitation is location. Vulnerability assessment tools yield results that vary greatly depending on the location from which the scan originated. Transit devices such as firewalls, switches, routers, and proxy servers mediate network traffic in a way that obscures the results of a vulnerability scan, preventing a vulnerability assessment tool from identifying vulnerable conditions that can be exploited from another location on the network. The result is a false sense of security.

Organizations want to know where the weaknesses are in their networks so they can prevent attacks by fixing the vulnerable exposures that are discovered. Most vulnerability analysis tools use a qualitative system of "low," "medium" and "high" for measuring levels of threat severity. The problem with this system is that so many vulnerabilities are marked "high" severity that it is difficult to determine which items are the most critical to fix first.

Point-and-shoot VA tools usually leave the IT staff with an overwhelming list of uncorrelated vulnerabilities to patch, without any means to validate each corrective action or metrics to measure success throughout the process. Many assessment tools also score each asset based on the total number of vulnerabilities, with little or no consideration given to the value of that asset to the business and the impact if it is compromised.

VA systems do not monitor for intrusions, so other solutions are required. The lack of network and device knowledge inherent in traditional IDS compounds the resource and management burden, rendering the solution ineffective.

In sum, vulnerability assessment needs to mature as a technology, moving away from traditional software-based tools toward a centrally managed, cost-effective solution that truly scales to large enterprise networks.

Making Vulnerability Assessment Easier: Vulnerability Management

For effective vulnerability management, organizations need constant discovery, assessment and monitoring of known exposures. This provides detailed information about every device in the environment and the ability to monitor compliance with network security policies to ensure network integrity. Vulnerability management must also allow the prioritization of exposures based on the value of the asset to the organization, not just the number of exposures present on the device. By using an exposure scoring methodology, vulnerability management gives IT staff a detailed and granular view of the risk associated with each identified exposure. Scoring systems are based on a number of factors, which should include: how easily the vulnerability is exploited, the number of days the exploit has been "in the wild," and the impact the vulnerability has on the overall security posture of the network.
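As a concrete illustration, a scoring function built from those factors might look like the sketch below; the weights, the 30-day maturity horizon, and the 0-10 scale are illustrative assumptions, not a published formula:

```python
def exposure_score(ease_of_exploit: float,  # 0 (hard) .. 1 (point-and-click)
                   days_in_the_wild: int,   # age of a public exploit
                   impact: float,           # 0 (none) .. 1 (full compromise)
                   asset_value: float) -> float:  # business value, 0 .. 1
    """Combine the scoring factors into a single 0-10 priority figure."""
    # An exploit that has circulated longer is more likely to be scripted.
    maturity = min(days_in_the_wild / 30.0, 1.0)
    technical = 0.4 * ease_of_exploit + 0.2 * maturity + 0.4 * impact
    # Weight technical severity by what the asset is worth to the business,
    # so a flaw on a critical server outranks the same flaw on a kiosk.
    return round(10.0 * technical * asset_value, 1)
```

Because the score is weighted by asset value, two devices with identical vulnerabilities sort differently in the remediation queue, which addresses the "everything is high severity" problem described earlier.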

Successful Vulnerability Management Deployment

The goal, when deploying a vulnerability management solution, is to gain a 360-degree view of the network. One of the failures of traditional intrusion detection and vulnerability assessment products is that they cannot effectively cover the network as a whole. In general, appliance-based vulnerability management solutions are relatively easy to install, configure and maintain, as they do not require software agents on the hosts to retrieve device status.

To provide the complete 360-degree view required, vulnerability management devices must be placed intelligently throughout the network. Any kind of access control, such as firewalls, routers and switches with access control lists (ACLs), and network address translation (NAT), mediates the response any vulnerability management solution receives from a device, so it is important to examine and monitor the network from these multiple perspectives. Plans for placing vulnerability management devices require careful evaluation of the data flow, network topology, and existing security infrastructure.

A view with a clear route to each device is critically important when deploying vulnerability management devices, but a mediated view can also be useful. Placement of a vulnerability assessment device outside the firewall will produce valuable data from the perspective of the hacker. In most cases, the view from the DMZ into the internal network is valuable as well. Using the DMZ as a steppingstone to attack internal hosts is a common method of operation used by an attacker. To address this, a second vulnerability assessment device should be placed inside the DMZ. This positioning will allow it to scan the devices within that DMZ with no interference, thus discovering all vulnerabilities that exist on each host, rather than just those visible through the firewall. A similar scenario could be used for any access control device.
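The payoff of placing devices on both sides of an access control is that the two views can be compared directly. A minimal sketch, assuming each view is a simple mapping of port to banner for the same host (as a probe like the one shown earlier might return):

```python
def internal_only_exposures(external: dict, internal: dict) -> dict:
    """Return services visible from inside the DMZ but not from outside.

    Each argument maps port -> banner for the same host, scanned from the
    two vantage points. Ports visible only internally are still exposures:
    an attacker with a foothold in the DMZ can reach them directly.
    """
    return {port: internal[port]
            for port in internal.keys() - external.keys()}
```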

In some instances, applying a patch to a system on the network may impact the operation of a legacy application or service running on that device, which in turn interrupts the business-critical operations of the organization. In these situations, the only option is to identify those vulnerabilities, monitor for indications that they are being exploited, and receive notification only when the vulnerable IP-enabled device is actually attacked. This unique monitoring capability is the final piece of the vulnerability lifecycle management puzzle that many other solutions do not address.

Monitoring Threats Against Vulnerable Devices

To be effective, an intrusion detection solution requires that the sensor be able to see 100 percent of the packets going across the network. However, as stated previously, the false alarms generated will impact the reaction capabilities of the IT staff assigned to monitor intrusions. The lack of network knowledge inherent in traditional IDS compounds this problem and renders the solution practically ineffective.

Clearly, there are many benefits in using the vulnerability data provided by vulnerability assessment tools to increase the effectiveness of intrusion detection solutions, greatly improving attack detection and reducing the resource overhead associated with incident resolution. By providing status information on all the devices on the network (such as vulnerabilities, open ports, applications, etc.), threat monitoring can quickly identify an "active exploit" and will alert only on an attack that is valid and destined for a vulnerable host. The threat data collected by these sensors can be used to quickly evaluate the extent of compromise, the impact to the business, and the response needed to counter the attack, without the costly time and resources needed to figure out what happened. Monitored threat data can also be correlated against attacks seen by existing IDS sensors deployed within the network to provide additional evidence collection and validation, reducing false positives by an order of magnitude or more and allowing a much more timely response when a real attack happens.
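A minimal sketch of that vulnerability-aware filtering follows; the inventory structure, the signature-to-CVE mapping, and the addresses are invented for illustration:

```python
# Invented inventory: which known vulnerabilities each host still carries.
VULN_INVENTORY = {
    "10.0.0.5": {"CVE-2002-0649"},  # e.g., an unpatched SQL Server
    "10.0.0.9": set(),              # fully patched
}

# Invented mapping from IDS signature names to vulnerability identifiers.
SIGNATURE_TO_VULN = {"MSSQL-Slammer": "CVE-2002-0649"}

def active_exploit(alert: dict) -> bool:
    """True only if the attack targets a host actually vulnerable to it."""
    cve = SIGNATURE_TO_VULN.get(alert["signature"])
    return cve in VULN_INVENTORY.get(alert["dst_ip"], set())

# The same Slammer alert is escalated when aimed at 10.0.0.5 but merely
# logged when aimed at the patched host 10.0.0.9.
```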

Changes in the devices and applications active on a network happen minute by minute, and so does the security posture of the network that is visible to an attacker. Traditional IDS solutions are reactive with high resource overhead and minimal impact on securing the network because they are not context aware. Traditional vulnerability assessment products are too resource intensive or destructive to be used during business hours, let alone continuously. Vulnerability assessment products also lead to a problem of perspective: The network map they produce is accurate from the place and time where the scan originated, but it does not capture the full range of exposures visible internally and externally to the network.

Vulnerability assessment must evolve from a "point-and-shoot" tool into an effective enterprise-class solution that truly enables management of the vulnerability lifecycle. To be effective, vulnerability assessment must be context-aware, with up-to-date, thorough information about the status of each device on the network. With this information, proactive, informed decisions can be made and action taken to remove the vulnerabilities within the environment and reduce the number of targets an attacker can exploit.

Vulnerability management is the process of identifying, measuring, prioritizing, monitoring, and remediating the potential security risks associated with IP-enabled devices that may put a business at risk. Vulnerability management determines weaknesses within the network proactively by probing IP-enabled devices for their susceptibility to known exposures, managing the process of addressing those exposures, and monitoring for attacks against still-vulnerable devices. In a time when networks are barraged by accelerating numbers of attacks, the only effective defense is efficient prevention.

[FIGURE 1 OMITTED]

Tim Keanini is CTO at nCircle Inc. (San Francisco, Calif.)

www.ncircle.com
