Application Performance Monitoring: Black Art Or Incipient Science?
A perennial argument between economists and technologists is whether or not computer technology has actually increased business productivity. While this argument still rages on a macroeconomic level (although the evidence increasingly points--finally--to a qualified "yes"), business managers can't rely on academic debate to judge the usefulness of their investments in computer and network technologies. Instead, they need solid metrics that they can tie to a business case, delivered by a system that can also keep them apprised of day-to-day performance and even help them anticipate problems before they impact the bottom line.
This is the purview of performance monitoring and reporting applications, a market that the Stamford, CT-based Gartner Group estimates reached $150 million in 2000. Gartner characterizes these applications as "end-to-end application/transaction response-time measurement," since it is the performance of the application that really matters to the end-user and thus can most closely be tied to the business case for a given network investment.
However, the market also includes applications and tools that concentrate more on network performance than on application-level measurements, and is actually quite broad in the range of capabilities offered. One way to segment the market, adapted from Gartner's taxonomy, is to divide performance monitoring and reporting tools or services into three categories: application instrumentation, network "sniffers," and client-based monitoring. These may be further subdivided, as discussed later.
However, it must be noted up front that no single application performance monitoring product can deliver every iota of information a business needs to fine-tune its network and servers. Even the vendors with the broadest coverage and largest market share, such as Concord Communications, can't cover all the bases, and in attempting to do so may fall short when it comes to integrating and correlating the information delivered. This short overview introduces the capabilities of these three classes, what they can and cannot deliver, and their strengths and weaknesses, to help VARs and integrators develop a short list for further investigation.
Instrumenting an application involves writing additional code within it to detect, demarcate, and report transaction milestones. It can report an application's availability, performance, response time, and workload throughput, thus measuring the actual service levels experienced by users. For organizations with service level agreements (SLAs) in place, this is invaluable. Instrumentation can also reveal what operations within an application are responsible for bottlenecks, and can be used for capacity planning and chargeback schemes.
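To make this concrete, the service levels an instrumented application reports might be checked against an SLA along these lines. This is a minimal sketch; the field names (`elapsed_ms`, `ok`) and thresholds are illustrative, not drawn from any particular product:

```python
# Illustrative SLA check over transaction records reported by an
# instrumented application. Field names and thresholds are hypothetical.

def check_sla(transactions, max_p95_ms=2000.0, min_availability=0.999):
    """Return (p95 response time, availability, pass/fail) for one window."""
    times = sorted(t["elapsed_ms"] for t in transactions if t["ok"])
    failures = sum(1 for t in transactions if not t["ok"])
    availability = 1.0 - failures / len(transactions)
    # 95th-percentile response time over successful transactions
    p95 = times[int(0.95 * (len(times) - 1))]
    passed = p95 <= max_p95_ms and availability >= min_availability
    return p95, availability, passed

window = [
    {"elapsed_ms": 180.0, "ok": True},
    {"elapsed_ms": 240.0, "ok": True},
    {"elapsed_ms": 3100.0, "ok": True},
    {"elapsed_ms": 0.0, "ok": False},
]
p95, avail, passed = check_sla(window)
```

The same per-transaction records also feed capacity planning (throughput trends) and chargeback (transactions per department).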
The degree of granularity--i.e., how much detail the instrumentation can resolve--is simply a matter of programmer time and additional code. And therein lies the rub: this is a highly invasive technique, generally requiring modification of the original application, and getting it wrong can bring the application down. However, Windows- or Intel-based clients can often be instrumented noninvasively, through techniques such as "DLL injection," as in the case of Lotus Notes.
For applications that cannot be instrumented noninvasively, one help is the Application Response Measurement (ARM) standard and API, which defines simple function calls that can be embedded in application code to enable managers to monitor business transactions. An agent on the same machine captures those calls and forwards them to an ARM reporting application, such as those available from Hewlett-Packard and Tivoli, which originated the ARM specification before it was taken over by The Open Group.
The advantage of ARM, or instrumentation in general, is that it allows the reporting application to deliver information that helps support fundamental business processes. Whatever in the application is critical to the business can be monitored and reported on. However, it tends to be expensive, resource-intensive, and slow to implement.
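The ARM model centers on paired start/stop calls bracketing each business transaction, with a local agent collecting the results. The flavor can be sketched in Python; the `Agent` class and its `record()` method here are illustrative inventions, not the actual ARM bindings:

```python
import time

# Sketch of ARM-style instrumentation: paired start/stop calls bracket a
# business transaction, and the elapsed time is handed to a local agent.
# Agent and Transaction are illustrative, not part of the ARM spec.

class Agent:
    def __init__(self):
        self.records = []          # stands in for the local ARM agent

    def record(self, name, status, elapsed_s):
        self.records.append((name, status, elapsed_s))

class Transaction:
    def __init__(self, agent, name):
        self.agent, self.name = agent, name

    def __enter__(self):                       # analogous to arm_start
        self.t0 = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):     # analogous to arm_stop
        status = "GOOD" if exc_type is None else "FAILED"
        self.agent.record(self.name, status, time.monotonic() - self.t0)
        return False                           # don't swallow exceptions

agent = Agent()
with Transaction(agent, "order-entry"):
    pass  # the monitored business transaction would run here
```

Because the application itself names the transaction ("order-entry" above), the resulting reports map directly onto business processes, which is exactly the strength noted here.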
By contrast with application response measurement, network "sniffers" are noninvasive devices installed in the network to read application information from packets as they pass by or through the box. Such devices can deliver information such as packet size distribution by application, so you can see where the loads on your network are coming from; a variety of network-level parameters such as packet loss, round trip time, and the number of connections involved with any given application; and information about the host as well.
These devices may monitor only from one location, as in the case of products from Packeteer and NetCalibrate, or may make use of distributed probes throughout the network, as in the case of products from Brix and NetScout. In the latter case, the application may be able to make use of Simple Network Management Protocol (SNMP) or Remote Monitoring 2 (RMON2) data from agents supplied with various network elements as well, or even information from proprietary agents in network elements such as Cisco routers.
In either case, the advantages are similar. Reading information from packets is noninvasive, easily implemented, and familiar to network managers. Distributed sniffers can also serve as powerful tools for monitoring internal SLAs. (They cannot be so used for SLAs offered by an external network provider unless that provider is willing to put probes into its network.)
However, for some applications, these advantages are outweighed by the disadvantages. It is more difficult for a sniffer to deliver application-specific information, since this requires not only deep packet inspection, which is computationally intensive, but knowledge of the operation of the application. This is one reason that distributed network probes tend to be quite expensive. In addition, a sniffer can't capture information about response-time problems arising from desktop components. Sniffers also cannot gather any but the most basic information (that available from the IP and TCP headers) from encrypted traffic, making them less useful for commercially oriented web sites. And finally, the information they deliver is not easily interpreted by business managers; it is one or more steps removed from actual business processes.
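The per-application load breakdown a sniffer produces can be illustrated with a toy classifier that buckets packets by well-known destination port. The port table and packet records are illustrative; a real probe inspects live traffic, often in hardware at wire speed:

```python
from collections import defaultdict

# Toy version of a sniffer's per-application traffic breakdown:
# classify packets by well-known destination port and total the bytes
# per application. Records are (dst_port, size_in_bytes) tuples.

PORT_TO_APP = {80: "http", 443: "https", 25: "smtp", 53: "dns"}

def bytes_by_app(packets):
    totals = defaultdict(int)
    for dst_port, size in packets:
        totals[PORT_TO_APP.get(dst_port, "other")] += size
    return dict(totals)

packets = [(80, 1500), (80, 400), (443, 1200), (9999, 64)]
breakdown = bytes_by_app(packets)
```

Note how this mirrors the limitations just described: the port-based classifier knows nothing about the application's internal operations, and for encrypted traffic the headers are all it can see.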
Publication: Computer Technology Review, Feb 1, 2001.