
Internet Accelerators -- A Lot More Than Hot Rods.

In Roman mythology, Janus was the porter of heaven; portrayed with two faces, he was the god of gateways, looking inward and outward. As such, he might well serve as the patron deity of those concerned with web site performance, for they too must look inward and outward.

In fact, a useful way to look at technologies and products designed to improve web site performance is to use the border or access router that links a site to the Internet--often called a gateway, as a matter of fact--as the dividing line for where a given device has its major effect.

"Server-facing" technologies like web switches and load balancers, proxy caches, SSL accelerators, and similar devices are designed to improve the reliability and scalability of a web site; their major effect is on the infrastructure inside the border router. Examples of "browser-facing" techniques include content delivery networks (CDNs) such as Akamai or Digital Island, which cache static web site content out on the Internet closer to the user to avoid Internet congestion and delay, and multihoming, which involves connecting to multiple ISPs and managing the Border Gateway Protocol (BGP) tables in the border routers to create a kind of "downstream load balancing." Their major focus is on the network outside the border router.

However, this distinction does not hold up for all devices. A case in point is the "Internet accelerator," an emerging class of products--some hardware appliances, some software for NT or Unix/Linux boxes--that, while focused primarily on the network outside the border router, may also deliver scalability benefits for the web site infrastructure. The primary technologies used by Internet accelerators are HTTP proxying (including converting HTTP 1.0 connections to HTTP 1.1) and HTML compression; some add proprietary twists to these or other techniques, such as caching. HTTP proxying has both network and server benefits, while compression mainly benefits network performance. A short review of these technologies and their benefits will serve as a brief overview of the Internet acceleration products currently available or announced.

I'm Not Who You Think I Am

In TCP/IP networks, a proxy server is an intermediary program that acts as both server and client. It intercepts all communications between the true server(s) and client(s), terminating incoming connections and recreating them anew as output. Thus, to a server, a proxy looks like a client (browser), and to a client, it looks like a server. Every element in an HTML page, including the page itself, is transferred as a file via a TCP connection, so an HTTP proxy is simply an application that terminates the TCP connection at the server end and recreates it at the browser end, or vice-versa, for every file transferred during a browser session.
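
The mechanics are easy to sketch. The few lines of Python below relay bytes between a browser and a single backend server; the listening port, backend address, and thread-per-connection design are purely illustrative, and a real proxy adds timeouts, connection limits, logging, and header handling:

```python
# A toy TCP relay illustrating the proxy idea; the port, backend address, and
# thread-per-connection design are placeholders, and cleanup/error handling
# is omitted for brevity.
import socket
import threading

LISTEN_PORT = 8080                 # where browsers connect (placeholder)
BACKEND = ("127.0.0.1", 80)        # the "real" web server behind the proxy

def relay(src, dst):
    """Copy bytes from one socket to the other until the sender closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client):
    # To the browser this socket looks like the server; to the server,
    # the proxy looks like just another client.
    server = socket.create_connection(BACKEND)
    threading.Thread(target=relay, args=(client, server), daemon=True).start()
    relay(server, client)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", LISTEN_PORT))
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```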

In an Internet accelerator, an HTTP proxy can serve two purposes: one simply by existing as a proxy, and the other by converting HTTP 1.0 flows from the server into HTTP 1.1 flows to the client. The first is an example of a "server-facing" benefit, and is relatively simple to understand by way of an example: imagine that a browser is accessing a large JPEG file through a slow modem connection. Without the proxy, the TCP connection between the browser and the server will be "locked" for as long as the transfer takes. Since most servers have a fairly low limit on the number of simultaneous TCP connections they can support (in Apache this is about 250 or so; in Microsoft IIS, somewhat higher), these slow connections can greatly reduce the server's ability to support all the browsers that want to connect--new users either get an error code, their connection simply times out, or they get tired of waiting and leave the site.

With a proxy, however, the server can download the image to the proxy very quickly over the high-bandwidth Ethernet connection that links them--most Web sites now use Fast Ethernet internally (100Mbps), and many are upgrading to Gigabit Ethernet (1Gbps). The proxy then buffers the image in its own memory and terminates the connection on the server side, freeing that server resource for a new user, while it continues to dole out the image to the slow client. Alternatively, the proxy may establish a number (up to the maximum) of permanent connections to the server and then keep those connections full by multiplexing multiple browser requests onto each one.
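
A rough sketch of the buffering variant, again in Python with invented names, shows how the backend connection is released the moment the object is in the proxy's memory, regardless of how slowly the client drains it:

```python
# A sketch of the buffering step: drain the backend at LAN speed, then feed
# the slow client from the proxy's own memory. Names and sizes are illustrative.
import socket

def fetch_whole_object(backend_addr, request_bytes):
    """Pull the complete response from the backend as fast as the LAN allows.

    Assumes request_bytes tells the server to close when done (an HTTP 1.0
    request, or one carrying a "Connection: close" header).
    """
    buf = bytearray()
    with socket.create_connection(backend_addr) as backend:
        backend.sendall(request_bytes)
        while chunk := backend.recv(65536):
            buf.extend(chunk)
    # The backend connection is already released here, no matter how slowly
    # the client below drains the data.
    return bytes(buf)

def drip_to_client(client_sock, payload, chunk_size=1400):
    """Serve the buffered object to the client at whatever pace it accepts."""
    for i in range(0, len(payload), chunk_size):
        client_sock.sendall(payload[i:i + chunk_size])
```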

This benefit is why, even though all modern web servers and browsers support HTTP 1.1, an HTTP proxy in an Internet accelerator is still useful. Even a proxy that doesn't convert from HTTP 1.0 to 1.1 will still offer a benefit, and, if you add caching capabilities to it, it can further offload the server by keeping copies of frequently-accessed objects in its own memory and delivering them without any demand on the server at all. (This is easily done since all HTML objects on modern web servers have a unique identifier called an "entity tag.") Caching proxies of one sort or another are offered by several accelerator vendors, as noted in the Table.
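
As a rough illustration of how entity tags help, the Python sketch below keeps a copy of each body keyed by URL and revalidates it with an If-None-Match header; real caches also honor Expires and Cache-Control rules and can serve a fresh copy with no server contact at all, so this shows only the ETag part of the story:

```python
# A sketch of entity-tag-based caching: keep a copy keyed by URL and revalidate
# it with If-None-Match. Real caches do considerably more than this.
import urllib.error
import urllib.request

cache = {}   # url -> (etag, body)

def cached_get(url):
    req = urllib.request.Request(url)
    if url in cache:
        etag, _ = cache[url]
        req.add_header("If-None-Match", etag)   # "send it only if it changed"
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            etag = resp.headers.get("ETag")
            if etag:
                cache[url] = (etag, body)        # remember it under its entity tag
            return body
    except urllib.error.HTTPError as err:
        if err.code == 304 and url in cache:     # 304 Not Modified: serve our copy
            return cache[url][1]
        raise
```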

Of course, this benefit, however it's obtained, requires that the proxy be able to handle many more connections than the servers behind it. The paragon of ability here is apparently the WebScaler accelerator from NetScaler Inc., which is claimed to be capable of supporting up to 300,000 simultaneous connections. In other words, if your infrastructure can support it, a NetScaler can "front" for over 1000 Apache servers! In reality, with that many servers behind it, the device's buffers would be likely to run out of space if the site had too many slow connections, as in the example above. Even so, the NetScaler appears to be far and away the most capable HTTP proxy available today, with a claimed throughput of 600Mbps.

The second benefit of an HTTP proxy, which is based on the conversion of HTTP 1.0 connections to HTTP 1.1 connections, is mainly "browser-facing" or network-oriented and a little harder to explain. A brief review of the differences between HTTP 1.0 and 1.1--concentrating on the greater efficiency of persistence and pipelining--may help.

With A Capital P That Rhymes With E[fficiency]

In HTTP 1.0, each element of an HTML page (which is a file on the server) is downloaded in a separate TCP connection: that means that for every element, the server must set up a TCP connection and its associated resources, transfer the file, and close the connection. That's a lot of server overhead.

Even more important, it's also a very inefficient way to use expensive Internet bandwidth, due to the "ramping behavior" of TCP's Additive Increase Multiplicative Decrease (AIMD) congestion-control algorithm. Basically, every TCP connection starts by sending only two packets (between one and three kilobytes of data). Once it receives an acknowledgement (ACK) from the browser for each packet, it sends four (one additional for each ACK), then eight, and so forth exponentially until it reaches a threshold at which the increase slows to a simple one-packet increment for each round trip. (If congestion--packet loss indicated by no ACK or duplicate ACKs--is encountered, this process starts over, with a lower threshold for transition to the additive stage.) The goal of this behavior is to discover the maximum effective throughput of the link between the server and the browser and stay below it.

But AIMD means that it takes several round trips to reach the effective carrying capacity of a given link, and, since every TCP connection starts the ramp anew, an HTML page delivered via HTTP 1.0 rarely uses the available bandwidth efficiently: each element's transfer is likely to finish before its connection reaches maximum effective throughput. Modern browsers attempt to overcome this problem by opening multiple parallel connections to the server (generally four), but this contributes to network congestion and server load and is actually a fairly "net-unfriendly" bit of behavior.
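
A back-of-the-envelope model makes the cost concrete. The Python snippet below is not real TCP--the 1460-byte segment, initial window of two segments, and fixed threshold are simply the assumptions from the description above, and loss is ignored--but it shows why one longer transfer needs far fewer round trips than many short ones:

```python
# A back-of-the-envelope model of TCP's ramp-up, not real TCP.
def round_trips_to_send(total_bytes, mss=1460, init_cwnd=2, ssthresh=16):
    """Count round trips needed to deliver total_bytes under the model above."""
    cwnd = init_cwnd            # congestion window, in segments
    sent = 0                    # bytes delivered so far
    rtts = 0
    while sent < total_bytes:
        sent += cwnd * mss      # one window's worth of data per round trip
        rtts += 1
        if cwnd < ssthresh:
            cwnd *= 2           # exponential growth while below the threshold
        else:
            cwnd += 1           # then a one-segment increment per round trip
    return rtts

print(round_trips_to_send(60 * 1024))                         # one 60KB transfer
print(sum(round_trips_to_send(6 * 1024) for _ in range(10)))  # ten 6KB transfers
```

Under these assumptions a single 60KB transfer needs about five round trips, while ten 6KB objects fetched over ten fresh HTTP 1.0 connections pay the start-up penalty ten times over--roughly twenty round trips for the same amount of data.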

By contrast, in HTTP 1.1, the server opens one persistent connection to the browser and keeps it open while shoveling every element of the page through it. Only after every element on the page is downloaded is the connection closed. In this case the connection lasts long enough for TCP's AIMD algorithm to discover and use the maximum effective throughput of the link to the browser. In addition to persistent connections, HTTP 1.1 also uses pipelining, wherein the browser, after receiving the base page (the first file, which contains the URLs for the other elements of the page), requests all of the additional elements as fast as it can without waiting for the server to answer each request. This enables HTTP 1.1 to accelerate even distributed sites: sites whose static content (such as logos, text common to many pages, and the like) is cached at the edge of the Internet by Content Delivery Networks (CDNs) such as Akamai or Digital Island. Such caching considerably shortens the path that such static data must take, avoiding most Internet congestion and so accelerating its arrival at the browser.

With pipelining, it doesn't matter that some of the pipelined requests go to the original site and some to the (more) local CDN cache: they all go out as fast as possible and so come back faster than with HTTP 1.0. Thus, HTTP 1.1 delivers both better use of server resources and more efficient use of expensive Internet bandwidth for the majority of sites.
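
At the wire level, persistence and pipelining amount to nothing more than sending several GET requests down one connection before the first response arrives. The Python sketch below shows the idea against a placeholder host and invented paths; in practice many servers handle pipelined requests poorly, so treat this purely as an illustration of the mechanism:

```python
# A minimal look at HTTP 1.1 persistence and pipelining over one raw socket.
# Host and paths are placeholders.
import socket

HOST = "www.example.com"                      # placeholder origin server
PATHS = ["/", "/logo.gif", "/style.css"]      # hypothetical page elements

with socket.create_connection((HOST, 80), timeout=5) as sock:
    # Send all requests back-to-back on one persistent connection,
    # without waiting for each response (pipelining).
    for path in PATHS:
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {HOST}\r\n"
                   f"Connection: keep-alive\r\n\r\n")
        sock.sendall(request.encode("ascii"))

    # Responses come back in request order over the same connection.
    chunks = []
    try:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    except socket.timeout:
        pass          # stop reading once the server goes quiet

print(len(b"".join(chunks)), "bytes received over a single TCP connection")
```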

Anyone Care For A Game Of Squash?

Compression is the second primary technology used in Internet accelerators. There are several different compression schemes offered by vendors in this space, differing in whether they are lossless or lossy, and whether they require the download of a client-side application or merely the presence of an up-to-date browser. The Table summarizes the compression type available from each vendor listed.

The easiest form of compression to deliver is HTML compression, which is part of the HTTP 1.1 specification. This depends on compression technologies such as gzip and compress, which are supported by all modern browsers (a browser's ability to support decompression of compressed HTML is verified during the initial HTTP negotiation, so non-compliant browsers get uncompressed data). HTML compression affects only the textual portions of a page and, of course, cannot accelerate content delivered from a different source from the base page. However, compression of the base page quickens the delivery of the URLs referencing all other content, so HTML compression can accelerate distributed sites somewhat. HTML compression also serves to offload the server to a minor extent by removing the burden of compression from the server CPU (all modern web servers support HTML compression).
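
The negotiation itself is simple enough to sketch. Assuming the accelerator (or server) has the page in hand, a Python fragment like the one below compresses it with gzip only when the browser's Accept-Encoding header says it can cope:

```python
# A sketch of the negotiation: gzip the HTML only if the browser's
# Accept-Encoding header says it can decompress it.
import gzip

def maybe_compress(html_bytes, accept_encoding_header):
    """Return (body, content_encoding) for the response."""
    if "gzip" in accept_encoding_header.lower():
        return gzip.compress(html_bytes), "gzip"
    return html_bytes, "identity"     # non-compliant browser: send it uncompressed

page = b"<html><body>" + b"<p>hello world</p>" * 500 + b"</body></html>"
body, encoding = maybe_compress(page, "gzip, deflate")
print(f"{len(page)} bytes -> {len(body)} bytes ({encoding})")
```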

However, since the bulk of a web page is usually graphics, some vendors also offer one or two image compression techniques. GIF and PNG images are already compressed--using such pre-compressed formats is itself one of many techniques for optimizing web page display time. But several vendors offer GIF to PNG conversion because PNG compresses better than GIF and, thanks to a superior interlacing technique, also displays faster.
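
For what such a conversion involves, the sketch below uses the third-party Pillow imaging library as a stand-in for whatever converter a vendor actually ships; the file names are placeholders:

```python
# GIF-to-PNG conversion sketched with the third-party Pillow library; the file
# names are placeholders, and vendors' own converters may do considerably more.
from PIL import Image

with Image.open("logo.gif") as im:
    # The palette-based pixel data carries over unchanged; PNG's DEFLATE
    # compression usually just packs it tighter than GIF's LZW.
    im.save("logo.png", format="PNG", optimize=True)
```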

JPEG images, being already compressed by a complex (and lossy) mathematical transformation, gain little from further lossless compression. They can, however, be "downsampled," a process of removing information to reduce image size. Since this degrades image quality, one vendor that offers downsampling, Packeteer, has its AppCelera product detect the throughput of the connection and deliver an image downsampled to "match" that speed. The assumption is that lower-speed customers will prefer a fast image to a detailed image. However, market research into buying patterns on hard-goods e-commerce sites indicates a strong correlation between image quality and product purchase, so this technique may not fly for such sites.
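
A sketch of the idea, again using Pillow with invented speed thresholds, quality settings, and file names (a product in this space would measure the client's real throughput rather than take it as an argument):

```python
# Throughput-matched JPEG downsampling sketched with Pillow. Thresholds,
# quality settings, and file names are invented for illustration.
from PIL import Image

def downsample_for(kbps, src="photo.jpg", dst="photo_small.jpg"):
    with Image.open(src) as im:
        if kbps < 56:                # modem-class link: shrink and recompress hard
            im = im.resize((im.width // 2, im.height // 2))
            quality = 40
        elif kbps < 512:             # DSL-class link: same size, lower quality
            quality = 60
        else:                        # fast link: near-original quality
            quality = 85
        im.save(dst, format="JPEG", quality=quality)

downsample_for(33.6)                 # e.g. a 33.6kbps modem user
```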

Another vendor, Datagistics, offers lossless compression of all web objects through the use of a client-side application and a proprietary compression algorithm. Although many consider requiring proprietary client-side processing (either through a plug-in, a Java applet, or Javascript) to be fraught with problems, the vendors who offer some variant of this believe the benefits outweigh the possible support issues, especially since Java, in particular, has stabilized to a great degree.

For instance, FireClick offers BlueFlame 2.0, which identifies the content most likely to be fetched next and pushes it to a client-side Java applet before the user requests it. If the proper content is pre-fetched, this completely avoids Internet delay. Another vendor, FineGround, uses client-side Javascript to offer on-the-fly "delta compression" of web pages, which delivers a base page on the initial download and then sends only the difference between that and subsequently-requested content. In all these cases, the compression ratio or acceleration achieved is claimed to be far beyond that offered by standard compression algorithms.
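
Delta compression is easy to illustrate even without the vendors' proprietary formats. The Python sketch below uses difflib to ship only the text that differs between a page the client already holds and the page it just requested; the reassembly step is what client-side code would perform, though the real products' formats and logic are their own:

```python
# A sketch of the delta idea: compute the difference against the page already
# sent, ship only the changed text plus copy instructions, and reassemble.
import difflib

def make_delta(old: str, new: str):
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))        # reuse text the client already has
        else:
            ops.append(("data", new[j1:j2]))    # ship only the changed text
    return ops

def apply_delta(old: str, ops):
    out = []
    for op in ops:
        if op[0] == "copy":
            out.append(old[op[1]:op[2]])
        else:
            out.append(op[1])
    return "".join(out)

base = "<html><body><h1>Catalog</h1><p>Item: widget, $10</p></body></html>"
new  = "<html><body><h1>Catalog</h1><p>Item: gadget, $12</p></body></html>"
delta = make_delta(base, new)
assert apply_delta(base, delta) == new
print(f"full page: {len(new)} chars, delta payload: "
      f"{sum(len(op[1]) for op in delta if op[0] == 'data')} chars")
```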

Other technologies that may be folded into an Internet accelerator include web log aggregation and Layer 7 switching (see Table). Since the logical place for an Internet accelerator is between the final aggregation point of a server cluster (such as a web switch) and the border router, it's also the logical place to collect server log data about customer behavior. Going one step further, why not make the accelerator the aggregation point itself by giving it application-layer awareness? Then you don't need a web switch; instead, the accelerator can make load balancing decisions based on URL, cookie, data type, or other application-level information.
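
The decision logic behind such Layer 7 switching can be as simple as a lookup on the request. The Python sketch below is an invented policy--the pool names and rules are placeholders--showing how a URL suffix or a cookie, rather than an IP address, picks the server pool:

```python
# A sketch of a Layer 7 routing decision: choose a server pool from
# application-level data instead of just IP addresses and ports.
def choose_pool(path: str, headers: dict) -> str:
    cookie = headers.get("Cookie", "")
    if "session=" in cookie:
        return "app-servers"        # stick logged-in users to dynamic servers
    if path.endswith((".gif", ".jpg", ".png", ".css", ".js")):
        return "static-cache"       # images and style sheets go to the cache tier
    if path.startswith("/cgi-bin/") or path.endswith((".asp", ".jsp")):
        return "app-servers"
    return "web-servers"            # everything else hits the general pool

print(choose_pool("/images/logo.gif", {}))                                # static-cache
print(choose_pool("/store/checkout.asp", {"Cookie": "session=abc123"}))   # app-servers
```
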
Hardware/Software Internet Accelerators

Software

Feature | Boostworks BoostWEB | Datagistics RAPID | FineGround Condenser
Technology | On-the-fly compression of all web objects at a client-selected ratio | Lossless compression of all web objects; files are compressed once and served by existing servers | On-the-fly delta compression of HTML
HTTP/1.1 Proxy | Yes | -- | Yes
HTML Compression | Yes | Yes | --
JPEG Compression | Lossy | Lossless | --
GIF to PNG Conversion | Yes | Yes | --
Cache | Future | -- | --
Server Access Log | -- | -- | --
Client Requires | -- | Plug-in | Java applet / JavaScript

Hardware

Feature | FireClick BlueFlame | NetScaler WebScaler | Redline Networks TIX | Packeteer AppCelera
Technology | Predictive caching of files based upon statistics gathered by the monitoring appliance | HTTP/1.1 proxy; provides Layer 7 switching for cache redirection | HTML compression appliance | Compression based upon client connection speed
HTTP/1.1 Proxy | Yes | Yes | Yes | Yes
HTML Compression | -- | -- | Yes | Yes
JPEG Compression | -- | -- | -- | Lossy
GIF to PNG Conversion | -- | -- | -- | Yes
Cache | RAM | -- | -- | Disk
Server Access Log | Yes | -- | -- | Yes
Client Requires | Java applet | -- | -- | --

Author: Dave Trowbridge
Publication: Computer Technology Review, April 1, 2001