Looking to benefit from iSCSI storage?
Despite all of the commotion around iSCSI, the list of practical storage solutions is surprisingly short. Some startups have introduced specialty appliances, while a few of the established disk array vendors have retrofitted their products with iSCSI channels. You'll even find some NAS boxes getting an iSCSI facelift. Most of them approach iSCSI from the standpoint of hardware: they lock a controller, disk drives and network ports into sheet metal, hoping the result fits your needs and price point.
Intriguing software alternatives exist as well. On the host side of the link, Microsoft, Red Hat and others provide iSCSI initiator drivers that run right on top of the operating system's TCP/IP stack, making Windows and Linux machines with an Ethernet port "iSCSI-ready."
On the storage end of the connection, shrink-wrapped software available today can turn a PC into an iSCSI "disk server." These software products enable a PC to imitate iSCSI disk arrays inexpensively with the hardware and administrative tools you are already familiar with.
Purpose-Built vs. General Purpose
The essential differences between software-enabled iSCSI disk servers and purpose-built storage devices revolve around two dimensions:
* Choice of hardware
* Configuration flexibility
Although it's easy to concede that disk servers configured from PCs would have cost, configuration and upgradeability advantages over purpose-built appliances, you might guess that purpose-built iSCSI subsystems outperform disk servers built on general-purpose components. Nothing could be further from the truth.
[FIGURE 1 OMITTED]
The same factors that made hardware-centric database machines obsolete in favor of portable database software are now at play in the storage market.
Back to Basics
iSCSI provides a fantastic means to access disk drives over a LAN, removing the barrier of buying, installing and learning an entirely new networking infrastructure for the purpose of storage. Unfortunately, real disk drives don't come with Ethernet plugs. Instead, they have EIDE/ATA, SCSI, Fibre Channel or SATA cables coming out of them. To complete the iSCSI connection, something must convert the SCSI commands carried over the GigE cable into a language and electrical interface compatible with the disk drive. That "something" essentially pretends to be an iSCSI target device--this is the role played by intelligent controllers in a storage array or appliance.
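The target's translation job can be pictured with a toy block device. The sketch below is a minimal Python illustration only; the class and method names are hypothetical, and a real iSCSI target additionally parses iSCSI PDUs arriving over TCP, handles login negotiation and recovers from errors.

```python
BLOCK_SIZE = 512  # bytes per logical block, as in classic SCSI

class ToyBlockTarget:
    """Toy stand-in for an iSCSI target: maps logical block
    addresses (LBAs) onto a plain in-memory backing store."""

    def __init__(self, num_blocks: int):
        self.store = bytearray(num_blocks * BLOCK_SIZE)

    def write_blocks(self, lba: int, data: bytes) -> None:
        assert len(data) % BLOCK_SIZE == 0
        start = lba * BLOCK_SIZE
        self.store[start:start + len(data)] = data

    def read_blocks(self, lba: int, count: int) -> bytes:
        start = lba * BLOCK_SIZE
        return bytes(self.store[start:start + count * BLOCK_SIZE])

# A real target would decode SCSI commands carried in iSCSI PDUs
# and issue the equivalent reads/writes to physical disk drives.
target = ToyBlockTarget(num_blocks=8)
target.write_blocks(2, b"x" * BLOCK_SIZE)
assert target.read_blocks(2, 1) == b"x" * BLOCK_SIZE
```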
[FIGURE 2 OMITTED]
With cost competitiveness in mind, suppliers of iSCSI solutions often use slower, cheaper disk drives with relatively poor access times. Good controllers compensate for such drive characteristics by caching I/Os in electronic memory to mask mechanical latencies from the applications. Caching thus improves the apparent response time from disk.
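The caching idea is simple to demonstrate. Below is a minimal least-recently-used (LRU) read cache in Python; the names and the tiny capacity are hypothetical, and real controllers also cache writes, prefetch and manage coherency.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache: repeat reads are served from memory
    instead of paying the mechanical latency of the disk."""

    def __init__(self, backend_read, capacity: int):
        self.backend_read = backend_read   # function: lba -> bytes
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, lba: int) -> bytes:
        if lba in self.cache:
            self.hits += 1
            self.cache.move_to_end(lba)     # mark most recently used
            return self.cache[lba]
        self.misses += 1
        data = self.backend_read(lba)       # slow path: go to disk
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

disk = {lba: bytes([lba]) * 4 for lba in range(100)}  # stand-in disk
cache = ReadCache(disk.__getitem__, capacity=8)
for lba in (1, 2, 1, 1, 3):
    cache.read(lba)
assert (cache.hits, cache.misses) == (2, 3)
```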
To avoid wasting capacity and provide more multi-user flexibility, the controller offers features that carve up each physical disk drive into smaller logical drives that better match each application's capacity needs. For example, an 80-GB disk can be sliced into eight separate 10-GB LUNs (logical unit numbers). Each 10-GB logical drive can be independently allocated among various application hosts, just as if they were eight physical disks.
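The 80-GB example works out as straightforward arithmetic. The sketch below carves a disk into equal fixed-size extents; the function name is hypothetical, and real controllers track allocation state and per-host LUN masking on top of this.

```python
def carve_luns(disk_gb: int, lun_gb: int):
    """Slice a physical disk into equal fixed-size logical drives,
    returning (lun_id, start_gb, end_gb) extents."""
    count = disk_gb // lun_gb
    return [(i, i * lun_gb, (i + 1) * lun_gb) for i in range(count)]

luns = carve_luns(disk_gb=80, lun_gb=10)
assert len(luns) == 8           # eight independent 10-GB logical drives
assert luns[0] == (0, 0, 10)
assert luns[7] == (7, 70, 80)
```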
Embedded software (sometimes known as firmware or microcode) in the controller makes protocol translation, caching and fine-grain LUN allocation possible. Like any other software, firmware runs on a computer--usually a custom motherboard designed around the CPUs that were current at the inception of the development project. Depending on the original design date, controller CPUs will be two or three generations behind processors shipped with current PCs. The same is true for other components that make up the memory subsystem and the network ports. Hence, the purpose-built appliance starts with a performance disadvantage relative to off-the-shelf PCs running the latest and greatest architecture.
Another business consideration further handicaps purpose-built hardware: lack of substantial sales volume. The number of disk arrays sold by a storage vendor pales in comparison with the large volume of PCs sold each year. Component costs for specialty parts bought in smaller lots run much higher than for mass-market equipment. The specialty devices suffer from higher prices and fewer inventory turns, forcing storage suppliers to stretch their investment over more years, further compounding an already stale hardware platform.
What we are experiencing with storage subsystems today is pretty much what intelligent database machines were running up against several years ago. The purpose-built database hardware could not keep up with rapidly evolving off-the-shelf PCs. When the database vendors came to that realization, they conceded the hardware platform to the server suppliers and shifted their attention to portable database software, focusing on the quality and richness of the solution. Each year SQL Server, Oracle, Sybase, as well as other database implementations, deliver better value in part because Intel and AMD OEMs offer faster, smaller and cheaper systems on which to run the software.
Disk Servers: An Alternative Made Possible by Shrink-Wrapped Software
Like the name "database servers," the term "disk server" has been coined for a new class of storage devices that combine off-the-shelf PCs, general purpose networking cards and commodity disk drives with shrink-wrapped storage control software. The firmware functions buried inside a purpose-built storage array have been reimplemented in a portable form that runs on any standard PC, ranging from very low-cost, reasonably fast machines to moderately priced ultra-high-performance systems. Just pick the storage service software options you require, then match the server hardware to the anticipated workload and capacity demands.
Head-to-Head Price/Performance Comparison
Industry standard benchmarks help users objectively measure the price/performance advantage of the disk server approach. In March 2004, the first SPC Benchmark 1 results for a disk server were published (www.storageperformance.org/results.html). At roughly half the price per SPC-1 IOPS of the nearest external array, a software solution harnessed the high power and low cost of a general-purpose server to set a new price/performance mark against conventional disk subsystems. That's an impressive advantage that will continue to get better as newer PCs, HBAs and disk drives become available. Next year, customers can take advantage of faster and cheaper platforms without waiting on vendors to incorporate those technologies into their arrays--at a much higher price.
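The metric itself is simple division: total tested price over sustained SPC-1 IOPS. The numbers below are purely hypothetical stand-ins for illustration, not the published results.

```python
def dollars_per_iops(total_price_usd: float, spc1_iops: float) -> float:
    """SPC-1 price/performance: total price of the tested
    configuration divided by its sustained SPC-1 IOPS."""
    return total_price_usd / spc1_iops

# Hypothetical figures: a disk server at half the $/IOPS of an array.
disk_server = dollars_per_iops(50_000, 10_000)    # 5.0 $/IOPS
array       = dollars_per_iops(100_000, 10_000)   # 10.0 $/IOPS
assert disk_server == array / 2
```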
The Technical Hurdle
Major breakthroughs were necessary to separate the embedded storage control software from the hardware. Creating an iSCSI target driver was just the beginning, made easier by LAN-ready servers with TCP/IP in the operating system. The real difficulty lay in implementing the entire collection of advanced storage control functions typically found on a high-end storage controller, including caching, LUN management, point-in-time snapshots, auto provisioning and remote replication. Arguably, only seasoned developers versed in high-end storage control have successfully maintained very high performance and complete software portability while coupling redundant disk servers to remove single points of failure. Such redundancy becomes paramount in mission-critical systems to ensure that disk servers take over for each other in the event of a hardware failure or a planned outage. But now all these capabilities can be found in products from leading storage ISVs.
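Of the functions listed above, point-in-time snapshots are worth a closer look. A common technique is copy-on-write: blocks are copied aside only when they are overwritten after the snapshot is taken. The Python sketch below is a toy illustration under that assumption; the names are hypothetical, and commercial implementations work at block granularity with persistent metadata.

```python
class CowSnapshot:
    """Toy copy-on-write snapshot: preserves the pre-snapshot
    contents of any block that is overwritten afterward."""

    def __init__(self, volume: dict):
        self.volume = volume   # live blocks: lba -> bytes
        self.saved = {}        # original contents of rewritten blocks

    def write(self, lba: int, data: bytes) -> None:
        if lba in self.volume and lba not in self.saved:
            self.saved[lba] = self.volume[lba]  # copy aside, once
        self.volume[lba] = data

    def read_snapshot(self, lba: int) -> bytes:
        # Snapshot view: saved copy if the block changed, else live data.
        return self.saved.get(lba, self.volume[lba])

vol = {0: b"AAAA", 1: b"BBBB"}
snap = CowSnapshot(vol)
snap.write(0, b"ZZZZ")                    # overwrite after snapshot
assert vol[0] == b"ZZZZ"                  # live volume sees new data
assert snap.read_snapshot(0) == b"AAAA"   # snapshot still sees old data
assert snap.read_snapshot(1) == b"BBBB"   # untouched block passes through
```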
[FIGURE 3 OMITTED]
Disk server software is packaged in different ways. Generally, the products are structured in price tiers that correspond to the size of the environment and the types of features needed. This pay-as-you-go approach makes it possible to license the minimum software set required today then seamlessly add features and/or capacity when the upgrades are needed. One might start with the entry-level iSCSI implementation and later introduce the snapshot feature. Some offer Fibre Channel host connections alongside the iSCSI ports for higher, more deterministic I/O performance required by larger hosts.
Putting iSCSI Disk Servers to Work
The accompanying figures illustrate some practical applications for iSCSI disk servers. Figure 1 highlights an environment where one of the existing servers will soon exhaust its internal disk space and has no internal expansion bays left. The iSCSI disk server could be configured from servers similar to those already on the floor, only dedicated to the task of supplementing disk capacity for all the servers on the LAN.
The disk server may also be used to enhance the survivability of critical data by holding mirror images of internal disk drives out on the LAN. Figure 2 shows how the mail server uses the software mirroring utility (RAID-1) in its operating system to maintain an up-to-date copy on the disk server.
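The mirroring arrangement in Figure 2 can be sketched in a few lines: every write lands on both the internal disk and the copy on the disk server. This is a hypothetical Python illustration of the RAID-1 idea only; a real OS mirroring utility works beneath the file system and handles resynchronization after outages.

```python
class Mirror:
    """Toy RAID-1: each write goes to both the internal disk and
    the remote copy on the disk server; reads use the primary."""

    def __init__(self):
        self.primary = {}   # stand-in for the internal disk
        self.replica = {}   # stand-in for the copy on the disk server

    def write(self, lba: int, data: bytes) -> None:
        self.primary[lba] = data
        self.replica[lba] = data   # synchronous mirror of every write

    def read(self, lba: int) -> bytes:
        return self.primary[lba]

m = Mirror()
m.write(0, b"mail-spool-block")
# If the primary hardware is lost, the replica alone holds the data.
assert m.replica == m.primary
```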
In the event the mail server hardware is damaged (Figure 3), an administrator could assign the redundant copy to a surviving machine, turning it into a contingent mail server until a replacement system becomes available.
To maximize the flexibility and performance you get for your purchase, you should include software-enabled disk servers in your search for iSCSI solutions. This fresh approach from independent software vendors opens the door to a wide selection of hardware platforms, price points and optional features unavailable from purpose-built appliances or arrays. Disk server software represents a formidable change in how storage will be architected going forward--just as portable database software revolutionized that market.
RELATED ARTICLE: TOE Cards & iSCSI HBAs
No discussion of iSCSI performance feels complete without covering TCP offload engines (TOE cards) and iSCSI host-bus adaptors (HBAs). Again, some surprising news here: the iSCSI protocol has a reputation for being CPU-intensive since it relies on computationally heavy TCP/IP. There is a school of thought that suggests offloading the TCP/IP processing to a network card in an effort to free up host CPU cycles. It's actually not such a new idea. TOE cards were first introduced when TCP became popular. However, customers soon realized that it was preferable to size the server to handle networking and application processing than to expect a network interface card (NIC) to keep up. In other words, it was cleaner and more cost-effective to use host CPUs than to offload work to specialty NICs. Nevertheless, TOE cards and iSCSI HBAs may well supplement onboard CPUs on disk servers that have numerous iSCSI ports. The offload cards may not do much to accelerate response time, but may help scale the number of concurrent I/Os from a given platform.
Augie Gonzalez is director of product marketing for DataCore Software (Ft. Lauderdale, FL)
Title annotation: Connectivity; SCSI protocol over TCP/IP
Publication: Computer Technology Review
Date: May 1, 2004