
What goes around comes around in storage: old ideas find new applications for today's SANs.

If you're involved in strategy creation in the storage industry, the best guidelines can sometimes be things you've already experienced in the past. Though the phrase "Vision is not seeing things as they are, rather it is seeing things as they will be" encourages thinking out of the box, seeing things as they will be might bring us back to familiar surroundings. There are several good examples of older architectures and past technologies originally used in mainframe environments that are experiencing a rebirth on Unix, Linux, and Win2K systems. Let's look at a few of these items.

Solid-State Disk--Round Two

The solid-state disk (SSD), a storage device that uses DRAM chips to hold data, saw its market come and go and is now coming back again. The first SSD was delivered to the MVS mainframe market in 1978 by StorageTek; it sold for $8,800 per megabyte, much cheaper than add-on memory at the time, and had a maximum capacity of 90MB. The SSD served the mainframe industry as a virtual memory extension for paging and swapping programs in and out of memory. The arrival of expanded storage, a bus extension for additional main memory capacity, signaled the end of the SSD market, at least for a while.

In the early 1990s, a few small companies were building SSDs for select applications running on Unix, but market visibility was low and the price per megabyte was still high. As Unix, NT, the Internet, and, later, Linux grew in popularity through the 1990s, these platforms became the largest storage markets for databases, and the heavy I/O loads they generated created response-time bottlenecks. Twenty-five years after their first appearance, SSDs remain a niche market, but they are becoming the new stealth weapon for system programmers and storage administrators who struggle to deliver the consistent response times necessary to meet service levels.

Because an SSD is built from high-density DRAM chips rather than rotating disk media and moving heads, the variable and lengthy seek and rotational delays of rotating disks are eliminated, leaving only a very short access and data-transfer time to complete an I/O operation. There are no cache misses or back-end data transfers on an SSD. Typical I/O operations on an SSD complete 30 to 40 times faster than on a rotating disk. An SSD is a quick fix for severe I/O performance problems and doesn't face the ongoing access-density challenges of higher-capacity disks. These devices are fault-tolerant architectures that protect data from all types of device failures, not just from the loss of electrical power.
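
To make the comparison concrete, here is a back-of-envelope Python sketch of per-I/O service time, assuming illustrative figures (a 5ms average seek, a 7,200-RPM spindle, and a quarter-millisecond end-to-end SSD access including channel overhead); the exact values vary by device, but the ratio lands squarely in the 30-to-40-times range.

    # Back-of-envelope comparison of per-I/O service time.
    # All figures are illustrative assumptions, not measurements of any specific device.
    AVG_SEEK_MS = 5.0                       # assumed average seek time for a rotating disk
    RPM = 7200                              # assumed spindle speed
    AVG_ROTATION_MS = (60_000 / RPM) / 2    # average rotational latency = half a revolution
    TRANSFER_MS = 0.2                       # assumed transfer time for a small block
    SSD_ACCESS_MS = 0.25                    # assumed DRAM access plus channel/protocol overhead

    disk_io_ms = AVG_SEEK_MS + AVG_ROTATION_MS + TRANSFER_MS
    ssd_io_ms = SSD_ACCESS_MS

    print(f"Rotating disk I/O: {disk_io_ms:.1f} ms")    # about 9.4 ms
    print(f"Solid-state disk I/O: {ssd_io_ms:.2f} ms")
    print(f"Speedup: {disk_io_ms / ssd_io_ms:.0f}x")    # about 37x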

The high price per megabyte of SSDs has historically been the single biggest obstacle to justifying them. Today, SSD prices are below $10 per megabyte, and fault-tolerant capacities reach 50 gigabytes. SSDs are creating a new market and have found new life, primarily as database accelerators (Oracle, DB2, SQL, Informix, and Sybase), since databases are typically the most I/O-intensive of all applications.

The SSD is also gaining popularity as a SAN accelerator. Implementing SSDs as accelerators within SANs may prove even more beneficial in the long term, since data can move asynchronously from disk to SSD to contain hot files and reduce congestion on other cached disk subsystems. Future software developments aim to move hot files to and from the SSD automatically as the workload dictates. Typically, as much as 3% of online data is classified as "hot data" and is ideally suited to, and cost-justified for, an SSD. In its first life the SSD was targeted as a virtual memory extension and perhaps should more appropriately have been called solid-state memory. A quarter century later, its second life targets ultra-high-performance disk applications.
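
As a rough illustration of the kind of automatic hot-file placement such software might perform, the short Python sketch below promotes files to the SSD tier when their I/O rate crosses a threshold and demotes them once they cool off. The thresholds and the move_file() helper are hypothetical placeholders, not any vendor's actual interface.

    # Hypothetical hot-file placement policy: promote busy files to the SSD tier,
    # demote them when they cool off. Thresholds and move_file() are assumptions.
    from collections import Counter

    PROMOTE_THRESHOLD = 1000        # assumed I/Os per interval that mark a file as hot
    DEMOTE_THRESHOLD = 100          # assumed I/Os per interval below which a file has cooled
    access_counts = Counter()       # I/Os per file in the current sampling interval
    on_ssd = set()                  # files currently resident on the SSD tier

    def record_io(path):
        """Count one I/O against a file for this sampling interval."""
        access_counts[path] += 1

    def move_file(path, tier):
        """Placeholder for the asynchronous copy between rotating disk and SSD."""
        print(f"moving {path} to {tier}")

    def rebalance():
        """Move hot files onto the SSD and cooled-off files back to rotating disk."""
        for path, count in access_counts.items():
            if count >= PROMOTE_THRESHOLD and path not in on_ssd:
                move_file(path, tier="ssd")
                on_ssd.add(path)
            elif count <= DEMOTE_THRESHOLD and path in on_ssd:
                move_file(path, tier="disk")
                on_ssd.discard(path)
        access_counts.clear()       # start a fresh sampling interval

    for _ in range(1500):
        record_io("/data/orders.dbf")   # simulate a burst of I/O against one file
    rebalance()                         # promotes /data/orders.dbf to the SSD tier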

SANs or Shared DASD Again?

A recent flurry of SAN surveys, including an Aberdeen Group study of 150 medium to large companies, InfoPro findings, and a variety of other formal and informal customer samples, gives both confirmation and further insight into how SANs are actually being used. After more than four full years of media hype and conference agendas packed with SAN information, some of the initial beliefs about SANs haven't really materialized.

One of the initial and primary promises of SANs was to connect multiple heterogeneous servers to a common, shared storage pool. The common belief was that uniting and managing storage between servers running different operating systems was a huge and growing problem needing attention. True data sharing, a concept that allows heterogeneous operating systems to share the same physical copy of data, was also viewed as a significant problem that SANs would address. Now, after more than four full years of SAN deployment, the surveys confirm what many suspected: only about one-fourth of SANs are connected to heterogeneous operating systems, and the actual demand to share data is not nearly as large as originally thought. So what is happening with SANs? The remaining three-fourths are, in reality, basically switching disks between servers running the same (homogeneous) operating system.

Does this sound familiar? The practice of switching disk subsystems between servers running the same operating system had become common and widespread on mainframe computers by 1970, where it was called "shared DASD" (direct-access storage device). The current state of the SAN market indicates that sharing or pooling disks between similar server platforms is the primary implementation for a SAN. As highlighted at a recent storage conference, the vast majority of SANs are not much more than shared DASD implementations, except that they have moved from mainframes to the larger Unix, Win2K, and Linux storage markets.

From SRM to SRM...

The first SRM (system resource manager) appeared in the mid-1970s, bringing automated, system-wide management capabilities to mainframe computers. The SRM was implemented as part of the operating system, then called MVS, and provided active, policy-based management. Using numerous load-balancing algorithms, the SRM dynamically allocated and de-allocated resources such as memory, CPU cycles, and I/O capacity to tasks according to their relative priorities, ensuring that higher-priority workloads completed as required to meet business objectives. Through this proactive management, the SRM became the fundamental component in reducing the amount of human effort needed to operate a mainframe.
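
A toy Python sketch conveys the flavor of that policy-based, priority-ordered allocation, though it bears no resemblance to the actual MVS algorithms: workloads are granted memory in priority order until the pool runs out.

    # Toy priority-ordered allocation, in the spirit of (but much simpler than) the SRM.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        priority: int       # higher number = more important to the business
        demand_mb: int      # memory the workload is asking for

    def allocate(workloads, available_mb):
        """Grant memory in priority order until the pool is exhausted."""
        grants = {}
        for w in sorted(workloads, key=lambda w: w.priority, reverse=True):
            grant = min(w.demand_mb, available_mb)
            grants[w.name] = grant
            available_mb -= grant
        return grants

    pool = [Workload("online-transactions", 10, 512),
            Workload("batch-reporting", 5, 1024),
            Workload("test-region", 1, 256)]
    print(allocate(pool, available_mb=1024))
    # {'online-transactions': 512, 'batch-reporting': 512, 'test-region': 0}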

The second wave of SRM offerings (storage resource management) arose from the need for simplified SAN administration and focuses on the storage piece of the overall equation, rather than managing the resources of the entire system as the original SRM did. Today's SRM software products are commonly used with Unix, Linux, and Win2K to provide a view of increasingly complex storage network environments from a single console, or single pane of glass. Device administration tools alter the storage environment through policy definitions, device zoning, load balancing, and capacity allocation, making everything easy to view and measure. Initial offerings were reactive, requiring storage administrators to initiate corrective actions.

Like the original SRM nearly 30 years ago, these tools are strategically evolving to deliver proactive, system-initiated management, reducing much of the manual effort required of storage administrators and significantly reducing the overall storage TCO. As storage functionality continues to migrate outboard from the servers into the storage network, SRM is positioned as a critical enabler, and understanding where SRM is headed becomes increasingly important as storage networking evolves. Does DFSMS sound familiar to anyone?
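
The reactive-to-proactive distinction can be shown with a simple, hypothetical capacity policy in Python: in reactive mode the tool merely alerts the administrator, while in proactive mode it initiates the corrective action itself. The 85% threshold and the expand_volume() helper are assumptions for illustration, not features of any particular SRM product.

    # Hypothetical capacity policy illustrating reactive vs. proactive SRM behavior.
    CAPACITY_THRESHOLD = 0.85   # assumed policy: act when a volume is 85% full

    def expand_volume(name, extra_gb):
        """Placeholder for allocating additional capacity from the storage pool."""
        print(f"expanding {name} by {extra_gb:.0f} GB")

    def check_volume(name, used_gb, size_gb, proactive):
        """Reactive mode raises an alert; proactive mode initiates the fix itself."""
        utilization = used_gb / size_gb
        if utilization < CAPACITY_THRESHOLD:
            return
        if proactive:
            expand_volume(name, extra_gb=size_gb * 0.25)
        else:
            print(f"ALERT: {name} is {utilization:.0%} full; administrator action required")

    check_volume("db-volume-01", used_gb=44, size_gb=50, proactive=False)
    check_volume("db-volume-01", used_gb=44, size_gb=50, proactive=True)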

There are other examples of past concepts and ideas gaining new momentum today that had an entirely different beginning when they first appeared. Most of these concepts were pioneered, popularized, and proven on mainframe computers 25 to 30 years ago. Does it seem like everyone is trying to duplicate much of what the mainframe successfully delivered in the past? Things are changing so fast that even the future is obsolete, but what we may really need to ask is this: Is the past really obsolete?

Article Details
Author: Fred Moore
Publication: Computer Technology Review
Date: September 1, 2002

