Thanks to standards, storage gets innovative.

At the risk of severe understatement, the world was a very different place 10 years ago: in 1995, the Dow Jones Industrial Average surged past 4,000 for the first time; Microsoft Corp. released an operating system upgrade called Windows 95; and a company called Netscape prepared for its initial public offering.

But in the world of computing storage technology, things in 1995 remained largely unchanged from the way they'd worked for years. Mid-sized companies and large enterprises paid dearly for complicated, proprietary systems that coupled data storage so tightly to computing functions that you couldn't wedge the two apart with a crowbar.

Who could blame them? Large computer companies had a natural interest in building and servicing tightly integrated computing systems that ran on custom silicon, employed proprietary operating systems and embraced a longstanding model in which storage and servers were seen as inseparable siblings living under a common architectural framework. They were expensive, monolithic systems, but they worked. Brilliant inventors and engineers coaxed the things to life, and maintenance agreements, although expensive, gave customers confidence their hardware would protect and serve for years to come.

Still, what seemed like the right solution to storing and managing data suddenly began to show its age as a broader computing and networking revolution burst into the marketplace in the late 1990s. In the desktop space, common computing architectures and open standards had greatly reduced costs and prompted tremendous innovation in applications as developers seized the opportunity to work within openly accessible and widely deployed systems. In networking, the Internet revolution and its attendant open standards liberated information from the prison of closed systems, allowing an exchange of data and knowledge on a scale we'd never imagined.

Storage, however, largely sat on the sidelines as these common architectures and open standards propelled other elements of networked computing into something vast and extraordinary. Because storage was embedded within tightly coupled and proprietary computing platforms, customers who wanted to pursue new storage applications had two options: live with what they had, or call the original vendor to develop yet another proprietary solution.

That wasn't the only drawback. Around the same time, a flood of new data produced across enterprises and businesses of all sizes demanded that raw storage reservoirs themselves expand. Accustomed to the fact that data servers and storage were joined at the hip as part of a prevailing computing model, IT professionals solved this burgeoning need for ever-greater storage, ironically, by buying more servers. Through the pristine lens of hindsight, it was the rough equivalent of buying a new set of razors merely to solve the problem of a dulled blade. But there was little choice.

The problem, of course, was that adding tightly coupled server platforms purely to attain more storage introduced a spiral of complexity into data storage and management. Not only was it an expensive way to play catch-up to ever-expanding storage needs, it also produced untenable data management routines in which teams of technicians were devoted solely to managing individual server array backups.

Bright idea

With storage needs growing, innovation in storage end-applications lagging and customers feeling the economic pinch of proprietary and codependent server/storage architectures, a bright idea emerged: Why not challenge the longstanding and seemingly immutable symbiosis between storage and computing? Similar decouplings had unleashed tremendous energy and innovation before--in memory, in processing and in software applications, for example. By separating storage from tightly integrated computing systems, went the thinking, it would be possible to scale each attribute independently, and to enjoy a broader range of solutions as multiple developers attacked storage from new vantage points, introducing both innovation and economic efficiencies.

In the late 1990s, serious momentum developed behind this idea of decoupling data storage from data serving, a feat that had been accomplished previously only in the mainframe environment--and only with considerable cost and complexity. But there was a drawback. The only practical way to pry storage away from its companion computing intelligence without conceding connection speed was to link disk drives and servers through the proprietary but fast fibre channel connection platform. It worked, but it had limitations. True, the storage pool had been stripped away from the computing element. But the connection between disk drives and servers remained locked in the realm of fibre channel--a relatively unsophisticated approach that fell short of true networking and lacked the standards-based foundation that would invite broader innovation.

Today, that's changing. In fact, history is repeating itself yet again thanks to the evolution of off-the-shelf storage components, standards-based connection approaches like iSCSI, and networking protocols like Ethernet.

Modern Ethernet connections capable of delivering multi-gigabit-per-second transfer rates now obviate the need to rely on relatively cumbersome fibre channel approaches to liberate storage and storage management from a tightly coupled server and place them on the network. In replacing fibre channel, Ethernet has toppled the final barrier to standards-based network storage. There now are standards that can be leveraged across the entire ecosystem of data storage and management: processing, memory, storage and (Ethernet-based) networking. Working from these collective standards, it's possible not only to free storage from monolithic computer systems, but also to apply intelligent and creative software solutions that drastically reduce the costs and complexity of installing, using and upgrading storage area networks.
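
To make the contrast concrete, here is a minimal, purely illustrative sketch of the idea behind iSCSI-style block storage: a block write wrapped in a small header and carried over an ordinary TCP/Ethernet connection rather than a specialized fibre channel fabric. The field layout is deliberately simplified and is not the real iSCSI wire format, and the target address and block size shown are hypothetical.

```python
# Illustrative sketch only: a toy "block write over TCP" loosely modeled on the
# idea behind iSCSI. This is NOT the real iSCSI protocol; the header layout,
# target address and block size below are hypothetical.
import socket
import struct

TARGET_ADDR = ("192.0.2.10", 3260)  # 3260 is the port registered for iSCSI
BLOCK_SIZE = 512                    # classic disk sector size

def send_block(lba: int, data: bytes) -> None:
    """Wrap one block write in a small header and ship it over plain TCP."""
    assert len(data) == BLOCK_SIZE
    # Toy header: opcode (1 = write), 64-bit logical block address, payload length.
    header = struct.pack("!BQI", 1, lba, len(data))
    with socket.create_connection(TARGET_ADDR) as sock:
        sock.sendall(header + data)

if __name__ == "__main__":
    send_block(0, b"\x00" * BLOCK_SIZE)
```

The point is simply that once block traffic rides standard TCP/IP over Ethernet, commodity servers and ordinary network gear can participate in the storage fabric.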

For instance, storage technology requisites such as fault tolerance, self-healing and extremely high availability can be applied not through firmware, but through networking software approaches. Effectively, we're taking the intelligence away from the storage system and placing it in the network. That approach, coupled with the new reliance on open standards, reduces the market-entry barriers for innovators who are now sparking a renaissance in storage hardware itself. Built around common and inexpensive off-the-shelf components, new storage systems are being produced today that do one thing very well: store data.
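
As a rough illustration of what "intelligence in the network" can mean, the sketch below mirrors a single write to two storage nodes in ordinary application software and counts the write as durable only when a quorum acknowledges it. The node addresses, port and one-byte acknowledgement are hypothetical, and this is not a description of LeftHand's actual SAN/iQ internals, just the general software-replication technique.

```python
# Illustrative sketch only: fault tolerance applied in software on the network
# rather than in array firmware. One write is mirrored to two storage nodes and
# counted as durable once a quorum acknowledges it. Node addresses, port and
# the one-byte ack convention are hypothetical.
import socket

STORAGE_NODES = [("192.0.2.11", 9000), ("192.0.2.12", 9000)]
QUORUM = 2  # with two nodes we require both; larger clusters can tolerate failures

def replicated_write(payload: bytes) -> bool:
    """Mirror one write to every node; return True if a quorum acknowledged it."""
    acks = 0
    for addr in STORAGE_NODES:
        try:
            with socket.create_connection(addr, timeout=5) as sock:
                sock.sendall(payload)
                if sock.recv(1) == b"\x01":  # hypothetical acknowledgement byte
                    acks += 1
        except OSError:
            pass  # treat an unreachable node as a missing acknowledgement
    return acks >= QUORUM
```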

The operating system software that manipulates data storage has similarly been liberated from the tightly coupled, monolithic, do-it-all platforms of old, and is now at work within the network itself. Just as common OS platforms in the desktop world ushered in a new era of application innovation--everything from email clients to CRM tools--an open standards approach to storage area networks is now beginning to invite software application developers to do what they do best, which is to innovate relentlessly and build more and better efficiencies into the way customers can manage stored data.

Fast fading are the days when a single company tried to do it all. Thanks to the adoption of standard architectures that allow us to bust apart yesterday's monolithic platforms, in the next 12 to 24 months we will see more storage application innovation come to market than we saw in the entire decade that preceded it. Customers, not vendors, will dictate what combinations of storage, components and applications they use. And we'll get there for less money. Standards-based hardware platforms allow customers to quickly leverage new technologies as they come to market, select best-of-breed vendor partners, and control costs. For example, LeftHand's SAN/iQ software--the intelligence behind the LeftHand storage area network--runs on industry-standard storage servers, bringing the benefits of standards-based environments to enterprises looking to invest in an IP SAN.

Computing's history is marked by a relentless pattern of democratization and compartmentalization. We continue to achieve order-of-magnitude improvements by, essentially, blowing things apart and inviting creative people to work their magic on individual components of a greater whole. The economic key to this building-block approach is a willingness to support open standards that produce astonishing scale and commodity-like cost reductions on one hand, and invite tremendous creativity on the other. This benevolent confluence has propelled amazing innovations in processing, in memory and in applications. Now it's time for storage to benefit, too.

Bill Chambers is founder, president and CEO of LeftHand Networks (Boulder, CO).

www.lefthandnetworks.com