
Storage in utility computing: 7 critical questions for IT.

It is easy to understand why the concept of utility computing has become the latest trend in Information Technology. Analogous to traditional public utilities such as electric or water service, utility computing encompasses a demand-based business model for the full range of IT infrastructure capabilities, including servers, storage, databases and network resources.

While the concept is a compelling one, utility computing will not become a practical reality until IT managers can find answers to some fundamental, and as yet unresolved, questions. As long as companies utilize IT infrastructures based on servers, storage devices and networking systems from multiple vendors, the question of interoperability will remain a potential stumbling block. The solution will require comprehensive industry standards to ensure that systems work reliably across the heterogeneous environments found in real-world enterprise datacenters.

[FIGURE 1 OMITTED]

This article provides an overview of storage--one of the most important subsets of utility computing--including some important questions IT managers should ask before outsourcing their storage to a utility.

In Concept: Compelling Benefits

Within the utility computing model, the utility may provide services from the same infrastructure to multiple clients at the same time (Figure 1). According to its proponents, utility computing can provide customers with the cost benefits of a smoothly scalable 'pay-as-you-go' system that can meet changing business needs without large capital expenditures. Subscriber companies can theoretically switch on a given level of service and then scale it up or down to meet their dynamically changing requirements.

All of this makes the promise of utility computing very compelling:

* Users would have unlimited access to computing resources across the globe

* Companies would pay only for the services they actually use, avoiding major up-front capital expenditures and eliminating the problem of under-utilized and non-productive resources

* Enterprises would have the ability to quickly scale their IT infrastructure to meet changing business requirements, while focusing on their core business processes and areas of competency

* In theory, subscribers would be able to flexibly meet dynamic shifts in demand for computing and storage, without additional investments in hardware, software and systems integration.

While the concept of utility computing has recently garnered a great deal of press attention, executing on the promise may be difficult, because utility computing is considerably more complex than conventional outsourced computing services.

7 Questions for IT

Because of the importance of mission-critical data in modern business, the ability to provide access to reliable and secure storage and to guarantee it with appropriate service-level agreements is a make-or-break proposition for utility computing. While the service-based utility concept is extremely promising in principle, IT managers are well advised to thoroughly investigate all the storage-related complexities of any proposed utility infrastructure before reaching for their checkbooks.

A great way to start is to ask some basic questions:

1. How will the utility organize and access the storage infrastructure?

2. Can the utility protect data integrity and ensure controlled access?

3. Does the utility have a unified framework for the management of servers, storage and network resources, and what about heterogeneous infrastructure environments?

4. How will the utility ensure fault-tolerant data access, archive and retrieval services, and disaster recovery?

5. How does the utility allocate storage resources to particular jobs?

6. How will the utility measure Quality of Service (QoS) for storage?

7. Can the utility provide users with dynamic billing?

Before making a decision to move forward, it is crucial for IT managers to thoroughly review the underlying organization and architecture of the utility's computing infrastructure. While many of today's utility computing vendors promote complete and ready-to-implement solutions, these may be point solutions that do not adequately address all the relevant issues of infrastructure organization, access, data integrity, unified resource management, fault tolerance, resource allocation, QoS or dynamic billing, not to mention the question of cross-platform interoperability. When the utility is also a large hardware or software vendor, the infrastructure will in all likelihood be based on that vendor's vertical solutions. IT managers should carefully evaluate how the utility's infrastructure interfaces with their own unique systems and business process requirements.

Organization, Access and Cost

Networked storage infrastructure is designed to be largely transparent to the user, but how the storage infrastructure is organized can have an important bearing on the ultimate costs paid by utility subscribers. Storage virtualization is one of the fundamental concepts of utility computing because it provides users and service providers with a unified view of available storage resources. Virtualization means that physically disparate storage devices are treated as a single common pool of storage resources. This functional abstraction insulates users and applications from the underlying complexity of networked storage resources. It is typically achieved through Storage Area Networks (SANs) designed to optimize disk utilization and minimize costs.
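To make the abstraction concrete, the following minimal Python sketch illustrates the idea of a virtualization layer presenting physically separate devices as one logical pool. The class and method names are hypothetical, for illustration only; they do not correspond to any vendor's actual API.

    # Illustrative sketch of storage virtualization: physically separate
    # devices appear to applications as a single common pool.
    class StorageDevice:
        def __init__(self, name, capacity_gb):
            self.name = name
            self.capacity_gb = capacity_gb
            self.used_gb = 0

        def free_gb(self):
            return self.capacity_gb - self.used_gb

    class VirtualStoragePool:
        """Presents many devices as one pool; callers never see devices."""
        def __init__(self, devices):
            self.devices = devices

        def total_free_gb(self):
            return sum(d.free_gb() for d in self.devices)

        def allocate(self, size_gb):
            # Place the request on whichever device has the most free space;
            # the caller only sees that the pool satisfied the request.
            device = max(self.devices, key=lambda d: d.free_gb())
            if device.free_gb() < size_gb:
                raise RuntimeError("pool exhausted")
            device.used_gb += size_gb
            return device.name  # opaque handle in a real system

    pool = VirtualStoragePool([StorageDevice("array-a", 500),
                               StorageDevice("array-b", 1000)])
    pool.allocate(200)
    print(pool.total_free_gb())  # 1300

The point of the sketch is the insulation it provides: applications ask the pool for capacity, and the placement decision stays inside the virtualization layer.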

Storage management is one important area where the utility can help keep costs down. The utility's staff will include storage management specialists who are responsible for using virtual storage to minimize overhead. In practice, the fee charged by a utility for storage services can be a more accurate gauge of Total Cost of Ownership than the basic cost paid by an IT organization to maintain its own storage infrastructure, because when an IT organization manages its own storage internally, its accounting may not capture all the management costs, which can be many times greater than the cost of the raw storage itself. By comparison, under the storage utility model the service charge normally reflects comprehensive storage costs, including management.

The underlying technology of the infrastructure can also have a cost impact. Storage virtualization can be implemented in a variety of ways, ranging from proprietary architectures to open standards such as Fibre Channel (FC) or Internet SCSI (iSCSI). Today, FC SANs remain the most widely deployed in enterprise data centers. Each SAN architecture requires its own management software, and the infrastructure is typically not shareable across different SANs. FC SANs use FC drives, switches, cables and host bus adapters, and FC infrastructure is relatively more expensive than the Ethernet infrastructure used in iSCSI.

As Ethernet technology continues to advance, the performance gap between iSCSI and FC continues to narrow, and the cost advantages of iSCSI increasingly come into play. By adopting iSCSI for their SAN architecture, service providers can reduce the need to manage two separate infrastructures, because iSCSI SANs utilize the existing Ethernet infrastructure including routers and switches. An added benefit is that data center personnel are already trained in managing Ethernet infrastructure, so iSCSI SANs can also help minimize training costs. The overall result can be substantially lower costs for utility subscribers.

Data Integrity and Security

Because data is the most valuable asset for any modern business, the utility service provider must be able to protect mission-critical data for subscribers while ensuring safe and controlled access. The utility should provide mechanisms to prevent, detect and recover from security breaches. The challenge is compounded when multiple companies share the same storage infrastructure: the utility needs an extremely strong authentication mechanism to grant access only to selected data. Managing data integrity and security when data is spread across disparate geographic locations is a very complex task, and a complete solution remains to be developed.
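The multi-tenant access problem can be pictured with a minimal sketch, assuming a simple mapping from shared volumes to the tenants allowed to use them. The names are hypothetical; a real utility would layer strong authentication (for example, certificate-based) and audit logging on top of this kind of check.

    # Minimal sketch of per-tenant access control on shared storage.
    ACCESS_CONTROL = {
        # volume id -> set of tenant ids allowed to use it
        "vol-001": {"tenant-a"},
        "vol-002": {"tenant-a", "tenant-b"},
    }

    def authorize(tenant_id, volume_id):
        """Return True only if the tenant is explicitly granted the volume."""
        allowed = ACCESS_CONTROL.get(volume_id, set())
        if tenant_id not in allowed:
            # In practice this event would also be logged and reported.
            return False
        return True

    assert authorize("tenant-a", "vol-001")
    assert not authorize("tenant-b", "vol-001")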

Manageability

Management of computing, storage and network resources is a key element of the utility computing model, which explains why much of the current development effort in utility computing focuses on this area. Storage systems must be designed to be "autonomic" or self-managing. In addition, storage systems must be capable of self-configuration and self-initialization.

The principal focal point of interoperability is the interface between the infrastructure of the utility and the infrastructure of the customer. Utilities need to provide managed storage services to customers across a wide spectrum of computing platforms. Today's approach to manageability involves separate solutions for compute, storage and network resources, each based on different standards. A practical computing utility model for storage will require a unified framework for the management of all of the entities within a data center.

The manageability standard must cover all of the following functions:

Configuration: Features, servers and software can be added or changed without bringing the system down. Other parts of the system must be able to recognize the changes as they occur and adapt accordingly, with minimal human intervention.

Self-healing: The system must be able to recognize a failed component, take it offline and repair or replace it. For example, if a file becomes corrupted, the system can locate a copy on a mirror site and replace the damaged file; if a server goes down, the system automatically routes traffic to a backup server. (A minimal sketch of this behavior follows this list.)

Protection: The system monitors who is accessing resources. It blocks and reports any unauthorized access.

Optimization: The system constantly monitors and tunes storage, databases, networks and server configurations for peak performance.
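The self-healing function referenced above can be sketched in a few lines of Python. The function names and data layout are hypothetical and do not represent any real management API; the sketch only shows the detect, remove-from-service and fail-over steps in order.

    # Minimal sketch of self-healing: detect a failed component,
    # take it out of service, and route work to a healthy replica.
    def check_health(component):
        # Placeholder health probe; a real system would use heartbeats,
        # device diagnostics, and so on.
        return component.get("healthy", True)

    def self_heal(components, traffic_router):
        for component in components:
            if not check_health(component):
                component["in_service"] = False           # take it offline
                replica = component.get("replica")
                if replica:
                    traffic_router[component["name"]] = replica  # fail over
        return traffic_router

    components = [
        {"name": "server-1", "healthy": False, "replica": "server-2"},
        {"name": "server-2", "healthy": True},
    ]
    print(self_heal(components, {}))  # {'server-1': 'server-2'}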

Fault Tolerance

The utility computing storage infrastructure must have extremely high aggregate fault tolerance, even when individual components have lower fault tolerance. This is achieved through redundancy in subsystems and components, and failover-awareness throughout the software stack, including applications, middleware and operating systems. The failover mechanisms must be transparent to users and applications. It is one thing to provide fault tolerance within a relatively controllable enterprise-computing infrastructure. It is quite another to do so in a utility environment which must meet the requirements of multiple customers who may have a variety of service level agreements and QoS expectations.

In enterprise deployments, fault tolerance is achieved in a variety of ways, ranging from RAID and local copying to replication of data at mirrored sites connected by a dedicated link. In the ideal scenario, in the event of failure at one site, storage services are maintained by the mirror site with no perceptible interruption. Utilities will need to provide their customers with a range of reliability and availability options that can be tailored to each customer's requirements. When evaluating a utility computing vendor, companies must be highly aware that off-the-shelf solutions that may be adequate for enterprise use may not work within the relatively more complex infrastructure of a utility.

Provisioning: Allocation and Assignment

Before a particular job is started, which storage devices will be used, and how much of each? The system must be capable of adding or deleting storage or computing resources, as needed, using front-end software tools. When a new storage entity is added to the pool, it must be discovered by the system and provisioned with the right firmware and operating environment. This capability enables a bare-bones storage entity to be discovered and provisioned from a central console.

In addition to the initial discovery phase, the storage entity must be manageable throughout its life, which requires tracking, monitoring and updating firmware and software. In a utility model, before a task can be accepted for execution, the required storage and computing needs must be estimated and adequate resources must be allocated. The amount of storage allocated to the user must be increased or decreased as usage changes, and these reallocations must be transparent to the application.
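The allocate-then-resize pattern described above can be illustrated with a short Python sketch, assuming hypothetical names throughout. The application holds a stable handle; the capacity behind that handle grows or shrinks without the application noticing.

    # Illustrative sketch of estimate, allocate, and transparent resize.
    class Allocation:
        def __init__(self, handle, size_gb):
            self.handle = handle      # what the application sees
            self.size_gb = size_gb    # what the utility tracks

    class Provisioner:
        def __init__(self, pool_capacity_gb):
            self.free_gb = pool_capacity_gb
            self.allocations = {}

        def allocate(self, job_id, estimated_gb):
            if estimated_gb > self.free_gb:
                raise RuntimeError("insufficient resources; job not accepted")
            self.free_gb -= estimated_gb
            self.allocations[job_id] = Allocation(f"lun-{job_id}", estimated_gb)
            return self.allocations[job_id].handle

        def resize(self, job_id, new_size_gb):
            alloc = self.allocations[job_id]
            delta = new_size_gb - alloc.size_gb
            if delta > self.free_gb:
                raise RuntimeError("cannot grow allocation")
            self.free_gb -= delta
            alloc.size_gb = new_size_gb  # handle unchanged: transparent to app

    p = Provisioner(1000)
    handle = p.allocate("job-42", estimated_gb=100)
    p.resize("job-42", 250)  # grows behind the same handle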

Quality of Service

The utility must be able to guarantee a given Quality of Service level throughout each application execution cycle. There are several ways to measure QoS for storage, including access latencies, read/write performance and failure rate, the latter expressed as degrees of system availability such as 99.9%, 99.99%, 99.999% or higher. Multiple QoS levels must be maintained for utility computing, and the service provider must be able to guarantee the QoS for each client. The question of how to provide multiple levels of QoS from a given storage infrastructure is extremely complex, and it has yet to be answered.
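A quick worked example shows what those availability figures mean in practice, assuming a 365-day year of 8,760 hours and ignoring planned maintenance windows:

    # Downtime allowed per year at each availability level.
    HOURS_PER_YEAR = 365 * 24

    for availability in (0.999, 0.9999, 0.99999):
        downtime_minutes = HOURS_PER_YEAR * (1 - availability) * 60
        print(f"{availability:.3%} available -> "
              f"{downtime_minutes:.1f} minutes of downtime per year")

    # 99.900% available -> 525.6 minutes (about 8.8 hours) per year
    # 99.990% available -> 52.6 minutes per year
    # 99.999% available -> 5.3 minutes per year

Each additional "nine" therefore cuts the permissible downtime by a factor of ten, which is why guaranteeing the higher tiers requires fundamentally different infrastructure and pricing.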

Metering and Billing

The core concept of utility computing for storage is that usage must be metered and billed. There is currently no accepted standard unit for storage utilization--or even for a unit of storage. For example, the "value" of a terabyte of storage can be affected by multiple factors, including read/write latency, availability and time of day, in addition to added features ranging from failover protection to security mechanisms. All of these variables may affect the billing rate for that same terabyte of storage in a given application. Today's best available solution is for vendors to allocate a set of storage entities to a given client and manage them on the client's behalf. While there are many approaches in development, there is currently no solution for dynamic metering or billing.
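Since no standard metering model exists, the following toy calculation is purely illustrative: the rates, tiers and multipliers are hypothetical, and it simply shows the kinds of inputs a dynamic billing model would have to combine.

    # Hypothetical example: combining capacity, availability tier,
    # and time-of-day usage into a monthly charge.
    BASE_RATE_PER_GB_MONTH = 0.50   # hypothetical base price
    AVAILABILITY_MULTIPLIER = {"99.9": 1.0, "99.99": 1.5, "99.999": 2.5}

    def monthly_charge(gb_used, availability_tier, peak_hours_fraction):
        # Peak-hour usage is billed at a premium in this toy model.
        time_multiplier = 1.0 + 0.25 * peak_hours_fraction
        return (gb_used * BASE_RATE_PER_GB_MONTH
                * AVAILABILITY_MULTIPLIER[availability_tier]
                * time_multiplier)

    print(monthly_charge(1024, "99.99", peak_hours_fraction=0.4))  # 844.8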

Industry Initiatives

Intel and other industry leaders are now working to develop industry standards. The Storage Management Initiative Specification (SMI-S) of the Storage Networking Industry Association (SNIA) is one attempt to develop standards for storage management. In addition, the Server Management Working Group, operating under the aegis of the Distributed Management Task Force (DMTF), is chartered to develop manageability standards across the entire IT infrastructure based on DMTF's Common Information Model (CIM) abstractions. The ultimate goal of the effort is to deliver an industry-standard, platform-independent server hardware management architecture across a variety of data-center environments.

Go Beyond Marketing Hype

Reliable, predictable and secure service delivery is the heart of the utility computing model. But as the electric utility metaphor aptly illustrates, even well-engineered grids can sometimes leave their customers in the dark. Moving beyond the ever-present marketing hype to practical implementation requires IT managers to ask some tough questions.

While many existing technologies and solutions may be applied to the delivery of storage within the utility computing model, none of them can be used as-is. Answers to the seven critical questions posed in this article in many cases depend on accepted industry standards capable of addressing the utility computing architecture at every layer: operating systems, platforms, applications and silicon building blocks. No such standards exist today. Until the industry can develop standards to ensure cross-platform interoperability and answer such fundamental questions as "What is a universally accepted unit of storage?" the real promise of cost-effective and reliable utility computing will remain out of reach.

E. P. Komarla is engineering manager, Storage Components Division, at Intel Corp. (Santa Clara, CA)

www.intel.com