Transparent capacity management.

Information Lifecycle Management, Storage Grids and On-Demand Computing are hot topics in today's storage arena. The good news? These approaches will help companies rein in their runaway storage environments by managing them better, faster and less expensively. The bad news? None of these strategies has fully arrived yet, and when they do, most storage environments won't be prepared to support them.

However, the bad news is not inevitable. Companies can take steps now that will position them to adopt strategies such as ILM tomorrow, and those interim steps will immediately result in better, faster and less costly storage management. One of the key requirements for strategic storage management is the ability to distribute data among storage destinations without impacting end users or applications; the inability to do this has been one of the major obstacles to optimizing storage environments. IT departments struggle to manage data and storage resources, and must frequently add capacity and reconfigure systems throughout their networks, usually requiring downtime to do it (and sometimes corrupting data in the process). Transparent capacity management is the key to solving this problem.

Transparent capacity management allows IT administrators to perform management tasks without interrupting end-user access to data or applications, or having to depend on batch windows. Administrators can use transparent capacity management to dramatically increase storage utilization, effectively incorporate disk technologies like SATA, and optimize storage to meet compliance requirements. All of these features are available today, and also build the foundation for next-generation technologies like ILM and storage grids.

Capacity Management: Where It's Going

For most administrators, capacity management means constantly reviewing storage devices and forecasting usage rates to make sure there is enough available storage until the next convenient time to take the storage down and add disk. When an administrator does have to take storage offline, it typically requires extensive coordination with end users, late nights and weekends, and liberal amounts of coffee.
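
For illustration, here is a minimal sketch of that forecasting step, assuming a simple linear growth model; the sample dates, usage figures and capacity are hypothetical, and a real tool would fit growth from many samples rather than two:

```python
# Illustrative sketch: project the number of days until a volume fills,
# assuming linear growth fitted to the oldest and newest utilization samples.
from datetime import date

def days_until_full(samples, capacity_gb):
    """samples: list of (date, used_gb) pairs, oldest first."""
    (d0, used0), (d1, used1) = samples[0], samples[-1]
    growth_per_day = (used1 - used0) / (d1 - d0).days  # GB/day, assumed steady
    if growth_per_day <= 0:
        return None  # usage is flat or shrinking; no projected fill date
    return (capacity_gb - used1) / growth_per_day

# Hypothetical monthly samples for a 500GB volume
samples = [(date(2004, 6, 1), 310), (date(2004, 7, 1), 355), (date(2004, 8, 1), 402)]
print(f"Days until full: {days_until_full(samples, 500):.0f}")
```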

Unfortunately, the situation has grown progressively worse as 24x7 operations have put pressure on available downtime windows, and compliance and business demands have caused data volumes to explode. In these difficult environments, a manual capacity management process proves nearly useless.

Today, transparent capacity management is replacing manual processes as an important method of controlling storage volumes. It adds storage capacity, loads data and manages large storage sets while remaining invisible to end users and applications, and it allows administrators to distribute resources across multiple storage devices to increase utilization, eliminate over-provisioning and ensure continuous access. This is particularly useful in environments like Web filer farms, where IT must efficiently allocate capacity to constantly changing data stores; distributing data in real time among separate drives significantly increases the farm's utilization and performance.

File System Virtualization

Transparent capacity management requires continuous data access while masking the underlying data location from end users and applications. Most importantly, transparent capacity management tasks must be executed while simultaneously allowing end users to access and update open files. File system virtualization provides both these capabilities and is the technical foundation for transparent capacity management.
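
A minimal sketch of the indirection that makes this possible, using a hypothetical in-memory mapping; a real global namespace (DFS, for example) performs this resolution at the network file protocol layer, and the paths and filer names below are invented:

```python
# Illustrative sketch: a virtualized namespace maps the logical path that
# clients see to whichever physical location currently holds the data, so
# a migration changes only the mapping, never the client-visible path.
class Namespace:
    def __init__(self):
        self.mapping = {}  # logical path -> physical location

    def resolve(self, logical_path):
        return self.mapping[logical_path]

    def migrate(self, logical_path, new_location):
        # The data itself is copied behind the scenes; once the copy is
        # complete, the mapping flips while clients keep the same path.
        self.mapping[logical_path] = new_location

ns = Namespace()
ns.mapping["/eng/specs"] = "filer1:/vol/vol0/specs"
ns.migrate("/eng/specs", "filer2:/vol/nearline/specs")
print(ns.resolve("/eng/specs"))  # clients still open the same /eng/specs path
```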

The first approach to file system virtualization was to create a proprietary file system that had virtualization features within it. This provided benefits but didn't extend virtualization beyond its own proprietary file system. It also required a huge migration project to the proprietary file system before any benefits could accrue.

Two subsequent approaches provide transparent capacity management. The first is to create a proprietary switch, file system and namespace on a dedicated device and place that device between end users and existing storage systems. This approach requires mount point changes and introduces a single point of failure and a performance bottleneck. The second approach is to provide file system virtualization by leveraging industry standards, integrating with global namespace solutions such as DFS, and relying on existing network switches such as Cisco Catalyst switches.

While both approaches enable transparent capacity management, the industry-standard approach works within existing computing environments and requires neither mount point changes nor software agent deployment on clients or servers. This architecture leverages existing network switches and performs capacity management by issuing standard file system calls and automatically synchronizing any changes across multiple storage destinations. At the same time, it monitors all client traffic and synchronizes it to both the source and destination storage resources. The file system virtualization technology maintains data integrity by avoiding collisions between client access and data movement, and it acknowledges the data movement to the client only after it receives acknowledgements from both the source and destination storage resources.
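
A minimal sketch of that dual-acknowledgement rule, with hypothetical in-memory stores standing in for real filers; an actual implementation intercepts CIFS/NFS traffic rather than Python calls, and all names here are invented:

```python
# Illustrative sketch: while a file is mid-migration, every client write is
# mirrored to both the source and the destination, and the client is
# acknowledged only after both stores confirm the write.
class Store:
    def __init__(self, name):
        self.name, self.files = name, {}

    def write(self, path, data):
        self.files[path] = data
        return True  # stands in for a protocol-level acknowledgement

def client_write(path, data, source, destination, in_flight):
    """Handle a client write observed by the virtualization layer."""
    if path in in_flight:
        # Mirror to both locations; acknowledge only on double success.
        if not (source.write(path, data) and destination.write(path, data)):
            raise IOError("write not confirmed by both source and destination")
    else:
        source.write(path, data)  # file is not being moved; write normally

src, dst = Store("primary"), Store("nearline")
client_write("/projects/q3.doc", b"draft 2", src, dst, {"/projects/q3.doc"})
assert src.files["/projects/q3.doc"] == dst.files["/projects/q3.doc"]
```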

Transparent Capacity Management and Next-Generation Storage

A company with transparent, real-time capacity balancing saves money and time in the present, and it also builds an infrastructure that can support next-generation automated data movement such as ILM, storage grids and utility computing. These technologies will further decrease data movement costs and dramatically improve resource usage, network management, compliance and data protection. But they will only work properly in consolidated, optimized and highly manageable storage environments where transparent capacity management virtualizes distributed network resources. These highly automated technologies cannot run in an environment where IT must first negotiate user and application downtime.

The table below shows how transparent capacity management and, by extension, file system virtualization impact these next-generation technologies.

Implementing Transparent Capacity Management Today

If transparent capacity management is critical to all of these strategies, why aren't all companies doing it now? Unfortunately, most companies are still struggling with consolidation projects. If it takes them months to move data from aging Windows NT servers or older-generation NAS filers to new storage tiers, they can hardly perform capacity management on a weekly (never mind daily) basis.

Where should companies start? The best place is to alleviate a major pain point in the organization while simultaneously increasing the efficiency of the storage environment. For many companies that means expanding their use of SATA: nearline storage can dramatically reduce the overall cost of total storage.
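
A back-of-the-envelope illustration of that claim; every figure below is a made-up assumption, not a quoted price:

```python
# Illustrative arithmetic only: blended cost of a two-tier configuration
# versus keeping everything on primary storage. All figures are assumed.
primary_cost_gb = 20.0   # assumed $/GB for primary storage
nearline_cost_gb = 5.0   # assumed $/GB for SATA nearline storage
total_gb = 10 * 1024     # assumed 10TB of managed data
cold_fraction = 0.6      # assumed share of rarely accessed data

all_primary = total_gb * primary_cost_gb
tiered = total_gb * ((1 - cold_fraction) * primary_cost_gb
                     + cold_fraction * nearline_cost_gb)
print(f"All primary: ${all_primary:,.0f}  Tiered: ${tiered:,.0f}")
# With these assumptions, the tiered configuration costs 45% less.
```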

Deploying file system virtualization technology in conjunction with a new storage tier adds capacity and lets an administrator free up space on primary storage devices for more mission-critical content. With transparent capacity management, administrators are not limited to moving old, unused data; they can work with active volumes and directories as well. The biggest impact for end users might come from first freeing space on a primary storage device by moving the least recently accessed directories to a nearline device. Next, administrators can select the directory that accounts for the most access operations and move it to a less utilized storage device. In this way, administrators can balance not just capacity but performance as well.
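
A minimal sketch of that selection step, assuming POSIX access times are available and trustworthy (many systems mount with noatime, in which case a real product would rely on its own access statistics); the share path is hypothetical:

```python
# Illustrative sketch: rank a share's top-level directories by the most
# recent access time anywhere in their subtree, so the least recently used
# directories can be queued for transparent migration to a nearline tier.
import os

def least_recently_accessed(root, count=5):
    candidates = []
    for entry in os.scandir(root):
        if not entry.is_dir(follow_symlinks=False):
            continue
        newest = entry.stat().st_atime
        for dirpath, _, filenames in os.walk(entry.path):
            for name in filenames:
                try:
                    newest = max(newest, os.stat(os.path.join(dirpath, name)).st_atime)
                except OSError:
                    pass  # file vanished or is unreadable; skip it
        candidates.append((newest, entry.path))
    return [path for _, path in sorted(candidates)[:count]]

# The directories returned here would be handed to the virtualization layer
# to move without disturbing clients.
print(least_recently_accessed("/mnt/primary_share"))
```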

File system virtualization allows administrators to move data across different file systems at their own pace. Because the technology doesn't require mount point changes, administrators can apply transparent capacity management across their storage networks to relieve capacity and performance bottlenecks. Rather than focusing exclusively on storage consolidation projects, organizations would improve efficiency, utilization and administrator stress levels by focusing on transparent capacity management, an approach that can also speed and ease file server consolidations.

Organizations are concerned with cost-effective storage management, now and in the future. Instead of adopting a wait-until-they-come-out stance on next-generation storage management, companies should prepare now by consolidating storage and using transparent capacity management for increased storage efficiency and cost savings.
Table: How transparent capacity management supports next-generation technologies

Information Lifecycle Management (ILM)
Description: ILM enables companies to prioritize data based on business requirements like accessibility, protection, security and compliance. It stores data on the most effective storage medium for any given point in the data's lifecycle.
Transparent capacity management: ILM depends on continued data access regardless of changes to the physical location during the data lifecycle. Transparent capacity management virtualizes storage devices so ILM can freely make storage assignments.

IBM's On-Demand Computing
Description: On-Demand Computing automatically provisions disparate network resources, managing them as a single system.
Transparent capacity management: On-Demand Computing can provision storage resources, but needs data access virtualized across disparate storage resources.

HP's Adaptive Enterprise
Description: Adaptive Enterprise synchronizes business and IT operations with changing business needs. It depends on an infrastructure built with adaptable, modular systems.
Transparent capacity management: Adaptive Enterprise needs transparent capacity management to enable rapid provisioning and automate data flow.

Storage Grid
Description: Storage Grid is an architecture in which interconnected storage systems distribute workloads across storage resources. Asset management is centralized.
Transparent capacity management: Storage Grid depends on file system virtualization to access data anywhere in the system regardless of its physical location.

Utility Computing
Description: Utility Computing is the concept of delivering storage on an as-needed basis, grouping hardware into resource pools that dynamically adjust to changing requirements.
Transparent capacity management: The foundation of Utility Computing is automated provisioning and virtualization. Transparent capacity management provides the invisible data movement and virtualized resources for Utility Computing procedures.


www.rainfinity.com

Jack Norris is vice president of marketing at Rainfinity (San Jose, CA).