
SRM Workflow and Automation. (Storage Management).

Managing data storage is a continuing challenge for storage administrators. They must steadily improve the availability, quality, and performance of storage services to support their company's business objectives. The task is now even more challenging, as they must manage an ever-increasing amount of physical data with flat headcount and shrinking resources. Consolidating to networked storage (SAN and NAS) can improve operational efficiency and significantly reduce this management burden, especially as part of an active, integrated storage resource management (SRM) solution.

Effective SRM solutions are based on:

* A holistic data model that incorporates the logical and physical relationships of the storage elements from the application, through file systems, volume managers, and databases, to the HBA, through the fabric, to the storage array, and down to the spindle.

* Well-defined, application-centric policies and rules.

* Service-level agreement enablers.

* Service-level objective (SLO) executors.

* Well-defined managed element (devices, file systems, databases, volume managers) characterization, policy, and rules for provisioning and managing these resources.

* Best-practice workflows that enforce the policies, and rules that ensure SLO adherence within the known characteristics based on the holistic data model and the competing application service level objectives.

* Automation of the workflow steps to enable consistent performance and operational efficiencies in the analysis and provisioning of storage.
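The holistic data model named in the first bullet can be sketched as a chain of linked objects from application down to spindle. The class names and fields below are hypothetical illustrations, not any particular SRM product's schema:

```python
from dataclasses import dataclass, field

# Hypothetical model of the storage path: application -> file system
# -> volume -> LUN -> array -> spindle.
@dataclass
class Spindle:
    id: str

@dataclass
class Array:
    name: str
    spindles: list = field(default_factory=list)

@dataclass
class Lun:
    wwn: str
    array: Array

@dataclass
class Volume:
    name: str
    luns: list

@dataclass
class FileSystem:
    mount: str
    volume: Volume

@dataclass
class Application:
    name: str
    filesystems: list

def spindles_for(app: Application):
    """Walk the chain to answer: which spindles back this application?"""
    return [s
            for fs in app.filesystems
            for lun in fs.volume.luns
            for s in lun.array.spindles]
```

With such a model in place, impact analysis ("which applications share this failing spindle?") becomes a traversal of the same structure in the opposite direction.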

Best Practice Workflows

Storage management in a shared storage network environment requires storage management processes as well as the SAN infrastructure and device monitoring and provisioning capabilities. An integrated, active SRM solution will incorporate a change management process that utilizes best practice policies and rules through a workflow.

The necessity of the workflow is driven by the fact that there are multiple constituencies involved in the provisioning and management of a shared storage infrastructure:

* Storage consumers, usually the systems administrators or database administrators of the line of business or application group

* Storage administrators

* Storage architects

* Storage and IT management

No longer are the storage resources under the exclusive control of the local server administrator. Application and line-of-business system administrators and database administrators request storage in this new paradigm based on their application or business needs. Service-level objectives for performance, availability, and recoverability for a given application or application type drive the provisioning process.

Industry best practices show that central storage architects should set up policies on how they want to deliver these classes of service in a standardized fashion through their storage infrastructure. The workflow system must therefore enable the architects to define their policies and rules, and should provide a template of best practices out of the box. It must allow for the definition of service-level objectives and classes of storage. It must allow the line-of-business administrator to choose the class of storage and service-level objectives requested and the amount of storage desired.
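The separation described here, in which architects define classes of service once and consumers request by class rather than by device, can be sketched as follows. The class names and SLO figures are invented for illustration only:

```python
# Hypothetical catalog of storage classes and their service-level
# objectives, defined centrally by the storage architects.
STORAGE_CLASSES = {
    "gold":   {"raid": "RAID 1+0", "max_latency_ms": 5,  "availability": 0.9999},
    "silver": {"raid": "RAID 5",   "max_latency_ms": 15, "availability": 0.999},
    "bronze": {"raid": "RAID 5",   "max_latency_ms": 50, "availability": 0.99},
}

def request_storage(storage_class: str, gigabytes: int) -> dict:
    """A line-of-business administrator requests capacity by class,
    never by array, pool, or spindle."""
    if storage_class not in STORAGE_CLASSES:
        raise ValueError(f"unknown storage class: {storage_class}")
    slo = STORAGE_CLASSES[storage_class]
    return {"class": storage_class, "gigabytes": gigabytes, "slo": slo}
```

The request carries its SLOs with it, so every later workflow step can be checked against them.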

Furthermore, the workflow should allow the storage group to review the request and even simulate execution to see whether the service-level objectives can be met. The workflow should enforce best-practice policies at each step of the provisioning process. Best-practice policies for RAID levels, striping, LUN masking, LUN mapping, zoning, and performance tuning should be enforceable by the integrated workflow, policy engine, optimization logic, and the holistic data model. Tasks should be distributed from the workflow to the appropriate automation modules or administrators, and complete audit trails must be maintained.
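A minimal sketch of the review-simulate-enforce-audit loop described above. The `Workflow` class, the step dictionaries, and the example masking policy are all assumptions made for illustration:

```python
import datetime

class Workflow:
    """Hypothetical provisioning workflow: every step is checked against
    each policy, then executed or simulated, and recorded in an audit trail."""
    def __init__(self, policies):
        self.policies = policies      # callables: step -> error string or None
        self.audit = []               # complete, timestamped audit trail

    def run(self, steps, simulate=False):
        for step in steps:
            for policy in self.policies:
                error = policy(step)
                if error:
                    self._log(step, f"rejected: {error}")
                    return False
            self._log(step, "simulated" if simulate else "executed")
        return True

    def _log(self, step, outcome):
        stamp = datetime.datetime.now().isoformat()
        self.audit.append((stamp, step["task"], outcome))

# Example best-practice policy: LUN masking must name an explicit host WWN.
def masking_policy(step):
    if step["task"] == "lun_masking" and not step.get("host_wwn"):
        return "LUN masking requires an explicit host WWN"
```

Running with `simulate=True` exercises the same policy checks as a real run, which is what lets the storage group preview whether a request can be satisfied.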

As the complexity and the number of ports, arrays, and servers in the SAN increase, it is imperative that automated storage management be an integral part of the system. The workflow system must be able to embrace this complexity and still ensure the implementation of best practices. It should deal with failure in the workflow process due to unplanned obstacles in provisioning: changed states of managed elements, failing components, misrouted or disconnected cables, etc. Rollback to a known state must be a central part of an automated workflow environment.
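One common way to implement rollback to a known state, shown here as a hypothetical sketch rather than any specific product's mechanism, is to register an undo action alongside every forward action, then replay the undo actions in reverse order on failure:

```python
class ProvisioningTransaction:
    """Hypothetical rollback mechanism: each provisioning action registers
    a matching undo step, so a mid-workflow failure can be unwound back
    to the last known-good state."""
    def __init__(self):
        self.undo_stack = []

    def do(self, action, undo):
        action()                       # perform the forward step
        self.undo_stack.append(undo)   # remember how to reverse it

    def rollback(self):
        while self.undo_stack:
            self.undo_stack.pop()()    # undo in reverse order
```

For example, a zoning change would register "remove this zone" as its undo, and a LUN masking change would register "clear this mask"; whichever steps completed before the failure are reversed, newest first.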

In optimum circumstances, these best practices are depicted graphically in a workflow that safely defines individual steps or processes that can be performed either manually or automatically. Executing poorly defined workflows or scenarios will increase the rate at which issues, such as using incorrect worldwide names, or improperly configured ports occur in the organization. The old computer axiom "garbage-in, garbage-out" applies here as well.

Best-practice workflows must be automatically customized to the network storage infrastructure that is already deployed. A typical environment offers many choices for striping, mirroring, and performance tuning, varying with the types of arrays, current RAID pools, fabric layout, port utilization, spindle utilization, volume managers, and replication software available. The workflow should consider these choices in its suggested best-practice definition of tasks to fulfill a provisioning request. Finally, best-practice workflows should have a detailed audit and timing capability, so that it is clear what steps were taken to achieve a specific objective while assigning accountability to the business units being supported by IT.
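Choosing among the deployed options can be as simple as filtering the current inventory against the requested class and capacity. The pool inventory and selection rule below are invented for illustration; a real SRM optimizer would also weigh fabric layout, port utilization, and spindle contention:

```python
# Hypothetical inventory of currently deployed RAID pools.
POOLS = [
    {"array": "array-a", "raid": "RAID 5",   "free_gb": 800},
    {"array": "array-b", "raid": "RAID 1+0", "free_gb": 300},
    {"array": "array-b", "raid": "RAID 5",   "free_gb": 1200},
]

def choose_pool(raid_level, gigabytes, pools=POOLS):
    """Pick a pool that matches the requested RAID level and has enough
    free capacity, preferring the pool with the most headroom."""
    candidates = [p for p in pools
                  if p["raid"] == raid_level and p["free_gb"] >= gigabytes]
    if not candidates:
        return None   # the workflow would route this to an architect for review
    return max(candidates, key=lambda p: p["free_gb"])
```

Returning `None` rather than guessing is deliberate: an unsatisfiable request should surface in the workflow, not be silently downgraded.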

IT and data managers are notorious for being risk-averse and conservative. As such, it is unrealistic for software vendors to expect that a "fully enclosed," one-touch, completely automated storage provisioning event will be readily accepted by this community. An effective best-practice workflow should enable automation once IT administrators are comfortable and have confidence in the solution, while preserving the option to execute each task manually. Administrators should also have the capability to stop, roll back, delegate, or continue the execution process at any point.

What initial best practices should be implemented? One of the first processes that will provide the most benefit to IT administrators will be a detailed best practice for provisioning new storage for a new application or the expansion of storage for an existing application.

Provisioning new storage or expanding existing storage is one of the most common but error-prone tasks that storage administrators perform. Critical applications must remain completely available, meaning the expansion is done under time pressure and duress, with little time to verify that the provisioning will not impact other service-level objectives. This screams for automation and process control, implemented through workflow and policy integration. The problem is exacerbated by the fact that without an integrated SRM, an average-sized SAN requires interaction with up to 20 different tools. The best practices must be implemented independently of specific device functionality: the workflow and integrated SRM solution should shield the user from having to deal with multiple user interfaces and procedures for these component-specific tasks.
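Shielding the workflow from dozens of vendor-specific tools is classically done with an adapter layer: one device-independent interface, with a translation class per vendor tool. The interface below is a hypothetical sketch of that pattern, not an API from any real SRM product:

```python
from abc import ABC, abstractmethod

class ArrayAdapter(ABC):
    """Hypothetical device-independent interface. The workflow calls this
    one API; each vendor-specific adapter translates the calls into that
    vendor's tool, CLI, or element manager."""
    @abstractmethod
    def create_lun(self, size_gb: int) -> str: ...

    @abstractmethod
    def mask_lun(self, lun_id: str, host_wwn: str) -> None: ...

class FakeArrayAdapter(ArrayAdapter):
    """In-memory stand-in for a vendor adapter, useful for simulation."""
    def __init__(self):
        self.luns, self.masks = [], {}

    def create_lun(self, size_gb):
        lun_id = f"lun-{len(self.luns)}"
        self.luns.append((lun_id, size_gb))
        return lun_id

    def mask_lun(self, lun_id, host_wwn):
        self.masks[lun_id] = host_wwn
```

The workflow's provisioning steps then run unchanged whether the target is one vendor's array or another's; only the adapter differs.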

Using workflow and automation best practices allows less-skilled personnel to perform routine tasks in the networked storage environment and gives storage administrators real tools to manage increasingly large and complex storage environments. Storage architects can concentrate on improving their standard best-practice processes, and IT managers and executives can improve the service level of their applications and services while reducing costs and increasing operational efficiencies.

Mike Koclanes is co-founder and chief technology officer at CreekPath Systems (Longmont, Colo.)
COPYRIGHT 2003 West World Productions, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2003, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Author: Koclanes, Mike
Publication: Computer Technology Review
Date: Feb 1, 2003

