
Disk array storage considerations as part of TCO strategies.

The First Step: Understand the Business Environment in Relation to Storage TCO

Information availability drives the success of today's real-time business, its partners, and its suppliers. Empowering businesses with up-to-date, aggregated information has created a new wave of real-time, data-intensive enterprise applications. The dynamic nature of the market, competitive challenges, and constantly changing user needs have further intensified the need for IT infrastructures to adapt and respond dynamically to business requirements. Furthermore, the sheer diversity of today's information users, many of whom now access information through 24x7 Web portals, has resulted in unpredictable information utilization patterns and processing requirements. Within this dynamic environment, data growth for many organizations, especially those in the small-to-medium enterprise (SME) space, now exceeds 100% annually, challenging traditional storage architectures' ability to manage and scale the storage infrastructure efficiently and cost-effectively. Because of these limitations, TCO must be considered carefully as data multiplies throughout the enterprise.

[FIGURE 1 OMITTED]

Server and application architectures have evolved to meet the demands of today's unpredictable, highly distributed, real-time processing environments. Clustered architectures aggregate processing power across application modules, increasing performance and responsiveness while providing near-linear scale. Clustered application architectures also provide new levels of resilience for application processing across clustered blade or classic server infrastructures. In addition, application clusters dramatically reduce the operating and capital expenditures associated with data center infrastructure, thanks to the commoditization of hardware and the simplification and dynamic nature of administration.

The Next Step: Evolve Storage Infrastructure Away From Excessive TCO

Today's enterprises benefit from the deployment of clustered processing and application architectures, gaining dramatically increased uptime and resilience, configuration flexibility, and immediate scalability. In contrast, traditional storage area networks (SANs) continue to feature dual-processor, chassis-based, static architectures, which not only limit storage flexibility but also demand large, risky capital investments.

These chassis-bound architectures leverage mainframe-era designs to deliver reliability against unplanned downtime events. However, the planned downtime needed to expand volumes, change RAID levels, or alter overall storage configurations leaves minimal configuration flexibility. Additionally, the scalability of capacity and performance is limited by the expansion capabilities of a fixed two-controller architecture. Monolithic architectures also prevent organizations from protecting their investment and taking advantage of technological advances, such as improved drive capabilities, since traditional SANs require that all drives within the system be identical, and controller upgrades require downtime for the entire storage subsystem.

This limits overall storage responsiveness and increases storage operating expense (OPEX) and total cost of ownership (TCO). Modern storage architectures must evolve to address issues including:

* Traditional SAN architectures require significant acquisition and deployment investments because of their monolithic, tightly integrated design. They inject dual hardware expense and management complexity for standalone, single-chassis protection against unplanned downtime. Even so, these capital expenditure (CAPEX) outlays are often overshadowed by the significant expense involved in configuring and deploying the SAN. SAN deployment and reconfiguration usually require expensive vendor professional services, with the result that upgrades are often deferred because of the high cost involved. The resulting delay in the availability of critical applications has significant financial and competitive implications.

* Traditional SAN architectures severely limit the responsiveness of the computing infrastructure. Configuration, deployment, management, and maintenance are static and complex. Traditional SAN architectures require significant planned downtime to change or upgrade storage, server, or application infrastructures. These architectures also require dual-pathing software on the host to enable SAN interconnectivity and to prevent storage disruption. This software adds significant time and management overhead, while preventing the true server and storage independence promised by the concept of the SAN.

With traditional architectures, desired infrastructure changes are often postponed for weeks or months, and then implemented during planned outages at the expense of overall information availability. Planned downtime accounts for over 80% of data unavailability, yet is accepted today as "status quo" within the industry. In today's Web-based environment, the impact of information unavailability reaches far beyond internal users--impacting revenue generation, just-in-time (JIT) supplier delivery, and partner performance.

Additionally, traditional SAN storage suffers from low capacity utilization rates. With traditional SANs, independent storage arrays are carved into unique logical unit numbers (LUNs) of fixed size, which are assigned to individual servers. Since expanding the capacity allocated to each server requires planned downtime, the common practice is to over-allocate each server LUN in advance of actual need. Once allocated, capacity cannot be utilized elsewhere when other server or application requirements grow, even if it is empty and unneeded by its own associated server. (The sketch following this list makes the utilization penalty concrete.)

* Traditional SAN architectures are expensive to control and scale, mirroring the glass-house expense of their mainframe design foundations. Expensive service professionals must respond to changes in the storage infrastructure or failures in hardware components. Upgrades, even within families of products, often require service expertise and data regeneration, with consequent planned unavailability. As SAN capacity scales, managing larger and more complex solutions requires the continual addition of highly trained and expensive in-house resources (or even more expensive vendor consultation), significantly impacting overall TCO. Because of their RAID-group virtualization techniques, these architectures utilize significantly less than 100% of the available storage resource, forcing over-investment and contributing to sub-optimal returns on these investments.
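To make the over-allocation and utilization penalty concrete, here is a minimal Python sketch comparing fixed, per-server LUN carving with a shared, pooled allocation model. The capacity figures and the growth buffer are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative arithmetic only: compares capacity utilization when each
# server gets a fixed, over-allocated LUN versus drawing from a shared pool.
# The workload numbers below are hypothetical.

servers_used_tb = [0.4, 1.1, 0.2, 0.8, 0.5]   # actual data per server (TB)
lun_size_tb = 2.0                              # fixed LUN carved per server

# Traditional SAN: capacity is locked to each server's LUN up front.
allocated = lun_size_tb * len(servers_used_tb)
used = sum(servers_used_tb)
print(f"Fixed LUNs : {allocated:.1f} TB allocated, "
      f"{100 * used / allocated:.0f}% utilized")

# Pooled model: capacity is provisioned from one pool as data grows, so
# stranded headroom in one volume can serve another server's growth.
headroom = 1.2                                 # modest growth buffer
pooled = used * headroom
print(f"Shared pool: {pooled:.1f} TB allocated, "
      f"{100 * used / pooled:.0f}% utilized")
```

With these sample figures the fixed-LUN model strands 70% of purchased capacity, while the pooled model runs above 80% utilization, which is the gap the over-investment argument above turns on.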

Now more than ever, enterprise requirements demand storage innovation--specifically, the availability of a storage architecture that responsively supports dynamic business operations while providing new levels of resilience and scale. Storage infrastructures must evolve beyond their legacy-based heritage by taking advantage of proven clustering techniques that form the foundation of today's dynamic IT infrastructures.

The Last Step: Optimizing TCO via Distributed Storage Clusters

Distributed storage cluster (DSC) architecture exploits the proven benefits of clustering within an agile storage infrastructure, enabling enterprises to respond quickly and transparently to business changes. Resilient, uninterrupted storage availability provides optimal TCO through the only solution that addresses the full spectrum of events that cause both planned and unplanned downtime. Several breakthrough technologies have created fiber-distributed storage clusters that set the TCO standard for resilience, responsiveness, and scale (Figure 1).

MLOR: Unmatched Resilience

Distributed storage clusters are designed for Multiple Layers of Resilience (MLOR) across a distributed, N-way storage architecture. Innovations in resilience include failover at the controller and cluster level, virtual server redirects across geo-distributed locations, and dynamic server and application management. This resilience reduces TCO by eliminating downtime in many cases where traditional SANs would suffer performance degradation, data loss, or outright failure.

The DSC architecture features modular components, including controllers, drive bays, and control platforms, which are deployed as independent modules across fiber-distributed networks. Flexibility and resilience are enhanced by allowing components to be placed where they are needed rather than in a single chassis, dramatically mitigating risk in the event of an outage at any single location. The architecture leverages an inherently networked design, giving enterprise IT the power to optimally design and deploy a networked storage solution, matching storage network configurations to business reliability and cost requirements. DSC configurations can range from a single interconnect path between two nodes, to dual paths between nodes, to a full N-way mesh configuration.

N-Way Clustering: Multiple distributed controller nodes (as opposed to the static storage processors of non-distributed architectures) can be deployed across fiber networks when and where needed (Figure 2). All controllers actively share the I/O workload and communicate with one another within the cluster. Additional controllers may be added as needed to provide greater resiliency and performance scale. This design enables controllers placed in multiple, fiber-distance locations to continue processing storage requests even if all but one controller fails, thanks to active-active N-way clustering. Servers and applications continue to access, process, and deliver business information, transparent to failure events.
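As a rough illustration of this active-active, N-way behavior, the Python sketch below models a cluster that keeps serving I/O as long as any controller survives. The class names and round-robin dispatch are illustrative assumptions, not Xiotech's actual implementation.

```python
# Toy model of active-active N-way clustering: every controller serves I/O,
# and requests keep flowing while at least one controller survives.
import itertools

class Controller:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def serve(self, request):
        return f"{self.name} handled {request}"

class StorageCluster:
    def __init__(self, controllers):
        self.controllers = controllers
        self._rr = itertools.cycle(controllers)

    def submit(self, request):
        # Round-robin across live controllers (a stand-in for real
        # active-active workload distribution).
        for _ in range(len(self.controllers)):
            node = next(self._rr)
            if node.alive:
                return node.serve(request)
        raise RuntimeError("no surviving controller: storage unavailable")

cluster = StorageCluster([Controller(f"ctrl-{i}") for i in range(4)])
print(cluster.submit("read  vol0/block17"))

# Fail all controllers but one; I/O continues on the survivor.
for node in cluster.controllers[:-1]:
    node.alive = False
print(cluster.submit("write vol0/block17"))
```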

Reduction of TCO through Failover and Fastback Without Server Software

When considering storage TCO, attention must be paid to the role of server (host) software. To eliminate costs associated with server software (acquisition, installation, maintenance, patching, updating, etc.), distributed storage cluster failover and fastback provide resilience within and across the cluster to transparently optimize storage resilience and continuous application performance. All failover and fastback is done without requiring any host-side software. The operating systems, applications, and users are unaware that storage events have occurred.

Intra-Controller Failover/Fastback: Due to their legacy design, traditional SAN controller failover mechanisms significantly impact performance during failover operations, as one storage processor and its throughput are completely lost--even in the case of minor hardware or cabling issues. In contrast, the DSC architecture is designed to provide multiple points of failover within the controller itself, delivering continuous, uninterrupted storage performance and consequent application uptime. Cluster nodes will continue to provide storage access to applications, transparent to any internal failover, when traditional SAN controller failover would negatively impact performance. When the problematic component is repaired, a single click returns the controller to its previous operational state--transparent to any application processing.

Clustered Failover/Fastback: Automatic failover enables computing resources to be transparently redirected across the cluster to alternative ports or controllers without business interruption. The server "sees" the same storage configuration even though it has been redirected to a new controller, thanks to an innovation called "moving targets". In the event of a controller failure, or if the controller is removed from the cluster for maintenance, its storage requests are transparently and dynamically transferred to other controllers with no operator intervention--using the moving target feature. When the controller node is repaired, a single click returns the cluster to its previous state--transparent to application processing.
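The "moving targets" idea can be pictured as a layer of identity indirection: the server addresses a stable virtual target, while the cluster rebinds that identity to a surviving controller behind the scenes. The toy sketch below illustrates this; the mapping table and names are hypothetical, not the product's internal mechanism.

```python
# Hypothetical illustration of target indirection: the host only ever knows
# the virtual target, never which physical controller answers.

target_map = {"target-A": "ctrl-0", "target-B": "ctrl-1"}

def server_io(target_id):
    # The host's view of the target never changes.
    return f"I/O to {target_id} served by {target_map[target_id]}"

print(server_io("target-A"))          # served by ctrl-0

# ctrl-0 fails or is pulled for maintenance: rebind its targets elsewhere.
for tid, ctrl in target_map.items():
    if ctrl == "ctrl-0":
        target_map[tid] = "ctrl-1"    # transparent to the host

print(server_io("target-A"))          # same target, now served by ctrl-1
```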

Full Server and Application Transparency and Independence

The DSC architecture requires no additional server software for resilient SAN connectivity. Transparent failover/fastback is enabled without host-based components, providing full processing resilience while dramatically simplifying SAN deployment and upgrades, as well as server and application changes. This architectural approach, termed 'Zero Server Footprint', has been in place within innovative virtual storage architectures for several years. It efficiently and effectively decouples the storage, server, and application layers, providing true infrastructure independence.

Independence of the computation and storage layers is achieved without sacrificing horizontal scaling of either layer. Changes required to scale performance or capacity, or to alter availability configurations at the storage layer, are made without changes to the host. Similarly, faster application (server) processors or upgraded versions of operating systems and applications can be deployed immediately, without storage infrastructure changes, all within a highly resilient processing infrastructure. The tightly coupled, interdependent relationship between computation and storage in traditional SAN architectures, which forces enterprises into compromised business choices, downtime events, and ultimately excessive TCO, is completely eliminated. The resulting infrastructure independence accelerates business responsiveness while reducing administrator costs, lost-opportunity costs, and planned downtime overhead for both the server and storage infrastructure.

Ensured Data Integrity

Distributed storage clusters are architected for maximum data integrity in the case of site failures. In traditional SAN architectures, data integrity is at risk and recovery can be complex and time consuming, owing to caching and inherently complex, layer-dependent remote replication mechanisms. The Magnitude family's storage architecture eliminates the traditional design components that put data integrity at risk in the case of failure or disaster. The architecture also features the industry's simplest yet most powerful geographic replication and resilience tools, delivering the industry's fastest time-to-recovery. The DSC architecture provides return-to-state in seconds or minutes versus the hours or days required by traditional SAN architectures, greatly enhancing TCO by reducing labor and time.

[FIGURE 2 OMITTED]

Dynamic Responsiveness Eliminates Planned Downtime

With today's unpredictable, Web-based workloads, the ability to scale, prioritize, and optimize storage processing is critical to avoid disruption, slow response, or transaction delay. Reflecting the power of clustered application architectures, the DSC architecture also virtually eliminates the need for planned downtime. IT staff can dynamically update storage cluster configurations to meet required business changes--during business hours, without specialized expertise.

Leveraging Business-Responsive Storage

The DSC architecture delivers simplified storage operations that enhance flexibility and expand availability. Routine tasks that once required planned downtime, most often after hours or on weekends, can now be performed during normal business hours. The architecture enables online addition, replacement, or removal of disk drives, servers, or virtual storage volumes, transparent to operating systems and applications. IT personnel can dynamically manage the storage infrastructure in real time: organizations can install, tune, maintain, upgrade, and alter storage configurations continuously to match their business requirements, without ever sacrificing information or application availability. For example, the DSC architecture can:

* Update or change storage configurations to meet application needs without disruption.

* Respond automatically to application requests such as Microsoft Virtual Disk Service (VDS) and Volume Shadow Copy Service (VSS).

* Expand volume sizes or reallocate volumes to higher-performance or higher-availability physical units (disk drives).

* Redirect computing resources to alternative storage locations as needed, in seconds or minutes.

A policy-driven environment transparently adapts the storage configuration to business needs without manual intervention, as the sketch below illustrates. Automated provisioning and configuration take place based on application-, time-, or event-based triggers, providing continuous availability without the need for emergency staffing or after-hours effort. Application-to-storage programming interfaces (APIs) from industry leaders, including Microsoft, are inherently supported, enabling distributed storage clusters to respond dynamically to changes requested by the application or operating system itself.
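Here is a minimal sketch of what such policy-driven automation might look like. The thresholds, triggers, and volume names are hypothetical; a real system would invoke the array's management API rather than print decisions.

```python
# Hypothetical policy engine: provisioning actions fire on time- or
# event-based triggers instead of manual intervention.

policies = [
    # (trigger predicate, action description)
    (lambda vol: vol["used_pct"] >= 80, "expand volume by 25%"),
    (lambda vol: vol["hour"] == 2,      "snapshot volume for backup window"),
]

volumes = [
    {"name": "erp-data", "used_pct": 85, "hour": 14},
    {"name": "web-logs", "used_pct": 40, "hour": 2},
]

for vol in volumes:
    for trigger, action in policies:
        if trigger(vol):
            # A real implementation would call the storage management API;
            # here we only report the decision the policy would take.
            print(f"{vol['name']}: {action}")
```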


Increasing IT Effectiveness

The DSC architecture enables organizations to leverage current staffing resources to manage increasing storage volumes. Intelligent control eliminates the complexity of traditional SAN architectures by abstracting physical complexity into a highly intuitive management environment. Time-to-competency is dramatically reduced, and the imperative for specialized, expensive, vendor-centric or in-house storage management personnel is eliminated. Automated provisioning and application-driven response dynamically manage the cluster without operator intervention, eliminating the need for emergency staffing, overtime hours, and crisis management. In today's restricted budgetary environment, the ability to manage more capacity with fewer, less-costly resources presents attractive economic benefit.

Leveraging Clustered Infrastructure Expertise

Thanks to the transition of the processing infrastructure to cluster- and blade-based architectures, today's computing and application administrators already understand the issues underlying clustered solutions. Applying similar clustering concepts to storage increases the scale and performance of the storage pool while leveraging that expertise across IT personnel: a distributed, clustered storage architecture mirrors the clustered architecture of application and computation resources, making the shared skills directly applicable. In addition, the DSC architecture delivers a dynamically manageable and highly simplified storage management environment.

Multiple Dimensions of Scale

The inherent component architecture of distributed storage clusters scales seamlessly from edge to core. Storage clusters are optimally designed, geographically deployed, and independently scaled based on business or application needs. Capacity and performance are incrementally scaled without application disruption. Data access, replication, and management are seamless within and across all clusters, edge to core. Investment in current and future storage technologies is protected, thanks to the architecture's unique support for drives of different sizes and speeds within the same cluster or distributed between clusters.

Edge-to-Core Scale

Distributed storage clusters scale seamlessly from edge to core. Data is easily and transparently shared or replicated between edge and core cluster configurations with no business disruption. As storage requirements expand, capacity and performance are incrementally and transparently scaled, again with no business disruption. Investment in current and future storage technologies is protected, as drives of diverse capacities and speeds are easily intermixed and fully utilized across the clusters.

Optimized Aggregate Application Performance

The DSC architecture is designed to provide optimized, continuous performance for today's aggregated, clustered application workloads. It allows independent, incremental performance scaling of both the storage pool and storage access. The DSC architecture inherently scales application access to the storage pool by adding distributed controller nodes to the cluster transparently, without any application disruption. It is the only architecture inherently designed to support true parallelized, multithreaded I/O access between applications or servers and the storage pool. Applications, or individual modules within a clustered computing infrastructure, take advantage of parallelized I/O access to the entire storage pool, delivering unmatched scale and responsiveness to business needs.
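To illustrate parallelized, multithreaded I/O access in miniature, the sketch below issues reads concurrently across several controller paths instead of funneling them through one storage processor. The controller names and simulated service time are assumptions for illustration only.

```python
# Illustrative only: application threads spread reads across multiple
# controller paths concurrently, so aggregate throughput scales with paths.
from concurrent.futures import ThreadPoolExecutor
import time

CONTROLLERS = ["ctrl-0", "ctrl-1", "ctrl-2", "ctrl-3"]

def read_block(block_id):
    path = CONTROLLERS[block_id % len(CONTROLLERS)]  # spread I/O over paths
    time.sleep(0.01)                                 # simulated service time
    return f"block {block_id} via {path}"

start = time.time()
with ThreadPoolExecutor(max_workers=len(CONTROLLERS)) as pool:
    results = list(pool.map(read_block, range(16)))
print(f"{len(results)} reads in {time.time() - start:.2f}s "
      f"across {len(CONTROLLERS)} paths")
```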

Dimensional Storage Pooling Enhances Scale and Response

Inherent in distributed storage clusters is the capability to deliver a fully virtualized storage pool, enabling performance and capacity to be incrementally provisioned and scaled simply by increasing the number of drives deployed. A proven industry innovator, Xiotech was the first to recognize that optimal application performance is best derived from full virtualization of the storage pool. This approach departs from traditional, legacy SAN designs that allocate fixed-storage groups, partitions, or drives to specific applications. Instead, enterprise application workloads are allocated in parallel, or striped, across optimal blocks on all available spindles in the storage pool.

Dimensional Storage Pooling maximizes both individual volume and aggregate cluster performance by utilizing the optimal number of drives for the virtualization of each volume within the cluster, as well as utilizing the entire aggregate drive population across all volumes defined in each cluster. Xiotech's virtualization also enables the intelligent abstraction of physical complexity into simple, easy-to-understand logical views--simplifying storage and performance configuration, and thereby reducing training and expertise levels required of IT staff.
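The following simplified model shows the pooling idea: every logical block of a volume is spread across all spindles in the pool rather than confined to a fixed RAID group. The drive count and round-robin placement are illustrative assumptions, not the actual virtualization algorithm.

```python
# Simplified model of full storage-pool virtualization: a volume's blocks
# are striped round-robin across every spindle, so each volume draws on the
# entire drive population. Drive counts here are hypothetical.

DRIVES = [f"drive-{i}" for i in range(12)]   # entire pool participates

def place_blocks(volume_blocks):
    """Map each logical block of a volume onto a drive, round-robin
    across all spindles so every drive shares the workload."""
    return {blk: DRIVES[blk % len(DRIVES)] for blk in range(volume_blocks)}

layout = place_blocks(24)
for blk in range(0, 24, 5):                  # show a sample of the mapping
    print(f"logical block {blk:2d} -> {layout[blk]}")

# Every volume touches all spindles, so per-volume and aggregate
# throughput both scale with the drive population.
drives_touched = len(set(layout.values()))
print(f"drives serving this volume: {drives_touched} of {len(DRIVES)}")
```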

Summary

Distributed storage clusters deliver new levels in storage value and previously unattainable reductions in TCO. The DSC architecture represents the next evolution in SANs, delivering unmatched resilience, responsiveness, and scale.

Clustered storage dramatically changes the way storage is deployed and managed across the enterprise. For the first time, a single architecture delivers the ability to scale from edge-to-core, resulting in the storage industry's most flexible investment and deployment options. A modular design enables on-demand scale of capacity, performance, and availability, matching storage investment to ongoing business requirements. Leveraging clustering's industry-proven, dynamic adaptation, storage can be dynamically reconfigured to meet changing business drivers in real time--without business disruption. The savings in overall infrastructure TCO, storage operational costs and capital investment dollars dramatically impact bottom-line results.

Distributed storage clusters are the storage architecture for today and tomorrow, delivering reduced TCO, true competitive advantage, unlimited business responsiveness, and accelerated return on investment.

www.xiotech.com

Rob Peglar is vice president, technical solutions, at Xiotech Corporation (Eden Prairie, MN).
