
Caching Requires Proper RAID Controller Configuration.

This article is the second in a two-part series. The first part appeared in the February issue of CTR.

Write caching is based on another simple principle: it takes a few microseconds to store write data in a controller's cache versus a half dozen milliseconds to store it on disk. Writing to (or reading from) cache is over 1,000 times faster than writing to (or reading from) disk.

There are two types of write caching: write-back and write-through. With write-back caching, a write is stored in the cache, the I/O is acknowledged as "complete" to the server that issued it, and some time later the cached write is written, or flushed, to disk. When the application receives the I/O-complete acknowledgement, it assumes the data is permanently stored on disk. With write-through caching, sometimes referred to as conservative cache mode, writes are written to both the cache and the disk before the write is acknowledged as complete. Write-through caching improves I/O performance for applications that frequently read recently written data, since that data can be served from the cache.
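The difference between the two policies comes down to when the acknowledgement is sent relative to the disk write. A minimal sketch (the class and method names here are illustrative, not any vendor's firmware interface):

```python
class Disk:
    """Toy backing store: a dict standing in for disk blocks."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data   # a real disk write costs milliseconds


class CachingController:
    """Hypothetical controller illustrating the two write-cache policies."""
    def __init__(self, disk, mode="write-back"):
        self.disk = disk
        self.cache = {}           # microsecond-speed cache memory
        self.dirty = set()        # blocks acknowledged but not yet on disk
        self.mode = mode

    def write(self, lba, data):
        self.cache[lba] = data
        if self.mode == "write-through":
            self.disk.write(lba, data)   # hit the disk before acking
        else:
            self.dirty.add(lba)          # ack now, flush later
        return "complete"                # acknowledgement to the host

    def flush(self):
        """Background task: make write-back data permanent on disk."""
        for lba in list(self.dirty):
            self.disk.write(lba, self.cache[lba])
            self.dirty.discard(lba)

    def read(self, lba):
        if lba in self.cache:            # cache hit: microseconds
            return self.cache[lba]
        return self.disk.blocks.get(lba) # cache miss: milliseconds
```

With `mode="write-back"`, the host sees "complete" while the block is still only in `self.cache`; with `mode="write-through"`, the ack is delayed until the disk write finishes, but the block remains in cache for fast subsequent reads.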

Caching is a cost-effective way to boost I/O performance. However, unless the RAID controllers doing the caching are configured in dual active pairs and designed with cache coherency and robust recovery mechanisms, caching can cause incorrect data to be delivered to applications and corrupt databases when elements in the I/O path fail.

Cache Mirroring

One element in the I/O path that obviously jeopardizes data integrity if it fails is the RAID controller. Data written to a write-back cache is vulnerable until it is made permanent on disk, which happens later as a background task when spare cycles are available. If a controller with write-back caching enabled fails, the writes in its cache may be lost, and since the controller has already acknowledged the I/Os as complete, the application is unaware of the data loss. In database parlance, this type of data corruption is called the "lost write" phenomenon: the application believes the writes were written to disk, but they never made it past the controller's data cache.
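The lost-write failure mode can be simulated in a few lines. This is a deliberately simplified sketch of the timing hazard, not a model of any real controller:

```python
class WriteBackController:
    """Illustrative write-back controller with a volatile cache."""
    def __init__(self):
        self.cache = {}    # volatile write-back cache
        self.disk = {}     # permanent storage
        self.alive = True

    def write(self, lba, data):
        self.cache[lba] = data
        return "complete"          # acked before the data reaches disk

    def flush(self):
        if not self.alive:
            raise RuntimeError("controller failed; cached writes are gone")
        self.disk.update(self.cache)

    def fail(self):
        self.alive = False
        self.cache.clear()         # cache contents die with the controller


ctrl = WriteBackController()
ack = ctrl.write(42, "payroll record")   # application sees "complete"
ctrl.fail()                              # controller dies before the flush
lost = 42 not in ctrl.disk               # True: the acknowledged write is lost
```

The application received `"complete"`, so from its point of view block 42 is safely on disk; after the failure, nothing in the system remembers that the write ever happened.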

In single controller array configurations, there is no dependable cache recovery mechanism that protects cached writes against controller failures. However, external storage arrays with dual active RAID controllers can provide a reliable cache recovery mechanism called mirrored caching. During normal operations, the dual active controllers share the I/O workload; however, if one controller fails, its partner assumes the entire workload.

In dual active RAID configurations with cache mirroring, writes are written to the caches in both controllers before the write is acknowledged as complete. If a controller fails, its partner completes the write operations that were in process at the time of the failure by flushing its write buffer to disk, restoring the database to a consistent state. The surviving controller then transparently fails over the host port address of the failed controller and assumes its workload in addition to its own.
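The mirror-then-ack sequence and the subsequent failover can be sketched as follows. All names are hypothetical; a real implementation mirrors over a dedicated bus or back-end channel rather than a Python attribute:

```python
class MirroredController:
    """One controller in a dual active pair (illustrative sketch)."""
    def __init__(self, name, disk):
        self.name = name
        self.disk = disk             # shared backing store
        self.cache = {}
        self.partner = None
        self.ports = {name}          # host port addresses answered here

    def write(self, lba, data):
        # Mirror the write into BOTH caches before acknowledging it.
        self.cache[lba] = data
        self.partner.cache[lba] = data
        return "complete"

    def fail(self):
        self.cache.clear()                     # this cache is gone
        survivor = self.partner
        survivor.disk.update(survivor.cache)   # flush mirrored writes to disk
        survivor.ports.add(self.name)          # take over the failed host port


disk = {}
a = MirroredController("A", disk)
b = MirroredController("B", disk)
a.partner, b.partner = b, a
a.write(7, "row")    # cached in both controllers before the ack
a.fail()             # B flushes its mirror copy and assumes A's address
```

Because the write was mirrored before the acknowledgement, controller A's failure costs nothing: B still holds the data, flushes it, and answers on A's port.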

Cache Coherency

High availability computing environments require I/O subsystems with "no single point of failure." The only way to achieve this objective is with redundancy built in throughout the I/O subsystem and with the data protected by a parity or mirrored array. In the event of a host-side path failure, both RAID controllers in a dual active pair must be capable of responding to I/O requests with the current state of stored data, regardless of the path the I/Os travel to reach the controller. High-end RAID controllers typically solve this problem with a memory bus between the mirrored caches in the two controllers to maintain synchrony between the caches. A more cost-effective approach for NT environments is a controller-to-logical-volume access control strategy that locks data areas before I/Os are serviced and prevents applications from accessing stale cache data. Features like cache mirroring and cache coherency have been available in RAID controllers for mainframes and high-end Unix systems for some time; however, a new generation of RAID controllers that include these features and are priced for NT servers is beginning to hit the market.
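The access-control approach described above amounts to a lock manager keyed by data area: a controller must own the lock for a volume (or extent) before servicing I/O to it, so its partner can never answer from a stale cached copy. A minimal sketch under that assumption (the class and method names are invented for illustration):

```python
import threading

class VolumeLockManager:
    """Hypothetical controller-to-logical-volume access control:
    a controller locks a data area before servicing I/O to it."""
    def __init__(self):
        self._locks = {}               # volume id -> owning controller
        self._mutex = threading.Lock()

    def acquire(self, volume, controller):
        with self._mutex:
            owner = self._locks.get(volume)
            if owner is None or owner == controller:
                self._locks[volume] = controller
                return True
            return False   # partner owns it: it must release (and its
                           # cached copy be treated as authoritative) first

    def release(self, volume, controller):
        with self._mutex:
            if self._locks.get(volume) == controller:
                del self._locks[volume]
```

Serializing access per volume this way trades some concurrency for coherency without the cost of a dedicated inter-controller memory bus, which is why the article frames it as the budget-friendly option for NT-class arrays.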

Self-tuning RAID arrays based on system-manager-specified storage policies are in their infancy, but they will become a reality and significantly lower the cost of storage ownership. Until then, optimizing storage configurations depends on evaluating application I/O access patterns, collecting performance statistics, experimenting with configurations, and understanding RAID data organizations and the nuances of controller cache operations.

Kevin Smith is the senior director of business management and marketing for external products at Mylex Corporation (Boulder CO).
COPYRIGHT 2000 West World Productions, Inc.

Article Details
Title Annotation:Technology Information
Author:Smith, Kevin
Publication:Computer Technology Review
Date:Mar 1, 2000

