
2008 forecast: mapping your data center caching strategy.

Caching has long been deployed to optimize the performance of systems and applications. It is a proven concept, with implementations ranging from registers on a CPU, to system memory, to wide area data services. Cache enhances existing resources by temporarily storing frequently used data for faster delivery and processing. Traditionally, however, cache has been a scarce resource requiring selective deployment and significant management overhead. Adding memory to the data center to improve performance is a growing trend, and with the emergence of large-capacity centralized caching solutions, IT managers now have a more powerful and flexible way to address I/O performance bottlenecks, signaling a big breakout for data center caching in 2008.
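The core idea, keeping frequently used data in fast storage and evicting what has gone cold, can be illustrated with a minimal least-recently-used (LRU) cache. This is a generic sketch of the caching concept, not an implementation of any vendor's product; the class and method names are illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: recently used items stay in fast storage;
    the least recently used item is evicted when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: caller must fetch from slower storage
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes "b" the eviction candidate
cache.put("c", 3)  # capacity exceeded: "b" is evicted
```

Real caches layer on invalidation, write policies, and concurrency control, but the recency-based eviction shown here is the mechanism that keeps "hot" data in memory.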

Today's workloads push the data delivery capabilities of traditional disk-based storage systems to the limit. Fueled by ever-increasing server performance and explosive growth in virtual servers, modern applications demand instantaneous access to huge amounts of shared data. As applications scale to thousands of users and servers, performance limitations and I/O bottlenecks frequently result; these are inherent issues with large-scale shared I/O.

These performance problems manifest themselves in a number of ways, the most typical being application brownouts and the inability to meet required service levels due to slow response times. Server utilization levels drop as CPUs sit idle waiting for data to be delivered from slow mechanical disks. 'Hot spots' can occur on storage systems and disks when there is heavy demand for a particular file or piece of data. Identifying and fixing these problems is a critical challenge for server and storage administrators.

While caching is one of many approaches that data center managers use to address performance issues, traditional caching techniques have limitations. Adding memory to individual servers or storage systems can help to an extent; however, cache size is physically constrained and resources cannot be shared across multiple devices. Architecting and tuning applications to perform well in a distributed caching environment is complex and requires excessive management to make effective use of this scarce resource. Cost is a significant factor as well: when additional cache is required, customers have historically been forced to purchase full systems rather than just the memory needed.

Centralized caching is a new model for deploying memory in the data center that brings all the benefits of cache without the limitations of current approaches. Large-capacity caching appliances attach to the network, allowing customers to scale memory independently from other systems and share resources across thousands of servers and multiple applications. Centralized caching appliances deliver up to one million I/O operations per second (IOPS), over 6 GB/sec of throughput, and response times under half a millisecond.
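Some back-of-envelope arithmetic shows how these headline figures relate. The 4 KB I/O size and the use of Little's law below are illustrative assumptions, not figures from the article:

```python
# Relating IOPS, throughput, and latency (illustrative arithmetic only).
iops = 1_000_000          # one million I/O operations per second
block_size = 4 * 1024     # bytes per I/O -- assumed, not from the article
latency = 0.0005          # 0.5 millisecond response time, in seconds

# Throughput = operations/sec * bytes/operation
throughput_gb = iops * block_size / 1e9   # ~4.1 GB/sec at 4 KB per I/O
# Larger transfer sizes are what push aggregate throughput past 6 GB/sec.

# Little's law: concurrent outstanding I/Os needed to sustain that rate
outstanding = iops * latency              # 500 in-flight requests

print(f"{throughput_gb:.3f} GB/sec, {outstanding:.0f} outstanding I/Os")
```

The takeaway is that million-IOPS figures assume small transfers and deep concurrency; a workload issuing one I/O at a time would be bounded by latency, not by the appliance's aggregate capability.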

As a result, customers can match their memory investment to specific performance needs. Rather than acquire servers that are overconfigured with memory, or multiple storage systems just to get the required memory footprint, data center managers can invest in a dedicated cache resource specifically tailored to deliver high performance I/O. Much like the transition from direct to network attached storage, customers can reap the utilization, management, and cost advantages of deploying cache in the network. As performance needs grow, more cache resources can be added on the fly, scaling to terabytes of capacity if needed. Data center managers now have much greater flexibility and control over how memory is deployed, mapping it to specific application requirements.

Unlike many new technologies, centralized storage caching deploys transparently and does not require changes to existing applications or systems. It also complements other strategic storage technologies such as global namespaces and file virtualization, both of which are powerful tools for managing large amounts of data. When coupled with centralized caching, customers can develop best-of-breed solutions that excel at both performance and capacity management. As the technology improves, expect a wide range of applications and industries to adopt this model in 2008. Strong momentum already exists in industries such as animation, energy exploration, financial services, and business analytics.

Jack O'Brien is director of marketing for Gear6. www.gear6.com
COPYRIGHT 2008 West World Productions, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2008 Gale, Cengage Learning. All rights reserved.

Article Details
Author:O'Brien, Jack
Publication:Computer Technology Review
Date:Jan 1, 2008
