Think strategically, act tactically in 2004: how will your company master the ten biggest challenges?
1. The security capability for data protection and network access must be dramatically improved.
As the value of data grows, IT and storage infrastructure protection has become a better-defined discipline than ever before. The storage infrastructure itself has become much more reliable over the past decade. The chief enemy of data survival is no longer technology failure such as a disk crash, a tape that won't read, or a blue or blank screen from a system or server crash. The primary threats now come from malicious people and from a growing list of environmental disasters. Storms, fires, electrical failures, floods, viruses, hackers, piracy, insider threats, foreign countries, falsified accounting records, and stolen files now cause the majority of computer downtime and data loss. In a three-month period in 2003 alone, parts of the U.S. and Canada experienced major disasters including the Northeast electrical blackout, Hurricane Isabel, and out-of-control fires in Southern California. Growing data security issues are forcing companies to make dramatic and costly improvements to existing IT security systems.
For 2004, the number one storage priority has become data protection. 2004 budgets should prioritize resolving any weakness in the current infrastructure.
2. Storage management deployment is not keeping pace with storage growth.
As the number and types of storage devices continue to proliferate, the complexity of managing storage grows non-linearly. Annual storage growth ranges from 40-70% for some applications, while the deployment of storage management tools has not kept pace. Storage management can be simplified by adding NAS for certain applications and SANs for others, adding virtualization software, implementing SRM, unifying block and file storage into a single architecture, and implementing server-less storage management functions. All of these options improve storage management, but they cost money and require skilled personnel before the benefits can be realized.
As a result of several years of budget cuts and downsizing, many businesses simply don't have sufficient resources to invest in closing the storage management gap. Some businesses have decided not to manage storage at all and simply add more "cheap" hardware as a less painful solution. Unless you are a mainframe-centric (z/OS, iSeries) business, the gap between the total amount of data and what is actually managed is widening, and this is expected to continue through 2004.
For 2004, consider implementing one or two of the above storage management strategies that offer maximum benefit to your business and save the others for later.
3. There is no real answer in sight for widespread interoperability.
For nearly twenty years, IT organizations have hoped to arrive at a point where any product can interoperate with any other. The reality is that this goal isn't getting much closer. Standards committees meet endlessly to reach consensus while vendors quietly resist too much agreement, fearing a loss of their value proposition or competitive edge. The real progress in interoperability remains centered on a vendor working closely with a group of its close business partners (typically by providing a few APIs), but a true open architecture is not in sight.
For 2004, accept the reality that true open systems are not going to be here for years and plan to see more proprietary, high-value solutions from vendors. Choose those that help your business the most, even if they are proprietary.
4. Consider storage technologies that change the rules (disk).
The use of low-cost disk subsystems is expanding beyond the desktop. A new SATA-based storage subsystem expected to arrive in 2004 is being positioned to create a new level of the storage hierarchy called MAID (Massive Arrays of Inactive Disks). In a MAID array, the disks spin and consume power only when necessary. If this sounds analogous to an automated tape library with disks substituted for tape cartridges, you are headed in the right direction.
The MAID concept rests on the long-standing storage industry observation that approximately "80 percent of the activity goes to 20 percent of the physical storage." Power-up for an inactive SATA disk takes about 10 seconds. MAID provides the traditional RAID data protection needed for higher SATA availability. MAID isn't suitable for all applications, but it will be targeted at backup/recovery, low-activity reference data and fixed-content data. Early looks at MAID project pricing between low-cost SATA arrays and automated tape libraries, potentially changing the rules of disk-level pricing.
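To see why the 80/20 access skew makes the MAID idea viable, consider this Python sketch. It is a toy model only: the disk count, spin-down threshold, and workload are invented for illustration and are not drawn from any vendor's design. Disks that sit idle spin down, and the next access to a spun-down disk pays a roughly 10-second spin-up penalty; because most requests hit a small hot subset, most disks stay powered off most of the time.

```python
import random

SPIN_UP_SECONDS = 10  # typical power-up latency for an idle SATA disk

class MaidArray:
    """Toy model of a MAID array: disks spin only while recently accessed."""

    def __init__(self, num_disks, idle_limit=50):
        self.idle_limit = idle_limit                    # requests of inactivity before spin-down
        self.idle_counters = [idle_limit] * num_disks   # start with every disk spun down
        self.spin_ups = 0

    def read(self, disk):
        """Return the access latency in seconds for a read on `disk`."""
        latency = 0.01                                  # nominal seek + transfer time
        if self.idle_counters[disk] >= self.idle_limit:
            latency += SPIN_UP_SECONDS                  # disk was spun down: pay the penalty
            self.spin_ups += 1
        # The accessed disk is now active; every other disk drifts toward spin-down.
        for d in range(len(self.idle_counters)):
            self.idle_counters[d] = 0 if d == disk else self.idle_counters[d] + 1
        return latency

random.seed(0)
array = MaidArray(num_disks=100)
hot = list(range(20))   # the "20 percent" of disks that see most of the activity
for _ in range(1000):
    disk = random.choice(hot) if random.random() < 0.8 else random.randrange(100)
    array.read(disk)
print(f"spin-ups in 1000 requests: {array.spin_ups}")
```

Because the hot disks are re-referenced before their idle counters expire, only a minority of requests pay the spin-up penalty, which is the power-saving bet MAID makes.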
For 2004, evaluate emerging storage technologies that can significantly reduce (i.e., by more than 20%) your existing storage TCO (Total Cost of Ownership).
6. Consider storage technologies that change the rules (tape).
Traditional tape storage still faces several architectural issues: 1) the time to first byte of data (milliseconds for disk versus seconds for tape); 2) tape supports only sequential access (disk supports both random and sequential access); and 3) data recovery takes longer from tape than from mirrored or replicated disk (minutes or hours on tape compared to seconds, normally, for mirrored disk).
Fortunately, the solution involves merging tape and disk into a new architecture. New advanced automated libraries embed disk arrays as a front-end cache to the tape library. The disk storage serves as a high-performance cache for the larger-capacity, lower-cost tape library and handles many of the I/O operations that previously went to the tape subsystem. Pre-established policies regarding file size and usage patterns determine when data moves directly between the disk cache and the automated library without passing through the server.
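Such a placement policy can be sketched as a simple decision function. The thresholds below (a 2 GB size cutoff and a 30-day retention window) are hypothetical values invented for illustration, not settings from any actual library firmware:

```python
from datetime import datetime, timedelta

# Hypothetical policy thresholds; real libraries expose these as configuration.
MAX_CACHE_FILE_BYTES = 2 * 1024**3   # files larger than 2 GB go straight to tape
CACHE_RETENTION_DAYS = 30            # recently used files stay on the disk front end

def placement(size_bytes, last_access, now=None):
    """Decide whether a file belongs on the disk cache or on tape."""
    now = now or datetime.now()
    if size_bytes > MAX_CACHE_FILE_BYTES:
        return "tape"                # too large to cache economically
    if now - last_access <= timedelta(days=CACHE_RETENTION_DAYS):
        return "disk-cache"          # hot data serves from the embedded disk array
    return "tape"                    # cold data migrates out to cartridges

now = datetime(2004, 1, 1)
assert placement(500 * 1024**2, datetime(2003, 12, 20), now) == "disk-cache"
assert placement(500 * 1024**2, datetime(2003, 6, 1), now) == "tape"
assert placement(10 * 1024**3, datetime(2003, 12, 31), now) == "tape"
```

The point of encoding the policy this way is that the library, not the backup server, can evaluate it and move data device-to-device.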
For 2004, consider upgrading existing automated tape libraries to those with embedded disk arrays, improving performance for backup, recovery, fixed-content and archival data.
7. Disk and tape storage vendors need to provide scalability in both capacity and performance.
Disk and tape drive suppliers have historically focused on driving capacity ever higher. As a result, performance has not scaled with capacity, and the highest-capacity devices have become throughput bottlenecks. As disk-drive capacity increases faster than disk-drive performance, utilization levels on larger-capacity disks will continue to decline in order to reduce device contention and maintain acceptable performance. Larger disks reduce the hardware cost-per-gigabyte but also create performance bottlenecks. Today's disk drives hold up to 320 gigabytes, and roadmaps point to over 500 gigabytes per drive in the next 3-5 years, though the trend is finally showing signs of slowing. Tape cartridge roadmaps now project 1 terabyte or more of native capacity. It is imperative that as capacity increases, performance increases accordingly for both disk and tape technologies.
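The declining-utilization argument can be made concrete with an "access density" calculation: random-I/O operations per second divided by capacity. The drive figures below are rough, illustrative numbers (not measurements of specific models), but they show the trend: capacity grew by an order of magnitude while random-I/O performance barely moved, so each gigabyte gets far less I/O capability.

```python
def access_density(iops, capacity_gb):
    """I/O operations per second available per gigabyte stored."""
    return iops / capacity_gb

# Illustrative figures only: an ~18 GB late-1990s drive vs. a 320 GB 2003 drive.
# Random IOPS improved modestly while capacity grew ~18x.
drive_1998 = access_density(iops=100, capacity_gb=18)    # ~5.6 IOPS/GB
drive_2003 = access_density(iops=150, capacity_gb=320)   # ~0.47 IOPS/GB

print(f"1998: {drive_1998:.2f} IOPS/GB, 2003: {drive_2003:.2f} IOPS/GB")
```

When access density falls this far, administrators short-stroke or under-fill large drives to keep contention down, which erodes the cost-per-gigabyte advantage the bigger drive was supposed to deliver.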
For 2004, as vendors continue to provide lower cost-per-gigabyte storage systems based on higher-capacity devices, users should insist that performance increases accompany the capacity increases.
8. Intelligent storage networks.
At the center of the new storage network architecture is a future version of today's network switches, directors and routers. An advanced, fault-tolerant storage switching architecture, sometimes referred to as the Storage Domain Director, is unfolding; it promises to make centralized, outboard management of distributed storage resources a reality. A key goal for the intelligent storage network or switch fabric is to significantly reduce the number of storage management touch points. Many types of storage management applications are candidates for hosting in the storage network, including SAN traffic control, storage consolidation, SRM, HSM, backup/recovery, snapshot copy, replication, and outboard data movement between disk and tape subsystems (server-less functions).
Though a debate continues over where to locate the storage functionality, there is widespread agreement on the need to make storage management independent from the attached servers. Many companies are now delivering fundamental pieces and building blocks of the intelligent fabric, but the complete vision will take a few more years to arrive.
For 2004, keep a sharp eye on emerging components of the intelligent storage network or switch fabric as it evolves. This initiative may provide the best chance of all to close the expanding storage management gap.
9. Know your data: Data lifecycle management.
Understanding what happens to data throughout its lifetime is becoming a critical aspect of effective data management. What data is mission-critical, vital, sensitive, or non-critical? Each of these classifications has its own optimal backup/recovery and availability requirements. If data is the most valuable asset of most businesses, why do we know so little about it?
The probability of reuse has historically been one of the most meaningful metrics for understanding optimal data placement, and it has been a key premise behind HSM (Hierarchical Storage Management) systems. For nearly all data types, the number of references declines significantly as the data ages. This lower frequency of access has been fundamental to HSM for over 25 years. However, the general assumption that the value of data declines as it ages is no longer valid.
While the Nearline concept was becoming widely accepted in the 1990s, the common belief was that archival status was the last phase before deletion or end-of-life for data. Fifteen years later, the game and the rules are different. New government regulations for the transmission and retention of data alter the way data is managed as it ages. The back-end of the data lifecycle is swelling, not shrinking as it did previously, and retention policies are now more effectively based on data value and compliance issues, not just reference activity. This mandates that a universal, standard data classification emerge. We now realize that all data is not created equal.
To cost-effectively implement a data lifecycle management strategy, two or three levels of storage are frequently used. Primary storage, always disk-based, holds highly active data or data typically accessed within a month. Secondary storage, which can be virtual tape for enterprise systems or, often, SATA disk systems, holds data with lower activity that hasn't yet reached archival status. The third level, long-term storage, remains the realm of magnetic tape libraries. Moving large amounts of data up and down the hierarchy, and in and out of a server, is a growing performance concern that begs for device-to-device data-transfer capability.
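A minimal sketch of such a tiering policy, combining the data classifications named earlier with access recency, might look like the following. The classification names come from this article; the 30-day and one-year thresholds are illustrative assumptions, not vendor defaults:

```python
from datetime import datetime, timedelta

def choose_tier(classification, last_access, now=None):
    """Map data to one of three storage tiers.

    `classification` is one of: "mission-critical", "vital", "sensitive",
    "non-critical". Thresholds are hypothetical, for illustration only.
    """
    now = now or datetime.now()
    age = now - last_access
    if classification == "mission-critical" or age <= timedelta(days=30):
        return "primary-disk"        # highly active or business-critical data
    if age <= timedelta(days=365):
        return "secondary-sata"      # lower activity, not yet archival
    return "tape-library"            # long-term retention

now = datetime(2004, 1, 1)
assert choose_tier("non-critical", datetime(2003, 12, 15), now) == "primary-disk"
assert choose_tier("vital", datetime(2003, 3, 1), now) == "secondary-sata"
assert choose_tier("non-critical", datetime(2001, 1, 1), now) == "tape-library"
```

The value of writing the policy down, even this crudely, is that it forces the classification exercise the article recommends: you cannot assign a tier to data you have never classified.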
For 2004, develop a strategy to continually classify your data based on criticality and value to your business. Then implement the storage solution(s) that optimally meets those needs.
10. Implement value-oriented metrics to determine true value.
Most users today still treat hardware purchase price as their primary purchase criterion. This is increasingly unfortunate and reflects the outdated view that hardware is where the value of the IT infrastructure resides. It is like measuring the value of the television industry by the number of sets sold (the old rules) rather than by the value of the content being transmitted (the new rules). With hardware prices falling at 35-40% annually, the value of the storage industry is shifting from atoms to bits.
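A back-of-the-envelope calculation shows why sticker price alone misleads. The numbers below are entirely hypothetical, but they illustrate the common pattern: a cheaper, harder-to-manage array loses to a pricier, better-managed one once three years of administration, power, and maintenance are counted.

```python
def tco(purchase_price, annual_admin, annual_power, annual_maintenance, years=3):
    """Total cost of ownership over a planning horizon, in dollars."""
    return purchase_price + years * (annual_admin + annual_power + annual_maintenance)

# Illustrative numbers only: the "cheap" array costs less up front but more to run.
cheap = tco(purchase_price=50_000, annual_admin=40_000,
            annual_power=6_000, annual_maintenance=8_000)
managed = tco(purchase_price=90_000, annual_admin=20_000,
              annual_power=4_000, annual_maintenance=5_000)

print(f"cheap array 3-yr TCO: ${cheap:,}; managed array 3-yr TCO: ${managed:,}")
```

Under these assumptions the array with the higher purchase price comes out roughly $35,000 cheaper over three years, which is exactly the kind of result a purchase-price-only comparison hides.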
For 2004, accept that the lowest price may not be the best solution for your business and may cost you more in the long run. Choose a solution that provides the optimal overall value, the best ROI, or the lowest TCO.
Publication: Computer Technology Review, Dec 1, 2003