Tiered storage: an idea whose time has come; it's just one driver of the new storage dynamism.
One thing is certain: the days of what consultant Jon William Toigo calls "promiscuous purchasing" are quite dead, and may never be resurrected. IT managers are choosing where to spend with greater care than ever, often overseen by red-pen-wielding financial officers and generalist executives who understand storage very little and understand cutting budgets very well. IDC's Robert Gray identifies a reassuring move toward better budgets. Still, the end-user community is in a tough spot: capacity demand for storage is growing at an average rate of 30% year-over-year, but budgets have not historically kept pace (see chart).
A number of drivers continue to push the need for more and better storage products and solutions. The new consciousness of the need for disaster recovery and business continuity is one; the variety of regulatory mandates worldwide is another. But so far, the vulnerability of data has not spurred spending significantly. Even more remarkably, some businesses are weighing the costs of regulatory compliance against the costs of non-compliance, fines included. This is an ill-advised trade-off. The Federal government may not be technologically sophisticated, but it certainly knows how to fine and prosecute defiant businesses.
When the bean counters have finished whipping budgets into bleeding submission, what will be the important storage direction going forward? I believe the future lies in tiered storage, also known as information lifecycle management--or whatever the catch phrase of the week is. In the toils of a budgetary inferno, IT management cannot afford to treat all kinds of data alike. Today's urgent e-mail will be ready for archiving in a matter of days--in some cases, hours.
It is inevitable, therefore, that enterprise IT take up the challenge and make the hard decisions about which data belongs on primary storage, which on less expensive, lower-performance secondary storage, and which on least-expensive archival storage. Both storage consultants and product sales personnel in the storage industry will need to be in a position to make sensible suggestions and recommendations to their customers. And those suggestions have to revolve around a heterogeneous data center; suggesting that an enterprise render an existing investment or acquisition obsolete is a great way to lose business.
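The tiering decision described above can be sketched as a simple age-based policy. This is only an illustration of the idea, not any vendor's product: the tier names and age thresholds below are assumptions, and a real policy would also weigh access frequency, regulatory retention rules, and business value.

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- assumptions, not an industry standard.
TIERS = [
    (timedelta(days=30), "primary"),    # hot data: fast, expensive storage
    (timedelta(days=365), "secondary"), # warm data: cheaper, slower disk
]
ARCHIVE_TIER = "archive"                # cold data: least-expensive media

def assign_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier from the age of the data's last access."""
    age = now - last_accessed
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return ARCHIVE_TIER

now = datetime(2003, 11, 1)
print(assign_tier(datetime(2003, 10, 25), now))  # recent e-mail -> primary
print(assign_tier(datetime(2003, 1, 15), now))   # months old -> secondary
print(assign_tier(datetime(2001, 6, 1), now))    # years old -> archive
```

The point is that the policy is explicit and mechanical: once management has made the hard decisions, the data movement itself can be automated.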
SAN and NAS
It should come as a surprise to no one that the implementation of a tiered storage strategy would involve the use or expansion of storage architectures like NAS and SAN. It is therefore worthwhile to consider where these architectures are going. I'll organize SAN approaches by connectivity strategy.
At the present time, Fibre Channel-based SANs are the worldwide standard, for two reasons: FC works, and FC is here. Over the past year or two, the FC community has made some progress in reducing costs. Total Cost of Ownership analysis still works a bit against FC, though, since managing an FC SAN calls for specially trained personnel.
But cost leadership is almost certain to pass to iSCSI, where any server with a NIC can serve as an iSCSI initiator. The pacing item in the iSCSI world is the development of iSCSI targets. A number of companies, large and small, are addressing this issue. An incomplete list would include Network Appliance, EqualLogic, Xiran, Intransa, Adaptec, Iomega, LeftHand Networks and ATTO.
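The initiator/target division of labor can be pictured with a toy model. To be clear, this is plain Python illustrating the roles only, not the iSCSI wire protocol (which carries SCSI commands in PDUs over TCP, commonly on port 3260); the IQNs and LUN sizes are invented for the example.

```python
# Toy model: a target exports numbered LUNs over the network; an
# initiator discovers targets and logs in, after which the target's
# LUNs appear to the host as ordinary SCSI disks.

class IscsiTarget:
    def __init__(self, iqn, luns):
        self.iqn = iqn    # e.g. "iqn.2003-11.com.example:array1" (invented)
        self.luns = luns  # LUN number -> capacity in GB

class IscsiInitiator:
    """Any server with a NIC can play this role in software."""
    def __init__(self, iqn):
        self.iqn = iqn
        self.sessions = {}

    def discover(self, portal):
        # SendTargets-style discovery: ask a portal what it offers.
        return [t.iqn for t in portal]

    def login(self, target):
        # A successful login establishes a session with the target.
        self.sessions[target.iqn] = target
        return target.luns

portal = [IscsiTarget("iqn.2003-11.com.example:array1", {0: 500, 1: 250})]
host = IscsiInitiator("iqn.2003-11.com.example:webserver")
print(host.discover(portal))  # ['iqn.2003-11.com.example:array1']
print(host.login(portal[0]))  # {0: 500, 1: 250}
```

The asymmetry in the sketch mirrors the market situation the column describes: the initiator side is cheap commodity software, while building robust targets is where the engineering effort (and the vendor list above) is concentrated.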
Don't assume that SCSI is standing still, either. The International Committee for Information Technology Standards (INCITS) announced the approval of the Serial Attached SCSI (SAS) standard, with the American National Standards Institute (ANSI) quickly accepting it as an American National Standard. The new SAS standard, developed by INCITS Technical Committee T10 and designated INCITS 376, is the next-generation successor to parallel SCSI.
The SCSI family of standards provides for many different transport protocols that define the rules for exchanging information between different SCSI devices. This standard defines the rules for exchanging information between SCSI devices using a serial interconnect. This standard also defines rules that permit SAS hosts to communicate with Serial ATA devices using the same serial interconnect.
"In terms of increases in speed and performance, SAS has a longer life than parallel SCSI," said John Lohmeyer of LSI Logic Corporation and Chair of INCITS/T10. "An important differentiation is that all serial interfaces are self-clocking, so unlike parallel SCSI, data rates can be pushed higher and higher." This adoption must be a new record for T10, which is well known for agonizing over standards acceptance.
Network Attached Storage (NAS), intelligently deployed, also saves money. But, as suggested before in Computer Technology Review, the challenge to NAS appliances comes not from other NAS vendors but from general-purpose servers. Because NAS handles file-level transfers so well, many users are deploying NAS appliances as file storage servers.
Many NAS appliances offer advantages over general-purpose servers: capacities are usually greater, and the bundled software is typically more robust, at lower cost than a conventional server bundle. High-end NAS devices are provided by Network Appliance, EMC, Hewlett Packard, IBM and Dell.
But NAS is also beginning to find serious opportunities in the small-to-medium business space. Newer entrants into the field are targeting this space, and the names to watch include Apple, Snap Appliance, Maxxan, Dot Hill, Inostor, Iomega, Silicon Graphics and Spinnaker (currently in the process of being acquired by Network Appliance).
Most analysts agree that the next important spending cycle in mass storage will be in storage software and storage services. On the services side, consultants are likely to see an uptick in business, given the inherent complexity of storage management. Regular readers are aware that integrators and VARs need to be seen not as sales representatives but as consultants to IT management, addressing real-world pain points.
The software space, for all the current economic softness, is likely to become serious business over the next year especially. One man's solution is another man's problem: SANs, for example, demand SAN management. The storage management software side will likely grow in importance and revenue, but it must meet fairly exacting criteria to reach that lofty eminence.
The problem that storage software addresses is complexity. Different operations within the storage management paradigm need either simplification or automation. Matching RAID sets, assigning LUN and target addresses, LUN masking, balancing application workloads, and zoning for I/O load balancing are only a few of the operations on which storage management success depends. The effective use of virtualization must be the simplified use of virtualization; otherwise the very word will make IT managers hesitate.
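One of those operations, LUN masking, is at bottom an access-control table: the array presents a LUN only to hosts on that LUN's mask. A minimal sketch, with host and LUN names invented for illustration:

```python
# Minimal sketch of LUN masking. The masks dict stands in for the
# access table a real array or management suite would maintain;
# all names here are hypothetical.

masks = {
    "lun0": {"host-db1"},               # database server only
    "lun1": {"host-db1", "host-web1"},  # shared between two hosts
}

def visible_luns(host):
    """Return the LUNs the array presents to this host."""
    return sorted(lun for lun, allowed in masks.items() if host in allowed)

print(visible_luns("host-db1"))   # ['lun0', 'lun1']
print(visible_luns("host-web1"))  # ['lun1']
print(visible_luns("host-mail"))  # [] -- masked from everything
```

The table itself is trivial; the management burden comes from keeping it consistent with zoning, addressing and workload decisions across a growing network, which is exactly where simplification and automation pay off.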
Configuring storage must become a point-and-click operation, regardless of disk type, size and speed. That same configuration activity must be accomplished without reconfiguring every server in the storage network. Adding capacity, which in many circles currently matters more than performance, must become a matter of plugging in a new array.
Storage software that emphasizes ease of use will also help open up markets for architectures like SAN. A simplified SAN management suite will help push SANs out of IT elites of the Fortune 500 into a broader market.
Then there is the problem of openness, often considered the goal of good storage software. The post-millennium data center needs software tools that are vendor-agnostic, for two reasons: investment protection and streamlined centralization. This is a time when even legacy mainframes are not discarded but redeployed as enterprise servers. It is also a time of considerable merger and acquisition activity in almost every industry; merged companies come with their own data centers, and the differences need to be managed from the headquarters data center.
Therefore, truly open systems (those that are operating-system and hardware agnostic) address an important end-user pain point that storage software must take on. Widespread availability of APIs is part of the solution. The effort of SNIA's Supported Solutions Forum to forge overlapping tech support across different companies is an important step forward as well. But the bottom line in creating openness is a puzzle: how does a storage vendor create an open architecture while defending product differentiation?
A new, dynamic mass storage industry will be driven by the efforts of large, established vendors, start-up companies, skillful integrators and VARs, and professional consultants. I believe that the good health of the mass storage community is a matter not of if, but when. But getting there will be a matter of migrating innovative technologies into broadly implemented solutions.
Storage Budget Directions for 2004 (Source: IDC; table made from pie chart):
  Down: 26%
  Flat: 19%
  Up:   55%
Insight: Twice as many users expect an up year for storage as expect a down year.
Title Annotation: Storage Management
Publication: Computer Technology Review
Date: Nov 1, 2003