SAN mid-year report card: the enterprise is the activity hotbed.
It's a different picture today. Storage area networks are an integral part of enterprise data centers, loosely defined as Fortune 1000 firms and their equivalents in government and education. They are also well represented in smaller organizations with complex data management and retention needs, such as law firms and smaller financial companies, and are making forays into the midrange market. Ron Engelbrecht, VP and general manager of Wichita Operations at LSI Logic, said that the enterprise storage space remains "the hotbed of SAN activity. From our point of view, over the last two to three years we've seen a pretty dramatic shift from the direct-attached model to the SAN model. These customers are out of proof of concept and into full-scale implementation."
Ken Steinhardt, director of Technology Analysis at EMC, lists the top three drivers of SAN deployments: consolidation, control and continuity.
Consolidation. SAN consolidation allows users to wring more value out of their SANs. Centralized SANs offer more shared resources, better management capabilities, and improved data protection.
Control. Management packages that span multi-vendor SANs give storage administrators better reporting and monitoring capabilities.
Continuity. SANs allow companies to deploy common, simplified business continuity across the business structure and multiple platforms.
Jon Greene, FalconStor's director of Product Management, said of the SAN market, "It's clearly a growing market, and I think a lot of the growth has been driven by the realization of very discrete benefits."
SAN vendors aren't limiting their efforts to the enterprise. They are also marketing heavily to the midrange market, whose SAN purchase decisions often revolve around sharing large tape libraries. However, not all storage vendors believe that this market needs SANs at all.
Bill Smoldt, STORServer's executive VP, calls aggressively marketing SANs to the midrange tier "quite controversial and questionable." He doubts the need for SANs in many midrange businesses. "People who have no need for SANs have put one in. They've blown a whole year's IT budget on it. People have been talked into SANs when they just don't need it. Gigabit Ethernet is much lower cost than the SANs." He added that "the people who need that segment to adopt SANs are the vendors." Smoldt, whose company manufactures a backup appliance, contends that many companies benefit from using storage appliances instead of SANs, particularly in file-based environments over fast IP networks. Many midrange companies have adopted SANs anyway, citing the need to store large amounts of block-based data and to share tape libraries.
SANs remain strong in the enterprise space, with both disk and tape holding their own. On the enterprise side, the tape library space continues to be strong with 90% Fibre Channel attached. Disk is still the best bet for primary production and nearline environments, while tape is the best long-term storage medium. (Thanks to security and compliance issues, fewer people than ever are throwing data away. In data lifecycle terminology, "deletion" has been replaced by "retirement.") Combining tape and disk is popular, with major tape vendors offering storage products mixing tape and disk arrays (virtual and/or physical), and large front-end memory caches.
Major Technology Trends
SANs are complex structures with a huge variety of assets, architectures, and uses. However, there are a few major technology themes that impact SANs across the spectrum, including consolidation, tiered storage management, remote SANs, IP SANs and fabric-based intelligence.
Consolidation: Many SANs started life as individual storage networks for departments and workgroups. Enterprises are now consolidating these smaller SANs into a few much larger storage area networks.
Tiered storage management: Managing data lifecycle includes migrating older data from primary production environments to nearline disk or tape storage, then to long-term tape archives.
Remote SANs: Not all SANs are consolidated; many firms deliberately retain remote SANs and then connect them in data protection schemes.
IP SANs: Limited IP SAN products are available, and companies are interested in adopting them for workgroups and specific applications and as adjuncts to FC SANs.
Fabric-based intelligence: Switch vendors are implementing intelligence in the fabric, though the type of intelligence--and how comprehensive it should be--is open to debate.
IBM's Scott Drummond, program director of Storage Networking, sees consolidation happening on a massive scale. "We're seeing a tremendous amount of storage consolidation happening. And the area we're attacking most aggressively--because we've already got consolidation in the Unix space--is Windows and NT servers. We see a lot of people who are starting to buy, in the economic downturn, into the economic advantage of consolidation." (IBM practices what it preaches: it has folded dozens of its SANs into a few major data centers.) Drummond lists consolidation value drivers as improving server utilization, cutting down on the costs of multiple tape backup systems, managing more storage with fewer people, strengthening security measures, and accessing more high-availability features. LSI's Engelbrecht cites these drivers for consolidation projects:
* Lower costs: To reduce staff and overall costs, IT departments can eliminate numerous smaller servers with long backup and maintenance windows, and can replace them with fewer, larger and faster storage systems.
* High performance: High-end storage isn't cheap, but it's priced considerably lower than it was. For example, companies can cost-effectively replace older and slower disk drives with state-of-the-art 15K-RPM drives. These drives are not only much faster but also sport high-speed controllers and interconnects, and large memory caches.
* Improved data availability: Modern storage systems also include high availability as standard, with features such as multiple global hot-spare drives and robust power redundancy. They present great value for the money in centralized systems.
Consolidation often means centralizing the physical infrastructure, but it can also refer to centralizing the logical layer. In this case, a single management software application runs across distributed SANs and provides unified access to geographically remote end-users, and allows unified monitoring through single browser windows.
Tiered Storage Management
Backup, replication, snapshots, mirroring--no matter how firms protect their block-based data, they must guarantee its integrity and availability over the long term. This takes a long-term data archiving strategy and the management tools to match. Ideally, companies set critical data parameters around crucial business policies and can utilize storage targets accordingly--appropriating bandwidth here, shrinking backup windows there, and providing different levels of data protection and transport to different applications and data. Elements of a tiered-storage strategy already exist; for example, HSM packages work in production environments by archiving inactive data from primary storage to less expensive disk such as ATA arrays. From there, HSM migrates data to third-tier storage, usually offline tape.
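The HSM migration flow described above can be sketched as a simple age-based policy. This is an illustrative sketch only; the tier names and idle-day thresholds are hypothetical, not any vendor's product behavior:

```python
from datetime import datetime, timedelta

# Illustrative tiers, ordered from primary production disk down to tape.
TIERS = ["primary", "nearline_ata", "offline_tape"]

# Hypothetical policy: days of inactivity before data demotes one tier.
DEMOTION_AGE = {"primary": 30, "nearline_ata": 180}

def target_tier(current_tier: str, last_access: datetime, now: datetime) -> str:
    """Return the tier a data set should occupy under the age-based policy."""
    idle_days = (now - last_access).days
    threshold = DEMOTION_AGE.get(current_tier)  # None means final tier
    if threshold is not None and idle_days > threshold:
        return TIERS[TIERS.index(current_tier) + 1]
    return current_tier

now = datetime(2003, 7, 1)
# Data untouched for 45 days migrates from primary disk to nearline ATA.
print(target_tier("primary", now - timedelta(days=45), now))       # nearline_ata
# Nearline data idle for a year moves on to the tape archive.
print(target_tier("nearline_ata", now - timedelta(days=365), now)) # offline_tape
```

In a real lifecycle product the thresholds would come from enterprise-wide policies rather than constants, which is exactly the management layer the next paragraph says customers are still waiting for.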
However, while sophisticated SAN customers understand and appreciate the concept of tiered storage, what they really want doesn't exist--comprehensive, simplified data lifecycle management packages that will help them migrate their data based on enterprise-wide policies, and will guarantee permanent data integrity. Jonathan Otis, ADIC's Senior VP of Technology, said, "It takes a policy based lifecycle data management system to guarantee data integrity. We're not there yet, no one is. ADIC is looking at two years, and we think we're ahead of the curve."
In spite of large-scale consolidations, most enterprises have deliberately retained some remote SANs and connected them to the central data centers. There are many ways to accomplish this--multiplexing, storage over SONET, FCIP (Fibre Channel over IP), WANs. Major reasons for connecting SAN islands include:
* Business continuance: Companies link multiple SANs to do remote replication and mirroring.
* Access sharing: Depending on the cost of the transport, it can be cost effective to share large capacity assets like large arrays or enterprise tape libraries.
* Connecting data orphans: A critical application might store data on its own storage device, which in turn backs up to a remote SAN.
Critical information and high transfer rates demand more expensive transports, while more limited data transfers can operate over wide area networks. Cost is a concern in smaller enterprises and the midrange market, which has seen a lot of concentrated activity around storage over SONET. This transport offers fast and secure storage connections without high multiplexing price tags, which makes it particularly useful in the price-sensitive midrange market. IBM's Drummond said, "There is a general acknowledgement by the industry that we need to make this more accessible to the mid range, because that is the mass market we want to get a higher percentage of."
WANs can connect SANs in low-traffic situations, such as using a data center SAN to make a secondary backup to a remote tape library. However, most types of batch storage (replication, online backup and clustering, remote mirroring and snapshots) require faster connections that are optimized for storage traffic. "These types of connections are better for high-performance, long-distance storage because storage requires very low latency and very high throughput--not IP networking's strong suit," said Matt Williams, Akara's Enterprise Product Manager. "No matter what you do to IP, there are performance limitations. You normally don't see it because you have a lot of smaller applications aggregated together. But storage is a different issue."
Some companies run identical, fully mirrored SANs to protect vital data. This is an expensive proposition, so to increase ROI, companies will use the mirrored SANs for additional processing duties. For example, two SANs will actively host different application suites while mirroring each other. Companies will also use remote SANs for secondary and cluster backups. These operations do not replace local backups, but enhance them.
Fibre Channel overwhelmingly remains the primary SAN network type, but IP-based storage is seeing both products and interest. It's a more realistic interest today than it might have been two years ago, when many companies expected to be able to flip a switch and turn on their simple (and cheap) IP storage network. Storage networking is neither simple nor cheap, but IP is an interesting niche storage networking market. FalconStor's Greene said, "The overwhelming number of SANs today are Fibre Channel. But we do have a significant number of IP SANs, and now that iSCSI is a standard instead of a proposed standard, we expect to see an increase in IP SANs."
IP SANs largely depend on iSCSI connections, which transport storage traffic at high speeds and low latencies. A number of companies are also looking at FCIP (Fibre Channel over IP) to make economical, long-distance Fibre Channel SAN connections. IP SAN adoptions, such as products from StoneFly Networks, are largely concentrated in workgroups, but they aren't limited to that area. Giant laboratories CERN and Almaden, for example, wanted to use single data repositories but couldn't afford to fibre up 30,000 servers. They plan to use iSCSI instead. Mark Nagaitis, director of Product Marketing for the Infrastructure and NAS Division at HP, said, "In terms of trends, clearly one of the interesting trends that we're working on aggressively is the whole area of IP storage, and iSCSI storage in particular."
Switch vendors are actively adding more intelligence at the fabric level. For example, McData is developing switches that can handle open system backup and replication operations in heterogeneous environments. Mark Stratton, McData's director of Solutions and Alliances programs, said of these developments: "Customers will find a new efficiency point where they can do more with less. They won't have to buy this proprietary vendor software but will have a more open environment. We're looking at managing more of what's in the fabric, also providing more information about relationships of what we're managing. We'll continue to be as aggressive as we can in this space."
Other examples of fabric-based intelligence are provisioning technologies such as embedded policy engines, virtualization, and volume management features for the enterprise market. Internetworking is another development, in which the Fibre Channel fabric can host nonproprietary subnets. Storage networking standards, such as SMI-S (formerly Bluefin), will be crucial for open internetworking development.
Some switch vendor startups are developing backbone switches with full-scale storage management capabilities. These heavy-duty switches are meant to support consolidated enterprise SANs with huge port densities and demanding scalability needs. Michael Wells, executive VP of Marketing at Sandial, said, "As the model of SAN begins to get exploited, the need for extending that director class market into a greater capacity platform, a backbone platform, seems to be the next natural progression in the high end solution. You're ending up with an edge, a core and a backbone."
Wells sees network infrastructure changing and evolving from an efficient way to share libraries and arrays, to a utility model with the ability to fundamentally utilize and control data according to application demands. For example, a switch might use a policy engine containing bandwidth requirements for each connection it controls. The switch grows or shrinks the bandwidth according to the policy that governs the connection's data traffic.
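The bandwidth policy engine described above can be sketched in miniature. This is a hypothetical illustration of the concept, not Sandial's (or any vendor's) implementation; the policy fields and numbers are invented for the example:

```python
# Hypothetical fabric policy engine: each connection carries a policy with a
# guaranteed floor and a ceiling, and the engine resizes the allocation to
# track current demand within those bounds.

from dataclasses import dataclass

@dataclass
class Policy:
    min_mbps: int   # bandwidth floor the switch always reserves
    max_mbps: int   # ceiling the connection may grow to under load

def allocate(policy: Policy, demand_mbps: int) -> int:
    """Grow or shrink a connection's bandwidth within its policy bounds."""
    return max(policy.min_mbps, min(policy.max_mbps, demand_mbps))

oltp = Policy(min_mbps=200, max_mbps=800)    # latency-sensitive application
backup = Policy(min_mbps=50, max_mbps=400)   # bulk nightly traffic

print(allocate(oltp, 1000))  # demand above ceiling is clamped to 800
print(allocate(backup, 10))  # idle connection keeps its reserved 50
```

The point of the sketch is the utility model: the switch, not the application, enforces how much of the shared fabric each connection's traffic may consume.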
Mark Davis, Candera's senior VP of Marketing and Business Development, is an evangelist for fabric based utility storage. "The tactical issue is the issue around rationalizing costs around infrastructure that have been overbuilt over the last couple of years. The strategic issue is that you have to move to a utility method of delivering services. From a CEO perspective, that's what they need to see."
Even enterprise storage buyers are sensitive to return on investment, and they prefer to see incremental changes in storage, and make incremental investments to manage the assets they already own. Both mid-ranged and enterprise markets are hard sells for very expensive new arrays and innovative new products that are unproven in the marketplace. Major concerns include:
Simplify management: Companies like their heterogeneous environments, but they don't like having to make the pieces work together. Open environments are tricky to manage, and storage administrators want better and simpler management tools.
Simplify storage provisioning: Provisioning arrays can take days of an expert's time, and aided or automated provisioning can significantly cut down that time. Many storage vendors are moving in this direction but the technology is tricky--it depends on virtualization, which is challenging in multi-vendor environments.
Prove ROI: IT needs to prove good returns on investment for its capital and ongoing expenditures. SANs are extremely useful and can be vital to data protection, but they are expensive to purchase, deploy and maintain.
Manage risk: Storage vendors must prove that their new product (switch, array, management software, whatever it might be) won't take a SAN down.
SANs are not storage nirvana--they're complex, they're costly, and they're in a constant state of flux. But what ADIC's Otis said about storage area networks and tape libraries is largely true about established SAN technologies: "A couple of years ago, there was little interoperability or management, but today is a confidence period. You can pick what you want to buy, you can buy them and make them work. They're not simple--they're not complicated, but they are complex. If you follow the rules and do things in order, they'll work."
Author: Chudnow, Christine Taylor
Publication: Computer Technology Review
Date: Jul 1, 2003