You Want To Attach Storage Where?
They reply, "Oh ... uhhh ... let me get back to you on that."
Adding capacity seems as if it should be a no-brainer because, nowadays, multigigabyte storage devices are practically commodity items. So many connectivity options are out there that almost anything looks like it can be plugged in almost anywhere. Straight into the server. Or maybe the network. Or maybe it should be the centerpiece of its own network. Maybe it's not a no-brainer after all. Maybe there are too many choices.
The client's best choice probably depends on the answer to a different question--a classic question in the computer industry that's never been irrelevant: Who owns the data? It's arisen in many different contexts over the years and is still loaded with philosophical baggage. (It sends an especially emotional charge through people who are fixated on privacy rights, firewall protection, and the control of intellectual property.) Yet in this context, sticking to the literal version of the question as it bears on the storage attachment dilemma, it means: "Who's creating or maintaining the data and who therefore should have the easiest access to it?"
If the data is perceived as fundamentally "individual," meaning that one person or one workgroup is responsible for it, then the server those people use most is the logical place to add capacity. If the data is "departmental," meaning that a whole department (broadly construed) is responsible for it, then their LAN or WAN should probably get the new capacity. And "enterprise" data that has to be shared organization-wide should logically be stored in such a way that everybody involved can get hold of it, which points toward a SAN. Cutting across this otherwise easy division, though, is "strategic" data--access to which has to be severely limited to a widely scattered but very small number of individuals.
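The who-owns-the-data rule of thumb above can be restated as a tiny lookup. This is just a sketch of the column's categories, not a real sizing tool; the function name and wording of the recommendations are my own:

```python
# Rough decision rule from the "who owns the data?" question.
# "Strategic" data cuts across the other three categories: it may live
# anywhere, since access control, not placement, is the hard part.
ATTACHMENT_FOR = {
    "individual": "server-attached (the owner's server)",
    "departmental": "NAS on the departmental LAN or WAN",
    "enterprise": "SAN, reachable organization-wide",
}

def recommend_attachment(ownership):
    """Map a data-ownership category to a storage-attachment choice."""
    try:
        return ATTACHMENT_FOR[ownership]
    except KeyError:
        raise ValueError(f"unknown ownership category: {ownership!r}")

print(recommend_attachment("enterprise"))  # → SAN, reachable organization-wide
```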
Whatever the approach, attaching new SCSI or Fibre Channel devices to the back end of a server is a fairly simple exercise for experienced technicians. Once the box is in place, the server "owns" the storage in the sense that it manages the data on behalf of the people connected to it.
Network-Attached Storage (NAS) originally meant devices sitting on client networks, i.e., in departmental LANs and WANs. At first, building NAS was just a matter of connecting various drives (disk or tape or both) together, but there's been a shift, over the past year or so, toward more specialized handling of data in networked file systems and toward common (make that "browser-like") user interfaces. Most enterprises that set up servers as file servers map each new drive to the next available letter (F:, G:, H:, etc.). As people realize that they may not need a dedicated NT or Unix server just for file serving, they are more and more attracted to specialized (the buzzword is "thin") servers that--no surprise--also cost less.
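The next-available-letter scheme is simple enough to sketch in a few lines. The function name is illustrative, and the assumption that A: through E: are reserved for local drives (so shares start at F:, as in the example above) is mine:

```python
import string

# Letters F: onward are the pool for network shares, matching the
# F:, G:, H: progression above; A: through E: are assumed local/reserved.
SHARE_LETTERS = string.ascii_uppercase[string.ascii_uppercase.index("F"):]

def next_drive_letter(in_use):
    """Return the next free share letter (F: through Z:), or None if
    all are taken. `in_use` is a collection of bare letters, e.g. {"F"}."""
    for letter in SHARE_LETTERS:
        if letter not in in_use:
            return letter + ":"
    return None

print(next_drive_letter({"F"}))  # → G:
```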
Where capacity requirements or the number of prospective users is larger, though, the advantages of creating a SAN can't be ignored. A SAN, at least if it's built by a single vendor or a historically tight partnership, offers a common interface across all the attached devices, whether they're disk- or tape-based, along with a built-in backup architecture. In some cases, even existing NAS thin servers can be connected to the SAN by means of a switch, like any other server in a network.
I've been skeptical of SANs for some time, feeling that they've been more theory than fact, but vendors are deploying SANs that work. I confess I also thought that some SANs were make-work projects, but the reports I hear now suggest that today's SANs do what they're supposed to do and that the vendors involved are--at least within partnership arrangements--sharing their growing expertise with one another.
Still uncertain are the prospects for true multivendor cooperation. As one vendor who's worked with partners on SANs told me, "Cooperation is usually good, provided we're all patient, but true coexistence is more complex."
As, doubtless, it must be. A proprietary SAN with a single point of control over (say) all the protocols is simpler to design and to implement, and vendors tied together by corporate ownership or by longstanding arrangements can probably overcome most obstacles almost as easily. All their devices, hubs, switches, and so on will have their specs and their connectivity requirements either defined up front or easily discoverable by the team members. But in ad hoc situations where a SAN is cobbled together by vendors who are little more than distant third parties to one another, it takes an awful lot more work all around, and the client will just have to wait.
What's emerging as a SAN "industry" can't survive on proprietary networks. Multivendor architectures--for all the hassles involved--represent the only sustainable future.
Clients look at the Internet, a multivendor environment if ever there was one, and say, "We want ours to work that well. A network is a network. Why should things be different when it comes to storage?"
Well, maybe clients shouldn't ask that if they don't want to hear the long answer. Maybe they should stick to the basic question: "Where, exactly, are we going to plug in the new box?"
Title Annotation: Industry Trend or Event
Publication: Computer Technology Review
Date: Sep 1, 2000