
Roundtable: "switched on storage arrays": Part 3 of 3.

This is Part 3 and the conclusion of the storage roundtable at which editor-in-chief Mark Ferelli joined a panel of experts to discuss some of the underlying factors driving the storage industry's move toward switched storage back-ends. Parts 1 and 2 of this discussion were published in the June and July issues of Computer Technology Review. Again, we offer sincere thanks to Vixel Corporation for coordinating this roundtable event.

Mark Ferelli: Arun, let's focus on cost-per-megabyte. What do you think?

Arun Taneja: You mean lowering the cost-per-megabyte with the switched architecture? I think many of the points have actually been well articulated here already. You're getting better performance per controller, you're getting more drives behind a single controller, and you're getting better manageability.

All of these factors give you an opportunity to scale the product a lot further than you could before. And every one of the elements I mentioned essentially takes you toward a lower cost-per-megabyte. Consider this: if I could previously put only 40 drives behind a controller before maxing out its performance, and now I can put 100 drives behind that same controller and enjoy the same or better performance, what I have just done is reduce the cost-per-megabyte. All of the essential elements of switching point you toward a lower cost-per-megabyte.
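
[Editor's note: Arun's amortization argument is easy to check with a quick sketch. The controller price, drive price, and drive capacity in the Python snippet below are hypothetical round numbers chosen only to illustrate the effect, not figures from the discussion.

    # Hypothetical illustration: a fixed controller cost amortized over
    # more drives lowers cost-per-megabyte. All numbers are assumptions.
    CONTROLLER_COST = 20_000.0   # one controller, USD (assumed)
    DRIVE_COST = 500.0           # per drive, USD (assumed)
    DRIVE_CAPACITY_MB = 73_000   # ~73GB drive, circa 2003 (assumed)

    def cost_per_megabyte(num_drives: int) -> float:
        total_cost = CONTROLLER_COST + num_drives * DRIVE_COST
        return total_cost / (num_drives * DRIVE_CAPACITY_MB)

    for drives in (40, 100):
        print(f"{drives} drives: ${cost_per_megabyte(drives):.4f} per MB")
    # 40 drives -> ~$0.0137/MB; 100 drives -> ~$0.0096/MB: the fixed
    # controller cost is spread over 2.5x the capacity.

Going from 40 to 100 drives under these assumptions cuts cost-per-megabyte by roughly 30 percent, which is the effect Arun describes.]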

Mark Ferelli: RAID systems, NAS systems and, in some cases, SAN systems--especially now that NetApp is working both sides of the street--seem to be very much involved at this point with embedded storage switching. What I would like to know is if you folks see it moving into additional storage markets or storage systems, such as blade servers, tape libraries, different places like that. James, would you like to take a swing at this first?

James Myers: One of the challenges we have as vendors, and probably one of the reasons this switched back-end has been adopted more quickly in the higher-end storage subsystems, is that some of those switched components are a bit costly, quite honestly. They will come down as they get implemented more broadly in the market, but the real challenge here is on the marketing side: to be able to demonstrate to potential users of our solutions--whatever the storage system or target might be--that there are clear value propositions associated with using them.

Until we as vendors are able to articulate that, there will be some resistance to that added cost being included in the solution. And as you move down into lower-end solutions, in markets that are more price-sensitive, it's going to take a little different approach, I think, by the vendors. But I think it will come. As more real-world experience accumulates at the high end, the technology will naturally migrate down into the lower end.

Mark Ferelli: Mark, what do you think?

Mark Nossokoff: I think he's right. It is just coming out now in larger systems and, assuming the benefits are proven, it will emerge elsewhere. In the meantime, you may see some attempts to move it into server blade systems--particularly really dense systems with lots of blades accessing shared storage behind them. There, a switched storage back-end makes perfect sense; much beyond that, I think it is a "time will tell" kind of thing.

Mark Ferelli: Good enough. Jim, how about you in this particular space?

Jim Beckman: There are a couple of different avenues. Obviously, from a server standpoint, they could definitely benefit from a switched architecture. Tape libraries? I am still questioning the long-term viability of tape; we'll see, with ATA arrays and lower-cost storage, but there does seem to be a shift going on toward moving tape requirements to disk. If you look at some of the storage area network environments, where you have director-class products and then switch-class products, it's kind of a fight between the centralized and the distributed type of environment.

I think, over time, there's going to be this battle over whether you embed a switch into a larger device and put everything inside one enclosure, or multiple enclosures attached to that one switched controller; or whether you move to a more modular approach, like server blades, with a switched architecture where you can attach those blades in more of a node-type architecture.

So, I think time will tell. But the smaller storage arrays are starting to get a lot more play, and in the storage area network environment there's more viability for small modular arrays. Some of that will bleed over into the server environment as well.

Mark Ferelli: Jim, you work the enterprise high end, but do you see additional markets or systems where embedded switching needs to be looked at?

Jim Beckman: You mean as far as more of the modular type of market, or the distributed type of market?

Mark Ferelli: All of the above, actually.

Jim Beckman: It depends on the customer and on what kind of architecture they have from a storage standpoint. A lot of enterprises have a tiered architecture, with a centralized data center and then either regional offices or metro-type offices.

As the bandwidth available from switching continues to grow, you'll be able to move that switching out into the offices--maybe in the form of iSCSI, or something along those lines--and provide switching at the office layer with a transport back to a main data center, so there's better sharing of data and better availability between the offices and the main data centers.

Mark Ferelli: Chris, you swing between SAN, NAS, all of the above. How about the other markets or systems that embedded switching can go into? Where do you see it going?

Chris Bennett: Well, I think it's pretty clear that the value of switched architectures at the back-end scales with system size--the bigger the system gets, the bigger the value. I think that's why they are being deployed on the higher-end systems first. But the cost of a technology drops as it becomes better understood, so I would expect them to get down to the lower echelons of the storage industry.

I think there's some size below which the incremental cost--even if it is relatively small--won't make sense. That would be for small systems of three, four, or five drives, where there probably isn't enough benefit on the manageability side to make it worthwhile. But with regard to the blade server architecture, I actually think there's a natural linkage. If you think about the way blades are architected and deployed, that's a customer type that's very familiar with switching. They already deploy their whole server infrastructure on a massive switching infrastructure, and I think there'll be a natural adoption because of the large number of spindles being attached to those server farms.

Mark Ferelli: Brian, how about you? From your point of view, there has to be quite a number of applications that make sense.

Brian Reed: Yes. I'll comment on at least one of them. Tape vendors, for example, have the same types of limitations that have been described here for RAID and NAS vendors.

They all have shared, loop-based architectures with some number of tape drives. And what's happening this year is that you're beginning to see native Fibre Channel tape drives being deployed by tape vendors. They face the same types of limitations--they can't do drive isolation, they run into performance limitations--and they need some type of Fibre Channel switching within their tape libraries. So, I think you'll begin to see all the tape vendors also implement switching within their systems.

Mark Ferelli: Arun, give me the 50,000-foot view. Which of these makes sense to you?

Arun Taneja: Actually, in regard to the tape scenario that Brian just mentioned, there's this whole question of LIPs, the loop initialization primitives. Those are very disruptive to tape drives, and you've got much better control over them in a switched environment. So, I do see applicability there in larger libraries. And I think server blades, for the reasons someone mentioned earlier, are an embedded type of architecture by definition, so this would play very well in the server blade environment.
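
[Editor's note: for readers unfamiliar with the issue, a LIP on a Fibre Channel arbitrated loop forces every device sharing the loop to re-initialize, which can abort an in-progress tape job; a switch confines the event to one port. The toy Python model below is purely illustrative--it sketches the blast radius, not the actual protocol.

    # Toy model of LIP "blast radius": shared loop vs. switched back-end.
    # Not a protocol implementation; device names are invented.
    def lip_on_loop(devices, source):
        # Every device on a shared loop sees the re-initialization,
        # including tape drives that were streaming a backup.
        return {d: "reinitialized" for d in devices}

    def lip_on_switch(devices, source):
        # Only the originating port is affected; the others keep running.
        return {d: "reinitialized" if d == source else "unaffected"
                for d in devices}

    devices = ["tape0", "tape1", "disk0"]
    print("loop:    ", lip_on_loop(devices, "disk0"))
    print("switched:", lip_on_switch(devices, "disk0"))
]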

Mark Ferelli: Let's look at our crystal balls for a moment, shall we? What's the future of switch-based architectures--not only the applications but the technologies, as well? Which ones are going to be switch-based? And for which ones will switching act as an enabler? Arun, give us a start on this, would you?

Arun Taneja: I think history has already proven that switched architectures have brought a lot of benefits over the last ten years, not only to the storage side but to other aspects of computing as well. So I think the overall future of this has got to be very bright. It's not just a technology that's applicable to Fibre Channel or Fibre Channel arbitrated loop devices like disks; I think there's going to be applicability for serial ATA and SAS devices, as well. Those are going to need improvements in RAS--reliability, availability, serviceability--particularly if you want to take SATA drives into the enterprise. Fundamentally, by their inherent architectures, they may or may not have the levels of RAS that you're going to need at the enterprise level.

So I think there's certainly applicability there that can be brought in by devices like this. The upshot is, all the representative vendors are here. I mean, we're all moving toward a lights-out type of data center. We are moving toward autonomic computing--call it, from a storage perspective, utility storage. All of those terms are being bandied about, but it really is a true direction. Where we want to end up is the lights-out data center.

In those environments, there is no tolerance for downtime, so you've got to be able to predict failure and, of course, deal with a failure when it does occur. Ideally, you want to predict it in order to keep the efficiency of that lights-out data center at a high level. Within all of these contexts, I think switched architectures will play a pretty significant role.

Mark Ferelli: How about from the provisioning point of view? Jim, would you think about that for us?

Jim Beckman: From a provisioning point of view, obviously, the switched architecture provides better scalability. Bandwidth tends to be the constraint across the board, from the data center all the way to the last mile. As the cost of switched architectures continues to decrease and we're able to move them out and increase bandwidth--basically, all the way to the home--the amount of data continues to grow. There's a lot of replicated data out there that customers have, and there's really not a good way to get it from one place to another, or to access that information directly, so they replicate the data and then move it from one place to another.

So, a switched architecture increases that overall bandwidth and allows better access to information and it allows you to then provision users to be able to access that data directly from a centralized location.

Mark Ferelli: Mark, we talked a little bit about the maintenance aspect. Does that still click for you?

Mark Nossokoff: Yes. Arun touched on a couple of things I'd like to reiterate, especially around preventive and autonomic computing. The switched back-end architecture and its diagnostic capabilities are just now coming into their own. We've begun to really see the benefits along those lines, and some of the diagnostic and preventive capabilities that can be implemented in storage systems.

The switched architecture will become a common, standard part of the storage infrastructure back there and will enable some of these autonomic computing, or autonomic storage, implementations.

Arun also mentioned some of the other drive technologies being discussed and defined: SATA and SAS. By their very definition they're initially point-to-point, and they will require some sort of switching--or hubs, or what-have-you--for performance and connectivity. So, again, switching is just going to be a standard part of the storage infrastructure moving forward.

Mark Ferelli: Now, there is a creature out there called a "storage brick." Consistent with the storage industry's creativity in naming, tell me, Brian, where do storage bricks play?

Brian Reed: Well, the field replaceable unit has always been the hard-disk drive. What you're beginning to see is vendors who want that lowest replaceable component to be some bigger subset of multiple hard drives together--maybe 8 or 16, or whatever--and that is a brick.

So, in other words, a brick has a bunch of hard-disk drives, and you don't care if a couple of them fail, because the whole brick is still the field replaceable unit and you might replace it every five years. But to enable things like storage bricks, you need a switched architecture within the brick and a switched architecture that connects the bricks together.

In the future, you'll begin to see vendors making some very scalable, high-performance systems based on a large number of storage bricks.
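
[Editor's note: Brian's "don't care if a couple fail" claim can be sanity-checked with a little probability. The brick size, tolerated failure count, and per-drive annual failure rate in the Python sketch below are illustrative assumptions, not vendor data.

    # How likely is a brick to stay serviceable over its life if it can
    # tolerate a few dead drives? Assumes independent drive failures.
    from math import comb

    DRIVES_PER_BRICK = 16        # assumed brick size
    ANNUAL_FAILURE_RATE = 0.03   # assumed per-drive AFR
    YEARS = 5                    # Brian's replacement interval
    TOLERATED = 2                # brick survives up to 2 dead drives

    p_alive = (1 - ANNUAL_FAILURE_RATE) ** YEARS  # drive survives 5 yrs
    p_dead = 1 - p_alive

    # Binomial sum: probability that at most TOLERATED drives have died.
    p_brick_ok = sum(comb(DRIVES_PER_BRICK, k)
                     * p_dead ** k
                     * p_alive ** (DRIVES_PER_BRICK - k)
                     for k in range(TOLERATED + 1))
    print(f"P(brick serviceable after {YEARS} years) = {p_brick_ok:.2f}")

Under these assumptions the answer comes out to only about 0.6, which suggests the tolerance threshold and failure-rate assumptions matter a great deal to the economics of the brick approach.]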

Mark Ferelli: Now, when we look at embedded storage switch technology, some have preferred to buy it and others have elected to build the technology in-house. What I'd like to know is what the considerations are for building switches in-house, and what they are for buying the technology. Jim, why don't you talk about the in-house side of the equation?

Jim Beckman: I think that really depends on the solution provider and the capabilities they have. There are a lot of storage companies--more start-up type companies--that do not have large manufacturing capabilities.

So obviously, from a development standpoint, they're going to need to look outside for a switched architecture to embed in their technology. Hitachi is obviously a fantastic manufacturer of technology products and has the research-and-development capability to build the switch in-house. That actually provides a few advantages: it allows us to have components that were developed in common and work together from one end to the other.

Actually, from a code standpoint, it allows the code to be common from one end to the other and provides us with unique diagnostic and maintenance capabilities because all of those components were developed in-house. And it allows us to work out some of the kinks in communicating from one piece of the technology to another.

Mark Ferelli: James, how about buying outside?

James Myers: Really, the key thing is that most of us vendors have a lot of bright ideas--there's no limit to what we can dream up--and a lot of times our R&D funds are fairly limited. So we have to make some tough decisions about what we fund and what we don't. Generally, you'd like to internally fund the things that involve differentiated intellectual property, where you're really going to be unique in the industry.

So, by buying the switch technology from the outside, you get a chance to focus those limited funds on more critical things. But it's also very much about time to market. Many times, internally developed products can't match the time to market we can get from the outside, and that concerns us because the market is indeed very competitive.

And I think, finally, the last point would be our cost basis. By leveraging the economies of a much larger OEM volume across several storage suppliers, you can get the cost down versus developing internally--even though, on the internal side, you can sometimes integrate things together, which does help reduce costs. But, certainly, there is something to be said for the economies of scale of larger volumes.

Mark Ferelli: Thank you, James. I promised that we would be something resembling "on time" in this conference. So what I'm going to do is ask Arun to give us an overview of some of the advantages, in terms of storage switching architecture, and to wrap us up that way.

Arun Taneja: Usually I deal with issues where there are a bunch of pros and where there are a bunch of cons. You've got to sit down and say, "Okay, on the whole, does this make sense?" for the particular product that one is trying to build. There are always design constraints, and ups and downs, and so on and so forth.

In this particular situation, I'm really hard pressed to find very many places where this would not be a good thing. I think the applicability of this--and I'm referring specifically to the back-end switching side here--is actually very strong. How can you argue with improving performance, improving scalability, and lowering cost-per-megabyte? We also talked a lot about serviceability improvements and predictability, in terms of spotting components that may be degrading, and what-have-you.

These are "all very positive things. And in the grand scheme of things, at least based on what the numbers that I've seen, the cost that something like this adds to the back-end is--as someone pointed out earlier on--you have to be able to market that cost. But because the scalability and the performance, and things are so much better if marketed correctly, those costs can be dissipated and they really become not costs, they become pure advantages.

So, really, all in all, what's not to like? This is a very good technology. I certainly endorse it.

Mark Ferelli: Gentlemen, I've been dealing with analysts for a lot of years and none of them ever say, "What's not to like?" So this bodes very, very well for the switching technology we've been discussing today. I want to thank you all very, very much for your participation. Our special thanks to Vixel for sponsoring this roundtable and for keeping things organized and on track.

ROUNDTABLE PARTICIPANTS

Jim Beckman: director of product marketing, Hitachi Data Systems.

Chris Bennett: director of platform and system planning product marketing group, Network Appliance.

James Myers: strategic product marketing manager, Hewlett-Packard Network Storage Solutions.

Mark Nossokoff: senior member of Storage System Strategic Planning, LSI Logic.

Brian Reed: vice president of business and market development, Vixel Corporation.

Bob Rumer: director of strategic marketing, Datacom Products Division, Vitesse Semiconductor.

Arun Taneja: founder, president, senior analyst, The Taneja Group.
