
Zone security considerations for SANs: overcoming inadvertent overwrites. (Storage Networking).

Securing a SAN from external assaults is not particularly hard, since most SANs are located behind thick firewalls with zealously guarded network access. The real challenge to SAN security is not external predatory assault, but inadvertent overwriting. Overwriting isn't sexy, but it can be catastrophic.

The problem is native to SANs' any-to-any connectivity. A SAN connects hosts and storage devices over a network fabric and creates virtual storage pools from multiple sources. Like a spider web, all sorts of SAN elements can see each other over the fabric: hosts, storage subsystems, libraries, switches, hubs, routers and host bus adapters. To place some order on the incipient chaos, SANs use unique identifiers called logical unit numbers (LUNs). On a SCSI bus, each target under the SCSI-2 specification can support up to eight LUNs. These logical units represent a variety of storage elements, including individual disks, groups of disks, or individual parts of multiple disks defined by a RAID controller or other intelligent storage controller. (The eight-LUN limit is loosening as newer protocols expand LUN addressing capabilities: SCSI-3 specifies an encoded 64-bit identifier, although both the storage device and its host's HBA must support SCSI-3 to use the expanded LUN capability.)

LUNs allow a SAN to break its storage down into manageable pieces. For example, storage management software might virtually partition a 12GB disk into three 4GB segments, each with its own LUN identifier. It then assigns each LUN to one or more servers in the SAN. If a LUN is not mapped to a given server, that server cannot see or access it. But the servers that can are not always polite about it. Here's how it works: when the SAN initializes its SCSI systems, each SCSI bus's HBA driver discovers the targets attached to the bus. In turn, the targets report the LUNs they contain, and the HBA passes the numbers on to the initiating systems, which can then access the LUN-addressed storage units. The problem is that SCSI-2 allows multiple initiators to access the same LUN at the same time--with each host initiating a different operation. The result is multiple initiators overwriting each other's data, which is never a good thing in a SAN. Or anywhere else.
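To make the failure mode concrete, here is a toy Python sketch (entirely hypothetical--no vendor's driver works this way in a few lines) of two initiators that have both discovered the same LUN and write to the same blocks with no coordination: the last writer silently wins.

```python
# Hypothetical sketch: two initiators that both "discovered" LUN 0
# write to overlapping blocks with no locking -- the last writer wins.

lun0 = {}  # block number -> data, standing in for a shared LUN

def write_blocks(initiator, start_block, data_blocks):
    """Write data to the LUN exactly as SCSI-2 allows: no coordination."""
    for offset, data in enumerate(data_blocks):
        lun0[start_block + offset] = (initiator, data)

# Host A lays down its filesystem structures in blocks 0-2...
write_blocks("host_a", 0, ["superblock_a", "inode_table_a", "data_a"])

# ...and Host B, which sees the same LUN, overwrites the same blocks.
write_blocks("host_b", 0, ["superblock_b", "inode_table_b", "data_b"])

for block in sorted(lun0):
    print(block, lun0[block])  # every block now holds host_b's data;
                               # host_a's filesystem is silently gone
```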

Operating systems do offer native levels of LUN protection. UNIX can assign user access rights to specific LUNs, so as long as the UNIX environment consistently observes those rights, its LUNs stay secure. Windows NT handles LUN security by writing a signature on each one and guarding it against duplication. Unfortunately, Windows NT has a whatever-I-see-is-mine mentality and assumes that every LUN it finds must belong to the Windows scheme. This overrides the UNIX security features, and mass chaos ensues. The problem is critical in SAN environments, which might sport hundreds of HBAs, storage subsystems and controllers, and address anywhere from hundreds to millions of nodes.
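The clash is easy to sketch. In this hypothetical Python fragment (the dictionaries stand in for disks; none of this is Windows NT's actual code), a host that signs every unlabeled LUN it sees shares a fabric with LUNs that UNIX was protecting purely through host-side access rights.

```python
# Hypothetical sketch of the "whatever-I-see-is-mine" failure mode.
# The dictionaries stand in for disks; this is not NT's actual code.

luns = {
    1: {"owner": "unix", "nt_signature": None, "block0": "unix superblock"},
    2: {"owner": "unix", "nt_signature": None, "block0": "unix superblock"},
    3: {"owner": None,   "nt_signature": None, "block0": ""},
}

def nt_claim_unsigned_luns(luns):
    """Mimic an NT host stamping a signature on every unsigned LUN.

    UNIX access rights live in the UNIX hosts, not on the disks, so
    nothing here stops a foreign host from writing to block 0.
    """
    for lun in luns.values():
        if lun["nt_signature"] is None:       # looks "unclaimed" to NT
            lun["nt_signature"] = "NT_SIG"    # stamp it...
            lun["block0"] = "NT_SIG"          # ...overwriting what was there

nt_claim_unsigned_luns(luns)
for lun_id, lun in luns.items():
    print(lun_id, lun)   # every UNIX LUN's block 0 is now clobbered
```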

Several technologies are available to handle LUN security. Hu Yoshida, Hitachi Data Systems' CTO, lists the four primary approaches to securing LUN-addressable data:

Host software: When servers request data from a storage pool, host-based middleware intercepts I/O requests and routes them over the network to a specialized file server. This file server reserves target LUN identities before passing on the request to the storage pool host, and releases them when the operation is complete.

Host bus adapter utilities: HBA utilities use small bits of software code called drivers to mask LUNs. LUN masking keeps the unit numbers invisible to unauthorized hosts, and is based on the unique WorldWide Name (WWN) that is stamped on Fibre Channel node chip sets.

Switch zoning: Provides LUN masking down to the port level for all nodes that the switch can see. All hosts connected to the same port will see all the LUNs that port addresses, though the switch cannot mask individual LUNs that belong to the port.

Mapping within a storage controller: Maps Fibre Channel HBA WWNs against the controller's LUNs. This allows multiple host bus adapters (HBAs) to access different LUNs through the same storage port.

Host Software

In this model, middleware intercepts I/O requests from requesting servers (initiators) and redirects them to a controlling file server. This server processes file pointers, secures and locks the LUNs, and sends the I/O request to the actual storage pool host. Host software centrally manages security and locking down to the block level by managing allocations, authorizations, authentications, and locks. The server communicates across the SAN using standard file systems such as NTFS, though some vendors such as Data Direct use proprietary file systems.

For example, Tivoli's SANergy is a SAN redirector that assigns a SAN server to act as a metadata controller (MDC). The MDC receives the I/O request, identifies its targets such as logical disks on a RAID array, mounts the requested LUNs, formats them with their native file system, and handles the redirected file requests. Although this process takes an extra step, the MDC only transmits file pointers, not the entire file. And since SAN speeds are so much faster than LAN speeds, latency is not an issue. Other approaches are less extensive: HP/Transoft, for example, uses a Qlogic HBA and a modified driver to allow users to drag and drop LUNs between Windows NT systems without rebooting. The software presents a storage pool as a single logical unit.
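A minimal sketch of the metadata-controller idea appears below. The class name, file map, and lock protocol are invented for illustration--they are not SANergy's API--but the flow is the same: the host asks the MDC for block pointers and a lock, then moves the bulk data itself over the SAN.

```python
import threading

# Hypothetical sketch of a metadata controller (MDC). The file-to-LUN
# map and lock protocol are invented; real products such as Tivoli
# SANergy have their own formats and wire protocols.

class MetadataController:
    def __init__(self, file_map):
        self.file_map = file_map       # path -> (lun, start_block, n_blocks)
        self.locks = {}                # lun -> owning host
        self.mutex = threading.Lock()

    def open_for_write(self, host, path):
        """Return block pointers for `path`, locking its LUN first.

        Only pointers cross the network; the host then reads and
        writes the blocks directly over the SAN fabric.
        """
        lun, start, count = self.file_map[path]
        with self.mutex:
            holder = self.locks.get(lun)
            if holder not in (None, host):
                raise IOError(f"LUN {lun} locked by {holder}")
            self.locks[lun] = host
        return {"lun": lun, "start_block": start, "blocks": count}

    def close(self, host, path):
        """Release the LUN lock when the operation completes."""
        lun, _, _ = self.file_map[path]
        with self.mutex:
            if self.locks.get(lun) == host:
                del self.locks[lun]

mdc = MetadataController({"/video/clip1": (4, 1024, 2048)})
ptr = mdc.open_for_write("host_a", "/video/clip1")
print(ptr)                    # host_a now does block I/O over the SAN
mdc.close("host_a", "/video/clip1")
```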

HDS's Yoshida points out that this level of security is an absolute requirement for future data sharing. However, the ideal implementation is not yet possible: the faster hardware level should redirect the I/O requests, but only once it can handle file-level as well as block-level operations outside the host.

Host Bus Adapter Utilities

The HBA utility model secures LUNs with a LUN masking driver. LUN masking renders LUNs invisible to unauthorized file servers, permitting only authorized servers to see the LUNs on a storage subsystem. Emulex and JNI, for example, enable their drivers to discover the LUNs in each Fibre Channel node on the SAN, noting both the LUNs and each node's unique WorldWide Name (WWN). Armed with this information, the drivers post LUN and WWN lists to the storage administrator, who then assigns them to authorized hosts. Upon rebooting, a host will only see the LUNs the administrator has assigned to it.
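A simplified Python sketch of the masking step follows. The mask-table format is invented for illustration; Emulex and JNI drivers each have their own configuration syntax. The point is that filtering happens at discovery time, so unauthorized LUNs never reach the operating system at all.

```python
# Hypothetical sketch of driver-level LUN masking. The mask table maps
# each target's WWN to the LUNs this host is allowed to see; the format
# is invented -- real HBA drivers use their own configuration syntax.

MASK_TABLE = {
    # target WWN                 LUNs visible to this host
    "50:06:0e:80:00:c3:a1:02": {0, 1},
    "50:06:0e:80:00:c3:a1:03": {4},
}

def discover_luns(target_wwn, reported_luns):
    """Filter the target's REPORT LUNS response through the mask table.

    LUNs not listed for this host are never handed up to the OS, so
    the host cannot see -- let alone overwrite -- them.
    """
    allowed = MASK_TABLE.get(target_wwn, set())
    return sorted(lun for lun in reported_luns if lun in allowed)

# The target actually exposes LUNs 0-5, but this host only sees 0 and 1.
print(discover_luns("50:06:0e:80:00:c3:a1:02", range(6)))   # [0, 1]
print(discover_luns("50:06:0e:80:00:c3:a1:03", range(6)))   # [4]
```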

The procedure has the advantage of being independent of Fibre Channel infrastructure (hubs, switches and routers), different vendors' storage devices, and host-based middleware. However, LUN masking is at its best in smaller storage area networks where the number of WWNs and LUNs is manageable; it becomes terribly complex and unwieldy in larger installations with thousands of LUNs and nodes. There is also the problem of swapping out a WWN-named node that controls a set of LUNs. When the administrator substitutes a new node, not all servers may recognize the new WWN as controlling that set of LUNs.

Switch Zoning

Switches can also mask LUNs in a procedure called switch zoning. Switch zoning occurs at the port level for all nodes that the switch sees, and only allows LUN access from hosts that can reach the port through that switch. Switch zoning is not identical to LUN masking: it can only mask a port to hide the LUNs behind it, and cannot mask individual LUNs from hosts connected to the same port. A few storage devices have LUN masking utilities of their own and can work with the switch to mask their individual LUNs.

Many switches base zoning on the flexible WWN. In this case, all attached nodes log in to the switch and register their WWNs; the switch assigns an address to each one and builds a lookup directory. This method speeds up server access because HBAs can locate their targets through a fast lookup instead of running a discovery process across many thousands or millions of nodes. WWN-based zoning is dynamic--nodes can be moved between different port addresses without changing their WWN-based zone--but because WWNs can be spoofed, it is not as secure as some other methods. Brocade is adding security precautions by enforcing zones at the switch hardware level, encrypting at the switch, and using password encryption. (Security policies can only be changed at a given switch using the encrypted password.)
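The following Python sketch illustrates soft zoning with invented data structures (no switch vendor's firmware is being quoted): nodes log in and register their WWNs, and the name server answers lookups only for WWNs that share a zone with the requester.

```python
# Hypothetical sketch of WWN-based ("soft") zoning. Nodes log in,
# register their WWNs, and the switch answers lookups only for WWNs
# in the requester's zone. All structures are invented for illustration.

ZONES = {
    "payroll_zone": {"10:00:00:00:c9:aa:00:01",   # host HBA
                     "50:06:0e:80:00:c3:a1:02"},  # storage port
    "video_zone":   {"10:00:00:00:c9:bb:00:07",
                     "50:06:0e:80:00:c3:a1:03"},
}

directory = {}          # WWN -> fabric address, built at fabric login
next_address = 0x010000

def fabric_login(wwn):
    """Register a node's WWN and hand back a fabric address."""
    global next_address
    directory[wwn] = next_address
    next_address += 1
    return directory[wwn]

def name_server_query(requester_wwn):
    """Return addresses only for WWNs sharing a zone with the requester.

    This fast lookup replaces a full discovery sweep -- but it trusts
    the requester's WWN, which is why soft zoning can be spoofed.
    """
    visible = set()
    for members in ZONES.values():
        if requester_wwn in members:
            visible |= members - {requester_wwn}
    return {wwn: directory[wwn] for wwn in visible if wwn in directory}

for wwn in ["10:00:00:00:c9:aa:00:01", "50:06:0e:80:00:c3:a1:02",
            "10:00:00:00:c9:bb:00:07", "50:06:0e:80:00:c3:a1:03"]:
    fabric_login(wwn)

print(name_server_query("10:00:00:00:c9:aa:00:01"))  # payroll storage only
```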

Some switches manage zoning through hardware ports, which is not as flexible as WWN zoning but is more secure. Administrators can zone switches using out-of-band management interfaces from the LAN or across the Web. Since each switch only affects its own attached devices, the administrator must administer each switch's zoning utilities separately.
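Hard zoning is even simpler to sketch. In this hypothetical fragment, the switch checks every frame's ingress and egress ports against the zone table, so a spoofed WWN buys an intruder nothing--though moving a node to a new port now means updating the zone.

```python
# Hypothetical sketch of port-based ("hard") zoning: the switch checks
# every frame's ingress/egress port pair against the zone table, so a
# spoofed WWN is useless -- only the physical port matters.

PORT_ZONES = [
    {1, 5},     # port 1 (payroll host) may talk to port 5 (its storage)
    {2, 6},     # port 2 (video host) may talk to port 6
]

def forward_frame(ingress_port, egress_port):
    """Forward only if both ports share a zone; drop otherwise.

    The trade-off: recabling a node to a new port silently removes it
    from its zone until the administrator updates the table.
    """
    for zone in PORT_ZONES:
        if ingress_port in zone and egress_port in zone:
            return "forwarded"
    return "dropped"

print(forward_frame(1, 5))   # forwarded
print(forward_frame(1, 6))   # dropped: not zoned together
```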

Mapping Within a Storage Controller

LUN masking at the storage controller level allows a storage subsystem to mask its own LUNs. For example, HDS's Freedom Storage 7700E contains a LUN masking utility in its storage controller. The storage administrator uses a remote console to map Fibre Channel HBA WWNs to the 7700E's LUNs, which masks the LUNs from unauthorized HBAs. The advantage of controller-based LUN masking is that it works in point-to-point mode or through infrastructure elements such as hubs and switches, and because it is based on the WWN, it is independent of physical addresses. The disadvantages are the need to remap after an HBA failure and the fact that the masking only applies to the controller's own storage.
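A rough sketch of controller-resident mapping follows; the map layout is invented and is not the 7700E's actual interface. The controller keys on the initiator's WWN, which is why the scheme survives hubs and switches but must be remapped when a failed HBA--and with it the WWN--is replaced.

```python
# Hypothetical sketch of controller-resident LUN mapping; the layout is
# invented for illustration. The controller answers each initiator's
# REPORT LUNS with only the LUNs mapped to that initiator's WWN.

CONTROLLER_MAP = {
    # initiator HBA WWN            LUNs this HBA may access
    "10:00:00:00:c9:aa:00:01": {0, 1},
    "10:00:00:00:c9:bb:00:07": {2},
}

ALL_LUNS = {0, 1, 2, 3}

def report_luns(initiator_wwn):
    """Answer REPORT LUNS with only the LUNs mapped to this WWN.

    Because the check keys on the WWN rather than a physical address,
    it works point-to-point or through hubs and switches alike -- but a
    replacement HBA carries a new WWN, forcing a remap.
    """
    return sorted(CONTROLLER_MAP.get(initiator_wwn, set()) & ALL_LUNS)

print(report_luns("10:00:00:00:c9:aa:00:01"))   # [0, 1]
print(report_luns("10:00:00:00:c9:99:00:99"))   # []  (unknown HBA)
```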

LUN security is essentially LUN masking and zoning. No one method is ideal for all environments, and storage administrators should carefully consider their environments and storage management needs before settling on security methods. And LUN masking is hardly foolproof: it is all too easy to make mistakes with hardware-based zoning, and software-based zoning is vulnerable to spoofing and sniffing. Yoshida suggests a strict authorization process for access to LUN masking features, combined with multiple masking procedures that cross-check between the HBAs and the storage subsystems. Encryption is also important, especially for SAN data traveling over remote connections.
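In the spirit of Yoshida's cross-checking suggestion, the hypothetical sketch below compares an HBA-level mask against a controller-level map: a LUN is reachable only where both layers agree, and any one-sided grant is flagged for audit.

```python
# Hypothetical sketch of cross-checking HBA masks against controller
# maps. A LUN should be reachable only when BOTH layers agree; any
# one-sided grant is flagged for the administrator to audit.

hba_mask       = {"host_a": {0, 1}, "host_b": {2, 3}}   # per-host HBA view
controller_map = {"host_a": {0, 1}, "host_b": {2}}      # controller's view

def audit(hba_mask, controller_map):
    effective, warnings = {}, []
    for host in set(hba_mask) | set(controller_map):
        h = hba_mask.get(host, set())
        c = controller_map.get(host, set())
        effective[host] = h & c              # access requires both grants
        for lun in h ^ c:                    # symmetric difference: one-sided
            warnings.append(f"{host}: LUN {lun} granted by one layer only")
    return effective, warnings

access, warnings = audit(hba_mask, controller_map)
print(access)     # {'host_a': {0, 1}, 'host_b': {2}}
print(warnings)   # ['host_b: LUN 3 granted by one layer only']
```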