Shared SAN Cluster Configurations Get A Firm Foundation.
A SAN is a storage networking architecture that allows for more efficient use of storage capacity by offloading storage from the LAN to a dedicated storage network. However, one of the features SAN administrators most want has only recently been widely available: the ability to virtualize data across a heterogeneous SAN. What appeared to users as mere network data appeared to SAN administrators as a nightmare of complexity, disk additions, and bad games of "guess the server" as they tossed data around from storage device to storage device.
A new product family that adds virtualization and other benefits to clustered server environments is VERITAS SANPoint Foundation Suite HA. The suite extends VERITAS File System and Volume Manager to support concurrent data sharing among clustered servers in a SAN. It also incorporates VERITAS Cluster Server's file system capabilities and internode communications across the servers. This impacts a number of applications, including highly available configurations such as databases, Web farms, workflow applications with large files, and off-host backups running on separate servers. VERITAS claims four primary features:
Concurrent access to shared files. Multiple servers mount and access the same file system on shared media, with no required modifications to existing applications.
File system integrity in a shared environment. Controls access to the file system structure using a global lock manager, and manages cache coherence and locking. Systems accessing shared file systems always see the same information.
Fast failover for high availability environments. Provides robust application-level failover from VERITAS Cluster Server.
Clusterwide management of SAN data. Allows clusterwide logical device naming and volume and file system operations.
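The global lock manager behavior described above can be sketched conceptually: many nodes may hold a shared (read) lock on a file system object at once, but an exclusive (write) lock excludes all others. The sketch below is a hypothetical illustration of the idea, not VERITAS code; the `GlobalLockManager` name and its methods are invented for this example.

```python
# Conceptual sketch of a clusterwide lock manager. Names are illustrative,
# not VERITAS APIs. Shared locks may be held by many nodes; an exclusive
# lock conflicts with everything else.

class GlobalLockManager:
    def __init__(self):
        # resource -> {"mode": "shared" | "exclusive", "holders": set of nodes}
        self._locks = {}

    def acquire(self, node, resource, mode):
        """Grant the lock if compatible; return True on success."""
        state = self._locks.get(resource)
        if state is None:
            self._locks[resource] = {"mode": mode, "holders": {node}}
            return True
        if mode == "shared" and state["mode"] == "shared":
            state["holders"].add(node)
            return True
        return False  # conflicting request: caller must wait and retry

    def release(self, node, resource):
        state = self._locks.get(resource)
        if state and node in state["holders"]:
            state["holders"].discard(node)
            if not state["holders"]:
                del self._locks[resource]
```

Because every node's access passes through the same lock state, two nodes can never modify the same file system structure at once, which is what keeps the shared on-disk image coherent.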
The suite works at the server level in clustering groups. A cluster consists of servers connected to the same storage devices, accessible by the same clients, and coordinated by a cluster server application. Clusters offer a number of distinct advantages to IT administrators, including failover in the case of server failure, the ability to repartition workload across multiple servers, alternate network links in case of link failure, disaster recovery, and reduced management of multiple individual systems.
The Foundation Suite incorporates VERITAS Volume Manager, File System, and interconnection technologies from Cluster Server. It also adds unique technologies from Cluster Volume Manager and Cluster File System.
Cluster Volume Manager (CVM) makes use of the foundational technologies of Volume Manager (VxVM). VxVM aggregates disks or hardware-based RAID arrays into flexible logical volumes. Operating on a three-tier storage object architecture, VxVM aggregates block ranges on physical disks into plexes, which represent complete and consistent copies of the volume content. Each plex offers failure tolerance and data mapping.
Just as CVM is based on VxVM, CFS is based on VERITAS File System (VxFS). The Cluster File System (CFS) enables several clustered servers to mount and use a file system as if all applications using the file system were running on the same server. It uses a master-client model to manage file system metadata on shared volumes, with the first server to mount each CFS file system becoming its master while all other cluster nodes become clients. The CFS master node makes all metadata updates and maintains the metadata update intent log, while applications access the user data in files directly from the server on which they are running.
CFS offers features found in VxFS, including managing space by concisely mapping files up to a terabyte in size, fast recovery from most system crashes by tracking recent file system metadata updates, the ability to extend and defragment active file systems online, and Quick I/O features that bypass database kernel locking by treating files as raw partitions. This last feature also enables 32-bit applications to avail themselves of a system cache larger than 4GB.
CFS extends these capabilities to clusters and adds the following:
* Freezes the file system state throughout the cluster. This allows administrators to perform certain operations on applications that require a consistent on-disk image of a file system.
* Allows both clusterwide and local file system mounting, allowing administrators to choose to share data among cluster nodes.
* Enables a node-by-node upgrade of CFS itself, allowing the cluster as a whole to operate throughout the upgrade process.
Some examples of Foundation Suite applications include continuous availability environments, parallel applications, workflow files, and backup operations.
Continuous availability. Failover abilities are a basic feature of cluster management. In this model, servers in a cluster configuration serve separate file systems. If one server fails, another identifies the failure, mounts the failed server's file system, and restarts the client application. But as with other types of non-virtualized storage models, IT administrators must still assign and reassign storage space on individual servers, hopefully without taking down a critical application or system. Storage virtualization simplifies administration, improves service and critical data availability, and improves performance through I/O load balancing if the volumes holding the file systems are striped across more disks.
Parallel application: Web server. Server farms are a popular configuration for high-transaction environments such as Internet server clusters. Most of them feature a load balancing facility in addition to server failover and the ability to add servers and data copies at will. This model is demanding of administrator time and resources, since multiple copies of data on servers make clusterwide updates quite challenging. This results in a high cost of incremental storage and administration, and undermines the integrity of data by maintaining multiple copies. Shared data clusters enable all Web servers to work from one data image, no matter which server handles a request. Adding capacity no longer requires restructuring the operation.
Workflow application: Video production. In workflow applications, a single piece of work flows (or lurches) from server to server. Among these types of applications, video post-production, and its huge files, is possibly the most demanding of space and resources. Clusters and shared storage eliminate the need to transfer these files via tape or over the network, but the problem still remains of locking out one user's access to the data while a previous workstation is still manipulating it. A cluster can ideally provide universal interconnection of computers and data, so video objects are passed between stations only when not in use by another workstation.
Backup application: Block-level incremental database backups. Vendors have flooded the backup market with devices and applications, seeking to relieve IT administrators of horrendous backup realities. Increasing application demands, larger databases, and complex systems make it difficult to back up and restore in a reasonable period of time without denying service to network users. VERITAS has identified two fundamental backup problems: 1) obtaining consistent point-in-time backups of large file systems or databases without blocking application access, and 2) backing up large file systems or databases without disrupting operational client traffic or server I/O. When used with VERITAS NetBackup, the Foundation Suite allows off-host backups from different servers in the same cluster, accessing the same shared data. This allows administrators to create point-in-time snapshots of critical data such as Oracle databases, then use a separate server running NetBackup to back up from the snapshot.
Shared data cluster configurations offer a number of advantages over single-host or shared-nothing clusters. In addition to providing this technology, the Foundation Suite leverages the File System and Volume Manager products and offers cluster virtualization, fast failover, virtual single file systems, and global locks to manage access to data and provide cache coherency.