Crossing The ESCON/SAN Border?

This article, excerpted from Marc Farley's book "Building Storage Networks," was reproduced with the permission of The McGraw-Hill Companies. Copyright 2000, Osborne/McGraw-Hill.
Mainframe I/O processing is rich in functionality but is rather expensive to own and operate. Many industry experts have been predicting the demise of mainframes for years. But the reliability and capabilities of mainframe I/O processing make it almost impossible for many companies to pull the plug on their mainframes. Therefore, the requirement to transfer data between mainframes and open systems platforms is real. Corporate restructuring through ERP (Enterprise Resource Planning) implementations and data warehousing have been the major impetus behind the desire to transfer data between mainframes and open systems servers. Both depend heavily on the ability to exchange data between participating systems and integrate information throughout the organization.
File transfers over TCP/IP and SNA (Systems Network Architecture) protocols have been a staple in cross-platform data transfers for years, and there are many products to choose from. To increase the performance of these operations, SAN-to-ESCON (Enterprise Systems Connection) storage gateways can be employed, using ESCON channel speeds of 17MB/sec, as opposed to 1MB/sec to 5MB/sec over data networks.
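To put those rates in perspective, a back-of-the-envelope comparison shows how much a 17MB/sec channel shortens a bulk transfer. The payload size below is made up for illustration; only the three transfer rates come from the article.

```python
# Illustrative arithmetic only: compare transfer times at ESCON channel
# speed vs. the typical data-network rates cited in the text.
# The 100GB payload is a hypothetical nightly warehouse extract.

def transfer_hours(gigabytes: float, mb_per_sec: float) -> float:
    """Hours needed to move `gigabytes` of data at `mb_per_sec`."""
    return (gigabytes * 1024) / mb_per_sec / 3600

payload_gb = 100

escon = transfer_hours(payload_gb, 17)    # ESCON channel: 17 MB/sec
lan_slow = transfer_hours(payload_gb, 1)  # slow data network: 1 MB/sec
lan_fast = transfer_hours(payload_gb, 5)  # faster data network: 5 MB/sec

print(f"ESCON gateway:    {escon:.1f} h")   # about 1.7 h
print(f"1 MB/sec network: {lan_slow:.1f} h")  # about 28.4 h
print(f"5 MB/sec network: {lan_fast:.1f} h")  # about 5.7 h
```

Even against the faster end of the data-network range, the channel-speed gateway cuts the window by more than a factor of three.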
Examining ESCON/SAN Data Transfers
In any data transfer there has to be a sender and a receiver, and the vehicle used to make the transfer can be independent of both. Therefore, data transfers between mainframe and open systems processors can potentially take one of three forms:
* Open systems initiated
* Mainframe initiated
* Data mover initiated
We'll now examine each of these methods.
Network and Storage Transfer Functions. In order for data to be transferred successfully across the ESCON/SAN border, there must be a working network connection and there must be some way to store it safely and then locate it after the transfer is finished.
On the network connectivity side, working with S/390 I/O requires playing by the rules of the mainframe world. This means that any entity that initiates a data transfer has to be able to do it in a way that works with the access method imposed by the S/390 system. Without intimate knowledge of the mainframe's access method and I/O process, there will be no reliable data transfer.
This matter of safely storing transferred data is a very real problem, not just for mainframes and open systems but between open systems servers too. There needs to be some way of ensuring that transferred data doesn't overwrite data that is already safely stored. That means processing the transferred data through the file or database system that controls the placement of data on storage. Unfortunately, storage-to-storage transfers do not use file or database systems and cannot yet ensure data will not be overwritten during the process.
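The hazard can be sketched with a toy model. The block layout and allocation bitmap below are invented for illustration and do not correspond to any real volume format; the point is only that a device-level copy bypasses the allocation metadata, while a filesystem-mediated write consults it first.

```python
# Toy model: a disk as a list of blocks plus an allocation bitmap
# maintained by a file system. A raw storage-to-storage copy ignores
# the bitmap; a filesystem-mediated write targets only free blocks.

disk = ["existing-file-data", None, None, None]
allocated = [True, False, False, False]  # bitmap kept by the file system

def raw_copy(block_no: int, data: str) -> None:
    """Device-level write: no knowledge of what is already stored."""
    disk[block_no] = data  # may silently clobber allocated data

def fs_write(data: str) -> int:
    """Filesystem-mediated write: consults the bitmap for free space."""
    block_no = allocated.index(False)  # first free block
    disk[block_no] = data
    allocated[block_no] = True
    return block_no

raw_copy(0, "transferred-data")     # overwrites the existing file!
safe_block = fs_write("more-data")  # lands safely in free block 1
```

This is exactly why the article argues that transferred data must pass through the file or database system that owns the placement metadata.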
The ESCON One-Way Street. In an ESCON world, there is no concept of a device suddenly waking up and deciding to initiate a data transfer. Mainframe devices speak only when they are expected to. This pretty much shuts the door on initiating an ESCON/SAN data transfer from an open systems server, storage, or data mover.
For that reason, device-to-device data transfers from mainframe systems are going to be "pushed" from the mainframe to the open systems side--at least until access methods are implemented on open systems machines that allow direct access to mainframe storage subsystems and devices. The possibilities of achieving this with FICON (Fiber Connection, an IBM mainframe channel based on the Fibre Channel standard that boosts ESCON's half-duplex 17MB/sec to a full-duplex 100MB/sec) are somewhat realistic and make up the final discussion in this article.
Mainframe Direct Access of SAN-Resident Data. Mainframes encounter different problems when trying to access data located in a SAN on open systems storage subsystems. In order for the mainframe system to access the open systems data directly, it must have access to the file system or database system. For this to happen, a filing system and access method need to be developed for mainframes that allow them to access SAN devices/subsystems and directly read data created by open systems servers. This is certainly within the realm of possibility, but it is not a trivial development effort.
ESCON to SAN: Not Too Promising. The conclusion to this discussion is that ESCON to SAN high-speed, device-to-device data transfers are more in the realm of fiction than fact. A few companies specialize in making these kinds of transfers possible through the use of specialized device emulation and application code. However, in general it is not supported by the respective architectures. That makes it difficult and expensive.
FICON as a Data Transfer Enabler. The limitations outlined above could be rectified through the use of FICON as an Upper Layer Protocol (ULP) in Fibre Channel SANs. With FICON as a ULP, there is no need for a gateway like there is currently between ESCON and SAN environments. In other words, the connectivity to all storage subsystems, whether they are mainframe or open systems based, can be accomplished over a single common network infrastructure.
Fig 1 shows a Fibre Channel SAN that carries both FCP (Fibre Channel Protocol) and FICON traffic.
The breakthrough aspect of FICON is that gateway systems and device emulation are no longer needed for the two sides to access each other's devices. That reduces the complexity of the situation considerably and opens the door to several new approaches to FICON/SAN data transfers, including:
* Data movers capable of processing CCWs, channel programs, SCSI commands, and FICON/FCP protocols. These could be mainframe storage controllers or intelligent open systems data movers. They perform the actual transfer work.
* Data mover control. FICON/SAN data movers would operate under the control of systems software on either the host or open systems side.
* Data conversion engines receive the transferred data and convert it to a format that is usable by the receiving system.
* Data receivers provide the ability for transferred data to be stored safely and properly in the receiving system's storage, including updating the respective file or database system with the correct metadata information.
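As one concrete illustration of what a data conversion engine must handle, mainframe data sets are typically encoded in EBCDIC while open systems expect ASCII. Python's standard codecs include EBCDIC code page 037 as "cp037", so the character-set step can be sketched in a few lines:

```python
# Minimal sketch of a data conversion engine's character-set step:
# translating EBCDIC (code page 037) records to text and back.

def ebcdic_to_ascii(raw: bytes) -> str:
    """Convert an EBCDIC (code page 037) byte stream to a text string."""
    return raw.decode("cp037")

def ascii_to_ebcdic(text: str) -> bytes:
    """Convert text back to EBCDIC for the return trip."""
    return text.encode("cp037")

# A record as it might arrive from the mainframe side:
ebcdic_record = "CUSTOMER 0042".encode("cp037")
print(ebcdic_to_ascii(ebcdic_record))  # CUSTOMER 0042
```

A real conversion engine would also have to deal with record formats, packed-decimal fields, and byte ordering, which this sketch deliberately ignores.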
Fig 2 shows these components in the context of a SAN. There are any number of ways these components could be implemented to create a solution for data transfers between mainframe and open systems storage. While these ideas may seem slightly far-fetched to some readers, they are not necessarily improbable; given the pressure on mainframe systems to conform to open systems flexibility, it is not out of the question for IBM or others to make these breakthroughs.
Implementing An Allocation Layer Function In Mainframe To Open Systems Data Transfers
As mentioned, data that comes from a mainframe into the SAN is similar to any data transfer that occurs on a device-to-device level in the SAN: there has to be some way to place the data onto a device or subsystem where its location can be managed correctly by a file or database system.
A distributed file or database system that separates the space allocation function from the higher level application view of the system could allow this to happen. Data arriving at a subsystem with a space allocation function would be processed and written to free space in storage. Then this space allocation function could complete the process by updating the metadata used by the higher level application view running in a system on the SAN. Thereafter, the transferred data would be reflected in the image presented to users and applications.
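The separation described above can be sketched as a toy class. The names and data structures here are hypothetical, not taken from any product: the allocation function places arriving records in free space first, and only then publishes the metadata that the higher-level view reads.

```python
# Hypothetical sketch of the allocation-layer split: the subsystem-side
# allocator writes arriving data to free space, then updates the
# metadata (a simple catalog here) that the higher-level file-system
# view presents to users and applications.

class AllocationLayer:
    def __init__(self, n_blocks: int):
        self.blocks = [None] * n_blocks
        self.free = set(range(n_blocks))
        self.catalog = {}  # name -> list of block numbers (the metadata)

    def receive(self, name: str, records: list) -> None:
        """Write arriving records to free space, then publish metadata."""
        placed = []
        for record in records:
            block_no = min(self.free)  # pick a free block
            self.free.discard(block_no)
            self.blocks[block_no] = record
            placed.append(block_no)
        # Only after the data is safely stored does the higher-level
        # view learn about it:
        self.catalog[name] = placed

    def read(self, name: str) -> list:
        """Read through the published metadata, as an application would."""
        return [self.blocks[b] for b in self.catalog[name]]

san = AllocationLayer(n_blocks=8)
san.receive("mainframe.extract", ["rec1", "rec2", "rec3"])
print(san.read("mainframe.extract"))  # ['rec1', 'rec2', 'rec3']
```

The ordering is the important design choice: free-space placement happens before the metadata update, so a half-finished transfer can never overwrite data the higher-level view already owns.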
Fig 3 shows a SAN that provides direct access from a mainframe to data residing on an open systems storage subsystem. Data starts on the mainframe and is sent through the Fibre Channel network over the FICON ULP to an intelligent storage subsystem that supports both FCP and FICON commands. Regardless of the protocol used, the data is first processed by an allocation layer function in the intelligent storage subsystem.
Data Sharing With Mainframes

If file transfers can be accomplished with FICON, the next integration step would be data sharing between mainframes and open systems. S/390 systems are already very good at sharing data among themselves through IBM's Parallel Sysplex clustering architecture and its Coupling Facility.
The Coupling Facility acts like a distributed lock manager that manages the access to blocks of data held in systems' local caches. Each system using the Coupling Facility coordinates its cache operations with the Coupling Facility and is connected to it over a high-speed link. As each system accesses data from disk, it registers those data blocks with the Coupling Facility and sets a local cache variable called a local state vector.
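The registration-and-invalidation pattern can be modeled in a few lines. This is a toy sketch of the idea only, not the actual Coupling Facility microcode or its interfaces: each system registers the blocks it caches, and when any system updates a block, the facility flips the state-vector entry of every other system caching it.

```python
# Toy model of the Coupling Facility pattern: systems register the disk
# blocks they cache; on an update, the facility invalidates the "local
# state vector" entry of every other interested system, so stale local
# copies can be detected before use.

class CouplingFacility:
    def __init__(self):
        self.interest = {}  # block number -> set of systems caching it

    def register(self, system, block):
        """A system tells the facility it has cached this block."""
        self.interest.setdefault(block, set()).add(system)
        system.state_vector[block] = True  # True = local copy is valid

    def write(self, writer, block):
        """On an update, invalidate every other system's cached copy."""
        for system in self.interest.get(block, set()):
            if system is not writer:
                system.state_vector[block] = False

class System:
    def __init__(self, name):
        self.name = name
        self.state_vector = {}  # block number -> valid flag

cf = CouplingFacility()
a, b = System("A"), System("B")

cf.register(a, block=7)   # both systems cache block 7
cf.register(b, block=7)
cf.write(a, block=7)      # A updates the block ...
print(b.state_vector[7])  # ... so B's cached copy is now invalid: False
print(a.state_vector[7])  # A's own copy remains valid: True
```

Before reusing a cached block, each system checks its state vector; a False entry forces a reread from shared storage, which is the guaranteed-integrity behavior the Parallel Sysplex is known for.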
It is possible that open systems computers could someday be able to participate in a Parallel Sysplex cluster by implementing Parallel Sysplex technology over Fibre Channel and FICON. Other processes would be needed to make the necessary data format conversions as well as providing the link and device protocols needed. All of this would be a significant amount of work, but could be made easier as FICON becomes a published standard.
Fig 5 shows a hypothetical installable file system running in an open systems machine, accessing a shared storage subsystem that is also being accessed by a mainframe processor.
In the next few years, the development of FICON as a Fibre Channel ULP stands to bring significant changes to the way open systems servers can access mainframe-resident data. The development of specialized storage subsystems and installable file systems (or installable access methods, to be more correct) could allow open systems to play on an even field with mainframe processors for access to corporate data.
Marc Farley is the vice president of marketing at SanCastle Technologies (Huntington, NY).