Crossing The ESCON/SAN Border?
Mainframe I/O processing is rich in functionality but is rather expensive to own and operate. Many industry experts have been predicting the demise of mainframes for years, but the reliability and capabilities of mainframe I/O processing make it almost impossible for many companies to pull the plug on their mainframes. Therefore, the requirement to transfer data between mainframes and open systems platforms is real. Corporate restructuring through ERP implementations and data warehousing has been the major impetus behind the desire to transfer data between mainframes and open systems servers. Both depend heavily on the ability to exchange data between participating systems and integrate information throughout the organization.
File transfers over TCP/IP and SNA protocols have been a staple in cross-platform data transfers for years and there are many products to choose from. To increase the performance of these operations, SAN to ESCON storage gateways can be employed using ESCON channel speeds of 17MB/sec, as opposed to 1MB/sec to 5MB/sec over data networks.
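To put those channel speeds in perspective, a rough back-of-the-envelope calculation shows what the difference means for a sizable transfer. The figures below are illustrative only and ignore protocol overhead; the 10GB dataset size is a hypothetical example.

```python
# Illustrative only: rough transfer times for a 10GB dataset at the
# channel speeds quoted above (decimal MB, no protocol overhead).
def transfer_hours(size_mb, rate_mb_per_sec):
    return size_mb / rate_mb_per_sec / 3600

size_mb = 10_000  # a hypothetical 10GB dataset
for rate in (1, 5, 17):
    print(f"{rate:>2} MB/sec: {transfer_hours(size_mb, rate):.2f} hours")
# ->  1 MB/sec: 2.78 hours
# ->  5 MB/sec: 0.56 hours
# -> 17 MB/sec: 0.16 hours
```

At ESCON speeds the same job finishes in roughly a sixth of the time of the fastest network transfer quoted above.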
Examining ESCON/SAN Data Transfers
In any data transfer there has to be a sender and a receiver, and the vehicle used to make the transfer can be independent of both. Therefore, data transfers between mainframe and open systems processors can potentially take one of three forms:
* Open systems initiated
* Mainframe initiated
* Data mover initiated
We'll now examine each of these methods.
Network and Storage Transfer Functions. In order for data to be transferred successfully across the ESCON/SAN border, there must be a working network connection and there must be some way to store it safely and then locate it after the transfer is finished.
On the network connectivity side, working with S/390 I/O requires playing by the rules of the mainframe world. This means that any entity that initiates a data transfer has to be able to do it in a way that works with the access method imposed by the S/390 system. Without intimate knowledge of the mainframe's access method and I/O process, there will be no reliable data transfer.
This matter of safely storing transferred data is a very real problem, not just between mainframes and open systems but between open systems servers too. There needs to be some way of ensuring that transferred data doesn't overwrite data that is already safely stored. That means processing the transferred data through the file or database system that controls the placement of data on storage. Unfortunately, storage-to-storage transfers do not use file or database systems and cannot yet ensure that data will not be overwritten during the process.
The ESCON One-Way Street. In an ESCON world, there is no concept of a device suddenly waking up and deciding to initiate a data transfer. Mainframe devices speak only when they are expected to. This pretty much shuts the door on initiating an ESCON/SAN data transfer from an open systems server, storage or data mover.
For that reason, device-to-device data transfers from mainframe systems are going to be "pushed" from the mainframe to the open systems side, at least until access methods are implemented on open systems machines that allow direct access to mainframe storage subsystems and devices. The possibility of achieving this with FICON is somewhat realistic and makes up the final discussion in this article.
Mainframe Direct Access of SAN-Resident Data. Mainframes encounter different problems when trying to access data located in a SAN on open systems storage subsystems. In order for the mainframe system to access the open systems data directly, it must have access to the file system or database system. For this to happen, a filing system and access method needs to be developed for mainframes that allow them to access SAN devices/subsystems and directly read data created by open systems servers. This is certainly within the realm of possibility, but again it is also not a trivial development effort.
ESCON to SAN: Not Too Promising. The conclusion to this discussion is that ESCON to SAN high-speed, device-to-device data transfers are more in the realm of fiction than fact. A few companies specialize in making these kinds of transfers possible through the use of specialized device emulation and application code. However, in general it is not supported by the respective architectures. That makes it difficult and expensive.
FICON as a Data Transfer Enabler. The limitations outlined above could be rectified through the use of FICON as an Upper Layer Protocol (ULP) in Fibre Channel SANs. With FICON as an ULP, there is no need for a gateway like there is currently between ESCON and SAN environments. In other words, the connectivity to all storage subsystems, whether they are mainframe or open systems based, can be accomplished over a single common network infrastructure.
Fig 1 shows a Fibre Channel SAN that carries both FCP and FICON traffic.
The breakthrough aspect of FICON is that gateway systems and device emulation are no longer needed for the two sides to access each other's devices. That reduces the complexity of the situation considerably and opens the door to several new approaches to FICON/SAN data transfers, including:
* Data movers capable of processing CCWs, channel programs, SCSI commands, and FICON/FCP protocols. These could be mainframe storage controllers or intelligent open systems data movers. They perform the actual transfer work.
* Data mover control. FICON/SAN data movers would operate under the control of systems software on either the host or open systems side.
* Data conversion engines receive the transferred data and convert it to a format that is usable by the receiving system.
* Data receivers provide the ability for transferred data to be stored safely and properly in the receiving system's storage; including updating the respective file or database system with the correct metadata information.
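To make the data conversion engine concrete: mainframe character data is typically EBCDIC-encoded, while open systems expect ASCII, and S/390 numeric fields are big-endian. The minimal Python sketch below (the record contents are hypothetical) shows the kind of translation such an engine would perform, using the standard cp037 EBCDIC codec.

```python
# Minimal sketch of a data conversion engine step: translating an
# EBCDIC (code page 037) character field from a mainframe into ASCII.
# The five-byte record below is purely illustrative.
ebcdic_record = b"\xc8\x85\x93\x93\x96"   # "Hello" in EBCDIC cp037
ascii_text = ebcdic_record.decode("cp037")
print(ascii_text)  # -> Hello

# Numeric fields need attention too: S/390 integers are big-endian.
raw_field = (1234).to_bytes(4, "big")
value = int.from_bytes(raw_field, "big")
print(value)  # -> 1234
```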
Fig 2 shows these components in the context of a SAN. There are any number of ways these components could be implemented to create a solution for data transfers between mainframe and open systems storage. While these ideas may seem slightly far-fetched to some readers, they are not necessarily improbable; given the pressure on mainframe systems to conform to open systems flexibility, it is not out of the question for IBM or others to make these breakthroughs.
Implementing An Allocation Layer Function In Mainframe To Open Systems Data Transfers
As mentioned, data that comes from a mainframe into the SAN is similar to any data transfer that occurs on a device-to-device level in the SAN: there has to be some way to place the data onto a device or subsystem where its location can be managed correctly by a file or database system.
A distributed file or database system that separates the space allocation function from the higher level application view of the system could allow this to happen. Data arriving at a subsystem with a space allocation function would be processed and written to free space in storage. Then this space allocation function could complete the process by updating the metadata used by the higher level application view running in a system on the SAN. Thereafter, the transferred data would be reflected in the image presented to users and applications.
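A minimal sketch of such an allocation layer follows, with hypothetical names and a toy extent model. The key ordering is the one described above: incoming data is written only to free extents, and the metadata consulted by the higher-level view is updated last, so existing data can never be overwritten.

```python
# Hypothetical sketch of an allocation-layer function in an intelligent
# storage subsystem. Extents, names, and structure are illustrative.
class AllocationLayer:
    def __init__(self, extent_count):
        self.free_extents = set(range(extent_count))
        self.storage = {}    # extent number -> data block
        self.metadata = {}   # file name -> ordered list of extents

    def receive(self, name, blocks):
        allocated = []
        for block in blocks:
            extent = self.free_extents.pop()  # claim free space only
            self.storage[extent] = block
            allocated.append(extent)
        # Update the higher-level view last, once the data is safe.
        self.metadata[name] = allocated
        return allocated

layer = AllocationLayer(extent_count=8)
layer.receive("payroll.dat", [b"rec1", b"rec2"])
print(sorted(layer.metadata))       # -> ['payroll.dat']
print(len(layer.free_extents))      # -> 6
```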
Fig 3 shows a SAN that provides direct access from a mainframe to data residing on an open systems storage subsystem. Data starts on the mainframe and is sent through the Fibre Channel network over the FICON ULP to an intelligent storage subsystem that supports both FCP and FICON commands. Regardless of the protocol used, the data is first processed by an allocation layer function in the intelligent storage subsystem.
Data Sharing With Mainframes
If file transfers can be accomplished with FICON, the next integration step would be data sharing between mainframes and open systems. S/390 systems are already very good at sharing data among themselves through IBM's Parallel Sysplex Coupling Facility.
The Coupling Facility acts like a distributed lock manager that manages the access to blocks of data held in systems' local caches. Each system using the Coupling Facility coordinates its cache operations with the Coupling Facility and is connected to it over a high-speed link. As each system accesses data from disk it registers those data blocks with the Coupling Facility and sets a local cache variable called a local state vector. The local state vector has a single bit that determines whether or not the block is valid, or usable, as shown in Fig 4.
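The validity-bit scheme can be sketched as a toy model. This is a deliberate simplification, not the actual Coupling Facility protocol: when one system writes a registered block, the facility clears the state-vector bit on every other system holding that block, forcing those systems to re-read it from disk.

```python
# Simplified model of the local-state-vector scheme described above.
# Class and method names are hypothetical, not the real CF interface.
class CouplingFacility:
    def __init__(self):
        self.registrations = {}  # block id -> set of registered systems

    def register(self, system, block):
        self.registrations.setdefault(block, set()).add(system)
        system.state_vector[block] = 1  # block starts out valid locally

    def write(self, writer, block):
        for system in self.registrations.get(block, set()):
            if system is not writer:
                system.state_vector[block] = 0  # invalidate other caches

class System:
    def __init__(self):
        self.state_vector = {}  # block id -> validity bit

cf = CouplingFacility()
a, b = System(), System()
cf.register(a, "blk1")
cf.register(b, "blk1")
cf.write(a, "blk1")
print(a.state_vector["blk1"], b.state_vector["blk1"])  # -> 1 0
```

After the write, system b sees its validity bit cleared and must re-fetch the block before using its cached copy.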
It is possible that open systems computers could someday be able to participate in a Parallel Sysplex cluster by implementing Parallel Sysplex technology over Fibre Channel and FICON. Other processes would be needed to make the necessary data format conversions as well as providing the link and device protocols needed. All of this would be a significant amount of work, but could be made easier as FICON becomes a published standard.
Fig 5 shows a hypothetical installable file system running in an open systems machine, accessing a shared storage subsystem, also being accessed by a mainframe processor.
In the next few years, the development of FICON as a Fibre Channel ULP stands to bring significant changes to the way open systems servers can access mainframe-resident data. The development of specialized storage subsystems and installable file systems (or, more correctly, installable access methods) could allow open systems to play on an even field with mainframe processors for access to corporate data.
Marc Farley is the vice president of marketing at SanCastle Technologies (Huntington, NY).
Title Annotation: Technology Information
Publication: Computer Technology Review
Date: Apr 1, 2000