Managing a distributed environment.


Many companies are now experimenting with complex distributed computing environments.

Thus far, few sites are running a distributed application environment successfully. These environments typically include:

* multiple operating systems--usually a mainframe or minicomputer OS such as MVS or VMS, alongside UNIX and DOS,

* multiple computing platforms (e.g., LANs, PCs, minis, mainframes),

* and, increasingly, multiple database and file environments.

These complex environments, along with on-going pressure to improve productivity, are forcing IS (information systems) managers to re-examine how to distribute and manage data, programs, and applications efficiently and on a timely basis across many sites and environments.

These requirements are made even more challenging by data security considerations. Once data leaves the warm, secure central computing environment, how do you ensure its safe passage to a remote site?

How do you manage backup, recovery, and physical storage at remote sites?

Once data leaves the central site, few security measures are in place, whether for off-line storage or networked data. Encryption is rarely used.
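
Where stronger protection is wanted, data can be encrypted before it ever leaves the central site. The sketch below illustrates the idea in modern terms; it assumes Python with the third-party cryptography package, and the file names and key handling are purely illustrative.

```python
# Minimal sketch: encrypting a data extract before it leaves the
# central site. Assumes the third-party "cryptography" package;
# the file names and key handling here are illustrative only.
from cryptography.fernet import Fernet

# In practice the key would be generated once and delivered to the
# remote site through a separate, secure channel.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("payroll_extract.dat", "rb") as f:   # hypothetical extract file
    plaintext = f.read()

with open("payroll_extract.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# The remote site, holding the same key, reverses the process with
# Fernet(key).decrypt(...) before loading the data.
```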

Distributed information service management is an extremely labor-intensive job, and as a result it is rarely done well.

Database administration, even at a single site with a single application, is also very labor intensive.

Applications in production today were not designed to be distributed.

There is not enough information about data, backups, or archives for system managers to do an adequate job of responding to user requests without help from DBAs and programmers.

Users who need to retrieve data quickly from anywhere other than their local PC face major resistance from IS and frustration in tracking, finding, and retrieving the data.

Most organizations cannot afford to place systems people at every site or maintain a large central support staff to administer and support these requirements. To make distributed computing operable and feasible, automated facilities and service management tools are essential.

This process, which encompasses data concurrency and service management, is called "distributed application management."

Evolving Complexities

Almost every organization is moving data across hardware platforms and operating systems. Most of the associated effort is spent in moving and synchronizing that data.

For example, a division of a large aircraft manufacturer sends employee information weekly from a human resource system running in a Sun UNIX environment to a remote IBM mainframe; a large defense contractor downloads information from its IBM MVS payroll system to a DEC VAX once a month; and a natural gas company regularly downloads information from its VAX to a Macintosh.

Why all this data movement?

Typically, the vehicles used to process large volumes of data are the older mainframe-based environments made up of hardware, operating system software, and application software.

However, the users who need to analyze and report on this data need fast, interactive, easy-to-use hardware and software, which often exists on other processing environments. This need for identical data in two physical locations at the same time is becoming a very common occurrence.

The result is that data gets moved to such environments. This co-existence of data in multiple locations is called "data concurrency."
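
To make the mechanics concrete, the sketch below shows the extract half of such a move in modern terms: rows are selected from a relational table and written to a flat file that can be shipped to the remote environment. Python's built-in sqlite3 module stands in for the production DBMS, and the table and file names are hypothetical.

```python
# Minimal sketch of the extract step in a data-concurrency move:
# select rows from a relational table and write them to a flat file
# for shipment to the remote environment. sqlite3 stands in for the
# production DBMS; table and file names are hypothetical.
import csv
import sqlite3

conn = sqlite3.connect("hr.db")            # hypothetical source database
cur = conn.cursor()
cur.execute("SELECT emp_id, name, dept, pay_grade FROM employees")

with open("employees_extract.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur.fetchall())

conn.close()
# The flat file is then transferred and loaded at the remote site,
# giving both locations the same data -- "data concurrency."
```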

For example, let's examine a company that is planning to implement distributed computing for human resources and payroll applications.

The organization employs several thousand people and has received budget approval for several hundred UNIX-based servers supporting a variety of PCs and terminals.

These servers will become "clients" to a large IBM mainframe, which is and will remain the repository of all master payroll information. There are also several LANs and several dozen VAXes across the organization whose information must be tied into this system as well.

Portions of the human resource and payroll applications may run on the servers or on local PCs. The database environment starts out as Oracle on UNIX and PCs and DB2 on the IBM mainframe; other applications on Ingres and Sybase must also be supported over the coming years.

A poll of several hundred sites across North America in the last six months indicates that a mixed environment like the one described here will be a common production shop in three to five years. Imagine the challenge of distributing and installing new software releases to each of several hundred sites, let alone moving data around daily.

Organizations can't afford to do this manually, nor can they tolerate "verbose" network traffic with today's processing requirements.
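
Compression is one way to curb that traffic before a release or extract goes out over the wire. A minimal sketch, assuming Python's standard gzip and shutil modules and a hypothetical release archive:

```python
# Minimal sketch: compressing a software release archive before it
# is distributed to remote sites, to cut network traffic.
# The file names are hypothetical.
import gzip
import shutil

with open("release_2_1.tar", "rb") as src, \
        gzip.open("release_2_1.tar.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)   # stream-copy while compressing
```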

To determine what was needed to make the distributed solution operable, the company compiled the following list of major musts:

* Select data from relational tables to move.

* Select files to move.

* Select software to move.

* Move selected data from relational database tables between two servers (UNIX to UNIX, UNIX to VAX), between Oracle under UNIX and DB2 under MVS (in both directions), or between server and client (UNIX to UNIX, UNIX to PC), both repetitively and ad hoc.

* Move files (see above).

* Move software (same).

* Schedule and automatically install new software releases at remote sites.

* Do configuration management.

* Do overnight archive/backup/recovery/retrieval.

* Automate operations logging and scheduling.

* Implement production service management facilities--query and reporting utilities.

* Do automatic mail notification of results.

* Compress/encrypt data.

* Automate data purging from the production environment.

* Utilize multiple device drivers.

* Verify data synchronization between sites (see the sketch after this list).
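
A minimal sketch of the last item, in modern terms: each site computes a digest over its copy of an extract, and the two digests are compared before the copy is declared current. Python's standard hashlib module is assumed, and the file paths are hypothetical.

```python
# Minimal sketch of verifying data synchronization between sites:
# each site computes a digest over its copy of the extract, and the
# two digests are compared. File paths are hypothetical.
import hashlib

def file_digest(path, chunk_size=64 * 1024):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

central = file_digest("central/employees_extract.csv")
remote = file_digest("remote/employees_extract.csv")

if central == remote:
    print("Sites are in sync.")
else:
    print("Mismatch -- the remote copy must be refreshed.")
```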