Major technology trends for the 1990s.

Computers are the steel of the information society. So any examination of trends in technology must begin with a discussion of what's happening in computer "metallurgy." I'd like to discuss three areas: parallelism, the impact of processor design, and--the driving force behind it all--level of integration.

Level of integration starts with the dark and still somewhat mysterious art of fabricating little tiny patterns on chips of silicon. We've spent the last 25 years learning how to manufacture smaller and smaller transistor patterns on silicon.

Smaller transistors are important for two reasons: first, they are faster; second, the smaller the patterns, the more transistors we can put in the same area. So, smaller patterns mean more powerful and faster computers.

The first microprocessors, which appeared 20 years ago, were made with about 3,000 transistors. The new microprocessor designs of today are made with over a million. In the next year or two, microprocessor designs will have four to five million transistors, and up to 100 million at the end of the decade. For the first time, we'll have more transistors on a chip than we'll know what to do with.

We've already seen the effect of putting more transistors on a single chip in what I call the incredible shrinking personal computer. Ten years ago, when personal computers were relatively new, a PC was a big board with lots of chips on it. Several additional option boards--to handle a monitor, a disk drive, a keyboard--were necessary to make a complete system.

Now our entry-level personal computers, at least, are on a single board much smaller than the original personal computer board, and, in fact, the whole PC is about 10 or 20 chips plus the memory chips.

But in the next generation, the functioning of personal computers will get down to a processor chip and maybe two or three other chips plus memory. That's all that's needed to make something that will run everything a PC can run today.

We may find that the PC itself, the system box, might just disappear. The board that holds that logic will be small enough to be packaged in with something else. It can be made part of the keyboard, say, or part of the monitor, or maybe even part of the power plug that goes into the wall.

In most things in life, we're used to the idea that the bigger something is, the more powerful it is. A bigger motor has more horsepower than a smaller motor. A bigger crane can lift more than a smaller crane. That's how the physics of everyday objects works. Bigger is more powerful.

But the physics of what makes up information systems works the other way. The smaller items get, the more powerful they are. And the closer together things are, the smaller the transistors are, the faster they switch.

As microcomputer-based systems have gotten smaller, they've also gotten more powerful--and at quite an amazing rate. In conventional mainframe computer design, history records a 20- to 22-percent compound annual improvement rate. So we typically get a new generation of mainframes about every three-and-one-half years. The new generation typically starts out having about twice the uniprocessor performance of the previous generation and about the same list price as the previous generation when it was new.

But with microprocessor-based systems, the annual improvement rate is 35 to 40 percent. So in the three-and-a-half years it takes to double the capacity of a mainframe, the microprocessor quadruples. During the last 10 years, as each generation of mainframes has become twice as fast, microcomputers have become four times faster. Or, put another way, with each generation, microcomputers close half of the absolute performance gap with mainframes.
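
To see how those rates compound, here is a back-of-the-envelope sketch in Python. The rates and the three-and-a-half-year generation length are the figures quoted above; the exact multiples depend on where in each range you land, so the output is illustrative, not a benchmark.

    # Compound the annual improvement rates quoted in the text.
    mainframe_rate = 0.21     # roughly 20 to 22 percent per year
    micro_rate = 0.375        # roughly 35 to 40 percent per year
    generation = 3.5          # years between mainframe generations

    def gain(rate, years):
        """Performance multiplier after compounding a rate for a number of years."""
        return (1 + rate) ** years

    print(f"Mainframe, one generation: {gain(mainframe_rate, generation):.1f}x")  # about 2x
    print(f"Microprocessor, same period: {gain(micro_rate, generation):.1f}x")    # about 3x
    print(f"Mainframe, ten years: {gain(mainframe_rate, 10):.0f}x")               # about 7x
    print(f"Microprocessor, ten years: {gain(micro_rate, 10):.0f}x")              # about 24x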

Today, that gap has shrunk to less than a factor of two. That's a startling difference from where things were 10 years ago, when we first formed our perceptions about the differences between mainframe and microcomputer technology.

In fact, this super level of integration suggests to us that microcomputers will not just catch mainframes, but surpass them. The fastest, highest-performance processor architecture experiments done in the future will be done on single chips of silicon, on microprocessor-based designs. That, to me, is the startling conclusion we draw from what we have learned about the advantages of the microprocessor.

The art and science of parallel machines

No matter how powerful an individual computer is, organizations always have problems that are bigger. They want to combine the capability of several--or many--machines.

There are two approaches to putting together uniprocessor computers to make bigger machines: shared memory and message passing.

The shared-memory approach, also referred to as tight coupling, connects all of the processors to all the memory. This design has been used in mainframe multiprocessors for something like two decades.

The principal advantage of this approach is that the operating systems used to run it can be fairly straightforward extensions of the multiprogramming operating systems used on individual computers. The shared-memory approach, then, is in effect the first parallel computing for which we've had software.

The drawbacks to this approach come from the interconnect. Because the interconnect carries every request from every processor, the connection network itself delays the execution of each instruction. This delay shows up in the total bandwidth demanded of the interconnect and in the time it takes to get a word out of memory, across the interconnect, and to the processor--what we call the latency.

When the latency gets longer than the clock time of the processor, there's nothing the processor can do. It just has to wait. So performance falls off as we add more processors. The more processors we want to put on, the more wait states we incur, and the more performance we steal from each processor.
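
That falloff can be illustrated with a deliberately crude toy model--not a model drawn from this discussion, just made-up parameters--in which a fixed-capacity interconnect turns excess memory requests into processor wait states.

    # Toy contention model: the interconnect serves a limited number of memory
    # requests per cycle; demand beyond that shows up as wait states.
    # The parameters are invented purely for illustration.

    def relative_performance(n_processors, refs_per_instr=0.3, link_capacity=2.0):
        """Per-processor performance relative to a single, uncontended processor."""
        demand = n_processors * refs_per_instr        # requests per cycle, all processors
        served = min(1.0, link_capacity / demand)     # fraction served without waiting
        return served

    for n in (1, 4, 8, 16, 32):
        print(f"{n:2d} processors: each runs at {relative_performance(n):.0%} of full speed")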

The other approach is message passing, or loose coupling. Here, each computer has strictly local memory, strictly local peripherals. The processors communicate by sending messages to each other over the interconnect. The advantage of this approach comes principally from the fact that when a processor sends a message to another processor, it can do something else while it's waiting for the response. Because each processor runs its own copy of the operating system, it typically runs different jobs for different users.

As a result, message-passing systems can grow to hundreds or even thousands of processors without losing performance.
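
A minimal sketch of the message-passing style, written here in Python rather than the systems languages such machines actually use, may make the model concrete. Each worker keeps strictly local state and cooperates only by exchanging messages; the queues stand in for the interconnect, and the four-worker summing job is invented for illustration.

    from multiprocessing import Process, Queue

    def worker(worker_id, inbox, outbox):
        # Strictly local memory: nothing here is shared with the other processes.
        local_total = 0
        while True:
            message = inbox.get()              # receive a message over the "interconnect"
            if message is None:                # sentinel: no more work
                break
            local_total += message
        outbox.put((worker_id, local_total))   # report the result as another message

    if __name__ == "__main__":
        inboxes = [Queue() for _ in range(4)]
        results = Queue()
        workers = [Process(target=worker, args=(i, q, results))
                   for i, q in enumerate(inboxes)]
        for p in workers:
            p.start()
        for n in range(100):                   # spread work by sending messages
            inboxes[n % 4].put(n)
        for q in inboxes:
            q.put(None)
        for _ in workers:
            print(results.get())               # e.g. (2, 1250): worker id, partial sum
        for p in workers:
            p.join()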

Nothing is perfect, of course. There are two major disadvantages to message-passing systems. First, because each processor requires a copy of the operating system, a message-passing system will use more memory for operating systems than does a shared-memory system, which has only one copy.

The other, and more important, disadvantage is that the programming model for message-passing computers is really the same programming model that is used for distributed computer systems. This means that it takes more than a simple extension of uniprocessor software ideas to exploit message-passing hardware. The good news is that operating systems software is now beginning to take advantage of the message-passing design.

The message behind all of this is that mainframes are in decline. They had a big, early lead in absolute performance. But that lead has been gobbled up by a factor of two each generation until it's just about to disappear.

True, mainframes haven't been sitting still. Tightly coupled, shared-memory mainframe complexes have improved dramatically. But now the shared-memory microcomputer designs are at just about the same level of absolute performance as are mainframe designs. And there's a big difference in cost: the largest shared-memory, microcomputer-based designs cost in the range of $1 million per system, whereas the biggest mainframe systems cost from $20 to $30 million per copy.

Message-passing parallel computers represent the biggest machines that can be architected. These systems can include 500, 1,000, or 4,000 processors. Nobody actually buys or installs the very largest systems possible. But users can--and do--buy systems, for the cost of a mainframe, that can do twice, ten times, or even 20 times the amount of work.

The future of networks

One of the principal opportunities we face with networks is the rise of open standards. But we need universal application of standards to do for the world of data communications what's already been done for the world of voice communications--universal connectivity.

But getting connected is just the beginning. Just because you have a telephone connection with someone in a far-off land doesn't mean you can understand what that person is saying. We have to have standards for the higher-level protocols--the network management protocols, electronic mail, electronic file transfer, cooperative computing system-to-system protocols--so that our systems can communicate with one another.

Networks during the '90s will be dramatically affected by the fall of copper. Fiber optics have already by and large replaced the copper infrastructure for international and national wide-area networks. Fiber is beginning to get into all the other levels of networking: metropolitan, campus, departmental, down to the peripheral networks of computers. So the new protocols and the interface standards will be developed around fiber, which can give us as much as 1,000 times the bandwidth per dollar of copper-based networks.

Another revolution in networks is the elimination of wires entirely, using radio frequency transmission. Wires are expensive to install and maintain, and they are confining. With wireless networking, you can pick up your smaller, more-powerful-than-ever microprocessor, and carry your network with you.

Risk-free storage

Another development is in storage systems. Disk arrays are changing the way we think of storage. Disk arrays, which are particularly suited to using small, high-capacity disks, use a planned redundancy scheme that results in both higher performance (because data is transferred from all the disks at the same time rather than one at a time) and higher reliability. If one disk fails, the array controller instantly reconstructs the data from the failed disk. The software never sees the disk failure.
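
The redundancy scheme can be sketched with a parity idea of the kind such arrays use: stripe data blocks across several disks, store one extra block that is the XOR of the others, and rebuild any single lost block from the survivors. This is a simplification with made-up block contents; a real controller also rotates parity and rebuilds onto a spare drive.

    def parity(blocks):
        """XOR equal-sized data blocks together to form the parity block."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    def reconstruct(surviving_blocks, parity_block):
        """Rebuild the single missing block from the survivors plus parity."""
        return parity(list(surviving_blocks) + [parity_block])

    data = [b"block-A..", b"block-B..", b"block-C.."]   # stripes on three data disks
    p = parity(data)                                    # stored on a fourth disk

    rebuilt = reconstruct([data[0], data[2]], p)        # pretend the second disk failed
    assert rebuilt == data[1]                           # its data comes back transparently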

The disk array solves one of the biggest problems with computer systems--failure in the storage system. With disk arrays, the highest availability systems--which traditionally have been built with special operating systems, special hardware, special applications, special everything--now can be built with standard operating systems. This in turn will improve the quality of service that we increasingly depend on from our computer systems.

Object-oriented software

One of the problems in writing software is that people keep writing the same things over and over again--the same algorithm, for example. Back in 1967, somebody said, let's write programs with reusable fragments of code, identify them, catalog them, and make them accessible to other programmers to use. They did, and they called it object-oriented programming.

Studies have shown that object-oriented programming can improve programmer productivity somewhere between two and five times. With the cost of programming and software maintenance being what it is, that improvement in productivity is important.

But if object-oriented programming is so great, why hasn't it taken over the world? Because it takes a big investment--in training, in tools, in techniques, and in building up an object library. So you get two-to-five-times productivity improvement, but you get it only after several years of investment.

But now we're learning something new about object-oriented programming: you can treat any existing program as an object. You can put a little object-oriented veneer around the existing program, which we call encapsulation, and you've got something that can interact with a new, highly productive environment built with object-oriented techniques.
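
As a sketch of what that veneer might look like, here is a small Python wrapper around a hypothetical legacy batch program (the program name and its arguments are placeholders, not anything from this article). The old program is left exactly as it is; the new environment just sees an object with a method to call.

    import subprocess

    class LegacyPayrollReport:
        """Object-oriented veneer around an old batch program left untouched."""

        def __init__(self, executable="payroll_report"):   # hypothetical program name
            self.executable = executable

        def run(self, month):
            # The new environment sees an ordinary method call; underneath,
            # the unmodified legacy program runs on its existing platform.
            completed = subprocess.run(
                [self.executable, "--month", str(month)],
                capture_output=True, text=True, check=True,
            )
            return completed.stdout

    # New object-oriented code can now mix this object with freshly written ones:
    #     report_text = LegacyPayrollReport().run(6)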

Object-oriented programming is the first really practical way to provide a transition between old-style computing and the new way of computing.

To put it into economic terms, if you go into any big organization, you'll discover, say, 1,000 application programs used to run the business. Typically, 100 of these programs consume 90 percent of the company's computer resources. So if you can get computer hardware that's 10 times more cost effective, it's pretty easy to say it'd be worth changing these 100 programs because they're 90 percent of current resources. These improvements would reduce your costs to a tenth of what they are now.

The problem is the other 900 applications. Because they consume 10 percent of your computer resources, it doesn't matter if they run 10 times faster. The payback time versus the effort required to change them is simply too long. This is what we call the legacy problem. What do you do with those 900 applications?
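
The payback argument can be put in back-of-the-envelope terms, using the round numbers from the text (the cost units are arbitrary):

    total_cost = 100.0                 # annual computing cost, arbitrary units
    heavy_apps, light_apps = 100, 900  # programs consuming 90% vs. 10% of resources
    heavy_share, light_share = 0.90, 0.10
    hardware_advantage = 10            # new platform is 10 times more cost-effective

    # Moving the 100 heavy applications cuts their 90 units of cost to 9.
    saving_per_heavy_app = total_cost * heavy_share * (1 - 1 / hardware_advantage) / heavy_apps

    # Moving the 900 light applications only has 10 units of cost to work with.
    saving_per_light_app = total_cost * light_share * (1 - 1 / hardware_advantage) / light_apps

    print(f"Savings per heavy application: {saving_per_heavy_app:.2f}")   # 0.81
    print(f"Savings per light application: {saving_per_light_app:.2f}")   # 0.01

    # The conversion effort per program is roughly the same either way, so the
    # 900 light applications never pay back the work of rewriting them.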

Encapsulation gives the first sensible answer to the legacy problem. You do nothing, absolutely nothing, with these programs. Leave them running on the same hardware they're currently running on. Simply encapsulate the old programs, so that they can be treated as objects with a rich interaction with the new environment.

This technology enables a company to shift a significant amount of its investment and human capital away from the older techniques to the newer, more cost-effective hardware and more productive development techniques. At the same time, it doesn't break what already exists--which has been the problem with nearly every new technology.

Adding all these things up, we're entering a new era of computing. The big challenge is how to get from the old way to the new way.

Object-oriented programming gives us an answer. We still have to carry forward the applications that were built for the previous era. The importance of object-oriented technology is not just that it's a key part of the new high-productivity environment, but it serves as a bridge to the new world of technology.