Miniaturization Does It All.
The first large-scale, general-purpose electronic digital computer, called ENIAC (Electronic Numerical Integrator and Computer), was completed at the University of Pennsylvania in 1946. It weighed 30 tons and covered 1,500 square feet of floor space. It contained 18,000 vacuum tubes, half a million soldered joints, 700,000 resistors, 10,000 capacitors and 6,000 switches. It consumed an enormous amount of power and was prone to failures and errors whenever tubes or other parts blew. Today the same computing capability can be packed into a hand-held package!
Donald Procknow, vice chairman and chief operating officer of AT&T Technologies, observes: "The number of components we can put on a silicon chip has been doubling every year for the last 20 years. This trend shows every sign of continuing for at least another 10 years. Today the 256K puts half a million transistors on a chip the size of your fingernail. It seems clear that we'll see regular production of chips with a million transistors in the not-too-distant future. You can't really say what the outer limit is. That's part of the fascination of this business. We're still testing our limits."
Microprocessors are basically integrated circuits that are capable of performing the control, arithmetic logic and processing functions of a computer. Using large-scale integration techniques, designers are able to concentrate tens of thousands of discrete logic elements on a fingernail-sized chip of silicon.
During the past decade, circuit designers and large-scale integration technology have advanced to where today's microprocessors have computing capability on a single chip equal to small mainframe computers of a few years ago. The addition of memory and interface circuits for input and output of information can enable a microprocessor to function as a microcomputer or minicomputer.
Listen to Bell Labs President Ian Ross: "Microelectronics is a technology that after more than two decades of remarkable progress is still advancing exponentially and promises to continue on this path for many years to come. For example, in each of the last 20 years we have doubled the number of components on a chip of silicon, and we're still doing so every year and a half.
"At the same time, the equivalent cost per transistor has become 1,000-fold cheaper. Today at AT&T we are producing the 256K memory, a chip the size of your fingernail containing over a half million components. And we are close to achieving the design of a manufacturable megabit chip . . . one containing a million components.
"As a result of such progress we have reached the stage where the capability of a single-chip processor has surpassed that of a mainframe computer of 15 years ago and is now challenging that of today's minicomputer.
"What does the future hold for this remarkable technology? We might be only halfway to fulfilling its potential. I see the possibility of our reaching chips containing 100 million components per square centimeter of silicon, and perhaps by the turn of the century a billion components on a chip about the size of today's 20-cent postage stamp. (I might add here that such a chip at that time may cost a lot less than the postage one might need to mail a letter in the year 2000 . . . if indeed we are still sending mail that way!) The individual circuits in such chips, by the way, will operate at a switching speed of about 10 picoseconds, a picosecond being one-trillionth of a second."
Dvorak, writing in the Bell Labs Record earlier this year, tells about the new WE 32100 chip. Says Dvorak: "At a time when many chip producers are still debugging their first 32-bit microprocessor, AT&T Bell Laboratories is about to manufacture a second-generation micro that redefines the state-of-the-art: the WE 32100 microprocessor.
"In the contest for the supermicro, this chip is a megamicro. It squeezes 180,000 transistors onto a quarter-inch square of silicon and offers more processing power at higher speed than the most advanced desk-top computers.
"The WE 32100 central processing unit (CPU) will bring these strengths to AT&T's 3B computer family when it replaces the first-generation WE 32000 microprocessor. Users of the current 3B family won't have to worry about obsolete software, however, because the new chip can run programs written for the WE 32000 CPU.
"The WE 32100 micro easily out-muscles the standard 16-bit chip used in most microcomputers built today. It can process twice as much data a second as a 16-bit chip, and it can address any one of 4.3 billion locations of off-chip memory, compared to the 65,536 locations referenced by a 16-bit address chip.
"The WE 32100 chip, like the WE 32000 chip before it, is a true 32-bit microprocessor. Every data and address bus has 32 wires; every register (on-chip temporary memory) holds 32 bits; and it can fetch and process up to 32 bits of data at a time. It has a tremendous reach into main memory, being capable, for example, of accessing the names and telephone numbers of the more than 100 million people who own telephones."
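The address-space figures Dvorak cites follow directly from the bus widths: an n-bit address selects one of 2 to the nth power locations. A few lines of Python (an illustration added here, not part of the original article) confirm the arithmetic:

```python
# An n-bit address bus can reference 2**n distinct memory locations.
locations_16_bit = 2 ** 16   # addressable by a 16-bit chip
locations_32_bit = 2 ** 32   # addressable by the WE 32100

print(locations_16_bit)  # 65536 -- the "65,536 locations" cited
print(locations_32_bit)  # 4294967296 -- the "4.3 billion" cited
```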
An experimental computer memory chip that can store more than a half-million bits of information, nearly twice the capacity of any chip yet reported, has been announced by engineers from IBM's semiconductor facility in Essex Junction, Vermont. The new component is a 512K-bit dynamic random access memory (RAM) chip and it is the first complete chip ever to use an electronic technique called "plate pushing" to read data out of its storage cells. Plate pushing--previously used only in greatly simplified test circuits--produces an electrical signal nearly twice as large as that produced by conventional data reading methods. With this stronger signal, it is possible to increase the chip's density and reliability while maintaining high performance. The new chip measures 7.96 millimeters by 8.6 millimeters and data can be retrieved from its storage cells in 120 nanoseconds. The smallest photolithographic images in the chip's circuit pattern are just 1.5 micrometers wide, about 1/50 the diameter of a human hair.
Reduction in size and increase in capacity of memory devices are important to further miniaturization. Imagine a plastic card about the size of a standard credit card containing the equivalent of 8,000 pages of typewritten text: 40 million bits of information. Imagine the Encyclopedia Britannica, complete with illustrations, encoded on something the size of a phonograph record. These and other marvels of information compression only hint at the potential impact of optical memory.
Optical memory systems, some say, have the potential to be the clear winners in the race to increase recording densities and reduce costs. Some analysts have compared the advent of optical memory systems to Gutenberg's invention of movable type.
The number of tiny components that can be crammed onto a silicon chip is doubling about every eighteen months (see chart). Today, we have over half a million components on a chip the size of your fingernail. And the equivalent cost of a transistor is less than one-hundredth of a cent, a thousand-fold cheaper than the cost of a quality transistor 20 years ago. Today these single-chip processors have surpassed the capability of mainframe computers of 15 years ago and are now challenging the ubiquitous minicomputers.
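The doubling rule described above compounds quickly. As a rough sketch (added here for illustration; the starting count of half a million components on today's 256K chip is taken from the article, and the simple exponential model is an assumption), the projection works out like this:

```python
# Sketch: project component counts per chip, assuming the count doubles
# every 18 months from the ~500,000 components cited for the 256K chip.
def components_after(years, start=500_000, doubling_period_years=1.5):
    """Components per chip after the given number of years."""
    return start * 2 ** (years / doubling_period_years)

# One doubling period carries today's chip past the million mark.
print(round(components_after(1.5)))   # 1000000
# Ten doublings (15 years) reach roughly half a billion components.
print(round(components_after(15)))    # 512000000
```

At that pace the "megabit chip" Ross describes is a single doubling away, which is consistent with his expectation of reaching it in the near term.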
Magnetic disk memories are declining in cost at a rate of approximately 50 percent every five years. Nonetheless, optical memories could be one-tenth as expensive as these conventional disk memories by 1986.
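A 50 percent decline every five years is an exponential curve, and a brief Python sketch (an illustration added here, not from the original article) shows how slowly it compounds compared with the chip-density doublings above:

```python
# Sketch: magnetic-disk cost relative to today, assuming the cost
# halves every five years as the article states.
def relative_cost(years, halving_period_years=5.0):
    """Fraction of today's cost remaining after the given number of years."""
    return 0.5 ** (years / halving_period_years)

print(relative_cost(5))   # 0.5 -- half of today's cost
print(relative_cost(10))  # 0.25 -- a quarter after a decade
```

By contrast, a competing technology that starts at one-tenth the cost enjoys a lead that magnetic disks would need more than 15 years of halvings to close.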
There can be a ten thousand-fold increase in the intelligence of microelectronic chips. Already AT&T scientists are said to be attempting to build integrated circuits with interconnections as rich as those among nerve cells in the human brain.
What about the future? How far can we go in this technology?
Listen again to Bell Labs President Ian Ross: "There is no question that we will surpass a million components on a chip and reach ten million. My guess is that we may ultimately reach a hundred million.
"In short, I believe that fundamental physical limits will not retard progress through this decade, that we will continue to see near-exponential growth in numbers of components, as well as remarkable increases in speed of devices. When we talk about ultimate limits, we are considering production devices whose structures might be fabricated smaller than a tenth of a micron. We envision working devices that would be a tenth of a micron on a side, with switching speeds on the order of a picosecond. That's 1,000 times smaller and faster than today's devices!"
At this point, one might raise the question: Is there a need for so much logic capability? Perhaps the answer lies in the fact that at each stage of advancement in microelectronics we have been able to apply the new capability provided quite productively in our telecommunication systems . . . in switching, in transmission and in operations support systems. As we move further into the development of full information services . . . voice, data, video . . . we will need all the memory and logic we can get our hands on.
Date: Sep 1, 1984