
Mix and match computing.

"So many galaxies ... so little time."

Astrophysicist Margaret J. Geller's lament could just as easily have come from other researchers similarly mired in mountains of data. Just replace "galaxies" with such terms as genes, subatomic particles, polymer configurations, ozone readings, or seismic measurements.

To meet data processing and computational challenges, researchers have turned increasingly to high-performance computers. A few years ago, the automatic choice would have been a supercomputer located at a national, regional, or state supercomputing center.

Now, many centers are starting to offer a range of different computers to meet diverse needs, including graphics computers for visualization and multiprocessor machines for heavy-duty calculation. At the same time, a number of research groups are exploring the possibility of using extensive networks of ordinary desktop computers to match or even surpass the performance of a single conventional supercomputer.

To many researchers, the "mix-and-match" mode of computing that results from linking different machines provides an attractive, cost-effective alternative for relieving the work load of the heavily burdened supercomputers.

Over the last decade, Geller and her co-workers at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., have painstakingly and systematically recorded the redshifts of thousands of galaxies. Redshifts are increases in the characteristic wavelengths of light emitted by stars. Caused primarily by the expansion of the universe, they allow researchers to estimate the distances of galaxies from Earth. By combining these distance measurements with a database of galaxy positions in the sky, astronomers can construct step by step a three-dimensional map of the distribution of galaxies in the universe.
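At small redshifts that conversion is simple arithmetic: the recession velocity is roughly the redshift times the speed of light, and dividing by the Hubble constant gives a distance. The short sketch below, written in Python, is not the survey team's own software, and the Hubble constant used is an assumed placeholder value; it simply shows how a redshift and a sky position become a point on such a three-dimensional map.

import math

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per megaparsec (assumed placeholder)

def distance_mpc(redshift):
    # Low-redshift approximation: velocity = c * z, distance = velocity / H0.
    return C_KM_S * redshift / H0

def galaxy_position(ra_deg, dec_deg, redshift):
    # Combine sky coordinates (right ascension, declination) with the
    # redshift-derived distance to place the galaxy in 3D space.
    d = distance_mpc(redshift)
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (d * math.cos(dec) * math.cos(ra),
            d * math.cos(dec) * math.sin(ra),
            d * math.sin(dec))

# A galaxy at redshift 0.02 sits roughly 86 megaparsecs away.
print(galaxy_position(ra_deg=195.0, dec_deg=28.0, redshift=0.02))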

Geller and her colleagues have measured the redshifts of galaxies that lie within long, thin strips across the sky. Taken together, these wedge-shaped slices reveal that galaxies tend to clump into thin shells, like the walls of enormous soap bubbles hundreds of millions of light-years across (SN: 11/25/89, p.340).

To obtain these insights, the researchers used computers that provide three-dimensional views of the data. But it took a lot of experience and manipulation of the pictures on the computer screens to pick out the salient features.

As an experiment in alternative methods of visualizing huge amounts of data, Geller recently worked with graphics specialists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, to animate the redshift survey. Using images of real galaxies, the NCSA team created the illusion of a journey through the universe.

This sequence became part of a 40-minute film illustrating how science is done. "I've been showing the film to standing-room-only audiences at various universities," Geller says. "People react to the graphics in an extraordinary way."

The team also converted one slice of the redshift data into a virtual-reality environment (SN: 1/4/92, p. 8). By looking through a stereoscopic viewer mounted on a boom, Geller could inspect computer-generated images of the galaxies, and the scene would change as she moved her head or body.

"We were able to navigate through the slice ... without having to have somebody preprogram the path for us;' Geller says. "It certainly was extraordinary to have the sensation of really traveling through [the slice] and being in command."

"Had we had [this kind of capability] when we first obtained the data, there are a lot of things we would have known more quickly," she adds.

Geller's experience at NCSA illustrates one aspect of the changes that have occurred in supercomputing at the four national supercomputer centers, which were established by the National Science Foundation in 1985 (SN: 3/2/85, p. 135).

Located at the University of Illinois, Cornell University, the University of Pittsburgh, and the University of California, San Diego, the centers originally were geared toward testing the power and versatility of supercomputers for scientific computation. Over the intervening years, these powerful machines attracted thousands of users -- so many that researchers now must sometimes wait days or weeks to run their programs.

At the same time, it became evident that additional, specialized computers were needed to handle the prodigious output of the supercomputers. So the centers gradually added various machines for such tasks as visualization and graphics, and hired the staff required to support these activities. This approach gave researchers like Geller access to graphics and visualization techniques normally affordable only to Hollywood studios or large oil companies.

Now, the primacy of the traditional supercomputer -- a single, enormous, multipurpose machine -- is itself being challenged. Faced with supercomputer prices ranging from $15 million to $30 million apiece, many groups are looking for alternative approaches for increasing computational capacity.

"We're at a critical moment in supercomputing," says Larry L. Smarr, director of the NCSA.

One possibility being explored is the linking of workstations -- the kind of microprocessor-based computers that most researchers have sitting on their desks -- into coordinated clusters to perform certain kinds of computations. Although such networks may take longer to solve a particular problem, the total cost of the machines involved is far less than the price of a single conventional supercomputer.

Moreover, because these desktop machines often sit idle for lengthy periods, connecting them into networks so that they can work together on large problems increases their effectiveness. Such arrangements also permit greater flexibility in selecting the number and types of computers required for a particular application.

Last year, a physicist and two computer scientists provided one of the more dramatic examples of what a collection of high-performance computers, scattered around the United States, could accomplish when linked together.

Hisao Nakanishi of Purdue University in West Lafayette, Ind., was interested in the physics underlying what happens to the shape of polymer strands passing through a membrane or trapped in a porous material such as sandstone. Confined to the material's pores, the chains of molecular units that make up polymers bend and twist in ways that differ from those possible in a liquid.

Nakanishi turned to Vernon Rego of Purdue and Vaidy Sunderam of Emory University in Atlanta for help with the computer simulations he needed to investigate this aspect of polymer physics. The team concentrated on the question of how the straight-line, end-to-end length of a polymer increases as the polymer grows into a chain and eventually traverses a cube containing an array of randomly placed obstacles. Of special interest was the "critical" case in which the cube contains just enough obstacles to provide only a single connected region comprising all the open paths along which the polymer chain can grow from one side of the cube to the other.
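The simulation itself can be pictured as a chain taking one lattice step at a time while avoiding both its own trail and the blocked sites. The toy sketch below is not the Purdue-Emory code; the lattice size and obstacle density are arbitrary assumed values, chosen well below the critical density so the chain has room to grow.

import random

L = 40             # edge length of the cubic lattice (assumed toy value)
P_BLOCKED = 0.30   # fraction of blocked sites (assumed; the "critical" case uses
                   # just enough obstacles to leave a single spanning open region)

def grow_chain(max_steps, rng):
    blocked = {(rng.randrange(L), rng.randrange(L), rng.randrange(L))
               for _ in range(int(P_BLOCKED * L**3))}
    start = (L // 2, L // 2, L // 2)
    blocked.discard(start)
    pos, visited = start, {start}
    for step in range(1, max_steps + 1):
        x, y, z = pos
        moves = [(x + dx, y + dy, z + dz)
                 for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))
                 if 0 <= x + dx < L and 0 <= y + dy < L and 0 <= z + dz < L]
        open_moves = [m for m in moves if m not in blocked and m not in visited]
        if not open_moves:
            return step - 1, pos          # chain is trapped
        pos = rng.choice(open_moves)
        visited.add(pos)
    return max_steps, pos

rng = random.Random(1)
steps, end = grow_chain(500, rng)
r = sum((a - b) ** 2 for a, b in zip(end, (L // 2,) * 3)) ** 0.5
print(f"chain grew {steps} steps; end-to-end length {r:.2f} lattice units")

Meaningful statistics require averaging many such chains over many randomly generated obstacle patterns -- the kind of workload that divides naturally among separate machines.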

The researchers realized that doing the simulation on a scale large enough to yield meaningful results on a single Cray supercomputer would require several days to several weeks of computer time. As an alternative, they developed special software that treats a cluster of separate computers as a single machine, with computations divided among the participating computers.
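The article does not describe that software's workings, but the underlying pattern is a coordinating process that parcels out independent pieces of the computation and collects the answers. Below is a minimal stand-in sketch using Python's standard multiprocessing module on a single machine; the real system spread the work across computers on a network.

from multiprocessing import Pool
import random

def one_run(seed):
    # One independent simulation run (a trivial placeholder computation).
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000))

if __name__ == "__main__":
    seeds = range(32)                  # 32 independent runs to hand out
    with Pool(processes=8) as pool:    # 8 local workers stand in for networked machines
        results = pool.map(one_run, seeds)
    print(f"combined average over {len(results)} runs: {sum(results) / len(results):.1f}")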

Nakanishi and his collaborators had access to computers at Purdue, Emory, Florida State University, California Institute of Technology, Oak Ridge (Tenn.) National Laboratory, and the University of Tennessee. The most elaborate arrangement they tested combined 48 IBM RS/6000 computers, 80 Sun SPARC workstations, and two Intel i860 hypercube computers. In 10 minutes, this configuration did computations that would take three hours on a Cray Y-MP.

That was good enough for the Purdue-Emory group to earn first prize in the 1992 Gordon Bell competition. This award recognizes significant achievements in the application of high-performance computers to scientific and engineering problems. The judges describe the winning entry in the January issue of COMPUTER.

Although the Purdue-Emory scheme represents an important first step, the logistics of handling such a network of computers remains exceedingly complicated. Indeed, the software required for binding the system together represents the main bottleneck. In many instances, software deficiencies keep these systems from running as efficiently as possible.

Nonetheless, researchers are optimistic that such problems will eventually be solved. Smarr envisions the development of a national "metacomputer" -- an array of different types of computers linked by a high-speed, high-capacity network to act as a single computer.

In a sense, each national supercomputing center already acts as a metacomputer, invisibly shuffling programs and files from supercomputer to massively parallel machine to graphics computer to mass-storage device to workstation. Ordinarily, users need specify only what they would like done, and the center's software takes care of the details of when, where, and how.
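The bookkeeping behind that shuffling can be pictured as a simple routing table from the kind of task to the class of machine. The sketch below is purely illustrative -- no center runs this particular code, and the job categories and machine classes are assumptions for the example.

# Purely illustrative: route a job description to a class of machine.
ROUTES = {
    "simulation":    "massively parallel machine",
    "visualization": "graphics computer",
    "archive":       "mass-storage device",
}

def dispatch(name, kind, gigabytes):
    machine = ROUTES.get(kind, "workstation")   # anything unrecognized goes to a workstation
    return f"{name}: send to {machine} ({gigabytes} GB of data)"

for job in [("galaxy-map", "visualization", 2.5),
            ("polymer-growth", "simulation", 40.0),
            ("raw-redshifts", "archive", 120.0)]:
    print(dispatch(*job))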

Smarr would like to see this concept extended to networks of computers on a national scale. By automatically adjusting to the power and speed required for solving a particular problem, such systems would provide greater flexibility for scientists working on a wide range of applications.

"But we're not there yet:' Smarr cautions.

As one step toward "scalable supercomputing" and the development of a national information infrastructure, the four national supercomputer centers last year announced the formation of a national MetaCenter (SN: 11/28/92, p. 374). Center staffs are now working together to establish standards so that people can use any computer, or set of computers, at any center.

"This also allows the centers to specialize, rather than trying to be everything to everybody," Smart says.

In response to the rapid changes in computer technology, the National Science Foundation is reviewing the role of high-performance computing in scientific research and reevaluating the rationale for the national supercomputing centers. Chaired by Lewis Branscomb of Harvard University, the panel charged with the review expects to present its report and recommendations later this month.
