
GM speeds time to market through blisteringly fast processors: General Motors' vehicle development process gets a big boost from the latest in supercomputers.

Supercomputing, what General Motors (GM; Detroit, MI) calls "high-performance computing" (HPC), is so valuable that the company will be more than doubling its supercomputing capacity by early next year. Its new supercomputers run at nine teraflops--9 trillion calculations per second. Put another way, a machine performing one calculation per second would need just over 285,000 years to match what these systems do in a single second.
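
A quick back-of-the-envelope check of that equivalence, written here as a small Python calculation (a rough sketch assuming a 365.25-day year, not GM's own arithmetic):

    # Sanity-check the 9-teraflops figure cited above.
    FLOPS = 9e12                            # 9 trillion calculations per second
    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds

    # Years a one-calculation-per-second machine would need to match
    # a single second of the supercomputer's work.
    years_equivalent = FLOPS / SECONDS_PER_YEAR
    print(f"{years_equivalent:,.0f} years")  # roughly 285,000 years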

Impressive, yes, but one might wonder: "What's it good for?" In GM's case, a lot. Supercomputing is a crucial tool for vehicle development, from concept models at the early stages of design to more detailed models as the designs evolve. It is also one more "enabler" to global competitiveness.

THE VERY MODERN MODEL OF A SUPERCOMPUTER

This is not GM's first supercomputer. It's an upgrade to an existing system. Consider it a scheduled upgrade, given that much of GM's computing equipment is on a three-year lease. By swapping out equipment every three years, GM stays on the leading edge of computing technology and ahead of its computing needs.

GM's previous supercomputing system consisted of computers from IBM and Silicon Graphics Inc. (Mountain View, CA) combined in a four-way system. (A four-way system is one based on a microprocessor that can issue two integer and two floating-point instructions for every processor clock cycle, as opposed to just one of either.) For software, in addition to Unigraphics for computer-aided design (CAD) and product lifecycle management (PLM), the primary computer-aided engineering (CAE) analysis software at GM includes MSC-Nastran from MSC.Software Corp. (Costa Mesa, CA) for structures, LS-Dyna from Livermore Software Technology Corp. (Livermore, CA) for crash simulations, and computational fluid dynamics software from Fluent Inc. (Lebanon, NH) for aerodynamics.

GM's new HPC system consists of two models of 64-bit IBM supercomputers. The first model, the pSeries p655 1.7 GHz 8-way, based on IBM's Power4 processor, was up and running at the beginning of 2004. The Power5-based supercomputer will be delivered later this year and running in early 2005. The Power5 processors, which supersede the Power4, are "considerably faster than the Power4s," says Tom Tecco, director of CAE, CAT, and electrical systems in the IS & S Group of GM. Once everything is installed, the entire computing system will probably be about five to six times faster than its predecessor, with more capacity to crunch through more computing jobs.

GM will deploy these supercomputers to all of its design centers around the world. This lets the designers and engineers share the results of analysis a "little bit more easily" than if the systems were different, explains Tecco. (All the regions use the same versions of software, as well.) While analysis jobs are usually run regionally, if a shortage of capacity occurs in one region, GM can run the job on the supercomputer in another region.

BUT WHAT'S IT GOOD FOR?

The single biggest application of GM's supercomputing is for crash simulation: full-frontal, offset-frontal, angle-frontal, side-impact, rear-impact, to name a few types of crashes. Crashworthiness, explains Bob Kruse, GM's executive director of Vehicle Integration for North American Engineering, "while done very early in the development process, gets validated very late in the development process--meaning, after you've already completely built the vehicle." Unlike in the aerospace industry, the government doesn't let automakers simply submit math simulations that say their vehicles meet federal motor vehicle safety standards. Instead, the automakers crash the real things. Using silicon and math to crash cars into walls is much faster and cheaper than crashing actual cars into walls. Plus, the availability of virtual cars, walls, and crashes is virtually limitless. By using the supercomputer early in vehicle development, says Kruse, "by the time we go through the time and the expense of driving a vehicle into a wall, we are pretty damn sure how it's going to perform."

[ILLUSTRATION OMITTED]

Such simulations can run at different speeds, at different angles, and with different vehicle load structures. "That iteration in math is a very efficient way to go," says Kruse. "The more you do, the higher the fidelity--the sophistication--of the models. The more sophisticated the models, the more parts, the more things, and the more dimensions you try to analyze." That, however, means more compute capacity is needed. (And given that crash simulation is the epitome of a non-linear--read: "complex"--problem, a tremendous amount of computing power is needed just to get started.)
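
To see why iterating "in math" multiplies the demand for compute capacity, consider a minimal sketch of a simulation job matrix. The speeds, angles, load cases, and per-job runtime below are hypothetical placeholders, not GM's actual crash test plan:

    from itertools import product

    # Hypothetical sweep parameters -- illustrative only.
    speeds_kph = [40, 56, 64]                  # impact speeds
    angles_deg = [0, 15, 30, 90]               # full-frontal, offset, angled, side
    load_cases = ["driver_only", "full_load"]  # vehicle load structures
    HOURS_PER_JOB = 12                         # assumed solver runtime per simulation

    jobs = list(product(speeds_kph, angles_deg, load_cases))
    total_hours = len(jobs) * HOURS_PER_JOB
    print(f"{len(jobs)} simulations, roughly {total_hours} compute-hours")

Every additional speed, angle, or load case multiplies the job count--and the computing capacity needed to turn the whole matrix around in a reasonable time.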

GM has other complex non-linear problems, such as those in thermodynamics. For instance, GM simulates the airflow through the front end of a vehicle to predict thermal characteristics under the hood. Another example involves brakes, which generate lots of heat. GM simulates airflow across brake calipers and shoes to determine if a baffle needs tweaking or fins added to ensure the brakes don't overheat. Another batch of non-linear problems involves aerodynamics. More and more of these are moving from the wind tunnel to the virtual world.

To be honest, admits Kruse, "it's the business case around the safety and crashworthiness that drives [the justification for the new compute environment]." (The other engineering analysis problems come along for the ride, so to speak.)

IS SUPERCOMPUTING WORTH IT?

GM predicts that the speedier and more detailed digital designs and validations from the new supercomputer will yield benefits in three key areas: time, product development, and cost savings.

GM claims that the new supercomputer will help collapse the time-to-market for some vehicles from the current 48 months to a mere 18 months. This will happen even though some of GM's simulations, even in the HPC environment, run all night before providing results. Some run a day and a half. "That's still way faster than the real world, where you have to design, fabricate, acquire, build, assemble, then test parts," says Kruse. "This supercomputer lets me run more sophisticated analysis routines faster." That, in turn, makes GM quicker at turning the ideas from somebody's sketchbook into a finished vehicle on a showroom floor.

Product development costs will significantly drop. In particular, the full-body car digital computer simulations afforded by HPC help cut down the number of costly full-size crash vehicles that need to be built--and crashed. GM has been able to reduce the number of crash vehicles it needs by more than 85%--and each physical vehicle crash test costs roughly $500,000. The other benefit, which Tecco says is hard to put a dollar amount on, is that the supercomputer lets GM explore a lot more design alternatives than the conventional approach, namely destructive testing.
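
As a rough illustration of what that reduction means in dollars (the baseline vehicle count below is an assumed example, not a GM figure):

    COST_PER_CRASH_TEST = 500_000    # cited cost per physical vehicle crash test
    REDUCTION = 0.85                 # "more than 85%" fewer crash vehicles
    BASELINE_VEHICLES = 100          # hypothetical program baseline

    tests_avoided = round(BASELINE_VEHICLES * REDUCTION)
    savings = tests_avoided * COST_PER_CRASH_TEST
    print(f"{tests_avoided} physical tests avoided, ${savings:,} saved")
    # For this illustrative baseline: 85 tests avoided, $42,500,000 saved.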

Because GM's engineering centers are using the same hardware and software, collaborative product development gets a boost. For instance, the GTO team, working with the engineering and manufacturing staff at Holden Ltd. in Australia, was able to share large data files between GM's 16 engineering centers worldwide. This let the engineering team easily work in both Michigan and Australia.

LESSONS LEARNED

Tecco assumes that most of the Tier 1 automotive suppliers are looking at ways to become faster and more cost effective in creating better-quality designs. Computer simulations help. Whether that requires a supercomputer is up for debate. "Obviously the OEMs have a much higher need for large [compute] capacity because they're building models of the fully integrated system, whereas suppliers are typically looking at subsystems." However, continues Tecco, some of those subsystems can get fairly complex, which would in turn drive the need for supercomputing.

While neither GM nor IBM would disclose the price of the supercomputers, rest assured it wasn't cheap. Also rest assured that GM performed a rigorous cost justification before purchasing these babies. Kruse explains: Start with the number of jobs waiting in queue plus the amount of time to run each job. These become the basis for a "straightforward business calculation to what HPC capacity you need and the glide path to make sure you have the necessary capacity."

On the other side of this equation are GM's objectives for how fast it wants jobs to run, how many jobs it will keep in queue, and how long jobs should take. "We have lots and lots of metrics to help us understand how efficiently we're using our existing compute capacity, how much our demand is growing, and how much capacity we're going to need in the future," continues Kruse. Knowing this, GM then goes through a "fairly extensive benchmarking process" in selecting its next-generation supercomputer, adds Tecco. The benchmarking involves creating a typical mix of computing jobs, typical both in size and applications, and then running this mix on the different supercomputing systems available on the market. The results, says Tecco, are just one of many "inputs to our decision making."
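
The "straightforward business calculation" that Kruse and Tecco describe can be sketched roughly as follows; the queue depth, runtimes, and turnaround objective are hypothetical placeholders, not GM's metrics:

    import math

    jobs_in_queue = 120              # current analysis backlog
    avg_hours_per_job = 10           # average solver runtime per job
    target_turnaround_hours = 24     # objective: clear the queue daily

    demand_hours = jobs_in_queue * avg_hours_per_job
    # Jobs the system must run concurrently to meet the objective.
    concurrent_slots = math.ceil(demand_hours / target_turnaround_hours)

    print(f"Demand: {demand_hours} compute-hours")
    print(f"Capacity needed: {concurrent_slots} jobs running concurrently")

Tracking how demand grows over time against this kind of calculation gives the "glide path" to the next capacity purchase.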

The fact is, GM has been quite public about its commitment to having the necessary tools and capabilities to be successful in the global automotive marketplace. Supercomputing is just one part of that. "We think it gives us an edge," says Kruse. But he also notes that "each of the auto manufacturers has different capabilities for math simulation--and how they've integrated that into their development process. It's not just 'the guy that has the biggest supercomputer wins.' [The winners are] the guys who properly integrate the capability into their vehicle development process to get the best vehicle to market the fastest. The reason why we bought a faster supercomputer is so that we can do just that. We don't sell math model results; that's not our business. Let's not lose the fundamental reason why we [purchased the new supercomputer]: So we can have better vehicles in the marketplace faster. Period."

By Lawrence S. Gould, Contributing Editor
COPYRIGHT 2004 Gardner Publications, Inc.
