
The PC: the next generation FEA workstation.

It has taken only 11 years for the personal computer, or PC, to rival the speed and performance of workstations. That is good news for engineers involved in finite element analysis (FEA), who need a great deal of computational power but welcome the convenience of PCs. Test comparisons conducted recently by design and analysis software manufacturer Algor Inc. (Pittsburgh) show that the PC has in fact caught up; see Table 1. It is important to remember, however, that overall performance depends not just on CPU speed, but on the graphics board, bus, disk drive, and other elements as well.


The maturation of the PC as a platform for FEA may surprise skeptics, given the relatively short time it took the computer to reach its current stature. It was in August 1981 that Armonk, N.Y.-based IBM Corp. introduced the machine it called the PC. Today's PC is a far cry from the 1981 original. The 32-bit 80386, practically an entry-level machine these days, was the first PC to be considered a workstation alternative.

The 80486 Workstation

The 80486 competes directly with workstations. Its single chip combines the equivalents of a microprocessor (80386), a floating-point math coprocessor (80387), a memory cache controller (82385), and 8 kilobytes of RAM cache.

For a long time, the 640-kilobyte memory limit imposed by DOS was a problem. Back in August 1981, 640 kilobytes seemed like plenty of memory. Switching to OS/2 or Unix, operating systems that do not have the 640-kilobyte barrier, is an alternative, but the user community has refused to change; no one wanted to give up the familiar DOS. Now everyone can keep DOS, thanks to extenders that allow DOS applications to run in extended memory. In February 1988, Algor introduced its Hyper series of products, which take advantage of extended memory under DOS. With this technology, software can access as much extended memory as is available - much like a mainframe. Engineers can now set up and solve monster-size problems.

The War of the Operating Systems

DOS extenders have increased the life span of DOS for many years. Eventually, a pure 32-bit operating system will emerge. Microsoft's Windows 3.1, which must be used with DOS (preferably 5.0), attempts to satisfy some of the demands of software developers, but it is still a 16-bit system at heart. IBM's new product, OS/2 version 2.0, introduced in April 1992, is a true 32-bit system. It requires an 80386 chip or better, 4 megabytes of system RAM, and 31 megabytes of free hard disk space. OS/2 can run standard DOS and Windows applications as well as applications specifically written for OS/2. Applications that take full advantage of protected-mode memory will come in the future.

By the end of 1992, Microsoft will release its 32-bit system, Windows NT (New Technology), which will be a direct challenge to OS/2. Windows NT, which is rumored to have a Unix flavor, will run on RISC and Intel processors. Fortunately, the competition between OS/2 and Windows NT will keep prices down. Dark-horse candidates could also emerge as winners. The Apple/IBM alliance will introduce an operating system called Taligent. Sun Microsystems Inc. (Mountain View, Calif.) spun off a software division that is marketing Solaris. And finally Steve Jobs, a cofounder of Apple and now president of Next Computer Inc. (Redwood City, Calif.), is ready to ship a PC version of his heralded NextStep OS.

How Fast is Fast?

Whetstone, Dhrystone, Linpack, the SPEC benchmark suite, mflops, and mips are all measures of how fast a computer runs a program. In the world of FEA, Algor has established a few standard engineering stress-analysis models. By running the models on various PC and workstation configurations, the capabilities of the machines are readily quantified. The clock is the judge.
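As a rough illustration of this kind of wall-clock benchmarking, the sketch below times a dense equation solve - a stand-in for a stress-analysis model, not Algor's actual benchmark suite - in Python. The solver, matrix size, and run count are all illustrative assumptions:

```python
import random
import time

def solve_dense(a, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    a = [row[:] for row in a]  # work on copies
    b = b[:]
    for k in range(n):
        # pivot: bring the largest remaining entry in column k to row k
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # back-substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

def benchmark(n=150, runs=3):
    """Return the best wall-clock time to solve one random n-by-n system."""
    random.seed(0)
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        a[i][i] += n  # diagonally dominant, so the system is well conditioned
    b = [random.random() for _ in range(n)]
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        solve_dense(a, b)
        times.append(time.perf_counter() - t0)
    return min(times)
```

Taking the minimum of several runs, as here, filters out interference from disk caching and other background activity - the same reason the article's comparisons use fixed standard models.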

Unfortunately, configuring an optimum PC today is not simple, given the many options available. Yesterday's throughput bottleneck goes away only to be replaced by another today. For example, hard disks were once a bottleneck; with electronic disks in RAM and disk caching, CPU speed is now the critical factor.

A Guide to Specifying and Purchasing a PC

Manufacturers can build a computer to exact specifications. A PC can perform like a workstation if it has the newest technology. With the power of mass marketing, the purchase price will be lower than that of a conventional workstation.

The fundamental rules to keep in mind are: faster is better than fast, and silicon is better than iron. When purchasing a PC, several things should be considered to ensure that the computer will effectively respond to the demands of the task at hand.

* Source. If you can replace the batteries in a flashlight or remove the cover of a PC, you can order your computer from a mail-order house. Most mail-order firms are reputable, and service is not an issue, since "repairing" is nothing more than swapping components.

* CPU. Buy the fastest 80486DX you can afford; do not buy an 80486SX and an 80487. The 80486DX has a built-in math coprocessor that is ideal for FEA-type analysis, plus an 8-kilobyte memory cache. Machines at 33 MHz are readily available; 40- and 50-MHz machines are already shipping in quantity. In February 1992, Intel released a line of chips called the clock-doubler 486 family (officially the 486DX2). A clock-doubler processor, replacing the regular 25-MHz DX chip in a machine designed to run at 25 MHz, runs internally at 50 MHz and automatically slows down externally as the system board requires. There are some BIOS (basic input/output system) chip-compatibility issues to be ironed out; the first round of machines to incorporate this chip will be of the OEM variety. By late 1992, there should be a retail version of the DX2 so that owners can do the upgrading themselves. Clock-doublers are currently shipping for 25-MHz machines, but expect a 33-MHz doubler that will run at 66 MHz later this year. A 50-MHz doubler has been rumored. Soon, anyone will be able to double the speed of his or her computer by simply popping in one of these chips.

* Bus. The latest, best, and most popular 32-bit bus is the EISA bus. It should prove to be a good investment for a few years. Owners of pure IBM machines have no choice - they get the MCA bus.

* Hard disk. The first XT had 10 megabytes of storage and an access time of 110 milliseconds.
An entry-level workstation PC should have at least 100 megabytes with an access time approaching 15 milliseconds. Serious FEA users should consider a 320-megabyte disk as a minimum. Since DOS 5.0 increased the maximum partition size from 512 megabytes to 2 gigabytes, monster machines with 2 gigabytes are feasible.

* Disk interface. The disk interface is the connection between the disk drive and the system bus; it determines the speed at which data can be transferred between them. The original ST506 and the ESDI were device-level interfaces that required a separate controller; both are now obsolete. The new wave is toward system-level interfaces (IDE or SCSI), where the controller electronics are on the drive and require only an unsophisticated host adapter. IDE (integrated drive electronics), which conforms to ANSI's AT Attachment standard, can transfer up to 4 megabytes per second and is favored for disks of 200 megabytes and smaller. The Small Computer System Interface (SCSI) currently sports two standards. SCSI-1, adopted in 1986, has a data-transfer rate of 5 megabytes per second - 25 percent faster than IDE and twice as fast as ESDI. SCSI-2, adopted in 1990, can support 10 megabytes per second on a single cable and 40 megabytes per second on a double cable. Besides being faster, SCSI can support up to seven devices (including printers, scanners, and CD-ROM drives) daisy-chained together. SCSI is the interface of choice for larger-capacity hard drives. There is one small problem: you cannot plug just any SCSI peripheral into any SCSI adapter, because vendors have not settled on one driver protocol. Three protocols are fighting to become the standard: Layered Device Driver Architecture (LADDR) from Microsoft; ASPI (Advanced SCSI Programming Interface) from Adaptec Inc. (Milpitas, Calif.); and Common Access Method (CAM), a proposed ANSI standard developed in concert with Apple Computer, a long-time SCSI advocate.
Since Adaptec is an adapter-board manufacturer, rival manufacturers did not want to endorse a competitor's standard; thus CAM was created. Still, ASPI has a good shot at becoming the de facto standard, and de facto standards based on market acceptance almost always beat out committee standards. Finally, with SCSI, multiple disks can be combined into one logical volume - a technical accomplishment for FEA users who want to run monster-size problems. Software vendors have long used disks as virtual memory to further extend the PC into the domain formerly held by mainframes. Because the SCSI standard is still unsettled, check the adapter against the hard drive - or get a written compatibility claim.

* Memory. Buy as much memory as possible. In the technical world of computing, everything improves with more memory. Memory is cheap and getting cheaper: a million bytes now costs about $17 and dropping. The 80486 can address more than 4000 megabytes (4 gigabytes) of extended memory, a long way from the original PC's 1 megabyte. Now that software applications can take advantage of this memory through DOS extenders, it will be put to good use. This is particularly true for FEA problems: Algor's Hyper series products can now run large problems totally in memory.
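To see why FEA pushes memory so hard, here is a back-of-the-envelope sketch in Python of the core memory a banded symmetric stiffness matrix consumes. The equations-times-half-bandwidth formula is a generic illustration, not any vendor's actual storage scheme, and the model sizes are made up:

```python
def incore_megabytes(equations, half_bandwidth, bytes_per_word=8):
    """Rough in-core storage for a banded symmetric stiffness matrix.

    Stores equations * half_bandwidth double-precision words; an
    illustrative estimate only, not Algor's actual storage scheme.
    """
    return equations * half_bandwidth * bytes_per_word / 2**20

# A hypothetical 20,000-equation model with a half-bandwidth of 500
# needs roughly 76 MB in core -- beyond a 16 MB system board, but
# feasible on one that accepts 256 MB.
print(round(incore_megabytes(20_000, 500)))  # -> 76
```

The same arithmetic run in reverse shows why disk-based virtual memory mattered before large system boards: any model whose estimate exceeds installed RAM has to spill to disk.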

Memory can be added through memory boards on the external bus. However, this is the least-preferred approach, because memory access is slowed by the bus-speed limitation (although MCA boards are reported to be quite fast). It is best to add memory right on the system board, usually in the form of single in-line memory modules (SIMMs), because then memory access is at the fastest rate possible. The initial wave of 80386 machines allowed for a maximum of 16 megabytes on the system board; with the 80486, manufacturers are allowing for up to 256 megabytes.

* Disk caching. It is faster to read or write memory chips than to perform the same tasks with magnetic media. Hard disk designs have a lower limit of about 12 milliseconds for average seek time, far slower than the time required to access data in memory. A disk cache dynamically duplicates part of the contents of the hard disk in fast RAM so that it can be read (and sometimes written) at RAM speed. DOS provides a dumb disk cache through the BUFFERS command in the CONFIG.SYS file - dumb because it blindly loads data from the hard disk into the memory allocated on the CONFIG.SYS command line. A more sophisticated disk cache has a memory-management scheme that attempts to anticipate an application's requests for disk data. Software that provides this service is available from several commercial sources: PC-Kwik from Multisoft Corp. (Beaverton, Ore.) consistently gets good reviews, and Microsoft's SMARTDRV.SYS (free with Windows 3.1) is another such program. All these methods borrow their RAM from the system.

A more sophisticated alternative is a caching disk controller, in which the caching RAM resides on the controller itself. Controllers with up to 16 megabytes of cache memory are available, but their current high prices are hard to justify over the software approach, which many believe is just as good. Whichever method is selected, caching is recommended when hopping between several applications. If a dedicated power program such as Algor's is being run, however, consider a small cache and the maximum amount of extended memory; a couple of test runs should determine the best combination.

* Memory caching. The time it takes a processor to execute a command is directly controlled by the clock. The time it takes the processor to retrieve information from memory is called the processor bus cycle. For the current family of Intel processors, it takes two processor cycles to execute one bus cycle (a P/B ratio of 2). The inverse of the CPU clock speed times the P/B ratio is a direct measure of how fast the memory must be. The first PC had a 4.77-MHz clock, so the minimum memory speed was about 420 nanoseconds. The RAM of the early days was faster, in the 160-nanosecond range - in other words, the RAM was faster than the CPU. Today's situation is reversed. A 33-MHz machine requires 60-nanosecond or faster RAM, a requirement dynamic RAM can just meet, as the fastest readily available RAM clocks in at around 60 nanoseconds. Faster CPUs are now RAM-speed limited. As a consequence, today's faster computers include memory caching, which uses superfast static RAM (SRAM) to cache transfers between the CPU and system memory. These chips sit right on the system bus and are officially called the secondary cache. SRAM chips can run as fast as 15 nanoseconds, just fast enough to keep up with 50-MHz 80486s. The intelligence to do this is hard-coded into a new breed of chips called caching controllers, now an integral part of the system board.
Computer manufacturers can get quite creative in designing and specifying these controllers. A bus cycle in which the requested data are not in the cache is called a cache miss; misses dramatically slow the processor down. When the data are already in the cache, it is a cache hit. The ratio of cache hits to total cache accesses is called the hit rate, and it is the key to performance: a well-designed memory cache should have a hit rate well above 95 percent. SRAM memory caches are usually 64, 128, or 256 kilobytes in size. The organization of the cache can be fully associative, direct-mapped, or set-associative, all intended to increase the hit rate.
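The two relationships above - minimum RAM speed from the clock and P/B ratio, and the cost of a given hit rate - can be sketched numerically. This Python sketch uses the figures quoted in the text; the effective-access formula is the standard hit/miss weighted average, not something specific to any controller:

```python
def min_ram_speed_ns(clock_mhz, pb_ratio=2):
    """Slowest usable RAM in nanoseconds: (1 / clock) * P/B ratio."""
    return pb_ratio / clock_mhz * 1000.0  # period in us scaled to ns

def effective_access_ns(hit_rate, cache_ns, miss_ns):
    """Average access time: hits at SRAM speed, misses at DRAM speed."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * miss_ns

# The original 4.77-MHz PC tolerated ~419-ns RAM; a 33-MHz 80486
# needs RAM in the 60-ns range.
print(round(min_ram_speed_ns(4.77)))  # -> 419
print(round(min_ram_speed_ns(33.0)))  # -> 61

# With a 95 percent hit rate, 15-ns SRAM in front of 60-ns DRAM
# delivers an average access of 17.25 ns.
print(effective_access_ns(0.95, 15.0, 60.0))  # -> 17.25
```

Raising the hit rate from 95 to 99 percent cuts the miss contribution by a factor of five, which is why cache organization, not just cache size, is the key to performance.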

The 80486 CPU provides the ultimate in memory caching. It has 8 kilobytes of memory, called the primary cache, built right into the chip. This is one of the reasons that the 80486 is the workstation CPU of choice.

The Graphic Wave

Nowhere has change been more visible than in graphics, with technological improvements occurring at an almost exponential rate. IBM was slow to offer engineering-quality graphics boards. The initial PC resolution was 320-by-200 pixels with four colors, or 640-by-200 pixels in black and white - not good enough for CAD and FEA. The lack of good graphics was an opportunity for third-party graphics-board manufacturers to enter the market.

In August 1984, IBM announced the 80286-based AT computer and the EGA graphics card, which offered 16 colors at a resolution of 640-by-350 pixels. In April 1987, IBM announced the VGA graphics board with 640-by-480-pixel resolution and the 8514/A graphics board with 1024-by-768-pixel resolution. Board manufacturers introduced Super VGA boards with 800-by-600- and 1024-by-768-pixel resolutions. Each board had its own specifications, making life difficult for software vendors. Clone vendors banded together in April 1989 and formed the Video Electronics Standards Association (VESA).

The new kid on the block is IBM's XGA, with 1024-by-768-pixel resolution and 256 colors, introduced in October 1991. This is the first board to uncork the 16-bit bus bottleneck, as it can use a 32-bit bus.
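The video-memory arithmetic behind these modes is simple to sketch in Python. The calculation assumes packed pixels (256 colors stored as 8 bits per pixel, 4 colors as 2 bits) with no allowance for controller overhead:

```python
def framebuffer_kb(width, height, colors):
    """Kilobytes of video RAM for one frame at the given color depth."""
    bits_per_pixel = (colors - 1).bit_length()  # 256 colors -> 8 bpp
    return width * height * bits_per_pixel / 8 / 1024

# XGA's 1024 x 768 at 256 colors fills 768 KB of video RAM;
# the original PC's 320 x 200 four-color mode needed under 16 KB.
print(framebuffer_kb(1024, 768, 256))  # -> 768.0
print(framebuffer_kb(320, 200, 4))     # -> 15.625
```

Moving three-quarters of a megabyte per frame is exactly the kind of traffic that chokes a 16-bit bus, which is why XGA's 32-bit path matters.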

What's Next?

Predicting the future is easy. Computers will get smaller, faster, cheaper, more graphical, and more powerful. DOS will continue to improve and is destined to run in protected mode; that is, it will crack the 640-kilobyte barrier. A graphical user interface sitting on top of DOS, such as Windows, is a given, although the engineering community may elect to use the interface supplied with the major applications they use. For example, Algor provides one built right into its products.

Intel has been king of the mountain during this whole PC revolution. It is not taking the RISC workstation challenge lightly. In 1991, Intel spent $1.5 billion on research, development, and capital expenditures. In 1992 it expects to spend $2 billion. The PC, as we know it, is not going to stop improving with that kind of support behind it. Sometime in the second half of this year, Intel is expected to begin shipping the P5 (80586). It will be superscalar, which means that more than one instruction can be decoded, dispatched, and executed during each clock cycle. (The 80486 can handle one instruction per clock cycle; the 80386 can handle one per two cycles.)

The P5 will have two integer units and a floating-point unit. Integer operations will be twice as fast as on the 80486, while floating-point operations will run five to seven times faster. The P5 will have two internal memory caches, one for instructions and one for data. The Intel i860, which is not a DOS chip but is currently available, is similar in design to the P5. Many advanced software companies are working with the i860 in anticipation of the P5.
COPYRIGHT 1992 American Society of Mechanical Engineers

Article Details
Title Annotation: Computer Integrated Mechanical Engineering; finite element analysis
Author: Paulsen, W. Charles
Publication: Mechanical Engineering-CIME
Date: Sep 1, 1992
