Robust process engineering

Over the last decade or two, American manufacturers have felt much greater pressure from overseas competition, even for products invented here in the U.S. Recent studies have identified a number of reasons for our manufacturing weakness. Most relate to management practices, but one lies in engineering: how manufacturing processes are developed and run.

American engineers look for technological breakthroughs, while many of our foreign competitors focus on incremental improvements. Experts in this field believe that the cumulative effect of successive incremental improvements can be very large and may outpace efforts to achieve technological breakthroughs. In addition, technological breakthroughs usually demand heavy investment and significant risk, while the incremental approach demands neither. Because of this combination of minimal risk and large potential benefit, increased attention is being given to process improvement.

While many of the techniques available in the past for process improvement were expensive and often missed the mark, much more effective techniques have become available in recent years. These newer systems rely on new mathematical models and computers with much greater calculating power. Of the new software packages available, "Catalyst/RPE" from Catalyst, Inc. of Littleton, MA, is one that looks quite interesting.

What's the problem?

If American manufacturers are to regain the competitive position lost in recent years, they must produce products that match customer expectations for performance, reliability and quality. In addition, this must be done with faster response than the competition and at a lower price. This can only be accomplished through the design, development and production processes.

One of the first requirements is simplification of product designs to enhance both manufacturability and product quality. It is possible to design quality into a product. One striking example of this came out of an MIT survey, which found that the quality advantage of Japanese-designed cars was the same whether the cars were built in Japan or in U.S. factories. The quality advantage was independent of assembly; it had been designed into the product.

The next step is process development. In this phase, the parameters affecting each process must be identified and optimized. Traditional approaches generally involve quick fixes to identified process problems. Unfortunately, this usually degenerates into an endless cycle of fire fighting in which no permanent solution is ever found.

One of the more popular methods of determining proper parameter settings is the "best case/worst case" approach. Two runs are performed using the inputs expected to produce the best and worst outputs (products). If both results are "satisfactory" (in spec), it is assumed that any input settings between best and worst will produce a good product. The problems with this technique are that 1) there is no way to verify the directional effect of each input on the output; 2) we may have competing output requirements (e.g. ease of closure vs. tightness of seal) that prevent judgment of which inputs affect which outputs; 3) "process noise" (inherent process variability) may overshadow any of the individual results; and 4) if there is a broad range of outputs, there is no way to determine what action to take to enhance any one output, since all input effects are combined.

One approach used to optimize a process is the one-variable-at-a-time method. The engineer identifies all the pertinent inputs and outputs, holds all but one of the inputs constant while that one is optimized, then proceeds in the same manner through the remaining input variables. This approach has served much scientific study, but it has two problems: 1) it is expensive, and 2) interacting effects are ignored and usually lost. The result is a less than optimal process in spite of significant testing cost.

A full factorial approach overcomes the technical limitations of the one-variable-at-a-time method. However, it is rarely practical in the real world because of the number of tests that must be run. For example, in a system with five input variables, each to be measured at three settings, 3 x 3 x 3 x 3 x 3 = 243 experiments must be run, measuring all outputs on each.
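
The arithmetic behind that number, and the contrast with the one-variable-at-a-time method, can be checked in a few lines of code. This is purely illustrative; the run-count accounting for one-variable-at-a-time (one baseline run plus the extra settings for each variable) is our own assumption, not something prescribed by any package.

```python
# Run-count arithmetic for the example in the text: five input variables,
# three settings each. Illustrative only.
from itertools import product

factors = 5   # input variables
levels = 3    # settings per variable

# Full factorial: every combination of settings.
full_factorial_runs = len(list(product(range(levels), repeat=factors)))
print(full_factorial_runs)          # 243, as cited above

# One-variable-at-a-time: a baseline run, then (levels - 1) extra runs
# per variable (an assumed accounting of that method).
print(1 + factors * (levels - 1))   # 11 runs, but interactions are lost
```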

How can the problem be made manageable?

How about "robust process engineering"? A manufacturing process is said to be "robust" if it is relatively insensitive to normal changes in processing conditions. Products are said to be robust if they perform well under a wide variety of service conditions.

Robust process engineering is a method of dealing with both the systematic effects of changing inputs and random noise. It is a manufacturing approach that allows engineers to create manufacturable products and stable processes while keeping scrap losses low. It is also affordable.

Traditional methods of process development focus on the systematic effects produced by inputs on the outputs. They do not address process variability. While some methods can measure output uniformity, they do little to improve it.

G. Taguchi, one of the experts in this area, introduced a new idea some time ago: the variability of an output is also a function of the process inputs.

In other words, process variability can also be reduced by manipulating the inputs. Using this technique, it is possible to not only shift the response to the desired value, but also to reduce the variability of the responses.
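
A tiny simulation makes the point concrete. The process below is hypothetical (a quadratic response with jitter on the input), but it shows how identical input noise produces very different output spreads at different operating points:

```python
# A hypothetical process whose output spread depends on the input setting:
# the response curve 10*x**2 is steeper at large x, so the same input
# jitter produces more output variability there.
import random

def run_process(x, trials=1000):
    """Simulate repeated runs at nominal setting x with +/-0.05 input jitter."""
    outputs = [10 * (x + random.uniform(-0.05, 0.05)) ** 2 for _ in range(trials)]
    mean = sum(outputs) / trials
    sd = (sum((o - mean) ** 2 for o in outputs) / trials) ** 0.5
    return mean, sd

for x in (0.5, 2.0):                  # two candidate operating points
    mean, sd = run_process(x)
    print(f"x = {x}: mean output {mean:.2f}, std dev {sd:.3f}")
# The steeper operating point (x = 2.0) shows roughly four times the
# spread, even though the input jitter is identical at both settings.
```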

The engineer's goal is to select factor settings that will put all responses on target while minimizing the variability around those targets. To balance these objectives, utility functions are introduced into the system.

What are utility functions?

Utility functions are mathematical tools that measure how well a process meets its objectives. There are three types of utility functions, one for each of the three possible output goals:

* Matching an output response to a target.

* Maximizing an output response.

* Minimizing an output response.

These are shown graphically in figure 1.

In the first case (a), the value of the function is at its maximum (1) when the output response is on target. As the response moves away from the target, the value drops until it reaches zero at the specification limits. For anything outside the specification limits, the value becomes negative.

If output is to be maximized, (b) is used. This function approaches its maximum (1) as the output grows. At the lower spec limit, the function becomes zero, then goes negative as the output level drops further.

If the output is to be minimized, (c) is used. Here the value is 1 at an output of zero. From there, it diminishes until it reaches zero at the upper spec limit, then goes negative at higher output levels.
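
The three shapes can be sketched in code. The piecewise-linear forms below are an assumption that matches the verbal description (1 at the best output, 0 at the spec limits, negative beyond); the actual curves used by Catalyst/RPE are not given here.

```python
# Piecewise-linear utility functions matching the description above.
# All argument names (lsl = lower spec limit, usl = upper spec limit,
# best = a "good enough" ceiling for case b) are illustrative choices.

def utility_target(y, target, lsl, usl):
    """Case (a): 1 on target, 0 at the spec limits, negative outside."""
    if y <= target:
        return (y - lsl) / (target - lsl)
    return (usl - y) / (usl - target)

def utility_maximize(y, lsl, best):
    """Case (b): bigger is better; 0 at the lower spec limit. Capped at 1
    at an assumed 'best' level (the text describes an asymptotic approach)."""
    return min((y - lsl) / (best - lsl), 1.0)

def utility_minimize(y, usl):
    """Case (c): 1 at zero output, 0 at the upper spec limit, negative above."""
    return (usl - y) / usl

print(utility_target(5.0, target=5.0, lsl=4.0, usl=6.0))   # 1.0: on target
print(utility_target(6.5, target=5.0, lsl=4.0, usl=6.0))   # -0.5: out of spec
print(utility_minimize(0.0, usl=2.0))                      # 1.0: best case
```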

In each case, the utility function has its maximum value at the best possible output, is zero at marginally acceptable output values, and goes negative as the output moves out of spec. This mimics the real world, where economic value is added by a process that makes good products and lost when scrap is produced. When there are multiple outputs, the total utility is the average of the individual output utilities. The objective of robust process engineering is to find the settings for the controllable inputs that maximize the total utility; a short sketch after the list that follows illustrates the idea. Maximization of the total utility is done through five steps:

* Analysis - identification of important variables.

* Planning - create a test plan.

* Test - collect output data from the trial runs.

* Evaluation - analyze the data generated.

* Interpretation - decide how to run the process.
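
Before walking through the steps, here is a toy version of the end result: a grid search for the input settings that maximize the total utility, i.e. the average of the per-response utilities. The two-response process model and all numbers below are invented for illustration; in practice the model would come from the test runs described in the steps above.

```python
# Toy total-utility maximization over a grid of candidate input settings.
# The process model and spec limits are hypothetical.
from itertools import product

def process_model(temp, pressure):
    """Invented model returning (seal_strength, cycle_time)."""
    strength = 2.0 + 0.03 * temp + 0.5 * pressure
    cycle = 10.0 + 0.02 * temp - 0.8 * pressure
    return strength, cycle

def total_utility(temp, pressure):
    strength, cycle = process_model(temp, pressure)
    u_strength = min((strength - 4.0) / 2.0, 1.0)   # maximize; LSL = 4.0
    u_cycle = (12.0 - cycle) / 12.0                 # minimize; USL = 12.0
    return (u_strength + u_cycle) / 2               # average the utilities

settings = product(range(150, 201, 10), range(1, 6))   # temp x pressure grid
best = max(settings, key=lambda s: total_utility(*s))
print("best (temp, pressure):", best,
      "total utility:", round(total_utility(*best), 3))
```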

In the analysis step, a determination is made about what to study: which inputs seem the most influential and which outputs are the most important. Typically, outputs should be related to some dimension of customer satisfaction. If an output isn't important to the customer, it doesn't make much sense to spend money controlling it.

Once the input and output variables are identified, the test plan must be determined. Again, Taguchi has popularized a system of "orthogonal arrays" as a basis for approaching process studies. These combine economy of runs with unambiguous analysis.

The orthogonal arrays are a subset of a larger class of optimal designs. Until recently, test plans based on optimal designs were impractical for most applications because of the prohibitive number of calculations required. With the advent of these new technologies, it is preferable to match the test plan to the engineering requirements rather than distort the engineering requirements to fit a test plan.
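
As a concrete example of the balance these arrays provide, here is the smallest standard orthogonal array, usually called L4, which studies three two-level factors in four runs. The check below confirms the property that makes the analysis unambiguous: every pair of columns contains each combination of levels exactly once.

```python
# The L4 orthogonal array: three two-level factors (columns) in four runs
# (rows), with every pair of columns perfectly balanced.
from itertools import combinations

L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

for a, b in combinations(range(3), 2):
    pairs = sorted((row[a], row[b]) for row in L4)
    assert pairs == [(0, 0), (0, 1), (1, 0), (1, 1)], "columns not balanced"
print("every pair of columns is balanced: the array is orthogonal")
```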

Once the test plan is established, data can be collected. Since making the runs that generate the data is usually the most significant cost, test plans should be designed for efficient use of runs rather than for simplicity of calculations.

Once the data is collected, it can be analyzed to find a mathematical model that adequately describes the relationship(s) between inputs and outputs. Graphic display of the analytical results is generally best. With the speed of most computers today, the engineer can interact with the computer to modify the analysis and data as necessary.
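
The article does not say what fitting method Catalyst/RPE uses internally; ordinary least squares on a simple linear model is one common choice, sketched below with invented run data. The residual spread doubles as a crude estimate of process noise.

```python
# Fitting a linear model (intercept + one coefficient per input) to run
# data by ordinary least squares. All numbers are invented for illustration.
import numpy as np

# Rows are runs; columns are the input settings (temp, pressure).
X = np.array([[150, 1], [150, 5], [200, 1], [200, 5]], dtype=float)
y = np.array([6.9, 9.1, 8.4, 10.6])     # measured response on each run

A = np.column_stack([np.ones(len(X)), X])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept, temp, pressure coefficients:", np.round(coef, 3))

residuals = y - A @ coef
print("residual std (rough process-noise estimate):", residuals.std().round(3))
```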

After the model has been fit to the data, it can be used to select input settings to optimize performance. This can be done automatically by the computer program or can be done interactively with the engineer. Following optimization, it is possible to predict future performance, economically and otherwise.

What will Catalyst do?

Catalyst/RPE is a software package designed for engineers to perform process studies. Advanced statistical or computer training is not needed. The system allows interactive control of both the work to be done and the analysis.

Users can graphically describe the process, defining the input factors and the responses to be measured. Specification limits can be set for each response, along with the type of response desired: maximize, minimize or center on target.

The software includes the capability to design the experiment, taking into account interactions, quadratic relationships or user-selected designs. The software will then determine the number of runs needed and generate the proper design. If desired, the user can input the number of runs he can afford and/or direct the system to minimize the number of experimental runs. Additional designs are also available in a built-in catalog.

Once the design is established, the software will tell the user what input factor settings to use for each test run and what responses to measure. To aid in the process, a checklist is printed for each run.

After the data is collected and entered into the computer, the software fits the model to the data. The fitted model is displayed as a matrix of plots, with a row for each response and a column for each input factor.

Once this information is complete, it is possible to simulate other operating conditions and evaluate the system response to the new operating conditions. An error bar shows the variability of each response in the system. Initial interpretation displays show input factor settings selected by the computer to optimize the process behavior.

The approach matches target values and favors input factor settings that minimize variation. From this it is possible to vary any or all of the input settings to see what happens to the system.

The software includes full report generating capability. Reports are generated as a standard word processing document, so that editing or style revisions are easily accomplished.

What about cost?

Cost is increasingly becoming an overriding factor in any manufacturing process, and process development costs can be significant. They include the cost of production time, materials used and the personnel time involved in the study. Traditional, ad hoc approaches often appear attractive because up-front costs are low. However, their real costs in terms of lost and inefficient production can quickly overshadow the cost of a properly run process study.

Robust process engineering studies cost more up front. But with efficient test plans, the overall cost of time, materials and personnel is still modest. Using the newer techniques, the number of distinct runs required for a process study is only a few more than the number of input variables of interest. Even with the measurement of process noise, the total number of runs will usually be limited to a small multiple of the number of input variables.
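
That run-count claim is plausible arithmetic. Screening designs of the Plackett-Burman type, for instance, handle k two-level factors in the next multiple of four at or above k + 1 runs; the comparison below against a full factorial is illustrative only, since the article does not say which designs Catalyst/RPE generates.

```python
# Screening-design economy versus full factorial, for two-level factors.
# For k = 3, 7 and 11, a Plackett-Burman design needs exactly k + 1 runs.
for k in (3, 7, 11):
    screening_runs = k + 1
    full_runs = 2 ** k
    print(f"{k} factors: {screening_runs} screening runs vs {full_runs} full factorial")
```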

The benefits can be far reaching, revealing a wide variety of findings. Some factors will affect product quality while others affect yields. Some factors may have little direct effect on the product, but will affect process variability, cost or speed.

Use of these techniques does not require any major changes in organizational structure, business strategy or capital equipment. It is based on making incremental improvements in the current system, focusing on what is valuable to the customer. The bottom line is improved competitive position and profitability.

The Catalyst system is not sold; it is licensed only. Fees range from $10,000 to $100,000 per year, depending on the size of the organization and how the system will be used. It is designed to run on Macintosh IIx, IIcx or IIci equipment. If the hardware is not available, hardware costs can range from $6,000 to $12,000. It is not available for IBM systems.

The licensing fees include on-site training for personnel. Use of the software itself is reported to be quite simple. The biggest area of training is in proper design of experiments for efficient use of the system. The software is designed for use by engineers, not statistical experts.

Even in short-run, job-shop situations, the system is reported to have benefits. With this type of production, the repeatability of the machines becomes a major factor in process variability.

Companies using the system have reported savings of six to 25 times the annual license fee.

Summary

Over the last several years, more and more emphasis has been placed on competition and efficiency of operations. A transition is taking place in our businesses, one that will eliminate unproductive operations that can produce only marginal-quality merchandise.

Total quality management, continuous improvement and other buzzwords are the order of the day, and necessarily so, given the competition we face from overseas markets. In the last decade we have seen significant erosion of our markets to the Far East because producers there have been able to deliver both higher quality and lower cost than we could here.

Only by adopting modern, innovative techniques and optimizing our production lines will we be able to compete in the world market of the 1990s. Using tools such as state-of-the-art computers and design packages such as this one will be essential if we are to be a part of that market. [Figures 1 and 2 Omitted]

Jon Menough, contributing technical service editor
