
Bound by experiment: the best, most practical design means limiting the size of the study.

In a departure from the usual range of electronics manufacturing topics, the EMPF would like readers to consider the importance of preparation in designed experiments used to qualify a manufacturing process. The topic is relevant because the EMPF's scope of work, from materials R&D to manufacturing process optimization, routinely requires well-chosen experimental designs to produce statistically significant data.

There are three important DOE rules of engagement:

Plan the experiment with a realistic goal in mind. One experiment may not reveal all the pertinent information needed to optimize the manufacturing process. Figure 1 reflects a process to build a PWB that may contain as many as 200 substeps. A full-blown experiment incorporating every one of those processes is unrealistic. Breaking the individual areas into manageable experimental units is a more practical way to assess where the greatest variability occurs in the manufacturing process. The type of experiment used will also depend on the type of data required and the stage of development the manufacturing process has achieved.

[FIGURE 1 OMITTED]

One possible approach is to use a screening experiment for processes that are related or sequentially adjacent to each other. An optimization should occur at each process to determine relationships properly without introducing excess variability that would mask the true variability between processes. Finally, a nested ANOVA (analysis of variance) may be used to determine where variability occurs in the larger manufacturing flow. Whatever experimental methodology is used, many smaller experiments will yield more information than one grand experiment, while avoiding the costly mistakes frequently encountered in the latter.
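As a rough illustration of how an ANOVA can localize variability in a flow, the sketch below runs a one-way analysis (the simplest case of the nested layout described above) on hypothetical yield data from three process steps; the step names and numbers are invented for illustration, not drawn from the column.

```python
# One-way ANOVA sketch: where does variability enter the flow?
# Step names and yield values below are hypothetical illustration data.
from statistics import mean

groups = {
    "drill": [94.1, 93.8, 94.5, 94.0],
    "plate": [91.2, 90.8, 92.0, 91.5],
    "etch":  [93.9, 94.2, 93.7, 94.1],
}

grand = mean(v for g in groups.values() for v in g)   # grand mean
n = sum(len(g) for g in groups.values())              # total observations
k = len(groups)                                       # number of steps

# between-step and within-step sums of squares
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - mean(g)) ** 2 for g in groups.values() for v in g)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
f_ratio = ms_between / ms_within
print(f"F = {f_ratio:.1f}")  # a large F says variability sits between steps
```

With these numbers the between-step variance dwarfs the within-step scatter, which is the signal that the "plate" step is where improvement effort should go first.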

Define the constraining factors and block where appropriate. Experiments are expensive, and it is not always possible to run the experimental units in completely random order. A recent example occurred when cost and time precluded running a solder paste experiment with four paste types, two panel finishes, four atmospheric oven conditions, and two peak reflow temperatures in full randomization. Even with a fractional factorial design, running each board in random order was prohibitive. In cases such as these, and they are more common than not, it is advisable to incorporate a blocking scheme to monitor the effects of process sequence and bias on the experimental results. This may require more than one block; for example, noting the day each experimental run was conducted. It could be that all runs with high peak reflow temperatures and a specific solder paste were processed on Day 1. If those blocks are not incorporated into the data, any effects from those conditions will be confounded and unreliable. It must be shown that there were no anomalies on Day 1 that would skew the results; for instance, that the oven temperature was recorded accurately and that the solder paste was not mishandled or given any extraordinary treatment the other pastes did not receive.
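The blocking idea can be sketched in code. Assuming two-level stand-ins for the factors above (the actual experiment had more levels per factor), the snippet builds a 2^(4-1) half-fraction and blocks it on the AB interaction, so that whichever half runs on a given day, the day effect is not confounded with any main effect.

```python
# Sketch: 2^(4-1) half-fraction with a day block.
# Factor names mirror the solder-paste example but the two-level coding
# (-1/+1) is a simplifying assumption for illustration.
from itertools import product

runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c  # D aliased with the ABC interaction (defining relation)
    runs.append({"paste": a, "finish": b, "atmosphere": c, "peak_temp": d})

# block on the AB interaction: each day gets a balanced half of the runs
for r in runs:
    r["day"] = 1 if r["paste"] * r["finish"] == 1 else 2

day1 = [r for r in runs if r["day"] == 1]
# each day still sees both levels of every factor, so a day-to-day shift
# cannot masquerade as a main effect
for f in ("paste", "finish", "atmosphere", "peak_temp"):
    assert {r[f] for r in day1} == {-1, 1}
```

The price of blocking on AB is that the day effect is confounded with that interaction, which is usually an acceptable trade when the alternative is confounding with a main effect.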

Replicate. It is better to replicate experimental runs than to expand the experimental factors and levels with no replication. Replicating runs generally increases the statistical significance of the data. A rule of thumb is to run 25% of the experimental runs as replicates: randomize the sequence of the experiment (Excel handles this effectively) and select the first 25% of the sequenced runs. Note that replicates are independent repetitions of the experimental unit, not siblings. A sibling is a unit with the same factor settings run concurrently under the same conditions; a true replicate is run as a randomized, independent event at the same experimental levels and conditions.
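The randomize-then-take-25% rule of thumb above can be sketched as follows; the factor names and levels are hypothetical, and the fixed seed is an assumption so the plan can be reproduced.

```python
# Sketch: randomize run order and take the first 25% as replicate runs.
# Factors ("temp", "speed") and their levels are hypothetical.
import random
from itertools import product

base = [{"temp": t, "speed": s} for t, s in product((200, 220), ("slow", "fast"))]

random.seed(42)                          # fixed seed: reproducible plan
order = random.sample(base, len(base))   # randomized run sequence

n_rep = max(1, round(0.25 * len(order)))      # rule of thumb: ~25% replicates
replicates = [dict(run) for run in order[:n_rep]]  # independent copies

plan = order + replicates
random.shuffle(plan)  # replicates run as independent events, not siblings
print(f"{len(plan)} runs, {n_rep} of them replicates")
```

Running the replicates inside the re-shuffled sequence, rather than back-to-back with their originals, is what keeps them true replicates rather than siblings.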

Experimental design can be a costly and inconclusive process if the proper precautions are not taken to ensure a statistically sound chance of detecting true variability or causation. When DOE is properly implemented, process improvements can be instituted that decrease cost and improve product reliability.

The American Competitiveness Institute (acuisa.org) is a scientific research corporation dedicated to the advancement of electronics manufacturing processes and materials for the Department of Defense and industry. This column appears monthly.
COPYRIGHT 2009 UP Media Group, Inc.

Article Details
Title Annotation: Tech Tips
Publication: Circuits Assembly
Date: Mar 1, 2009


