Measurement System Analysis and Control
Decisions are only as good as the information on which they are based. Therefore, it is vital to know how much faith can be placed in the data used to make those decisions. With this in mind, gauge control and improvement programs are fundamental to any company's quality effort.
Regardless of the industry, every company has its own measurement systems for monitoring the performance of a process. In foundries, such systems evaluate matters as simple as product unit dimensions or as complex as the chemical analysis of a nonhomogeneous bulk material.
Typical industrial decision-making involves the evaluation of measurement data and action based on results. Therefore, the integrity of the data is crucial to the decision-making process.
Two types of error can occur when making a decision based on measurement results: a Type I error rejects material that is actually good, while a Type II error accepts material that is actually bad. The risks involved in making decisions based on measurement data can be minimized through thorough control and analysis of the measurement systems involved.
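The trade-off between the two error types can be illustrated with a short simulation. All of the figures below are hypothetical: the specification limits, the spread of the parts, and the gauge's standard deviation are invented for illustration only.

```python
import random

random.seed(42)

SPEC_LO, SPEC_HI = 9.0, 11.0   # hypothetical specification limits
GAUGE_SIGMA = 0.3              # assumed gauge measurement error (std dev)
PART_SIGMA = 0.6               # assumed part-to-part spread (std dev)

type1 = type2 = 0              # false-reject / false-accept counts
N = 100_000
for _ in range(N):
    true_value = random.gauss(10.0, PART_SIGMA)        # actual part dimension
    measured = true_value + random.gauss(0.0, GAUGE_SIGMA)
    good = SPEC_LO <= true_value <= SPEC_HI
    accepted = SPEC_LO <= measured <= SPEC_HI
    if good and not accepted:
        type1 += 1             # Type I: good part rejected
    elif not good and accepted:
        type2 += 1             # Type II: bad part accepted

print(f"Type I (false reject) rate: {type1 / N:.3f}")
print(f"Type II (false accept) rate: {type2 / N:.3f}")
```

Shrinking `GAUGE_SIGMA` in this sketch drives both error rates toward zero, which is the quantitative motivation for controlling measurement error.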
Measurement system analysis is the identification and quantification of the components of error within the system. The results serve several purposes: they establish the confidence the user can place in the measurements, indicate whether the calibration frequency of the measurement device is adequate, and quantify both the need for and the results of measurement system improvement.
Control of measurement systems involves the identification and calibration of the measurement device as well as the control of its use. These devices or gauges provide data needed to make the key decisions affecting quality of the product at various stages of production. Generally, they are required to be part of a gauge control program. In industrial applications, the identification of critical gauges is facilitated by the use of a process control plan or process flow diagram.
Any instrument used to evaluate incoming materials and final product status should be included in the program. Once this portion of a gauge control program has been established, it can be expanded to include all tools that are not considered critical to the process.
Gauge control typically involves:
* the physical placement of gauge identification numbers on the gauges themselves;
* the removal and segregation of gauges of questionable status;
* the physical placement of calibration status identification on the gauges (typically a sticker stating the date of the last calibration, who performed the calibration and the date the next calibration is due);
* the inspection, identification and calibration of new gauges before they can be used.
The rule of gauge control generally can be stated as follows: No one shall make a measurement via a gauge with a questionable calibration status.
Calibration of the measurement devices is divided into method and frequency, both of which are usually detailed in the gauge instruction manual furnished with the device. Control of the measurement instruments, as well as the frequency of their calibrations, is dictated by the working environment.
The method used to calibrate an instrument typically involves the use of a standard--a material for which the quantity to be measured has a known value. Standards used to calibrate measurement devices in the U.S. usually are required to be traceable to the National Institute of Standards and Technology (formerly the National Bureau of Standards).
As explained in Juran's Quality Control Handbook, by J.M. Juran, the primary reference standards are the apex of an entire hierarchy of reference standards. At the base of the hierarchy stands the huge array of "test equipment." These instruments are calibrated against "working standards" which are used solely to calibrate these laboratory and shop instruments. In turn, the working standards are gradually related to the most accurate primary reference standards through one or more intermediate secondary reference standards or transfer standards.
Standards should be as similar as possible to the product normally measured. The frequency of calibration depends on the workload the gauge has to bear, how critical the measurement quantity is to the process and the harshness of the environment in which the gauge is used. A good starting point in the development of calibration frequencies for each gauge is the manufacturer's recommendation, which typically is included in the gauge instruction manual. Refinement of the frequency of calibration is necessary for improvement of the gauge control program.
Accuracy and Precision
As cited by Juran, the error associated with a measurement system generally falls into two areas--accuracy and precision. The accuracy of the instrument or measurement system is defined "as the extent to which the average agrees with the 'true' value of that unit of product. Irrespective of accuracy of calibration, an instrument will not give identical readings even when making a series of measurements on a single unit of a product.
"The ability of the instrument to reproduce its own measurements is called its precision, and this varies inversely with the dispersion of the multiple or replicated measurements," says Juran. These two components of error exist in every measurement system, yet its relative or combined importance is a function of the importance of the measured quantity.
One of the more common techniques used to evaluate measurement systems is the Gage Repeatability and Reproducibility (Gage R&R) study. A Gage R&R study is a designed experiment that uses a simple analysis of variance to determine the amount of error attributable to the measurement system, expressed as a percentage of the specification range. Procedures for conducting the study, as well as software packages for interpreting the data, are available.
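As a rough sketch of the idea behind such a study--a simplified variance-components calculation, not the full published procedure--the variance within repeated readings of the same part estimates repeatability (equipment variation), while the variance between operator averages indicates reproducibility (appraiser variation). The study data, tolerance, and 5.15-sigma spread factor below are assumptions for illustration.

```python
import statistics

# Hypothetical study: 3 operators each measure the same 5 parts twice.
# data[operator] -> list of [trial 1, trial 2] readings, one pair per part
data = {
    "op1": [[2.48, 2.51], [2.60, 2.62], [2.39, 2.41], [2.55, 2.54], [2.45, 2.47]],
    "op2": [[2.50, 2.53], [2.63, 2.61], [2.42, 2.44], [2.57, 2.56], [2.48, 2.49]],
    "op3": [[2.46, 2.48], [2.58, 2.59], [2.38, 2.40], [2.52, 2.53], [2.44, 2.45]],
}
TOLERANCE = 0.40   # assumed specification range (USL - LSL)

# Repeatability: pooled within-cell variance (same operator, same part)
within = [statistics.variance(cell) for trials in data.values() for cell in trials]
var_repeat = statistics.mean(within)

# Reproducibility (simplified): variance of the operator averages
op_means = [statistics.mean([x for cell in trials for x in cell])
            for trials in data.values()]
var_reprod = statistics.variance(op_means)

grr = (var_repeat + var_reprod) ** 0.5   # combined gauge standard deviation
pct_grr = 100 * 5.15 * grr / TOLERANCE   # gauge spread as % of tolerance

print(f"repeatability sigma:   {var_repeat ** 0.5:.4f}")
print(f"reproducibility sigma: {var_reprod ** 0.5:.4f}")
print(f"%GRR of tolerance:     {pct_grr:.1f}%")
```

A published procedure additionally subtracts the repeatability contribution from the operator-average variance; this sketch omits that correction for brevity, so it slightly overstates reproducibility.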
A reference called the Measurement Systems Analysis Reference Manual was published recently as a joint effort of the quality and supplier assessment staffs at Chrysler, Ford and General Motors. Working under the auspices of the American Society for Quality Control Supplier Quality Requirements Task Force, the staff produced this manual, which can be obtained from the Automotive Industry Action Group.
Quantitatively, measurement variability can be broken down into components. Mathematically, the variance of the observed measurements is the sum of the contributing variances--variations among operators, materials, test equipment, procedures, etc., depending on the measurement system. Methods exist to quantify each component of the overall measurement system variance, and the resulting estimates can be used in an overall quality improvement program.
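A back-of-the-envelope version of this decomposition, using hypothetical figures: once a gauge study has estimated the measurement-system variance, subtracting it from the observed variance isolates the variation attributable to the product itself.

```python
# Hypothetical figures: observed variance of readings on production parts
# is the sum of true product variance and measurement-system variance.
var_observed = 0.0050     # variance of measurements on production parts
var_measurement = 0.0014  # variance estimated from a gauge study

var_product = var_observed - var_measurement   # variation due to the parts alone
share = var_measurement / var_observed         # fraction of spread due to the gauge

print(f"product variance:           {var_product:.4f}")
print(f"measurement share of total: {share:.0%}")
```

With these invented numbers, more than a quarter of the apparent process spread comes from the gauge rather than the parts--a result that would redirect improvement effort toward the measurement system itself.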
Reducing the components of error in measurement systems is a primary goal in many quality improvement programs.
For insight on one of the more common measurement variation study techniques, CMI regularly offers courses on Gage R&R. For more information, call 800/537-4237.
1. J.M. Juran, ed., "Juran's Quality Control Handbook," Fourth Edition, McGraw-Hill, Inc., pp. 18.61-18.64, 18.66 (1988).
Title Annotation: Quality in the '90s
Author: Douglas M. Lively
Date: May 1, 1992