
Interlaboratory testing programs as a quality assurance tool for the rubber industry.

Interlaboratory tests take many forms. There are studies to validate a test method or measurement technique, generate precision statements, assign values to reference materials, investigate causes of systematic error/bias, assess lab performance against known values, or assess lab performance and uniformity of results through comparison with other labs' results. Interlaboratory tests can have various names, depending on their purpose - round robin, cross-check, collaborative trial, proficiency test, etc. Tests that are used to fulfill accreditation or certification requirements by assessing lab performance are often referred to as proficiency tests. The terms round robin and interlaboratory test are used in this article to indicate an ongoing testing program that assesses lab performance through comparative statistics.

While some readers may be familiar only with proficiency tests, and assume that these interlaboratory comparisons arose to fulfill lab accreditation requirements, the truth is that numerous industries and groups have benefited from interlaboratory testing programs for years, including paper and paperboard; coal; containerboard; petroleum products; fasteners and metals; color and appearance; oils and fats; plastics; cement; forensic science; medical labs; environmental labs, and of course, the rubber industry.

The National Bureau of Standards (NBS, now known as the National Institute of Standards and Technology or NIST) established the first formalized, large-scale interlaboratory testing program for rubber in 1969, and issued its first report in 1970. The program initially focused on vulcanized rubber. The first testing round had 72 participants, a large number for a first-of-its-kind venture. Participation remained strong and it became clear that the NBS program was more than a research study. Responsibility for the program's operation was transferred to the private sector in the mid 1970s, and the rubber interlaboratory testing program continues today with several of the original members.

Many of the items discussed in this article could apply to a round robin conducted in any industry. And not every round robin will offer all of the features and benefits outlined. The examples, however, are drawn from a rubber interlaboratory program in order to better illustrate how the interlaboratory test can serve as a quality assurance tool for the rubber industry.

Reasons for participating

Quite simply, either a lab wants to join a round robin or it must join. Among the labs that must participate, we usually find external events requiring interlaboratory tests as part of the lab's quality assurance program - a company seeks ISO 9000 or QS9000 registration; a lab seeks accreditation from A2LA, NVLAP or similar organizations; regulations in some industries may require certification of analysts/technicians. Another key outside force is a customer or potential customer who demands that its supplier demonstrate the validity of test results.

A company may voluntarily submit itself to "grading" by an outside proficiency testing provider. When the first formalized round robin was established for the rubber industry, all participants were volunteers. Accreditation was just a word many people could not spell. Even today, accreditation may not be appropriate or cost effective, particularly if the lab's data are used by scientists rather than used commercially. Still, lab managers everywhere have the same need to evaluate their lab and technicians and to assure all of those visitors to the lab that everything is under control.

A less common, though equally valid, reason for joining a round robin originates in the company cross-checks that are often used to compare lab performance of multiple locations. These comparisons are needed because duplicate procedures and quality manuals do not always ensure similar measurement results. These cross-checks, however, can be expensive and burdensome for a company. A formalized, independent round robin is cost effective and ensures that lab performance is verified at least several times each year.

Figures 1 and 2 are actual tensile strength results over a one-year period for two different locations of the same organization performing the same test on the same materials. The individual labs may be in control, but can the same thing be said about the organization?


Ultimately, whether by imperative or by choice, the lab is seeking an objective way to prove measurement competence.

Features of successful round robins

The foundation of an interlaboratory test is an established and accepted test method. Established implies that the method is capable of yielding reproducible results. It is understood that any variability between labs exceeds within-lab variability. Accepted connotes a method that is widely used and has commercial value. The majority of such test methods are published by a standards-writing organization, but there are disciplines, such as forensic sciences, where standardized methodology is nascent or nonexistent, yet critical tests are performed daily. Fortunately, the rubber industry is a mature industry with a myriad of recognized test methods that are frequently reviewed.

There must exist an economic imperative to perform the test method. A company is not going to spend much time or money participating in an external round robin if the test is not mandated by regulation, specification or customer. Decreasing budgets and staffs cannot allow it. The selection of a "popular" test method for a round robin ensures the large, diverse population required to generate meaningful statistics.

The desire to conduct a particular round robin, however, is not enough. Homogeneous test samples constitute the third and most difficult component. Going beyond the concept of "sameness" of the samples, the provider must seek sample material that is also neutral to the test so that one is judging the lab's measurement capabilities rather than the variability of the material. Moreover, these samples ideally should be available at a reasonable cost proportional to what participants are willing to pay for the round robin. An ongoing interlaboratory testing program must be assured of a steady supply of homogeneous samples.

The successful interlaboratory testing program will recognize that flexibility is a required element, as the industry might make significant deviations to test methods, thereby affecting results. Alterations may be unintentional or deliberate; they may be dictated by equipment constraints or result from ambiguities in the method. And it seems to be the proclivity of humans to seek shortcuts. Why test one dumbbell at a time when you can pull five, ten or more simultaneously? Why die out and then nick a dumbbell when you can die and nick at the same time? Carefully constructed data reporting sheets and questionnaires will allow the test provider to recognize departures from expected procedures. In the end, the test provider learns as much as the participant. Examples of how an interlaboratory test evolves follow.

Some differences in procedure impact results. For example, when a moving die rheometer (MDR) round robin was first introduced, the use of film was optional but had to be reported by participants. Two compounds, SBR and EPDM, were tested, and approximately one-half of the labs used film in accordance with their normal practice. The between-lab variability for the EPDM's torque values was always greater than expected. The provider began to distribute film to participants and requested its use by all labs for both samples. The SBR results were unaffected, but the between-lab variability for the EPDM compound dropped dramatically. Use of distributed film is now a required part of the test.

A similar procedural difference in a different test may yield a different outcome. Although the use of film is not recommended in the Mooney viscosity interlaboratory test, participants may use film between the die and the specimen if it is part of their standard lab procedure. A subanalysis conducted during each testing round has found that for the raw rubbers tested, film use does not affect Mooney values or between-lab variability.

Sources of variability in Mooney results lay elsewhere. In the late 1980s and early 1990s, the rubber industry began to seriously question the role of mill-massing in the Mooney viscosity test. In 1990, participants in an ongoing Mooney viscosity round robin were asked a series of questions concerning their mills and mill-massing procedures. A look at just one of the questions, in this case the mill-roll opening, is illustrative: 17% of the respondents did not know their mill-roll opening; 70% reported a mill-roll opening identical to that specified in the test method; and, 13% reported an opening other than that specified in the test method. When responses to all questions were analyzed, the provider concluded that only 25% of participants definitely were using standard mill conditions.

Starting in 1992, the provider eliminated mill-massing of samples in three of the four testing rounds conducted during the calendar year. During the third quarter of each year for four years, participants were asked to test a polymer in both its massed and unmassed state in order to determine the effect on between-lab variability over a range of Mooney values (table 1).

Table 1 - coefficient of variation (between-lab)
Year Material Massed Unmassed

1992 NBR 35-5 2.7% 2.3%
1993 NBR 33-3 3.9% 3.5%
1994 NBR 30-8 3.8% 2.0%
1995 SBR 1502 4.3% 2.5%

The within-lab variability was generally unaffected by mill-massing. Although not intended as a pure scientific study of mill-massing, the consistent results from a stable group of participants were convincing enough. In keeping with industry trends, the mill-massing portion of the test was eliminated completely in 1996. Other factors that have been examined over the years include dumbbell preparation, thickness measurements of dumbbells, time delays in hardness readings, extensometers and rheometer models.
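The between-lab coefficient of variation reported in table 1 is simply the standard deviation of the participating labs' results expressed as a percentage of their mean. A minimal sketch of that computation follows; the Mooney values are invented for illustration, and the provider's actual procedure may add steps such as outlier removal first:

```python
import statistics

def between_lab_cov(lab_results):
    """Between-lab coefficient of variation: the standard deviation
    of the labs' results as a percentage of their mean."""
    return 100.0 * statistics.stdev(lab_results) / statistics.mean(lab_results)

# Invented Mooney viscosity results from six labs, with and without
# mill-massing of the sample
massed = [50.1, 48.9, 51.2, 49.5, 52.0, 48.2]
unmassed = [49.8, 49.2, 50.4, 49.6, 50.9, 49.0]
print(f"massed:   {between_lab_cov(massed):.1f}% CV")
print(f"unmassed: {between_lab_cov(unmassed):.1f}% CV")
```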

One of the crucial aspects of a round robin - statistical techniques to determine outliers and assess performance - seems to receive minimal attention in many discussions of interlaboratory testing programs. This is surprising, because these statistical techniques are intrinsic to the test design. Robust methods (i.e., those requiring few prior assumptions) are preferred; the provider should not impose limits on the data, such as an expectation that labs will agree within a specified range. Comparative statistics accept real-world variability and establish a best value for a property/material from the value upon which the labs most agree. The most sophisticated statistics are just a worthless string of numbers, however, if the labs don't understand the numbers.
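A common robust approach of this kind judges each lab against the group median, scaled by the median absolute deviation (MAD), so that one discordant lab cannot drag the consensus value it is being compared to. A sketch under assumed data; the 2.5 cutoff, the function name and the results are illustrative choices, not the provider's actual criteria:

```python
import statistics

def mad_outliers(values, cutoff=2.5):
    """Flag results whose robust z-score exceeds the cutoff.
    Median/MAD are used so that a single extreme lab cannot shift
    the consensus value it is being judged against."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad  # makes MAD comparable to a std. dev. for normal data
    return [v for v in values if abs(v - med) > cutoff * scale]

# Invented tensile strength results (MPa) from nine labs
results = [20.1, 19.8, 20.4, 20.0, 19.9, 20.3, 20.2, 25.6, 19.7]
print(mad_outliers(results))  # flags the discordant 25.6 result
```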

Poor performance can impact a lab's status with an accreditor or a customer. An effective interlaboratory testing program will assist participants by giving them some direction for corrective actions. One way to accomplish this is through a meaningful presentation of data. Results of all participants, summary data, crucial test conditions (e.g., time, temperature), any reported deviations from the test method, and materials tested are among the many items to be included in the report. Any lab identified as an outlier must be able to see why its values were excluded.

A new addition in round robins is a report geared to the individual lab. One such individual report is a trend chart. In June of 1996, a rubber products manufacturer and longtime participant in an interlaboratory testing program became concerned when its tensile modulus results were found to be slightly high relative to the other participants in the test. The test provider reviewed past results with the lab and demonstrated not only that the lab's modulus results were out-of-line with previous testing rounds, but also showed that the lab's elongation results relative to the other labs were becoming lower with each testing round. With this knowledge (low elongation, high modulus), the company was able to make the necessary adjustments to its equipment and reviewed testing procedures with technicians. This situation illustrated how an ongoing testing program can reveal trends in testing and convinced the provider that inclusion of trend charts with a summary report would allow a lab to turn historical data into a predictive and/or diagnostic quality assurance tool.
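A trend chart of this kind can be built from nothing more than the lab's standardized deviation from the group in each testing round. The sketch below uses invented elongation data to show how a drift can accumulate across rounds even when each individual round still looks acceptable:

```python
import statistics

def trend(lab_results, group_results):
    """Per-round z-scores of one lab's results against the group.
    A steady drift in one direction signals a developing problem
    even when each round, taken alone, looks acceptable."""
    scores = []
    for lab_value, group in zip(lab_results, group_results):
        mean = statistics.mean(group)
        sd = statistics.stdev(group)
        scores.append((lab_value - mean) / sd)
    return scores

# Invented data: one lab's ultimate elongation (%) over four rounds,
# with the other participants' results for each round
lab = [452, 445, 438, 430]
group = [
    [450, 455, 448, 452, 446],
    [451, 447, 449, 453, 445],
    [450, 452, 446, 454, 448],
    [449, 451, 447, 453, 450],
]
z = trend(lab, group)
print([round(s, 2) for s in z])  # drifts steadily lower round by round
```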

Actual trend charts are reproduced to illustrate how a lab can track performance over time. The stress at 100% elongation data (figure 3) and ultimate elongation data (figure 4) suggest problems within some of the individual testing rounds; these outliers would have been identified in the report corresponding to each testing round. The trend charts also indicate an inconsistency between testing rounds that might have gone unnoticed.


The stress at 300% elongation results (figure 5) are certainly acceptable within each testing round. Analyzing the four consecutive testing rounds together raises the question: is there a trend? If so, is there a problem? The next round will provide more answers.


A regular schedule of shipments, analyses and reports for the round robin ensures that participants receive consistent feedback regarding performance and have time to take corrective action.


Quality is the sum of the efforts made by the lab. An external testing program can never replace a lab's internal quality tools: calibration, SPC, SRMs, routine maintenance of equipment, in-house cross-checks, etc. But external programs can certainly supplement the overall "audit" of the lab.

There are limitations of the round robin as a quality assurance tool. The samples distributed for testing may represent an artifact rather than a product for a lab. Moreover, the range of test values may differ from the lab's usual range. Acceptable performance in a particular test cannot guarantee the same performance on a different material or in a different range. It is important to note here that some products/materials that are inherently variable cannot be used successfully in a round robin, and therefore only an artifact may be available. Moreover, the interlaboratory test results represent one moment in time. Poor performance in one test round may be cause for corrective action, but should not indict a lab. It is the lab's performance over time, as well as its total quality assurance program, that matters.

Large group statistics ensure that judgment criteria are robust and cannot be unduly influenced by one lab. Because all aspects of the interlaboratory test, such as actual test conditions, deviations and instrumentation, are self-reported, however, the provider should exercise caution in drawing conclusions.


How can a lab impress a customer? By going outside of its comfortable internal range and testing unknown material and then obtaining results that correlate with other labs testing the same material.

Customers and suppliers may use the term partnerships to describe their relationships, but this does not mean that internal numbers are no longer viewed suspiciously. The customer's scientists want to be reassured that the results coming out of the supplier are valid. A supplier with an objective means of proving its numbers - specifically participation in an independently-operated, ongoing round robin - saves time and money. It's also reasonable to expect that at some point, the lab will make an error. Labs that have experienced problems with their internal measurements may find the only way to prove that they are back in line is through participation in an external testing program.

In addition to reducing arguments between customers and suppliers, a good quality assurance tool should translate into savings in production and training costs. A consistently low or high test result, or any uncertainty regarding the accuracy of a test result, could increase manufacturing costs by requiring additional processing and/or quantities of expensive raw materials to assure meeting production specifications. A company can contain production costs by recognizing out of control results.

After being excluded as an outlier in a round robin report for erratic tensile stress results, a major building products company (and new round robin participant) examined its tensile equipment and discovered that the software had been corrupted; as a result, the software was using incorrect elongation values in the tensile modulus calculations. The lab manager informed the test provider that this problem might have gone undetected for months had he not been able to see so convincingly in the report how significantly his lab's data differed from the other participants'. Tensile stress at 300% elongation is a critical test value for this lab, so uncovering this problem early on potentially reaped enormous savings.

A round robin gives the lab manager one more opportunity to verify that training procedures for technicians are adequate. Some managers incorporate a technician's round robin results in the performance review process. A few managers have even told the test provider that some technicians, after completing a Rubber Division educational course, are asked to test the round robin samples as a type of final exam.

A well-run, long-term interlaboratory testing program will ultimately provide an overview of the industry it is serving. The test method tells you how it should be done. A round robin can tell you what is being done. Even a one-time interlaboratory test can cause a stir. A research study involving tear strength (dies C, B and T) was conducted in 1995 to investigate ASTM Method D624 as a proficiency test. Two compounds in the form of cured plaques were distributed; participants prepared their specimens from these plaques for testing. The die B tear strength data were the most interesting, and a breakdown of results according to specimen preparation is presented in table 2.

Table 2 - die B tear strength by specimen preparation

First compound
All labs combined    Grand mean 79.65 kN/m
                     Between-lab std. dev. 16.42 kN/m
                     Coefficient of variation 20.6%

Razor blade used     Grand mean 75.88 kN/m
to create nick       Between-lab std. dev. 10.90 kN/m
                     COV 14.4%

Nicking die used     Grand mean 92.12 kN/m
                     Between-lab std. dev. 12.97 kN/m
                     COV 14.1%

Second compound
All labs combined    Grand mean 39.17 kN/m
                     Between-lab std. dev. 10.08 kN/m
                     Coefficient of variation 25.7%

Razor blade used     Grand mean 34.685 kN/m
to create nick       Between-lab std. dev. 5.0925 kN/m
                     COV 14.7%

Nicking die used     Grand mean 49.63 kN/m
                     Between-lab std. dev. 6.3175 kN/m
                     COV 12.7%

Labs included
All labs combined: 22 of 23; razor blade: 12 of 12; nicking die: 8 of 8

The procedure used to prepare the nick in the die B specimen (razor blade or nicking die) appeared to affect tear strength results, though to what extent is not known. The between-lab variability is so high that ultimately this difference in dumbbell preparation may not be meaningful. All of the causes of interlaboratory variability in this test merit further investigation. It is duly noted that a nicking die is nonstandard per ASTM Method D624. Why even consider that data? Because one-third of the participants in this one-time study routinely use a nicking die in spite of its prohibition by the test method. This makes the nicking die a common deviation, and this deviation, if shown to be a major source of between-lab variability, will impact commercial testing results.
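Whether the razor-blade and nicking-die subgroups truly differ can be probed directly from table 2's summary statistics, for example with Welch's two-sample t computed from means, standard deviations and lab counts. The article itself draws no formal inference, so the sketch below is purely illustrative; it uses the first compound's values and the 12-lab and 8-lab counts from the table:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary data."""
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean2 - mean1) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# First compound, die B tear strength (table 2):
# razor-blade labs vs. nicking-die labs
t, df = welch_t(75.88, 10.90, 12, 92.12, 12.97, 8)
print(f"t = {t:.2f} with {df:.1f} degrees of freedom")
```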

Just as monitoring trends within a lab can offer new insight, an examination of an interlaboratory program's summary data may yield an unexpected benefit. The database of information that is developed and used by a test provider to track material and lab performance may help ascertain what differences can realistically be expected commercially and by scientists. The statistics presented in tables 3-7 were generated from proficiency tests; they did not come from repeatability and reproducibility studies. These tables are not intended to replace any existing precision statements. However, they offer a rarely seen look at real-world results.


At a minimum, an interlaboratory testing program as described in this article will give assurance of measurement competence. The additional roles that the round robin plays in a lab, and the benefits which are derived from the round robin, depend upon the lab manager or quality manager. Even after 30 years, labs still seem to find new ways to incorporate interlaboratory test data into their quality program.

The most useful round robin is dynamic, reflecting changes in test methodology and improvements in lab capabilities. As the interlaboratory test evolves, more detailed information is developed from the statistics. The value and meaning of the data presented in the last section of this article are just beginning to be explored. There seems to be an unlimited supply of things to learn from an interlaboratory testing program.

Table 3 - Mooney viscosity - ASTM D1646
Material Year ML 1+4 Sx Sr SR Inc.

SBR 1502 1997 45.194 0.642 0.306 0.710 57
SBR 1500 1998 45.493 0.773 0.274 0.820 57
 Pooled 45.344 0.711 0.290 0.767

NBR 33-5 1997 49.001 0.835 0.326 0.895 56

NBR 35-8 1997 69.200 1.086 0.426 1.165 58
NBR 35-8 1998 79.745 1.237 0.732 1.434 54
NBR 35-8 1998 79.086 1.271 0.442 1.345 55
 Pooled 76.010 1.201 0.552 1.319

Material Year Rep. r R %CVx %CVr %CVR
SBR 1502 1997 60 0.866 2.009 1.4 0.7 1.6
SBR 1500 1998 62 0.775 2.321 1.7 0.6 1.8
 Pooled  0.822 2.171

NBR 33-5 1997 57 0.923 2.533 1.7 0.7 1.8

NBR 35-8 1997 60 1.206 3.297 1.6 0.6 1.7
NBR 35-8 1998 58 2.072 4.058 1.6 0.9 1.8
NBR 35-8 1998 58 1.251 3.806 1.6 0.6 1.7
 Pooled  1.561 3.734

Sx = Btwn.-lab STD; Sr = Repeatability STD; SR = Reproducibility STD; %CV = Coefficient of variation; r = 2.83 x Sr; R = 2.83 x SR; Inc. = No. of labs included; Rep. = No. of labs reporting
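The footnote's quantities can be reproduced from per-lab replicate data: Sx is the standard deviation of the lab means, Sr pools the within-lab variation, SR combines the two, and r and R scale Sr and SR by 2.83 (about 2 x the square root of 2) to give 95% repeatability and reproducibility limits. The sketch below follows the usual ASTM E691-style convention with invented data; the provider's exact formulas may differ in detail:

```python
import math
import statistics

def precision_stats(labs):
    """Repeatability/reproducibility statistics from replicate results,
    one list of replicates per lab (E691-style; illustrative only)."""
    n = len(labs[0])  # replicates per lab (assumed equal for all labs)
    cell_means = [statistics.mean(lab) for lab in labs]
    grand = statistics.mean(cell_means)
    sx = statistics.stdev(cell_means)  # between-lab std. dev.
    sr = math.sqrt(statistics.mean(statistics.variance(lab) for lab in labs))
    sR = math.sqrt(max(sx ** 2 + sr ** 2 * (n - 1) / n, sr ** 2))
    return {
        "mean": grand,
        "Sx": sx,
        "Sr": sr,
        "SR": sR,
        "r": 2.83 * sr,   # repeatability limit
        "R": 2.83 * sR,   # reproducibility limit
        "%CVR": 100 * sR / grand,
    }

# Invented Mooney viscosity results: five labs, two replicates each
labs = [[45.2, 45.5], [44.8, 44.6], [46.1, 45.9], [45.0, 45.3], [44.5, 44.9]]
stats = precision_stats(labs)
print({k: round(v, 3) for k, v in stats.items()})
```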

Table 4 - Mooney viscosity - ASTM D1646
Material Year ML 1+4 Sx Sr SR Inc. Rep.
Butyl 1996 46.69 0.961 0.247 0.992 58 60
 1997 47.53 0.817 0.406 0.911 54 59
 1997 47.33 0.728 0.271 0.776 56 57
 1997 47.04 0.883 0.392 0.965 58 60
 1997 47.38 0.745 0.309 0.806 57 60
 1998 45.35 1.034 0.263 1.067 54 58
 1998 45.11 0.637 0.289 0.699 57 62
 1998 45.14 0.763 0.342 0.835 55 58
 Pooled 46.45 0.830 0.320 0.890

 ML 1+8

 1996 44.76 0.880 0.210 0.905 57 62
 1997 45.61 0.951 0.361 1.017 56 58
 1997 45.30 0.869 0.228 0.898 57 58
 1997 45.06 0.947 0.322 1.000 61 61
 1997 45.40 0.776 0.231 0.809 59 61
 1998 43.33 1.091 0.253 1.120 55 58
 1998 43.27 0.784 0.187 0.806 54 61
 1998 43.28 0.932 0.239 0.962 54 57
 Pooled 44.50 0.909 0.260 0.945

Material Year r R %CVx %CVr %CVR
Butyl 1996 0.699 2.807 2.1 0.5 2.1
 1997 1.149 2.578 1.7 0.9 1.9
 1997 0.767 2.196 1.5 0.6 1.6
 1997 1.109 2.731 1.9 0.8 2.1
 1997 0.874 2.281 1.6 0.7 1.7
 1998 0.744 3.020 2.3 0.6 2.4
 1998 0.818 1.978 1.4 0.6 1.5
 1998 0.968 2.363 1.7 0.8 1.9
 Pooled 0.905 2.516

 ML 1+8

 1996 0.594 2.561 2.0 0.5 2.0
 1997 1.022 2.878 2.1 0.8 2.2
 1997 0.645 2.541 1.9 0.5 2.0
 1997 0.911 2.830 2.1 0.7 2.2
 1997 0.654 2.289 1.7 0.5 1.8
 1998 0.716 3.170 2.5 0.6 2.6
 1998 0.529 2.281 1.8 0.4 1.9
 1998 0.676 2.722 2.2 0.6 2.2
 Pooled 0.735 2.674

Sx = Btwn.-lab. STD; Sr = Repeatability STD; SR = Reproducibility STD; %CV = Coefficient of variation; r = 2.83 x Sr; R = 2.83 x SR; Inc. = No. of labs included; Rep. = No. of labs reporting

Table 5 - oscillating disk cure meter - ASTM D2084 - 160°C, ±1° arc, T'90 (min.)
Material Year Average Sx Sr SR Inc. Rep.
SBR 1997 13.486 0.765 0.182 0.786 50 58
 1997 13.519 0.796 0.183 0.817 50 58
 1998 14.192 0.852 0.369 0.927 56 59
 1998 14.034 0.801 0.315 0.859 53 57
 1998 13.803 0.970 0.394 1.045 52 52
 Pooled 13.807 0.840 0.302 0.892

EPDM 1998 14.398 0.822 0.468 0.944 56 59
 1998 13.900 0.877 0.450 0.984 53 57
 1998 12.931 0.879 0.680 1.107 52 52
 Pooled 13.743 0.860 0.543 1.014

Material Year r R %CVx %CVr %CVR
SBR 1997 0.515 2.224 5.67 1.3 5.8
 1997 0.518 2.312 5.89 1.4 6.0
 1998 1.044 2.623 6.00 2.6 6.5
 1998 0.891 2.431 5.71 2.2 6.1
 1998 1.115 2.957 7.03 2.9 7.6
 Pooled 0.856 2.523

EPDM 1998 1.324 2.672 5.71 3.3 6.6
 1998 1.274 2.785 6.31 3.2 7.1
 1998 1.924 3.133 6.80 5.3 8.6
 Pooled 1.536 2.870

Sx = Btwn.-lab. STD; Sr = Repeatability STD; SR = Reproducibility STD; %CV = Coefficient of variation; r = 2.83 x Sr; R = 2.83 x SR; Inc. = No. of labs included; Rep. = No. of labs reporting

Table 6 - oscillating disk cure meter - ASTM D2084 - 160°C, ±1° arc, minimum torque (dN.m)
Material Year Average Sx Sr SR Inc. Rep.
SBR 1997 3.665 0.369 0.261 0.451 49 58
 1997 3.671 0.368 0.208 0.421 49 58
 1998 3.665 0.491 0.140 0.511 54 58
 1998 3.551 0.426 0.162 0.454 50 57
 1998 3.893 0.555 0.169 0.580 52 52
 Pooled 3.689 0.448 0.193 0.487

EPDM 1998 8.776 0.999 0.191 1.016 54 58
 1998 9.520 0.978 0.237 1.007 50 57
 1998 10.675 1.245 0.249 1.269 52 52
 Pooled 9.657 1.081 0.227 1.104

Material Year r R %CVx %CVr %CVR
SBR 1997 0.739 1.276 10.08 7.1 12.3
 1997 0.588 1.193 10.03 5.7 11.5
 1998 0.396 1.445 13.41 3.8 13.9
 1998 0.457 1.285 11.99 4.6 12.8
 1998 0.480 1.640 14.25 4.4 14.9
 Pooled 0.546 1.377

EPDM 1998 0.540 2.875 11.38 2.2 11.6
 1998 0.671 2.849 10.28 2.5 10.6
 1998 0.703 3.591 11.66 2.3 11.9
 Pooled 0.642 3.124

Sx = Btwn.-lab STD; Sr = Repeatability STD; SR = Reproducibility STD; %CV = Coefficient of variation; r = 2.83 x Sr; R = 2.83 x SR; Inc. = No. of labs included; Rep. = No. of labs reporting

Table 7 - oscillating disk cure meter - ASTM D2084 - 160°C, ±1° arc, maximum torque (dN.m)
Material Year Average Sx Sr SR Inc. Rep.
SBR 1997 25.213 1.419 0.295 1.448 52 58
 1997 25.309 1.398 0.221 1.415 52 58
 1998 24.449 1.390 0.260 1.413 54 58
 1998 24.816 1.323 0.262 1.348 54 57
 1998 25.249 1.308 0.290 1.340 52 52
 Pooled 25.007 1.368 0.267 1.394

EPDM 1998 36.30 2.40 1.04 2.60 54 58
 1998 36.54 2.36 1.08 2.59 54 57
 1998 37.19 2.41 1.37 2.76 52 52
 Pooled 36.68 2.39 1.17 2.65

Material Year r R %CVx %CVr %CVR
SBR 1997 0.835 4.099 5.63 1.2 5.7
 1997 0.627 4.003 5.52 0.9 5.6
 1998 0.735 4.000 5.68 1.1 5.8
 1998 0.742 3.815 5.33 1.1 5.4
 1998 0.822 3.792 5.18 1.2 5.3
 Pooled 0.756 3.944

EPDM 1998 2.942 7.354 6.60 2.8 7.2
 1998 3.070 7.322 6.46 3.0 7.1
 1998 3.869 7.802 6.47 3.7 7.4
 Pooled 3.319 7.500

Sx = Btwn.-lab. STD; Sr = Repeatability STD; SR = Reproducibility STD; %CV = Coefficient of variation; r = 2.83 x Sr; R = 2.83 x SR; Inc. = No. of labs included; Rep. = No. of labs reporting
COPYRIGHT 2000 Lippincott & Peto, Inc.

Article Details
Author: Leete, Janine L.
Publication: Rubber World
Date: Jan 1, 2000
